Ever had lots of screenshots that you wanted to index and search?

Well, that is the objective of this post. We’ll use minimal code, just enough to get things working.

To do so, a few Python libraries and their dependencies should be installed first:

    • Pillow “the maintained fork of PIL, the Python Imaging Library”  -> https://pypi.org/project/Pillow/
    • pytesseract “Python Optical Character Recognition tool”  -> https://pypi.org/project/pytesseract/
    • OpenCV “Open Source Computer Vision Library”  *optional

The first method uses PIL directly:

from PIL import Image
import pytesseract

image = Image.open('/path/to/image.png')
recognized_text = pytesseract.image_to_string(image)

If your image contains text in a language other than English, you can specify it in the last line. This requires downloading the corresponding language file into the tessdata directory.

ex: /usr/share/tesseract-ocr/4.00/tessdata/ita.traineddata

recognized_text = pytesseract.image_to_string(image, lang='ita')

The second method uses OpenCV with pytesseract:

import cv2
import pytesseract

image = cv2.imread('/path/to/image.png')
recognized_text = pytesseract.image_to_string(image, lang='ara')

A practical use of the above code snippets is indexing all the images/screenshots in a folder.
Note: OCR is a resource-intensive operation, so running this code on a folder with lots of images will cause high CPU usage for a while.

import os
from PIL import Image
import pytesseract

screenshots_path = '/path/to/directory'
recognized_text = {}  # the dictionary where we are going to store the data

# filter the screenshots out of all the other files in the directory by the ".png" extension
for screenshot in [x for x in os.listdir(screenshots_path) if x.lower().endswith('.png')]:
    image = Image.open(os.path.join(screenshots_path, screenshot))
    recognized_text[screenshot] = pytesseract.image_to_string(image)

The data is indexed and stored in the dictionary “recognized_text”, so it can be used for further searching.
Alternatively, the data can be written out in a different form, and the image files can be opened based on the searched text in case of a match.
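That searching step can be sketched with a small helper like the one below. The function name and the sample index are hypothetical, just for illustration; in practice the dictionary would be filled by the indexing loop above.

```python
def search_screenshots(recognized_text, query):
    """Return the filenames whose OCR'd text contains the query (case-insensitive)."""
    query = query.lower()
    return [name for name, text in recognized_text.items() if query in text.lower()]

# example with a hand-made index (real text would come from pytesseract)
index = {
    'invoice.png': 'Total: 42 EUR\nThank you',
    'meme.png': 'one does not simply',
}
print(search_screenshots(index, 'total'))  # → ['invoice.png']
```

The matching filenames could then be passed to an image viewer to open the screenshots directly.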

These two methods should work properly for clean screenshots without much noise, as noisy images require extra preprocessing steps.
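One common preprocessing step for noisy images is converting to grayscale and binarizing. Below is a minimal sketch using Pillow; the threshold of 128 is an assumed default and real images may need tuning, or OpenCV's adaptive thresholding instead.

```python
from PIL import Image

def binarize(image, threshold=128):
    """Convert to grayscale, then map each pixel to pure black or white."""
    gray = image.convert('L')  # 'L' = 8-bit grayscale
    return gray.point(lambda p: 255 if p > threshold else 0)

# the cleaned image can then be passed to pytesseract.image_to_string()
```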

All the credit for the tools and libraries used goes to their contributors.
