Stroke Width Transform (SWT) implementation (Python)
There is a complete library, SWTloc, here: a Python 3 implementation of the algorithm.
The usage below applies from v2.0.0 onwards.
Install the library
pip install swtloc
Transforming the Image
import swtloc as swt
imgpath = 'images/path_to_image.jpeg'
swtl = swt.SWTLocalizer(image_paths=imgpath)
swtImgObj = swtl.swtimages[0]
swt_mat = swtImgObj.transformImage(text_mode='lb_df',
                                   auto_canny_sigma=1.0,
                                   maximum_stroke_width=20)
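To sanity-check the result before localizing anything, the returned swt_mat can be viewed directly. A minimal sketch, assuming swt_mat comes back as a NumPy array and that matplotlib is available (it is not a swtloc requirement):
import matplotlib.pyplot as plt

plt.imshow(swt_mat, cmap='gray')   # per-pixel stroke-width values
plt.title('Stroke Width Transformed Image')
plt.colorbar()
plt.show()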
Localizing Letters
localized_letters = swtImgObj.localizeLetters(minimum_pixels_per_cc=100,
                                              maximum_pixels_per_cc=10_000,
                                              acceptable_aspect_ratio=0.2)
Localizing Words
localized_words = swtImgObj.localizeWords()
Full Disclosure: I am the author of this library.
I implemented something similar to the distance-transform-based SWT described in "Robust Text Detection in Natural Images with Edge-Enhanced Maximally Stable Extremal Regions" by Huizhong Chen, Sam S. Tsai, Georg Schroth, David M. Chen, Radek Grzeszczuk, and Bernd Girod.
It is not the same as described in the paper, but a rough approximation that served my purpose. I thought I should share it so somebody might find it useful (and point out any errors/improvements). It is implemented in C++ and uses OpenCV.
// bw8u : we want to calculate the SWT of this. NOTE: its background pixels are 0 and foreground pixels are 1 (not 255!)
Mat bw32f, swt32f, kernel;
double min, max;
int strokeRadius;
bw8u.convertTo(bw32f, CV_32F); // format conversion for multiplication
distanceTransform(bw8u, swt32f, CV_DIST_L2, 5); // distance transform
minMaxLoc(swt32f, NULL, &max); // find max
strokeRadius = (int)ceil(max); // half the max stroke width
kernel = getStructuringElement(MORPH_RECT, Size(3, 3)); // 3x3 kernel used to select 8-connected neighbors
for (int j = 0; j < strokeRadius; j++)
{
dilate(swt32f, swt32f, kernel); // assign the max in 3x3 neighborhood to each center pixel
swt32f = swt32f.mul(bw32f); // apply mask to restore original shape and to avoid unnecessary max propagation
}
// swt32f : resulting SWT image
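Since the question asks about Python, the same distance-transform approximation translates almost line for line to OpenCV's Python bindings. This is only a sketch of the idea above, reusing the same variable names, not a drop-in replacement for the C++ code:
import cv2
import numpy as np

# bw8u : binary uint8 image, background pixels 0 and foreground pixels 1 (not 255!)
bw32f = bw8u.astype(np.float32)                        # for the masking multiplication below
swt32f = cv2.distanceTransform(bw8u, cv2.DIST_L2, 5)   # distance to the nearest background pixel
stroke_radius = int(np.ceil(swt32f.max()))             # half the maximum stroke width
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
for _ in range(stroke_radius):
    swt32f = cv2.dilate(swt32f, kernel)                # spread each local maximum to its 8-connected neighbours
    swt32f = swt32f * bw32f                            # mask so the values never leave the strokes
# swt32f : resulting (approximate) SWT image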
OK, so here goes:
Here is the link that has details on the implementation, with the code download link at the bottom: SWT
For the sake of completeness, it is also worth mentioning that SWT, or Stroke Width Transform, was devised by Epshtein and others in 2010 and has turned out to be one of the most successful text detection methods to date. It does not use machine learning or elaborate tests. After Canny edge detection on the input image, it calculates the thickness of each stroke that makes up objects in the image. As text has strokes of uniform thickness, this can be a robust identifying feature.
The implementation given in the link uses C++ and OpenCV; the Boost library is used for the connected graph traversals etc. after the SWT step is computed. I have personally tested it on Ubuntu and it works quite well (and efficiently), though the accuracy is not exact.
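To make the description above a bit more concrete, here is a heavily condensed Python/OpenCV sketch of the core ray-casting step from the Epshtein paper. It skips the paper's second median-filtering pass, uses arbitrary Canny thresholds, and the function name and parameters are my own, so treat it as an illustration rather than a faithful implementation:
import cv2
import numpy as np

def swt_sketch(gray, max_stroke=40, dark_on_light=True):
    # gray : 8-bit single-channel image
    edges = cv2.Canny(gray, 100, 300)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy) + 1e-9
    gx, gy = gx / mag, gy / mag                 # unit gradient directions
    if dark_on_light:                           # for dark strokes the gradient points out of the stroke
        gx, gy = -gx, -gy
    h, w = gray.shape
    swt = np.full((h, w), np.inf)
    for y, x in zip(*np.nonzero(edges)):        # cast a ray from every edge pixel
        dx, dy = gx[y, x], gy[y, x]
        ray, cx, cy = [(x, y)], float(x), float(y)
        for _ in range(max_stroke):
            cx, cy = cx + dx, cy + dy
            ix, iy = int(round(cx)), int(round(cy))
            if not (0 <= ix < w and 0 <= iy < h):
                break
            ray.append((ix, iy))
            if edges[iy, ix]:                   # reached the opposite side of the stroke
                # accept only if the opposing gradient is roughly anti-parallel
                if gx[iy, ix] * dx + gy[iy, ix] * dy < -0.5:
                    width = np.hypot(ix - x, iy - y)
                    for px, py in ray:          # every pixel on the ray keeps the smallest width seen
                        swt[py, px] = min(swt[py, px], width)
                break
    swt[np.isinf(swt)] = 0                      # pixels never crossed by a valid ray
    return swt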