How do I detect that two images are "the same" even if one has slightly different cropping/ratio?
You may want to take a look at feature matching. The idea is to detect features in two images and then match them; this method is commonly used to find a template (say, a logo) in another image. A feature, in essence, is something a human would find interesting in an image, such as a corner or an open space. There are many feature detection techniques out there, but my recommendation is the scale-invariant feature transform (SIFT). SIFT is invariant to image translation, scaling, and rotation, partially invariant to illumination changes, and robust to local geometric distortion. This matches your requirement that the images can have slightly different ratios.
Given your two provided images, here's an attempt to match the features using the FLANN feature matcher. To determine whether the two images are the same, we can compare the number of matches that pass the ratio test described in Distinctive Image Features from Scale-Invariant Keypoints by David G. Lowe against a predetermined threshold. Put simply, the ratio test checks whether a match is ambiguous and should be discarded; you can treat it as an outlier-removal technique. Counting the matches that survive the test gives a score for how similar the two images are. Here are the feature matching results:
Matches: 42
The dots represent all detected matches, while the green lines represent the "good matches" that pass the ratio test. If you don't apply the ratio test, every point is drawn. In this way, you can use the test as a filter to keep only the best-matched features.
I implemented it in Python since I'm not very familiar with Rails. Hope this helps, good luck!
Code
import numpy as np
import cv2

# Load images as grayscale
image1 = cv2.imread('1.jpg', 0)
image2 = cv2.imread('2.jpg', 0)

# Create the SIFT object, keeping the 700 strongest keypoints
# (on OpenCV >= 4.4, SIFT is available in the main module as cv2.SIFT_create())
sift = cv2.xfeatures2d.SIFT_create(700)

# Find keypoints and descriptors directly (image2 is treated as the query image)
kp1, des1 = sift.detectAndCompute(image2, None)
kp2, des2 = sift.detectAndCompute(image1, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)  # or pass an empty dictionary
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]
count = 0

# Ratio test as per Lowe's paper (he suggests 0.7);
# a stricter 0.15 is used here -- modify to change the threshold
for i, (m, n) in enumerate(matches):
    if m.distance < 0.15 * n.distance:
        count += 1
        matchesMask[i] = [1, 0]

# Draw the good matches as green lines
draw_params = dict(matchColor=(0, 255, 0),
                   # singlePointColor = (255,0,0),
                   matchesMask=matchesMask,
                   flags=0)

# Display the matches
result = cv2.drawMatchesKnn(image2, kp1, image1, kp2, matches, None, **draw_params)
print('Matches:', count)
cv2.imshow('result', result)
cv2.waitKey()
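If you want a direct yes/no decision instead of just printing the count, you could wrap the above in a small helper and compare the number of good matches against a cutoff tuned on your own images. The function name is_same_image and the cutoff of 50 good matches below are illustrative assumptions, not values from my results:

import cv2

def is_same_image(path1, path2, min_good_matches=50, ratio=0.15):
    # Rough sketch: treat two images as "the same" if enough SIFT matches
    # survive the ratio test. Both thresholds are assumptions to tune.
    img1 = cv2.imread(path1, 0)
    img2 = cv2.imread(path2, 0)
    # On OpenCV >= 4.4 this is available as cv2.SIFT_create() in the main module
    sift = cv2.xfeatures2d.SIFT_create(700)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return False  # no features detected in at least one image
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)
    good = sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
    return good >= min_good_matches

print(is_same_image('1.jpg', '2.jpg'))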
Because ImageMagick is a very old, advanced, many-featured tool, it is difficult to build an interface that covers most of its features. As great as it is, rmagick does not come close to covering all of them (and neither do the many attempts Python has made).
I imagine that for many use cases it is safe enough, and much easier, to simply execute a command-line call and read its output. In Ruby that looks like this:
require 'open3'

def check_subimage(large, small)
  # compare writes the metric to stderr; the third argument is the diff image
  stdin, stdout, stderr, wait_thr = Open3.popen3("magick compare -subimage-search -metric RMSE #{large} #{small} temp.jpg")
  result = stderr.gets
  stderr.close
  stdout.close
  # Output looks like "10092.6 (0.154003) @ 0,31" -- keep the normalized value
  return result.split[1][1..-2].to_f < 0.2
end

if check_subimage('a.jpg', 'b.jpg')
  puts "b is a crop of a"
else
  puts "b is not a crop of a"
end
I'll cover the important stuff first and then go over some additional notes.
The command uses magick compare to check if the second image (small) is a subimage of the first (large). Note that this function does not check that small is strictly smaller than large in both height and width. The similarity threshold I used is 0.2 (20% error), and the value for the images you provided is about 0.15, so you may want to fine-tune this. I find that images that are a strict subset score below 0.01.
- If you want less error (smaller numbers) in cases where you have 90% overlap but the second image has some extra content the first one doesn't, you can run the comparison once, crop the large image to the region where the subimage was found, then run it again with the cropped image as the "small" one and the original "small" image as the large one (see the sketch after these notes).
- If you really want a nice object-oriented interface in Ruby, rmagick uses the MagickCore API. This (link to docs) command is probably what you want to use to implement it, and you could open a PR to rmagick or package the C extension yourself.
- Using Open3 will start a thread (see the docs). Closing stderr and stdout is not strictly necessary, but you're supposed to do it.
- The "temp" image passed as the third argument specifies a file onto which an analysis is written. With a quick look I couldn't find a way to omit it, but it is simply overwritten on each run and could be worth keeping for debugging.
- The full output is in the format 10092.6 (0.154003) @ 0,31. The first number is the RMSE value out of 65535, the second (which I use) is the normalized percentage, and the last two numbers are the offset within the large image at which the best match for the small image begins.
- Since there is no objective source of truth for how "similar" two images are, I picked RMSE (see more metric options here). It's a fairly common measure of the difference between values. An absolute error count (AE) might seem like a good idea, but some cropping software does not perfectly preserve pixels, so you might have to adjust the fuzz factor; AE is also not normalized, so you'd then have to compare the error count against the size of the image, and so on.
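For illustration, here is the two-pass idea from the first note sketched in Python with subprocess; a Ruby version would follow the same flow with Open3. The helper names, the cropped.jpg temporary file, and the 0.2 threshold are assumptions for the sketch, not part of the answer above:

import re
import subprocess

def compare_rmse(large, small, diff='temp.jpg'):
    # Run `magick compare -subimage-search` and parse its stderr output,
    # e.g. "10092.6 (0.154003) @ 0,31" -> (0.154003, (0, 31))
    proc = subprocess.run(
        ['magick', 'compare', '-subimage-search', '-metric', 'RMSE',
         large, small, diff],
        capture_output=True, text=True)
    m = re.search(r'\(([\d.]+)\) @ (\d+),(\d+)', proc.stderr)
    return float(m.group(1)), (int(m.group(2)), int(m.group(3)))

def two_pass_check(large, small, threshold=0.2):
    # First pass: find where `small` best fits inside `large`
    error, (x, y) = compare_rmse(large, small)
    if error >= threshold:
        return False
    # Crop `large` to the matched region (same size as `small`) ...
    w, h = subprocess.run(
        ['magick', 'identify', '-format', '%w %h', small],
        capture_output=True, text=True).stdout.split()
    subprocess.run(['magick', large, '-crop', f'{w}x{h}+{x}+{y}',
                    '+repage', 'cropped.jpg'])
    # ... then compare again with the roles swapped
    error2, _ = compare_rmse(small, 'cropped.jpg')
    return error2 < threshold

print(two_pass_check('a.jpg', 'b.jpg'))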
Get the histograms of both images and compare them. This works quite well for cropping and zooming, unless the change they introduce is too drastic.
This is better than your current approach of directly subtracting the images, but it still has a few limitations.
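To make that concrete, here is a minimal OpenCV sketch of the histogram comparison using cv2.calcHist and cv2.compareHist with the correlation metric; the 0.9 cutoff is just an assumed starting point to tune:

import cv2

def histograms_match(path1, path2, threshold=0.9):
    # Compare normalized grayscale histograms with correlation;
    # the threshold is an assumption to tune on your own data.
    img1 = cv2.imread(path1, 0)
    img2 = cv2.imread(path2, 0)
    hist1 = cv2.calcHist([img1], [0], None, [256], [0, 256])
    hist2 = cv2.calcHist([img2], [0], None, [256], [0, 256])
    # Normalize so differently sized images (crops) remain comparable
    cv2.normalize(hist1, hist1)
    cv2.normalize(hist2, hist2)
    score = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL)
    return score > threshold

print(histograms_match('1.jpg', '2.jpg'))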