Comparing UIImage
If you have two UIImage objects, get their CGImageRef Quartz representations. Then create two bitmap contexts, each backed by a memory buffer that you allocate and pass in, one per image, and use CGContextDrawImage to draw each image into its context. At that point the raw bytes of both images are in the buffers, and you can compare them either by looping through manually or with memcmp.
Apple's own detailed explanation and sample code for creating bitmap contexts and drawing into them is here:
https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_context/dq_context.html
The difference for you is that you're drawing an existing image into the context, which is exactly what CGContextDrawImage is for.
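The steps above might be sketched roughly like this (the helper name HPCreatePixelBuffer is mine, error handling is omitted, and a fixed RGBA format is assumed so both buffers share a layout):

```objectivec
#import <UIKit/UIKit.h>

// Sketch: renders an image's pixels into a malloc'd RGBA buffer.
// The caller owns the returned buffer and must free() it.
static void *HPCreatePixelBuffer(UIImage *image, size_t *outLength)
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4; // 4 bytes per pixel (RGBA)
    *outLength = bytesPerRow * height;
    void *buffer = calloc(height, bytesPerRow);

    // Bitmap context backed by our own buffer, in a known color space.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return buffer;
}

// Usage: compare the two buffers byte for byte.
BOOL HPImagesHaveEqualPixels(UIImage *image1, UIImage *image2)
{
    size_t length1, length2;
    void *bytes1 = HPCreatePixelBuffer(image1, &length1);
    void *bytes2 = HPCreatePixelBuffer(image2, &length2);
    BOOL equal = (length1 == length2) && memcmp(bytes1, bytes2, length1) == 0;
    free(bytes1);
    free(bytes2);
    return equal;
}
```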
This is what I use in my unit tests to compare images. Unlike other approaches (e.g., comparing UIImagePNGRepresentation data directly), it works even when the images have different color spaces (e.g., RGB and grayscale).
@implementation UIImage (HPIsEqualToImage)

- (BOOL)hp_isEqualToImage:(UIImage *)image
{
    NSData *data = [image hp_normalizedData];
    NSData *originalData = [self hp_normalizedData];
    return [originalData isEqualToData:data];
}

// Redraws the image into a fresh graphics context and returns its PNG data,
// so images with different color spaces are normalized before comparison.
- (NSData *)hp_normalizedData
{
    const CGSize pixelSize = CGSizeMake(self.size.width * self.scale, self.size.height * self.scale);
    UIGraphicsBeginImageContext(pixelSize);
    [self drawInRect:CGRectMake(0, 0, pixelSize.width, pixelSize.height)];
    UIImage *drawnImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return UIImagePNGRepresentation(drawnImage);
}

@end
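In a test, the category might be used like this (XCTest assumed; the fixture image name is hypothetical):

```objectivec
- (void)testRoundTrippedImageIsUnchanged
{
    UIImage *original = [UIImage imageNamed:@"fixture"];
    UIImage *decoded = [UIImage imageWithData:UIImagePNGRepresentation(original)];
    XCTAssertTrue([decoded hp_isEqualToImage:original]);
}
```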
It's not very efficient, so I would recommend against using it in production code unless performance is not an issue.