Is there a way to detect if an image is blurry?

Another very simple way to estimate the sharpness of an image is to use a Laplace (or LoG) filter and simply pick the maximum value. Using a robust measure like a 99.9% quantile is probably better if you expect noise (i.e. picking the Nth-highest contrast instead of the highest contrast). If you expect varying image brightness, you should also include a preprocessing step to normalize brightness/contrast (e.g. histogram equalization).
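A minimal sketch of that measure in Mathematica (an illustration only, not the code used for the tests below; the quantile level, the LoG radius and the histogram-equalization step are arbitrary choices):

(* sharpness score: high quantile of the absolute LoG response, after
   grayscale conversion and histogram equalization; radius 2 and the
   99.9% quantile are arbitrary, uncalibrated choices *)
logSharpness[img_Image] :=
 Quantile[
  Abs[Flatten[ImageData[
     LaplacianGaussianFilter[
      HistogramTransform[ColorConvert[img, "Grayscale"]], 2]]]],
  0.999]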

I've implemented Simon's suggestion and this one in Mathematica, and tried them on a few test images:

[image: the five test images]

The first test blurs the test images using a Gaussian filter with varying radius, then calculates the FFT of the blurred image and averages the magnitudes of the highest ~90% of frequencies:

testFft[img_] := Table[
  (
   blurred = GaussianFilter[img, r];
   fft = Fourier[ImageData[blurred]];
   {w, h} = Dimensions[fft];
   (* window covering roughly 95% of each dimension, centered on the middle
      of the spectrum; this excludes the lowest frequencies, which sit near
      the corners of Mathematica's unshifted Fourier output *)
   windowSize = Round[w/2.1];
   Mean[Flatten[(Abs[
       fft[[w/2 - windowSize ;; w/2 + windowSize, 
         h/2 - windowSize ;; h/2 + windowSize]]])]]
   ), {r, 0, 10, 0.5}]

Result in a logarithmic plot:

[plot: FFT measure vs. blur radius]

The 5 lines represent the 5 test images, and the X axis is the Gaussian filter radius. The curves are decreasing, so the FFT is a good measure of sharpness.
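For reference, a plot like this can be produced along these lines (testImages is a hypothetical list assumed to hold the five test images):

(* one curve per image: the FFT measure as a function of the blur radius *)
ListLogPlot[testFft /@ testImages, Joined -> True, DataRange -> {0, 10}]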

This is the code for the "highest LoG" blurriness estimator: it simply applies an LoG filter and returns the brightest pixel in the filter result:

testLaplacian[img_] := Table[
  (
   blurred = GaussianFilter[img, r];
   (* brightest pixel of the LoG filter response over the whole image *)
   Max[Flatten[ImageData[LaplacianGaussianFilter[blurred, 1]]]]
   ), {r, 0, 10, 0.5}]

Result in a logarithmic plot:

[plot: LoG measure vs. blur radius]

The spread for the un-blurred images is a little better here (2.5 vs 3.3), mainly because this method only uses the strongest contrast in the image, while the FFT is essentially a mean over the whole image. The curves also decrease faster, so it might be easier to set a "blurry" threshold.
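Turning either measure into a blurry/sharp decision is then just a comparison against a cutoff; a hypothetical sketch (the value 0.1 is only a placeholder and would have to be calibrated on known-sharp reference images):

(* flag an image as blurry when its maximum LoG response is below the cutoff;
   the default threshold of 0.1 is an uncalibrated placeholder *)
blurryQ[img_Image, threshold_ : 0.1] :=
 Max[Flatten[ImageData[LaplacianGaussianFilter[img, 1]]]] < threshold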


Yes, it is. Compute the Fast Fourier Transform and analyse the result. The Fourier transform tells you which frequencies are present in the image. If there is little high-frequency content, the image is blurry.

Defining the terms 'low' and 'high' is up to you.
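As a hypothetical illustration of one such criterion (not part of the original answer), you could measure the fraction of spectral energy above a cutoff frequency; the cutoff of 0.25 (a quarter of the Nyquist frequency) is an arbitrary choice:

highFreqEnergyFraction[img_Image, cutoff_ : 0.25] :=
 Module[{spec, w, h, highQ},
  spec = Abs[Fourier[ImageData[ColorConvert[img, "Grayscale"]]]]^2;
  {w, h} = Dimensions[spec];
  (* Fourier puts zero frequency at position [[1, 1]] with wraparound, so the
     frequency magnitude at index i along a dimension of size n is
     Min[i - 1, n - i + 1]; the cutoff fraction is relative to n/2 *)
  highQ = Table[
    Min[i - 1, w - i + 1] > cutoff*w/2 || Min[j - 1, h - j + 1] > cutoff*h/2,
    {i, w}, {j, h}];
  Total[Pick[Flatten[spec], Flatten[highQ]]]/Total[spec, 2]]

A blurry image would give a noticeably smaller value than a sharp version of the same scene.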

Edit:

As stated in the comments, if you want a single float representing the blurriness of a given image, you have to work out a suitable metric.

nikie's answer provides such a metric. Convolve the image with a Laplacian kernel:

0  1  0
1 -4  1
0  1  0

Then use a robust maximum metric on the output to get a number you can use for thresholding. Try to avoid smoothing the images too much before computing the Laplacian, because you will only find out that a smoothed image is indeed blurry :-).
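A hypothetical sketch of this metric in Mathematica (using the 99.9% quantile as the "robust maximum" is an arbitrary choice):

(* convolve with the 3x3 Laplacian kernel above and take a high quantile of
   the absolute response instead of the plain maximum; the 99.9% level is an
   arbitrary, uncalibrated choice *)
laplacianMetric[img_Image] :=
 Quantile[
  Abs[Flatten[ImageData[
     ImageConvolve[ColorConvert[img, "Grayscale"],
      {{0, 1, 0}, {1, -4, 1}, {0, 1, 0}}]]]],
  0.999]

An image would then be flagged as blurry when this value falls below a threshold tuned on known-sharp images.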