How to recover an unfocussed image

Unfortunately, for physical images, i.e. images with a finite signal-to-noise ratio per pixel, there is a loss of information. An out-of-focus lens acts like a linear transformation, i.e. a matrix, between the focused ideal image and the actual image. To reverse that transformation we have to calculate the inverse of that matrix. Depending on the severity of the blurring, that inverse may not exist at all, or, if it does exist, it may amplify noise and sampling errors in the resulting image very strongly. Imagine the worst-case blurring matrix for a two-pixel image:

$\begin{bmatrix}0.5&0.5\\0.5&0.5\end{bmatrix}$

This matrix is singular and cannot be inverted at all.
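You can check this with a couple of lines of numpy (a minimal sketch):

```python
import numpy as np

# The worst-case 50/50 blurring matrix from above.
B = np.array([[0.5, 0.5],
              [0.5, 0.5]])

print(np.linalg.det(B))    # 0.0 -- the matrix is singular
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError:
    print("inversion fails: singular matrix")
```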

Take a less severe case (20% blurring); now the matrix is

$\begin{bmatrix}0.8&0.2\\0.2&0.8\end{bmatrix}$

and the inverse of that is:

$\begin{bmatrix}4/3&-1/3\\-1/3&4/3\end{bmatrix}$

There are two problems with this one. First, because of the negative coefficients in the inverse, you may end up with negative pixel values in the reconstructed image, which is unphysical. Second, the diagonal elements are larger than one, which amplifies noise.
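Here is a two-pixel demonstration of both problems; the ±0.05 "noise" is an arbitrary stand-in for real measurement error:

```python
import numpy as np

B = np.array([[0.8, 0.2],
              [0.2, 0.8]])
B_inv = np.linalg.inv(B)           # [[ 4/3, -1/3], [-1/3, 4/3]]

ideal = np.array([1.0, 0.0])       # a perfectly sharp two-pixel edge
blurred = B @ ideal                # [0.8, 0.2]
noisy = blurred + np.array([0.05, -0.05])   # small measurement noise

print(B_inv @ blurred)   # [1. 0.]          -- perfect without noise
print(B_inv @ noisy)     # [ 1.083 -0.083]  -- negative pixel; 0.05 error grew to 0.083
```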

Having said that, one can achieve remarkable results if the recorded image has a very high signal-to-noise ratio and if the inverse transformation can be reconstructed with high precision.

If you are interested in this area I would urge you to do your own experiments with a few matrices to get a feel for what's going on. Typically, image blurring is a local phenomenon, i.e. we can restrict ourselves to areas of an image that are only a few (maybe 2-5) pixels wide. This reduces the problem to small matrices. Wolfram Alpha can do the matrix inversion for you, so you don't have to set up any math package (although numpy is easy to use, if you know Python).

As for the experimental side of it, properly calibrating a lens requires producing a series of high-contrast test images of either pinholes (delta functions), to retrieve the blurring matrix directly, or, even better, high-frequency stripe patterns, to measure the blurring in the Fourier domain.
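To see why a pinhole hands you the blurring kernel directly: photographing a delta function through the lens reproduces the point spread function itself. A small simulation of that idea (the 5x5 Gaussian is just a stand-in for a real lens PSF):

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 5x5 Gaussian standing in for the lens point spread function.
x = np.arange(5) - 2
psf = np.exp(-(x[:, None]**2 + x[None, :]**2) / 2.0)
psf /= psf.sum()

# An ideal pinhole test target is a delta function.
pinhole = np.zeros((31, 31))
pinhole[15, 15] = 1.0

photo = convolve2d(pinhole, psf, mode="same")

# The photographed pinhole *is* the PSF (up to noise in a real measurement).
print(np.allclose(photo[13:18, 13:18], psf))   # True
```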


Unfocussing is convolving the image with a low-pass filter (that's what the optical system does to the source signal here). If the high frequencies are destroyed, you can't expect to recover them by simple algebra, since that would mean dividing by zero in the spectral domain. (In the real world the filter's high frequencies are zero, because the Gaussian or Bessel tails would be too low to register even one bit of grey level.)
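A 1-D numpy sketch makes the division-by-zero problem concrete (the sigma and noise level are arbitrary choices):

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2
kernel = np.exp(-x**2 / (2 * 4.0**2))     # Gaussian blur, sigma = 4 pixels
kernel /= kernel.sum()

H = np.fft.fft(np.fft.ifftshift(kernel))
print(np.abs(H).min())                    # effectively zero: high freqs are gone

signal = (np.arange(n) % 32 < 16).astype(float)   # sharp stripes
blurred = np.fft.ifft(np.fft.fft(signal) * H).real
noisy = blurred + 1e-6 * np.random.default_rng(0).standard_normal(n)

naive = np.fft.ifft(np.fft.fft(noisy) / H).real   # divide by ~0 at high freqs
print(np.abs(naive).max())                # astronomically large garbage
```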

Still, nowadays there are solutions to this, however unintuitive they may be. They consist of using purposely made "bad lenses" so that the Fourier transform of the blurring filter retains the high frequencies (and is, indeed, dense in the Fourier domain), so that you can invert it, i.e. perform a deconvolution. The keyword is "coded aperture", and it applies to many similar cases (depth of field, motion blur, etc.). Several papers from the MIT graphics group developed this.
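A toy illustration of the idea (the binary mask below is an arbitrary broadband pattern, not one of the actual published coded-aperture designs): compare the smallest Fourier magnitude of a smooth defocus kernel with that of a broadband one.

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

# Conventional defocus: smooth, low-pass, spectrum dives to ~0.
gauss = np.exp(-x**2 / (2 * 4.0**2))
gauss /= gauss.sum()

# Toy "coded" kernel: an arbitrary binary mask over the same support.
coded = np.zeros(n)
coded[n//2 - 8 : n//2 + 8] = [1,0,1,1,0,1,0,0,1,1,1,0,1,0,0,1]
coded /= coded.sum()

for name, k in [("gaussian", gauss), ("coded", coded)]:
    H = np.fft.fft(np.fft.ifftshift(k))
    print(name, np.abs(H).min())
# The coded kernel's minimum |H| is orders of magnitude larger, so
# dividing by H (deconvolving) stays numerically well behaved.
```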

Apart from this: note also that, under some regularising hypotheses, you can sometimes recover the high frequencies if you have multiple images (with varying view angles). This is basically how tomography works.
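Here is a hedged 1-D sketch of why multiple measurements help: two box blurs of different widths have no common Fourier zeros, so a least-squares combination in the spectral domain recovers what either blur alone destroys (a toy model, not actual tomography):

```python
import numpy as np

n = 128
t = np.arange(n)
signal = np.sin(2*np.pi*5*t/n) + 0.5*np.sin(2*np.pi*40*t/n)

def blur(sig, kernel):
    return np.fft.ifft(np.fft.fft(sig) * np.fft.fft(kernel, n)).real

k1 = np.ones(4) / 4.0          # 4-pixel box blur
k2 = np.ones(5) / 5.0          # 5-pixel box blur: different spectral zeros
y1, y2 = blur(signal, k1), blur(signal, k2)

H1, H2 = np.fft.fft(k1, n), np.fft.fft(k2, n)
Y1, Y2 = np.fft.fft(y1), np.fft.fft(y2)

# Joint least-squares estimate: X = (H1* Y1 + H2* Y2) / (|H1|^2 + |H2|^2)
X = (np.conj(H1)*Y1 + np.conj(H2)*Y2) / (np.abs(H1)**2 + np.abs(H2)**2)
recovered = np.fft.ifft(X).real
print(np.abs(recovered - signal).max())   # tiny: exact up to round-off
```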


This should help: http://yuzhikov.com/articles/BlurredImagesRestoration1.htm

He is able to calculate/approximate the blur kernel, and there is code to implement it as well, which could be translated into Python.

A more precise deconvolution, 'blind deconvolution', is achieved with an iteratively tuned method combined with a Wiener filter, which takes the randomness of the noise in the image into account.
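For reference, a minimal (non-blind) Wiener deconvolution sketch in numpy, assuming the blur kernel is known and using a hand-picked noise-to-signal constant K; a blind method would additionally have to estimate the kernel itself, iteratively:

```python
import numpy as np

n = 256
rng = np.random.default_rng(1)
signal = (np.arange(n) % 32 < 16).astype(float)   # sharp stripes

kernel = np.ones(7) / 7.0                         # known 7-pixel box blur
H = np.fft.fft(kernel, n)
noisy = np.fft.ifft(np.fft.fft(signal) * H).real + 0.01 * rng.standard_normal(n)

# Wiener filter: G = H* / (|H|^2 + K), K ~ noise-to-signal power ratio.
K = 1e-3                                          # hand-tuned assumption
G = np.conj(H) / (np.abs(H)**2 + K)
restored = np.fft.ifft(np.fft.fft(noisy) * G).real

naive = np.fft.ifft(np.fft.fft(noisy) / H).real   # unregularised inverse
print("naive  max error:", np.abs(naive - signal).max())
print("wiener max error:", np.abs(restored - signal).max())
```

The K term keeps the filter gain bounded where |H| is small, which is exactly where the naive inverse blows the noise up.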