Programmatic approach to HDR photography

Edit.

I have produced an image which looks "cleaner" than my original attempt, and the processing is faster too.

As before, we start by loading the images in order from darkest to brightest and cropping away the alignment artifacts at the edges.

files = Reverse@FileNames["memorial*.png"];
images = ImagePad[Import[#], {{-2, -12}, {-35, -30}}] & /@ files;

HDR image construction:

Like Thies's approach, this uses averaging over multiple images to obtain pixel values. Intensity data is extracted from the images and low or high values are set to zero to flag potentially noisy or saturated pixels. The exposure ratio between two images is estimated by considering only those pixels which are non-zero in both images. After compensating for exposure differences, the pixel values from all images are combined into a mean image, again using only the non-zero pixels. Finally the mean image is split into HSB components.

data = ImageData[First@ColorSeparate[#, "Intensity"]] & /@ images;
data = Map[Clip[#, {0.1, 0.97}, {0, 0}] &, data, {3}];

(* exposure ratio between consecutive images: slope of a straight-line fit
   (through the origin) over the pixels that are non-zero in both images *)
exposureratios = Module[{x},
   First@Fit[Cases[Flatten[#, {{2, 3}, {1}}], {Except[0], Except[0]}, 1],
      x, x] & /@ Partition[data, 2, 1]];
exposurecompensation = 1/FoldList[Times, 1, exposureratios];

data = MapThread[Times, {exposurecompensation, Unitize[data] (ImageData /@ images)}];
data = Transpose[data, {3, 1, 2, 4}];
meanimage = Map[Mean[Cases[#, Except[{0., 0., 0.}]]] &, data, {2}];

{h, s, b} = ColorSeparate[ColorConvert[ImageAdjust@Image[meanimage], "RGB"], "HSB"];
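
To see what the Fit step above is doing, here is a toy check on made-up values: the exposure ratio between two images is recovered as the slope of a straight-line fit (through the origin) over corresponding pixel values, and FoldList turns the chain of ratios into compensation factors that bring every image down to the exposure level of the first, darkest one.

(* toy check, values made up: recover a 2.5x exposure ratio as the slope of the fit *)
dark = RandomReal[{0.1, 0.35}, {10, 10}];
bright = 2.5 dark;
Module[{x}, First@Fit[Transpose[{Flatten[dark], Flatten[bright]}], x, x]]
(* 2.5 *)

(* a chain of ratios {2, 4} becomes compensation factors {1, 1/2, 1/8} *)
1/FoldList[Times, 1, {2., 4.}]
(* {1., 0.5, 0.125} *)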

Tone mapping:

We now have a brightness channel containing a range of values from 0 to 1, but with a very non-uniform distribution.

ImageHistogram[b]


First I do a histogram equalisation on the brightness data:

cdf = Rescale@Accumulate@BinCounts[Flatten@ImageData@b, {0, 1, 0.00025}];
cdffunc = ListInterpolation[cdf, {{0, 1}}];
histeq = Map[cdffunc, ImageData[b], {2}];
ImageHistogram[Image@histeq]

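As an aside, I believe the built-in HistogramTransform performs essentially the same equalisation (I haven't checked that it matches the manual CDF version exactly), so you can compare against:

(* built-in histogram equalisation, for comparison with the manual version above *)
ImageHistogram[HistogramTransform[b]]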

Next I apply a sort of double-sided gamma adjustment to reduce the number of very low and very high values (we don't want too many deep shadows or bright highlights).

b2 = Image[1 - (1 - (histeq^0.25))^0.5];
ImageHistogram[b2]

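The shape of this double-sided adjustment can be seen by plotting the mapping curve itself:

(* the curve pulls very dark values up and very bright values down,
   compressing the extremes towards the mid-tones *)
Plot[1 - (1 - x^0.25)^0.5, {x, 0, 1}, AxesLabel -> {"input", "output"}]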

Final image:

Finally I apply a built-in Sharpen filter to the new brightness channel, to boost local contrast a little bit, and apply a gamma adjustment to the saturation channel to make it a little more colourful. The HSB channels are then recombined into the final colour image.

ColorCombine[{h, ImageAdjust[s, {0, 0, 0.75}], Sharpen[b2]}, "HSB"]


Original version

Here's an attempt at the Stanford Memorial Church image using a local contrast filter to do the tone mapping.

First load the images and crop to remove the artifacts around the edges of some of them:

files = Reverse @ FileNames["memorial*.png"];
images = ImagePad[Import[#], -40] & /@ files;

Next create small grayscale versions and use these to estimate the brightness scaling between the images:

small = ImageData[ImageResize[ColorConvert[#, "Grayscale"], 50]] & /@ images;
imageratios = FoldList[Times, 1, 
  Table[a /. Last@
    FindMinimum[Total[(small[[i]] - a small[[i + 1]])^2, -1], {a, 1}], 
  {i, Length@small - 1}]]

Now select the "best" image from which to take each pixel value, and scale that value accordingly. I've defined the "best" image for a given pixel as the one for which the median of the {R,G,B} numbers is closest to 0.5.

data = Transpose[ImageData /@ images, {3, 1, 2, 4}];
bestimage = Map[Module[{best},
 best = Ordering[(Median /@ # - 0.5)^2, 1][[1]];
 #[[best]]*imageratios[[best]]] &, data, {2}];

Next apply a local contrast enhancement to the brightness channel of the image. This is quite simple but slow. For each pixel the filter sorts the unique values in the pixel's neighbourhood and finds the pixel's position in that list. The pixel value is set to the fractional list position; for example, if a pixel is the brightest one in its neighbourhood, it gets a value of 1. The size value in the localcontrast function must match the range parameter in the ImageFilter.

localcontrast = With[{size = 20}, Compile[{{x, _Real, 2}},
   Block[{a, b, val},
    val = x[[size + 1, size + 1]];  (* centre pixel of the neighbourhood *)
    a = Union[Flatten[x]];          (* sorted unique neighbourhood values *)
    b = Position[a, val][[1, 1]];   (* rank of the centre pixel in that list *)
    b/Length[a]]]];                 (* fractional position, in (0, 1] *)

{h, s, b} = ColorSeparate[ColorConvert[Image[bestimage], "RGB"], "HSB"];     
newb = ImageFilter[localcontrast, b, 20];

Finally combine the contrast-enhanced brightness channel with the original saturation and hue to get the final image:

ColorCombine[{h, s, newb}, "HSB"]


It's not brilliant, but I think the general HDRI effect is there. The contrast enhancement could probably be toned down a bit by increasing the size parameter, though it'll be slower.
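
For example, to try a larger neighbourhood (the value 30 here is arbitrary, just for illustration), both parameters have to be changed together:

(* weaker (and slower) local contrast enhancement; the Compile's size and the
   ImageFilter range must match *)
localcontrast30 = With[{size = 30}, Compile[{{x, _Real, 2}},
   Block[{a, b, val},
    val = x[[size + 1, size + 1]];
    a = Union[Flatten[x]];
    b = Position[a, val][[1, 1]];
    b/Length[a]]]];
ColorCombine[{h, s, ImageFilter[localcontrast30, b, 30]}, "HSB"]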


For starters, I tried an easy, intuitive approach: combining the best parts of each image, adjusted for their different exposure times, into one HDR image.

Let's start by importing all the images:

imageurls = "http://upload.wikimedia.org/wikipedia/commons/thumb/" <> # & /@
  {"0/09/StLouisArchMultExpEV-4.72.JPG/320px-StLouisArchMultExpEV-4.72.JPG",
   "c/c3/StLouisArchMultExpEV-1.82.JPG/320px-StLouisArchMultExpEV-1.82.JPG", 
   "8/89/StLouisArchMultExpEV%2B1.51.JPG/320px-StLouisArchMultExpEV-1.51.JPG",
   "8/8f/StLouisArchMultExpEV%2B4.09.JPG/320px-StLouisArchMultExpEV-4.09.JPG"};

FPImage = Image[#, "Real"]& (* change image rep. to floating point *)
inputimages = SortBy[
    Composition[FPImage, Import] /@ imageurls,
    (* sort images by increasing exposure *)
    Composition[Mean, Flatten, ImageData, First, ColorSeparate[#, "Intensity"] &]
  ]

Input images

where we changed the internal representation to floating point to avoid rounding errors later, and sorted the images in ascending order of exposure.

Next we compare neighbouring pairs of images and try to guess their exposure ratios. (There is probably a more robust and efficient method, but as a first try this seems to work and gives reasonably accurate values.)

ScaledImageSquaredError[img1_Image, img2_Image, s_?NumericQ] :=
  Composition[Total, Flatten, ImageData, ImageApply[#^2 &, #] &,
              First, ColorSeparate[#, "Intensity"] &][
      ImageDifference[img2, ImageClip@ImageApply[s # &, img1]]
    ];

(* Guess exposure ratio. img2 must be brighter than img1 *)
GuessExposureRatio[img1_Image, img2_Image] := NArgMin[
  {ScaledImageSquaredError[img1, img2, \[Alpha]],
   1 < \[Alpha] < 10},
   \[Alpha], Method -> {"NelderMead", "RandomSeed" -> 24}
]

exposureratios = GuessExposureRatio @@@ Partition[inputimages, 2, 1]
(* {4.66667, 5.89289, 3.74074} *)

(* brightness scaling factors to get all images to the same exposure level *)
unifyingexposures = #/Max[#] &@Reverse@FoldList[Times, 1, Reverse@exposureratios]
(* {1., 0.214286, 0.0363634, 0.00972091} *)

Next we want to generate masks for each image, marking its usable portion. Our criterion will be the brightness of each pixel: we don't want the low-brightness range because of its poor signal-to-noise ratio, and we don't want brightnesses near 1.0 because of possibly clipped values.

Therefore we define a (kind of arbitrary and adjustable) brightness weight curve

brightnessweights = Function[b, 
    Piecewise[
      {{Interpolation[{{{0}, 0, 0}, {{0.2}, 1, 0}, {{0.95}, 1, 0}, {{1}, 0, 0}},
                      InterpolationOrder -> 3][b],
        0 <= b <= 1}}, 0
    ]
  ]
Plot[brightnessweights[b], {b, 0, 1}, PlotRange -> All, ImageSize -> Small]

brightness weight curve

and see how the different exposure regions overlap and combine

LogLinearPlot[
  Evaluate[brightnessweights[b/#] & /@ unifyingexposures],
  {b, 0.0002, 1},
  PlotRange -> All, Filling -> Axis
]

combined brightness regions

We have good overlap, but not too much, and no gaps in brightness levels, so that looks nice.

Next, we'll construct the weight maps by applying the brightness curve to the brightnesses of each image pixel and then normalizing by the sum of all weight maps at each pixel.

GenerateWeightMask = Composition[
    ImageApply[brightnessweights, #] &, First, ColorSeparate[#, "Intensity"] &
  ]
ImageTotal = Fold[ImageAdd, First@#, Rest@#] &

nrimages = Length[inputimages];
brightnessmasks = GenerateWeightMask /@ inputimages
scaledmasks = ImageMultiply[#, 1/nrimages] & /@ brightnessmasks;
masksum = ImageTotal[scaledmasks];
invmasksum = ImageApply[1/(nrimages #) &, masksum];
weightmasks = ImageMultiply[#, invmasksum] & /@ brightnessmasks

brightnessmasks

weightmasks
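
As a quick sanity check (assuming no pixel ends up with zero weight in every image), the normalised weight masks should sum to one at every pixel:

(* this should display as a uniformly white image *)
ImageTotal[weightmasks]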

Now we multiply the input images by the weight masks, adjust for the different exposure levels, and combine everything into the final HDR composite image.

sameexposureimages = Function[{s, i}, ImageApply[s # &, i]] @@@
    Transpose[{unifyingexposures, inputimages}]
hdrimage = ImageTotal[
    ImageMultiply @@@ Transpose[{weightmasks, sameexposureimages}]
  ]

hdrimage

which we can now view at different artificial exposures:

Manipulate[ImageMultiply[hdrimage, \[Alpha]], {{\[Alpha], 1}, 1, 100}]

hdr image with custom exposure

Also we could now start compressing the dynamic range again by applying tone mapping algorithms and the like.

Update: A simple global tone mapping can be achieved by applying #/(# + c) & (for some small constant c) to every pixel intensity, which looks like this:

Manipulate[
  ImageApply[#/(# + \[Alpha]) &, hdrimage], {{\[Alpha], 0.2}, 0.001, 0.2}
]

tonemapped HDR image
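
The effect of this operator on pixel intensities can be visualised by plotting it for a few values of the constant:

(* I/(I + a) boosts shadows and compresses highlights, more strongly for smaller a *)
Plot[{i/(i + 0.02), i/(i + 0.05), i/(i + 0.2)}, {i, 0, 1},
  AxesLabel -> {"input", "output"}]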


Version 9 answer - use built-in functionality

The symbols Image`CreateHDRI and Image`ToneMapHDRI were present in version 8 but didn't seem to do anything. In version 9 there is functioning code behind them. This is all undocumented, and therefore liable to change before it is officially released, but here is what I've managed to dig up.

Image`CreateHDRI

This function takes a list of images at different exposures and composes them into a single high dynamic range image. The basic usage is simply

Image`CreateHDRI[{image1, image2, ...}, options]

The following default options are defined:

Options[Image`CreateHDRI]
(* {"ExposureTimes" -> {}, "EV" -> {},
    "GenerateCameraResponseFunction" -> True, "AlignTranslation" -> False} *)

"ExposureTimes" and "EV" allow you to provide a list of known exposure times or exposure values. If both are left at the default empty list setting, the exposure times will be computed from the images.

"AlignTranslation" can be set to True to align the images prior to combining them. As the name suggests, the alignment is translation only, scale and rotation errors are not handled.

"GenerateCameraResponseFunction" presumably generates a camera response function :-) This takes suboptions, which default to {"Variance" -> 16., "PredefinedResponse" -> "Linear"}. Other settings for the predefined response curve are "Log10" and "Gamma". I'm not sure what the "Variance" setting does, or what happens if "GenerateCameraResponseFunction" is set to False.

Image`ToneMapHDRI

This function takes a single HDR image and applies one of a variety of tone-mapping algorithms, which are specified in a Method option. The default is:

Options[Image`ToneMapHDRI]
(* {Method -> "PhotographicToneReproduction"} *)

The available algorithms are:

algos = Image`HDRImageProcessingDump`$ToneMapAlgos
(* {"AdaptiveLog", "Photoreceptor", "FattalGradientDomain", 
     "PhotographicToneReproduction", "ContrastDomain"} *)

Each algorithm has a set of sub-options, the default settings are:

{#, Image`HDRImageProcessingDump`defaultsuboptions[#]}& /@ algos

(* {{"AdaptiveLog", {"Bias" -> 0.8, "Gamma" -> 1., "Saturation" -> 1.}},
 {"Photoreceptor", {"LuminousIntensity" -> 0., "ChromaticAdaptation" -> 0., "LightAdaptation" -> 1., "OverallContrast" -> 0.}},
 {"FattalGradientDomain", {"Threshold" -> 0.1, "Order" -> 0.8, "ParamX" -> Automatic, "Saturation" -> 0.8}},
 {"PhotographicToneReproduction", {"Key" -> 0.18, "Sharpening" -> 8., "UseLocalOperator" -> False, "NumberOfScales" -> 8, "SmallestScale" -> 1, "LargestScale" -> 43, "Saturation" -> 0.8}},
 {"ContrastDomain", {"ContrastScaleFactor" -> 0.3, "UseContrastEqualization" -> True, "Saturation" -> 0.8}}} *)

Not being an expert in HDR tone-mapping algorithms, I can't really comment on what these parameters control, but it is easy enough to play around with the values and see the results.

Example

Here is an example of processing the Stanford Memorial Church images with the new functionality:

(* load the source images *)
files = Reverse@FileNames["memorial*.png"];
images = ImagePad[Import[#], {{-2, -12}, {-35, -30}}] & /@ files;

(* create the HDR image *)
i = Image`CreateHDRI[images];

(* apply a tone-mapping algorithm *)
Image`ToneMapHDRI[i, Method -> {"FattalGradientDomain", {"Threshold" -> 0.3}}]

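Since the sub-options are undocumented, the parameter range below is a guess, but a Manipulate sketch like this makes it easy to experiment with them:

(* interactively vary the "Bias" sub-option of the "AdaptiveLog" method;
   the 0.1 - 1 range is a guess based on the default value of 0.8 *)
Manipulate[
  Image`ToneMapHDRI[i, Method -> {"AdaptiveLog", {"Bias" -> bias}}],
  {{bias, 0.8}, 0.1, 1}]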