HDR to SDR tone-mapping

I’ve been playing a lot of Flight Simulator lately, and when I acquired a monitor with basic high dynamic range (HDR) capability, I thought it might be fun to try it out. Little did I know it would launch me into a world of image processing and color spaces…

First, what is HDR? And what is SDR? Standard dynamic range images are optimized for a fairly small range between minimum and maximum brightness. The sRGB color space, standard for most computer stuff these days, is specified for a maximum screen brightness of 80 nits (1 nit == 1 candela per square meter) under optimal viewing conditions in a darkened room, though most people’s desktops are set much brighter for daylight conditions.

High dynamic range images can have much higher brightnesses, while (hopefully) still maintaining good detail in darker regions of the image. The common HDR10 pixel format used for HDR video allows for a maximum luminance of 10,000 nits — 125 times the brightness of a standard-calibrated SDR signal! Common displays may be much more limited though — my monitor is rated as DisplayHDR 400, which provides a maximum brightness of just 400 nits (5 times the SDR standard). This is still plenty to show brighter whites and colors, and is actually really nice for Flight Simulator where bright daylight and dark shadows and interiors coexist all the time.

However, now that I’m flying in HDR and taking screenshots of my simulated adventures, how do I share those photos with everyone on a normal monitor, in file formats that social media platforms support?

Naturally, I decided that converting files one-off with a viewer app I found wasn’t good enough, and wrote my own utility I can use for batch processing. ;) Once cleaned up, this could also become useful for Wikipedia to render SDR thumbnails of HDR images (after we confirm which formats we can support without problems).

To illustrate how tone-mapping and the handling of out-of-gamut colors affect the rendering, I’ve taken a particularly dramatic screenshot from an early-morning flight, at sunrise. See also the original file in JPEG XR format.

If we just clip the brighter colors into SDR range, the entire sky is completely blown out:

Or if we drop the exposure a few stops to optimize for the brightest colors, we can’t see anything but the sunrise:

To map the wide range of input into the [0, 1] range, we need some non-linear operator that leaves most of the detail in the low end untouched, then squishes the brighter stuff into the top end with some loss of contrast.

A common HDR to SDR tone-mapping operator is the Reinhard operator, where C is the input value and C_white is the maximum input value to be preserved (it maps exactly to 1.0):

TMO(C) = C × (1 + C / C_white²) / (1 + C)

Reinhard et al., 2002
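
That’s easy enough to express in code. Here’s a minimal sketch in Rust (the function name and the assumption of linear-light f32 values are mine):

```rust
/// Extended Reinhard operator: `c` is a linear-light input value and
/// `white` is the input level that maps exactly to 1.0. Values well
/// below `white` pass through nearly untouched, while brighter values
/// get squished toward 1.0.
fn reinhard(c: f32, white: f32) -> f32 {
    c * (1.0 + c / (white * white)) / (1.0 + c)
}
```

A quick sanity check: `reinhard(white, white)` comes out to exactly 1.0, and for small `c` the result is approximately `c` again, which is the behavior we wanted.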

If you apply this separately to the input Red, Green, and Blue channels, you end up with a result that isn’t displeasing, but it causes a lot of color shifts, since the channels don’t scale at the same rate… in this case, the orange areas of the sky become much more yellow than they should be. There’s also a lot of desaturation of brighter areas, much more than I like personally:
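
In code, the per-channel version is a one-liner on top of the sketch above:

```rust
/// Tone-map R, G, and B independently. Simple, but the three channels
/// compress at different rates, which is what causes the hue shifts
/// (bright oranges drifting toward yellow) and desaturation described
/// above.
fn tonemap_per_channel(rgb: [f32; 3], white: f32) -> [f32; 3] {
    rgb.map(|c| reinhard(c, white))
}
```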

If instead we apply the operator in the luminance domain, we can preserve colors more exactly. However, there’s a big problem: a pixel’s luminance (brightness) may be much lower than the maximum of its components! For instance, a deep orange will have a very high red, a more modest green, and very little blue. When we map the resulting colors into the output, the red clips at maximum before the green does, causing bright sky oranges to shift towards yellow and lose contrast:
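
In sketch form, the luminance-domain version looks something like this; I’m assuming linear Rec. 709/sRGB primaries for the luminance weights (an HDR10 source in Rec. 2020 would need different weights):

```rust
/// Tone-map in the luminance domain: compress the pixel's luminance,
/// then scale all three channels by the same ratio so hue and
/// saturation are preserved. Assumes linear Rec. 709 primaries.
fn tonemap_luminance(rgb: [f32; 3], white: f32) -> [f32; 3] {
    let [r, g, b] = rgb;
    let luma = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    if luma <= 0.0 {
        return rgb;
    }
    let scale = reinhard(luma, white) / luma;
    // The uniform scale can still leave a single channel above 1.0
    // (the deep-orange problem), and naive clipping here is exactly
    // what shifts those bright sky oranges toward yellow.
    rgb.map(|c| (c * scale).min(1.0))
}
```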

One possibility is to map those too-bright colors back into gamut by progressively desaturating them. For both luminance and saturation changes I’m using the Oklab color space, which is similar to CIELAB and is designed to make it easy to scale and transition colors while maintaining their perceptual qualities. If I apply just enough desaturation to keep every pixel’s Red, Green, and Blue elements in gamut, I lose some color in the brightest parts of the image, but it packs the full punch of the brightness of the sunrise:
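
Sketched out, that gamut-mapping step is a binary search for the largest chroma that still fits. This isn’t the exact code from my utility; the Oklab matrices are quoted from Björn Ottosson’s reference implementation, so double-check them before relying on this:

```rust
/// Linear sRGB to Oklab (matrices from the Oklab reference implementation).
fn linear_srgb_to_oklab([r, g, b]: [f32; 3]) -> [f32; 3] {
    let l = (0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b).cbrt();
    let m = (0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b).cbrt();
    let s = (0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b).cbrt();
    [
        0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s,
        1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s,
        0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s,
    ]
}

/// Oklab back to linear sRGB.
fn oklab_to_linear_srgb([lum, a, b]: [f32; 3]) -> [f32; 3] {
    let l = (lum + 0.3963377774 * a + 0.2158037573 * b).powi(3);
    let m = (lum - 0.1055613458 * a - 0.0638541728 * b).powi(3);
    let s = (lum - 0.0894841775 * a - 1.2914855480 * b).powi(3);
    [
        4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s,
        -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,
        -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s,
    ]
}

/// Scale a color's chroma (the a/b axes in Oklab) down just far enough
/// to bring all three RGB components into [0, 1], holding lightness
/// fixed. Assumes luminance has already been tone-mapped into range.
fn desaturate_into_gamut(rgb: [f32; 3]) -> [f32; 3] {
    let in_gamut = |c: [f32; 3]| c.iter().all(|&v| (0.0..=1.0).contains(&v));
    if in_gamut(rgb) {
        return rgb;
    }
    let [l, a, b] = linear_srgb_to_oklab(rgb);
    let (mut lo, mut hi) = (0.0_f32, 1.0_f32);
    for _ in 0..20 {
        // Binary search for the largest chroma scale that stays in gamut.
        let mid = (lo + hi) / 2.0;
        if in_gamut(oklab_to_linear_srgb([l, a * mid, b * mid])) {
            lo = mid;
        } else {
            hi = mid;
        }
    }
    oklab_to_linear_srgb([l, a * lo, b * lo]).map(|v| v.clamp(0.0, 1.0))
}
```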

Which one’s right? There’s no one right answer. But when you’re batch processing you gotta pick a default, and I kinda like this last one. ;) It maintains the luminance data, which is most important to the human visual system, and though it loses the pure color of the sun and immediate area of the sunrise, it keeps the surrounding area much better than my other versions so far.

So what would we need to support these sorts of images on Wikipedia? A few things to consider:

First, actual file formats are important!

  • My screenshots are saved by the NVIDIA game capture tool in JPEG XR (a Microsoft-flavored standard, which may or may not have patent issues, but should be covered by their open-source patent license covenant since they released a codec library for it). If patents aren’t a problem, it’s easy enough to use that library directly or indirectly.
  • I assume HDR can be done in HEIC/HEIF, which is based on HEVC, the codec my NVIDIA tool captures videos in.
  • AVIF is the Alliance for Open Media / Google-flavored variant of HEIF, based on the AV1 codec instead of HEVC. There should be no patent problems from our perspective. I hear there may be browser support in Chrome at least, but I haven’t tested that yet.
  • OpenEXR is a more classic HDR file format for photography and cinema production use. I don’t know its patent status, but it’s implemented by widely used open-source tools.
  • For video, VP9 should be fine and AV1 will work later, but we’ll need more complications in the pipeline to deal with transcoding SDR and HDR variants!

Second, rendering regular SDR thumbnails for browsers that don’t grok HDR natively or don’t know how to tone-map well: we could probably adapt the utility I wrote into a filter that plugs into Thumbor. The code’s written in Rust as a CLI utility and runs cross-platform; it could be adapted to take raw data on stdin/stdout or be called as a library.
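
For illustration, a raw stdin/stdout mode could be as simple as this sketch. The wire format (headerless little-endian f32 RGB triples in, 8-bit samples out) and the default white level are invented for the example; a real Thumbor filter would also need dimensions and color-space metadata:

```rust
use std::io::{self, Read, Write};

/// Hypothetical raw-filter mode: read linear-light RGB f32 samples on
/// stdin, tone-map them (reusing `tonemap_luminance` from the earlier
/// sketch), and write 8-bit samples on stdout.
fn main() -> io::Result<()> {
    let mut input = Vec::new();
    io::stdin().read_to_end(&mut input)?;

    let samples: Vec<f32> = input
        .chunks_exact(4)
        .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
        .collect();

    let mut output = Vec::with_capacity(samples.len());
    for px in samples.chunks_exact(3) {
        // White level of 4.0 SDR units is an arbitrary default here.
        let mapped = tonemap_luminance([px[0], px[1], px[2]], 4.0);
        for v in mapped {
            // Cheap gamma-2.2 approximation standing in for the real
            // sRGB transfer curve.
            output.push((v.max(0.0).powf(1.0 / 2.2) * 255.0).round() as u8);
        }
    }
    io::stdout().write_all(&output)
}
```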

Third, interactive browser display. Whether on an SDR or HDR monitor it would often be nice to be able to adjust exposure in the viewer, which necessitates being able to do the tone-mapping in real-time; this would be best done in WebGL with a shader, rather than something silly like compiling my Rust code to WebAssembly. :)

And then that would have to get integrated into MediaViewer, with suitable mobile and desktop interfaces if necessary.

If we actually want to display HDR thumbnails inline — well that’s another fun thing! AVIF would be the main target, I think, but I don’t know what the status of support is in browsers yet (both for the format, and for HDR specifically).

We might also want the thumbnail and the initial display on zoom to be able to set an exposure multiplier, or even specify whether to tone-map or clip the range, via image parameters in the wiki page.

All fun possibilities that need to be decided on and taken into account some time. :)