Blue light filtering
There is the question you asked, and the question you might have asked... and they are slightly different. I will answer both (I had answered the second before you clarified that you meant to ask the first; it seems a shame to delete that answer).
First, the question you asked (about electronically controlling the display properties). There are (at least) three different ways to control the color of a pixel electronically:

- set an RGB value from an application;
- adjust the display properties (gamma, color temperature), on systems that allow it;
- bypass the operating system and go straight to the video driver, where you control the intensity of the "red", "green" and "blue" channels of the display directly.

Only if you completely turn off the blue and green channels can you be sure you have output from the red channel alone; with the first two methods, intermediate processing steps still control the channels. In other words, instructing the OS to display a "red" color doesn't guarantee that the blue sub-pixel won't be turned on ever so slightly, because someone may have created a color-conversion table that mixes in a little bit of blue to get the red "just right" (a sketch of this effect follows). So only control at the video-driver level guarantees that you get light only from the pixels you want. But even that doesn't guarantee that you have no blue light from your display... and that's the crux of my second answer. Read on.
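To make the color-conversion point concrete, here is a minimal sketch in Python. The 3x3 correction matrix is invented for illustration; real values would come from the device's calibration profile:

```python
import numpy as np

# A hypothetical per-device color-correction matrix, as an OS/driver
# color-management step might apply.  The values are invented for
# illustration; real matrices come from the display's calibration profile.
CORRECTION = np.array([
    [0.96, 0.03, 0.01],
    [0.02, 0.95, 0.03],
    [0.02, 0.04, 0.94],
])

def driven_channels(rgb):
    """RGB value the app requested -> channel drive the panel receives."""
    return CORRECTION @ np.asarray(rgb, dtype=float)

# The app asks for "pure red"...
print(driven_channels([1.0, 0.0, 0.0]))
# -> [0.96 0.02 0.02]: the blue sub-pixel is still driven slightly,
# even though the requested color contains no blue at all.
```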
To appreciate the difference between electronic and physical control of the output spectrum (that is, using a physical filter that lies on top of your screen and absorbs some wavelengths), you need to know the spectral properties of the phosphors used in your screen. This differs from one phone to another.
For example, from this page I extracted a plot of the backlight spectra for three different models of phone (with different display technologies).
Next, you need to know the properties of the filters (inside the display, per pixel) used to turn this into the RGB channels for your particular phone. These are bandpass filters, though not terribly narrow ones. The resulting output is the light that reaches your eyes. Again, I found a few spectra (compiled from this page) that are representative, but not an exhaustive set of all displays.
If your application turns off the green and blue components, leaving only red, you get a well-shifted spectrum with very little blue in it (although not zero: there is a "bump" at 450 nm). If you only suppress the blue channel, a significant blue component can remain, depending on the display: on the iPhone 4s, for example, the filter used on the green pixels lets through a significant blue component (circled in red on the "green" spectrum).
On the other hand, if you use an external (physical) filter overlay, the properties of the transmitted light are no longer (just) a function of the display: the spectrum that reaches your eyes is the display's emission multiplied, wavelength by wavelength, by the filter's transmission curve.
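A minimal sketch of that multiplication, using synthetic Gaussian curves in place of measured spectra (the 450 nm "bump", the filter's cut-on wavelength, and all the other numbers are invented for illustration):

```python
import numpy as np

wl = np.arange(380, 781, 1.0)                      # wavelength grid, nm

def gaussian(center, width, height=1.0):
    return height * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Crude model of a red channel with a small residual blue bump near 450 nm,
# plus a physical long-pass filter with a cut-on around 530 nm.
red_channel = gaussian(620, 20) + gaussian(450, 12, height=0.05)
longpass    = 1.0 / (1.0 + np.exp(-(wl - 530) / 8))

def blue_fraction(spectrum, cutoff=500):
    """Fraction of total emitted power below `cutoff` nm."""
    blue = np.trapz(spectrum[wl < cutoff], wl[wl < cutoff])
    return blue / np.trapz(spectrum, wl)

print(f"electronic only: {blue_fraction(red_channel):.3%} of power below 500 nm")
print(f"with filter:     {blue_fraction(red_channel * longpass):.3%}")
# The transmitted spectrum is emission * transmission, so the physical filter
# suppresses the 450 nm bump regardless of what the display emits.
```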
I found a site that describes a lot of physical filtering options - many of them are glasses with different filtering properties. A particularly effective one appears to be the pair pictured there.
It is clear that a physical filter can do better than video (electronic) control alone, because it cuts the blue component independently of the characteristics of the display itself. Wearing glasses such as the ones shown may be one way to reduce your blue-light exposure later in the day. It's still a bad idea to be on your phone until you go to bed...
I make f.lux, and I've chosen not to ship an overlay app for Android because I think it is a poor approach. It's hard to watch people say that these approaches are the "same" in any way, but I will try to bring numbers to the discussion, so maybe I will convince you.
Measuring "melanopic response per lux" answers the question: at the same visual brightness level, what's the impact to the melanopic action spectrum? Here's what you see on a Nexus 4 (an LCD panel):
- Nexus 4 (no filter): 0.93 m-lux/lux
- Nexus 4 (Twilight at 1500K): 0.83 m-lux/lux
- Nexus 4 (f.lux at 1900K): 0.31 m-lux/lux
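Here is roughly how such a ratio is computed: weight the same emission spectrum by the melanopic action spectrum and by the photopic luminosity function V(λ), then take the ratio. This sketch uses Gaussian stand-ins for both curves; real work would use the tabulated CIE S 026 functions, so only the relative comparison is meaningful here:

```python
import numpy as np

# Gaussian stand-ins for the CIE curves (V(lambda) peaks near 555 nm, the
# melanopic action spectrum near 490 nm); real work uses tabulated data.
wl = np.arange(380, 781, 1.0)                         # wavelength grid, nm
V_photopic  = np.exp(-0.5 * ((wl - 555) / 45) ** 2)   # ~ luminosity function
s_melanopic = np.exp(-0.5 * ((wl - 490) / 45) ** 2)   # ~ melanopic sensitivity

def melanopic_per_lux(spectrum):
    """Ratio of melanopically weighted power to photopically weighted power."""
    mel = np.trapz(spectrum * s_melanopic, wl)
    pho = np.trapz(spectrum * V_photopic, wl)
    return mel / pho

warm = np.exp(-0.5 * ((wl - 620) / 40) ** 2)          # reddish emission
cool = warm + np.exp(-0.5 * ((wl - 460) / 40) ** 2)   # same, plus a blue peak
print(f"warm: {melanopic_per_lux(warm):.2f}")          # low ratio
print(f"cool: {melanopic_per_lux(cool):.2f}")          # much higher ratio
# Moving power out of the 450-500 nm band is what pulls the m-lux/lux
# numbers above down, at the same visual brightness.
```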
Measuring color temperature on a white page (a way to estimate this from chromaticity is sketched after the numbers):
- Nexus 4 (no filter): 7110K
- Nexus 4 (Twilight at 1500K): 5072K
- Nexus 4 (f.lux at 1900K): 2122K
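For reference, correlated color temperature can be estimated from CIE 1931 (x, y) chromaticity with McCamy's well-known approximation; a measured spectrum would first be reduced to (x, y) using the CIE color-matching functions. A minimal version:

```python
# McCamy's approximation: correlated color temperature (CCT) in kelvin from
# CIE 1931 (x, y) chromaticity.  Good to within a few kelvin near the
# blackbody locus for typical display whites.
def mccamy_cct(x: float, y: float) -> float:
    n = (x - 0.3320) / (y - 0.1858)
    return -449.0 * n**3 + 3525.0 * n**2 - 6823.3 * n + 5520.33

print(mccamy_cct(0.3127, 0.3290))   # D65 white point -> ~6505 K
print(mccamy_cct(0.4476, 0.4074))   # Illuminant A -> ~2856 K
```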
For darker pixels, Twilight and other overlays make things brighter, not dimmer: an alpha-blended overlay adds its tint on top of every pixel, so it lifts the black level instead of lowering it. The more important consequence is that overlays dramatically reduce contrast (see the sketch after these measurements). Again, some measurements:
- Nexus 4 (native): 937.5:1
- f.lux 2700K: 425:1
- f.lux 1900K: 365:1
- Twilight 1500K (default): 70:1
- Twilight Bed Reading 1000K: 24:1
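A toy model of why this happens, with invented tint/alpha/gain settings and an assumed native black level: an overlay is alpha-composited over the content, while a driver-level transform only scales each channel:

```python
# Values are illustrative, in linear light on a 0..1 scale.
PANEL_BLACK = 0.001          # assumed native black leakage -> ~1000:1 panel

def overlay(pixel, tint, alpha):
    """Source-over blend of a translucent tint layer above the content."""
    return (1 - alpha) * pixel + alpha * tint

def driver_scale(pixel, gain):
    """Driver/gamma-ramp style transform: purely multiplicative."""
    return gain * pixel

def contrast(white, black):
    return (white + PANEL_BLACK) / (black + PANEL_BLACK)

tint, alpha, gain = 0.35, 0.20, 0.55   # invented "warm" settings
print(f"native:  {contrast(1.0, 0.0):>7.1f}:1")
print(f"driver:  {contrast(driver_scale(1.0, gain), driver_scale(0.0, gain)):>7.1f}:1")
print(f"overlay: {contrast(overlay(1.0, tint, alpha), overlay(0.0, tint, alpha)):>7.1f}:1")
# The driver transform maps black to black, so contrast survives; the overlay
# lifts black to alpha * tint, so the ratio collapses.
```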
Our spectral data for these measurements comes from a well-calibrated reference spectroradiometer and is Creative Commons licensed. You can see it here:
https://fluxometer.com/rainbow/#!id=Nexus%204%20Apps/6500k-nexus4
Finally (and I hope the numbers explain this), my opinion is that transparent overlays are mostly blue-light theater, and that dimming a display is much, much better when you can't do a real color transform. I do not want to make screens lower-contrast, because I think it makes them harder to read. Otherwise, I'd have shipped this!
My attempt at an answer is based on my non-specialized knowledge of programming and of how computers work, because the links you provide don't seem to give details of how the code works. I'm a physicist with a non-academic, conceptual and practical understanding of computers.
Whatever layers your software might have, the display will react the same way to the same input signal. At some point a final digital dataset (DD) is created and passed to whichever module converts it into electronic signals. So our comparison only cares about the differences before this moment; after it, equal DDs should yield an equal color-and-brightness response from the display.
If the software works at the video-driver level, the above means that the driver builds the DD from some input describing the screen content, using embedded code that takes daylight information into account to render the final DD.
If the software works by "superimposing a filter", the above means that the presence of the filter is encoded in the input describing the screen content that is sent to the video driver, and the video driver builds the DD without caring about daylight information.
Whatever the specifics of either case, the final result is a DD that tells each pixel of the display its RGB content and brightness, and the pixels respond accordingly. A toy version of both pipelines is sketched below.
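To make this concrete, here is a toy version of the two pipelines (all names and numbers are invented): in both cases the display only ever sees the final DD; the difference is where the daylight-dependent transform is applied:

```python
from typing import List

Pixel = List[float]  # [r, g, b] in 0..1

def driver_level(screen: List[Pixel], channel_gains: Pixel) -> List[Pixel]:
    """Transform applied inside the video driver, after the app's output."""
    return [[c * g for c, g in zip(px, channel_gains)] for px in screen]

def overlay_level(screen: List[Pixel], tint: Pixel, alpha: float) -> List[Pixel]:
    """Transform applied by compositing a filter window over the content."""
    return [[(1 - alpha) * c + alpha * t for c, t in zip(px, tint)]
            for px in screen]

content = [[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]       # a white and a black pixel
print(driver_level(content, [1.0, 0.7, 0.4]))       # DD handed to the panel
print(overlay_level(content, [1.0, 0.6, 0.2], 0.3))
# Equal DDs would drive the panel identically; the practical question is
# whether both methods can actually reach the same DD for every input image.
```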
In either case, the proper test of the method's effectiveness is to measure and characterise the light emission of the screen (its color content and brightness). A successful result is an emission that complies with the color content and brightness found adequate by studies of the human eye/brain response.
So conceptually it doesn't matter how you build the dataset, for two reasons:
- Both cases should be able to produce the desired screen emission profile: theoretically there should be a set of parameters for each of them such that the two methods give equal emission results, whatever those results need to be.
- If they are to protect your sleep routine, as they promise, they must meet the same requirements set by optical/brain studies, by producing similar screen emission/perception results.