Display colour calibration (and eyes and things)

I was recently linked to an interesting colour sorting and analysis test by a friend of mine. The basic premise is simple: sort a few rows of solid colour tiles into hue order, blending between two endpoint colours. Nice and simple in theory, and nice and simple in practice too: start at any block, scan to either side until you find a tile of similar hue, and move it to the left or right of your starting block depending on how close its hue is to either endpoint. Repeat, making as many passes across the row as you think you need until you’re done.
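If you wanted to cheat (or just sanity-check your own ordering), the procedure amounts to sorting tiles by hue between the two endpoints. Here’s a minimal sketch in Python using the standard-library `colorsys` module — the function names and the example colours are mine, not from the test, and it assumes each row covers a hue arc that doesn’t wrap around past red:

```python
import colorsys


def hue(rgb):
    """Hue in [0, 1) for an (r, g, b) triple with components in [0, 1]."""
    r, g, b = rgb
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h


def sort_row(tiles, start, end):
    """Order a row of tiles from the hue of `start` to the hue of `end`.

    Assumes the row's hue arc doesn't wrap past red (hue 0), which holds
    for any single row blending between two nearby endpoint colours.
    """
    # If the start endpoint has the higher hue, the row runs "backwards"
    # around the hue circle, so sort descending instead of ascending.
    return sorted(tiles, key=hue, reverse=hue(start) > hue(end))


# A shuffled green-to-blue row: azure, green, cyan.
row = [(0.0, 0.5, 1.0), (0.0, 1.0, 0.0), (0.0, 1.0, 1.0)]
print(sort_row(row, start=(0.0, 1.0, 0.2), end=(0.1, 0.2, 1.0)))
# → [(0.0, 1.0, 0.0), (0.0, 1.0, 1.0), (0.0, 0.5, 1.0)]
```

Of course, the whole point of the test is that your eyes and display do the sorting, not `colorsys` — which is exactly where things get interesting.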

A lower score is better if you’re submitting results, and I scored 29, compared to my friend’s 23. I was pretty pleased with it until a friend’s fiancée hit the jackpot with a 0, and another friend — one with a great display setup, which I’ll get to shortly — managed a 12 without really trying.

Now, I could do with an eye test as much as the next person, but my focal ability and possible eye strain have little to do with it. My result is far more a function of the ambient lighting of the space I’m in, the colour calibration of the display I took the test with and how well it’s able to cover the gamut of the colour space it’s set to, along with the graphics chip pushing the pixels, my web browser’s support for colour spaces and many other system-level things. The detail you could go into to extract the pure science behind it is vast and deeply complex.

Yes, the human visual system gets involved after the light from the display hits your eye(s) and that whole system goes to work. That said, I believe that, on balance anyway, the system and display you use to take the test is more important than the physical behaviour of your visual system and how well that’s able to perform.

That brings me to the matter of the display subsystems connected to and driving the myriad screens in the modern person’s (person like me anyway) life. Take me as an example, with — counting as I type this… — seven displays typically looked at during a day’s work and play. Only one is approaching being professionally calibrated, and it’s also the only one where I try and manage the display subsystem to output the best quality pixels. The other six, while being fairly high quality IPS LCDs in the main (really so in the iPhone 4’s case), are unmanaged, uncalibrated and completely different from all the others I gaze at. Some of them can’t be managed at all; iPhone 4, I look at you again.

The two displays connected to my workstation at work are the same model from the same vendor, and a glance at the manufacturing dates shows they were built close together, although not quite as part of the same panel production run. Yet connected to the same graphics board with the same cables, at the same brightness, colour temperature and other settings, they look noticeably different and produce quite different images.

If one model of display from one vendor, where the examples being compared are quite close in manufacturing date, can’t manage to produce a similar, consistent display, what hope does anyone have of unifying the output across all of their other displays, never mind those belonging to other people that they might have cause to see? None, obviously. In all the years I’ve been interested in graphics, not a single graphics professional I know has shown any interest in calibrating and getting consistent output from their display or displays. Not one, including me. It’s been mooted a few times on Beyond3D, but there’s never been a push to unify calibration or even basic output.

I know game developers that care about the minute details of the colour space they’re working in programmatically, but not one of them operates a properly calibrated display in a well set-up working environment, to my knowledge at least. The guy that gets closest wears glasses, which adds another level of problems.

Why the shortfall? The knowledge is there to properly look after the quality of output of a display (where possible), but nobody puts it into practice. The guy I know that cares the most about this stuff does active research into the response of the displays he buys but, as my two work displays show, the reviewed model has a fairly high chance of producing different quality pixels compared to the retail version that he (or any of you) buys, for better or for worse. He also can’t unify his displays or viewing environments, where control over the ambient lighting is a key factor in colour perception.

The solution? I have no practical idea. I’ve been meaning to write an article on display calibration at Beyond3D for years, but the problems to overcome in getting it right are immense, and professional calibration in your working environment is expensive and prone to error, possibly even just as much error as trying to do it yourself unprofessionally.

And that’s despite the obvious question of whether you should even give a shit. Does it even matter?

I’ll still get my eyes tested tomorrow, because 29 is slightly embarrassing given high quality pixels are the currency I trade in day-to-day, plus I thought my eyes were in good shape and that the display I’m using right now — which I also took the test on — was well set up. Maybe my environment is at fault. Maybe a zero is a fluke.

I wish I had the answers.