How do we assess color accuracy between different images? Unfortunately, the commonly used RGB color space is not perceptually uniform, which means that equal changes in RGB values do not correspond to equal perceived changes in color. This inconsistency can lead to inaccuracies in color reproduction across various devices.
The Lab color space, on the other hand, is designed to be perceptually uniform. It separates lightness (L*) from color information (a* and b*), making it ideal for precise color manipulation and comparison. By converting images from RGB to Lab, we can measure color differences using Delta E (ΔE), ensuring consistent and accurate color reproduction in digital imaging workflows. Delta E quantifies the difference between two colors, allowing for objective assessment of color accuracy.
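To make this concrete, here is a minimal sketch of the simplest Delta E formula, CIE76, which is just the Euclidean distance between two colors in Lab space (the function name and the example values are my own, chosen for illustration):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance between two Lab colors."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Two similar reds in Lab coordinates (illustrative values)
color_a = (53.0, 80.0, 67.0)
color_b = (52.0, 78.0, 66.0)
print(round(delta_e_76(color_a, color_b), 2))  # → 2.45
```

As a rough rule of thumb, a ΔE of about 1 is near the threshold of what a trained observer can perceive; later refinements such as CIE94 and CIEDE2000 weight the three axes differently to track perception even more closely.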
The Lab color space consists of three components:
1. **L\* (Lightness)**: Represents the lightness of the color, ranging from 0 (black) to 100 (white).
2. **a\* (Green-Red Axis)**: Represents the color position between green and red. Negative values indicate green, and positive values indicate red.
3. **b\* (Blue-Yellow Axis)**: Represents the color position between blue and yellow. Negative values indicate blue, and positive values indicate yellow.
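A toy illustration of how these three axes read in practice (the helper function and sample value are mine, not part of any standard):

```python
def describe_lab(L, a, b):
    """Give a rough verbal reading of a Lab color's three components."""
    lightness = "dark" if L < 50 else "light"
    green_red = "green" if a < 0 else "red"
    blue_yellow = "blue" if b < 0 else "yellow"
    return f"{lightness}, leaning {green_red} and {blue_yellow}"

print(describe_lab(60.0, -40.0, 30.0))  # → light, leaning green and yellow
```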
Please see below for some graphical representations of this color space:

Figure: CIELAB color space representation. Source: Linshang Technology
Figure: CIELAB color space top view. Source: Wikipedia
Ok great, now we have a perceptually uniform color space. But how do we convert the RGB colors from an image taken with a camera to the Lab color space? Let's find out:
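As a preview, the standard route goes sRGB → linear RGB → XYZ → Lab. Below is a minimal single-pixel sketch under the usual assumptions (8-bit sRGB input, D65 white point); the matrix coefficients and constants are the commonly published sRGB/CIE values, and the function name is my own:

```python
def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIELAB (D65 white point)."""
    # 1. Undo the sRGB gamma curve to get linear RGB in [0, 1]
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # 2. Linear RGB -> XYZ (sRGB matrix, D65 illuminant)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl

    # 3. XYZ -> Lab, normalized by the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

print(srgb_to_lab(255, 0, 0))  # pure sRGB red, roughly L*=53, a*=80, b*=67
```

In a real workflow you would vectorize this over a whole image (or use a library such as scikit-image's `rgb2lab`), but the per-pixel math is exactly this three-step pipeline.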