I've been working a lot on colours (or "color") in CSS, for csskit's minifier. This gives me the unfortunate burden of now having Opinions™ about colours. I also built a minifier test suite in the hopes that the ecosystem can get smarter. The minifier tests are not a vanity project: csskit currently has the worst pass rate and fails in some rather bad ways.
During this work, trying to minify oklch colours, I wondered "just how precise
is precise enough?". Is a color like oklch(0.659432 0.304219 234.75238)
needlessly precise? Spoiler alert: yes. I contend that you almost never need
more than 3 decimal places. For oklch and oklab that's a safe ceiling, and for
their less-than-ok variants (lab & lch) you can get away with even less. Writing
more is just wasting bytes.
So here's the TL;DR:
When writing colors: 3dp is enough. If your colour picker hands you
oklch(0.659432 0.304219 234.75238), round it tooklch(.659 .304 234.752)and move on. Virtually no one will see the difference in normal viewing conditions, and the maths hold up even if you're chaining colours throughcolor-mix()or relative colour syntax. The exceptions to this are so edge case they're irrelevant.Or don't. I'm not your dad. Minifiers should handle this for you instead.
Speaking of: minifiers (or try-hard developers) can be even more aggressive.
`lab()` and `lch()` operate on much larger scales, so they only need 1dp. sRGB-specific notations like `rgb()` & `hsl()`, or units like degrees, can be 0dp. The details are below.
Does this matter? I mean, not really. A few extra digits is not going to harm anyone, but also if you're spending any significant time tweaking colour values by hand then anything beyond 2 or 3 decimal places is just a waste of time. If you're writing a minifier, like I am, then this stuff probably really matters to you.
The rest of this post is going to be me justifying that claim. If you trust me, you can stop reading now. If you don't, or if you want to hang around and play with some fun widgets, and hopefully learn something, then let's get into it!
How do you tell if two colours look the same?
First we need a way to measure whether two colours are actually different. Luckily the Europeans have been at it yet again. The International Commission on Illumination - CIE - inventors of the LAB colour space - came up with a fancy formula for figuring this out: Delta-E, shortened to dE, or if you like fancy Unicode letters, ΔE. I'm not typing Delta-E all the time, and I am absolutely not copy-pasting Δ symbols everywhere, so it's dE from here on out.
You might see for example Delta-E CIE76 (often shortened to dE76) in older
literature. Its updated sibling: CIE2000 (shortened as dE2000 or dE00) is
the modern alternative that fixes issues with the first iteration. csskit
uses dE00 to compute distance but calls it just delta_e because dE00
is a terrible name for a method.
At its core this formula gives you a single number: how far apart two colours look. 0.0 means identical, 100.0 means you're comparing black and white. The magic number to remember is the "Just Noticeable Difference" (JND). For dE00, JND is around 2.0. Below that, people struggle to tell two colours apart. Below 1.0, the average person can't.
Worth being precise about what JND actually means. It is a 50% detection threshold: the difference detected on half of trials, under controlled conditions. Not "always noticeable". Not "barely noticeable". Half the time, average subject, in a lab. The tolerance data behind CIE94 came from automotive paint matching - trained observers comparing lacquered body panels under D65 illumination, judging acceptability for production. The dE00 formula adjusted those numbers further. So the "2.0 JND" is the residue of a very specific industrial experiment, not a universal law of vision.
Individual sensitivity also varies enormously. Some colour-deficient people cannot distinguish colours several dE00 apart. A small percentage have tetrachromatic vision and can distinguish colours that appear identical to most. And that's before considering the display itself - many screens cannot reproduce 100% of sRGB, while others cover DCI-P3 or wider. A colour difference that is theoretically above JND may be invisible on a poorly calibrated budget panel, and perfectly clear on a wide-gamut studio display. The JND is a useful engineering target for typical conditions, not a guarantee about any specific person on any specific screen.
Oklab and Oklch have their own flavour called dEOk. Same idea, but it plots the colours in Oklab space instead of CIE Lab. Because Oklab is "perceptually uniform" (equal distances actually correspond to equal perceptual differences, unlike CIE Lab which is... less okay about that), the numbers come out quite different. Also due to scaling (0-1 ranges vs 0-100) the dEOk numbers will be 1/100th of dE00's. So dEOk's JND is 0.02, not 2.0.
So if you want to act like a colour expert here are the things to remember:
- We use formulas to check if colours are "different", these are dE00 and dEOk.
- We use jargon like JND to determine "how different" colours are.
- dE00's JND is 2.0, but individual sensitivity varies; treat this as an average, not a ceiling.
- dEOk's JND is 0.02. It'll have different numbers due to being "perceptually uniform".
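dEOk is also easy to compute yourself: it's plain Euclidean distance in Oklab. Here's a minimal Python sketch (my own illustration, not csskit's implementation) that converts oklch to rectangular oklab coordinates and measures the distance:

```python
import math

def oklch_to_oklab(L, C, h):
    """Convert oklch (hue in degrees) to rectangular oklab coordinates."""
    hr = math.radians(h)
    return (L, C * math.cos(hr), C * math.sin(hr))

def de_ok(c1, c2):
    """dEOk: plain Euclidean distance between two colours in Oklab."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(c1, c2)))

# Two greens differing only in the 4th decimal place of chroma:
a = oklch_to_oklab(0.659, 0.3042, 135.0)
b = oklch_to_oklab(0.659, 0.3046, 135.0)
print(de_ok(a, b))  # ~0.0004, far below the 0.02 JND
```

Because oklab is perceptually uniform, this one-liner distance is the whole formula - no dE00-style correction terms needed.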
Below is an interactive widget showing both colour spaces side by side. Drag the A & B nodes around to see how distance is calculated. Try the blue area and notice how the CIE Lab node moves further out, while the green area looks much more similar in both spaces. Drag the lightness slider and watch the Lab nodes shift while Oklab stays stable: perceptual uniformity!
[Interactive widget: side-by-side CIE Lab and OkLab pickers with draggable A & B nodes; a table below reports dE00, dEOk, and the CSS value for each node.]
As you drag the points around, you'll notice that dEOk and dE00 increase the further apart the two nodes get. That's what this is about - measuring that distance. They can be the same shade (e.g. both green) but far apart, and you'll get a large dE. The closer you drag the nodes together, the smaller that number becomes as the colours become imperceptibly similar. Note how dE00 and dEOk sometimes come up with very different numbers, thanks to how perceptual distance varies across the chart.
How sensitive is your own eye?
That's a lot to take in. Let's pause for a second and enjoy a nice game. Play a soothing round of What's My JND and get a feel for it all yourself. Once you're done also try Hard mode. I'll be here when you're done...
You're back? I hope you beat my score - 0.0028. Much better than the claimed
0.02 JND, but that's because this is a game, not lab conditions. If I'm
honest I was leaning and moving my head around (come on, be honest, you did this
too didn't you), so it was more my best guess than a standardised test.
Here's how everyone else did:
What you're looking at is not really a JND distribution. The game shows two swatches side by side with unlimited time - the opposite of sequential, time-limited, naive-subject conditions that JND studies use. It measures something closer to a "best noticeable difference": the smallest gap detectable under ideal conditions, by a motivated observer (you, a developer reading a post about colour precision) with a direct side-by-side comparison. JND is a 50% detection threshold under typical conditions; this is more like a 90%+ threshold under optimal ones. Different questions.
With that said, the distribution is still interesting. 98% of players beat 0.02. The median is around 0.004 - five times better than the published JND. The histogram spans two orders of magnitude, from 0.0003 up to 0.2. That spread reflects genuine variation in colour perception: screen calibration, ambient light, fatigue, colour deficiency, and which part of your retina happens to be doing the work.
Hard mode is even more interesting. Easy mode shows two large swatches side by side - the colours sit directly adjacent, so your visual system can use the transition between them as an extra cue. Hard mode removes that cue entirely: nine squares in a 3×3 grid with gaps between them, one square a different colour, and no gradient transition to lean on. Your brain has to hold the colour in short-term memory across the gap and compare. Much closer to how you'd actually spot a colour being wrong in a design.
The distribution shifts noticeably. Hard mode has a floor of 0.002 - the game doesn't test below that, because at that level 8-bit sRGB can't reliably produce distinct hex colours. 12% of hard mode players hit that floor bucket (0.002-0.004), compared to 3.3% of easy mode players reaching their floor of 0.0003-0.0008. The hard mode median is around 0.010, roughly twice the easy mode median of 0.004. Removing the adjacency cue costs about half a decimal place of sensitivity.
The top players tell a similar story. Easy mode top 10% scored ≤ 0.0023; hard mode top 10% scored ≤ 0.004. The sharpest hard mode players are about as sensitive as the average easy mode player. That gap is the gradient cue, doing a lot of perceptual heavy lifting you were not even aware of.
These scores give us some real data on how far we can push rounding of colour values. Even the sharpest eyes find a dEOk of ~0.00N a challenge, and the medians are more like ~0.0N; the 3dp worst case of 0.0005 is twenty times below that typical threshold.
Remember your scores though - you can refer to them throughout the rest of this article.
So what can we do with this?
Right, time to actually prove something. Armed with dE we can model what happens when you chop decimal places off a colour. Below is another picker plotting the colour at full precision (the interactive marker), 3dp (the white ◆), 2dp (teal ▲), 1dp (yellow ■) and integer (0dp, red ●).
Drag the point around and watch how far each marker drifts from the original. The table below shows the dE00 and dEOk values - green means invisible, red means you've gone too far.
[Interactive widget: picker plotting the full-precision colour plus its 3dp (◆), 2dp (▲), 1dp (■), and 0dp (●) roundings; a table reports dEOk, dE00, and the CSS value for each.]
As you drag the marker node around you can see 0dp (red ●) seems stuck in the middle, at either white or black depending on the Lightness slider value. As you drag out of gamut the chroma goes above 0.5 and rounds to 1, popping it immediately out of gamut also. So we can conclude that rounding to 0dp would be a terrible idea.
1dp (yellow ■) is easily perceived as a different colour. It shifts too close to the centre, making colours visibly lighter or darker depending on the lightness: rounding effectively drags the given colour along a straight line toward the centre of the chart. With dE00 values often climbing over 5.0 and as high as 10.0, we know it would be a bad idea to round all channels to 1dp and expect the same colour.
The 2dp marker (teal ▲) is within the "imperceptible dE00 range" for most values - (remember ~0.00N was our gamers' best scores). If you try hard you can find marginal territory for high chroma colours with unlucky rounding. That's because 2dp is right on the perceptual limit for static colours. 2dp shows promise.
It's unlikely you'll ever see the 3dp (white ◆) marker on the chart, as it's almost always overlapped by the 2dp marker. dE00 for 3dp never goes beyond 0.08 (way below the 2.0 JND value), dEOk values cap out at around 0.004 (again, below the JND of 0.02).
Okay so 2 decimal places is fine?
Well, not so fast. I thought this too, but 2dp is right on the perceptual limit,
which means it only works for static colours. As soon as you start doing colour
maths - building palettes, chaining operations - rounding errors can accumulate.
In the nastiest cases, repeatedly scaling chroma (by, say, 0.9) to desaturate step
by step, the 2dp-rounded value can get "stuck". This isn't hypothetical:
openswatch builds palette ramps by chaining oklch
values through calc() across 12 steps, adjusting lightness and chroma at each
one. Using 2dp across vars or calc() lets small discrepancies compound, and the
accumulated error causes visible colour drift.
Try it:
[Interactive widget: chroma-scaling chain with controls for the scale factor (C *=) and step count; the table reports final-step dEOk and dE00 for full precision, 3dp (◆), and 2dp (▲).]
This mess of markers plots 2dp (teal ▲) and 3dp
(white ◆) markers in a chain. With no error accumulation we'd expect to see both
sets of markers overlapping perfectly. Instead they get further apart. The table
below the picker shows the final step, and the resulting dE. At 3dp, dE remains well below standard JND and undetectable under normal
viewing. At 2dp, depending on the colour, there can be real visible differences;
the value gets stuck: 0.05 * 0.9 = 0.045 rounds back to 0.05, and the
chroma stops shrinking. After 20 steps the error crosses JND. Even at hundreds
of iterations 3dp never exceeds 0.001 dEOk. That is below standard JND, and
below the typical game score of 0.004. Only the sharpest eyes could detect it,
and only in a direct side-by-side.
Why does this happen? The answer lies in how much perceptual difference a single rounding step introduces.
For oklch, each component's sensitivity looks like this:
| Component | Range | Std JND limit | Typical limit | Recommended | Why |
|---|---|---|---|---|---|
| L (lightness) | 0 - 1 | 2dp | 3dp | 3dp | Typical sensitivity makes 2dp perceptible |
| C (chroma) | 0 - ~0.4 | 2dp | 3dp | 3dp | Same as L |
| h (hue) | 0 - 360 | 0dp | 1dp | 1dp | Even 1 degree is sub-JND at typical chroma |
This comes from how oklch maps to oklab. Changes in L or C directly produce
that change in dE. So +0.001 in either component is 0.001 dEOk.
At the published JND of 0.02, the third decimal place is twenty times below the threshold. But the game data suggests typical detection sits around 0.004 dEOk. Under those conditions a 2dp rounding error of 0.005 already sits above what the median player can detect in a direct comparison. Even discounting the game as "best possible conditions" and applying a generous 10x real-world penalty, 2dp is right on the edge for a typical observer. The original "2dp is the perceptual limit" claim was calibrated against 0.02 - which turns out to be the 99th percentile of game scores, not the median.
Hue is less sensitive though. A change of h degrees produces a dE of roughly
C * h * pi/180. Even at high chroma (C=0.3), one whole degree of hue gives you
a dEOk of only about 0.005. At typical sensitivity that is marginal; at standard
JND it is invisible. 1dp (0.1 degree worst-case error) keeps hue safely below
even sharp-eyed detection at any chroma level.
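Both sensitivities above are easy to verify numerically. A small sketch (reusing the oklch-to-oklab conversion and Euclidean dEOk from earlier; my own illustration):

```python
import math

def oklch_to_oklab(L, C, h):
    hr = math.radians(h)
    return (L, C * math.cos(hr), C * math.sin(hr))

def de_ok(c1, c2):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(c1, c2)))

base = oklch_to_oklab(0.66, 0.30, 135.0)

# +0.001 in L or C is exactly 0.001 dEOk:
dL = de_ok(base, oklch_to_oklab(0.661, 0.30, 135.0))
dC = de_ok(base, oklch_to_oklab(0.66, 0.301, 135.0))

# ...while a whole degree of hue at C = 0.3 is only C * 1 * pi/180, ~0.005:
dh = de_ok(base, oklch_to_oklab(0.66, 0.30, 136.0))

print(dL, dC, dh)
```

The hue result is the arc-length approximation from the text: a small hue rotation moves the colour along a circle of radius C, so the distance scales with chroma.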
For oklab the story is simpler still: dE is computed inside the Lab space itself, so all three components (L, a, b) map 1:1 to dEOk.
So: 3dp is the right target for oklch L/C and oklab L/a/b. Not because 2dp fails the standard JND - it passes that comfortably - but because typical human sensitivity is meaningfully better than the standard figure, and because 2dp leaves no room for the chaining errors covered next. 3dp gives us a safety margin.
If hue's perceptual limit is 0dp we could round hues to integers then?
Well... we see the same problems here as we did with multiplication, but for
fractional hue rotation. Repeatedly adding h += 7.3 with integer rounding
accumulates a drift of 0.3 degrees per step. After 20 steps, that's 6 degrees of
error, enough to cross JND at moderate chroma. An additional 1dp safety buffer
helps reduce this down enough to be negligible.
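The drift is easy to simulate: rotate a hue by 7.3 degrees per step, rounding to an integer each time, and compare against the exact value (a sketch under the same arc-length approximation as above):

```python
import math

exact, rounded = 0.0, 0
for _ in range(20):
    exact += 7.3
    rounded = round(rounded + 7.3)  # integer-rounded hue after each rotation

drift = exact - rounded             # ~6 degrees after 20 steps
print(drift)
print(drift * 0.3 * math.pi / 180)  # ~0.031 dEOk at C = 0.3, above the 0.02 JND
```

Each step loses the same 0.3 degrees in the same direction, so the error is linear in the step count rather than averaging out.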
Let's make another demo (listen I didn't spend all this effort building the colour picker just to show it off once). This one shows a set of markers across a computed hue wheel:
[Interactive widget: hue-rotation chain with controls for the increment (h +=) and step count; the table compares the full-precision result against 1dp (■) and 0dp (●) hue rounding, with dEOk, dE00, and CSS values.]
And so again we can see how rotating by a fractional hue quickly cascades: what was a green-yellow is now a subtly different shade of mustard.
This pattern applies to all of the polar colour spaces - oklch and lch alike. Adding 1 extra decimal place to H moves us from "on the perceptual edge" to an order of magnitude below it, which is enough to smoosh any rounding issues.
So... 3dp for L & C and 1dp for H?
Yeah now you're getting it! But we can go deeper. While that works nicely for
oklch(), another notation, e.g. lab() operates on a completely different
scale. Lab's L runs 0-100, a and b span roughly ±128, while LCH's C goes
up to ~150. Compare that to Oklab/Oklch where L is 0-1 and a, b, C are
all under ±0.5 (in gamut at least). The ranges differ by two orders of
magnitude, which means integer rounding in Lab is already equivalent to 2dp
rounding in Oklab.
Mapping it all out to a table makes the pattern obvious. Each extra decimal place buys you an order of magnitude less error:
| Space | Channel | Range | 0dp worst | 1dp worst | 2dp worst | 3dp worst |
|---|---|---|---|---|---|---|
| oklch | L (lightness) | 0 - 1 | 0.5 dEOk | 0.05 dEOk | 0.005 dEOk | 0.0005 dEOk |
| oklch | C (chroma) | 0 - ~0.4 | 0.5 dEOk | 0.05 dEOk | 0.005 dEOk | 0.0005 dEOk |
| oklch | h (hue) | 0 - 360 | 0.003 dEOk | 0.0003 dEOk | ~0 | ~0 |
| oklab | L, a, b | L: 0 - 1; a, b: ~±0.4 | 0.5 dEOk | 0.05 dEOk | 0.005 dEOk | 0.0005 dEOk |
| lch | L (lightness) | 0 - 100 | 0.5 dE00 | 0.05 dE00 | 0.005 dE00 | 0.0005 dE00 |
| lch | C (chroma) | 0 - ~150 | 0.5 dE00 | 0.05 dE00 | 0.005 dE00 | 0.0005 dE00 |
| lch | h (hue) | 0 - 360 | ~1.3 dE00 | ~0.13 dE00 | ~0.013 dE00 | ~0.001 dE00 |
| lab | L (lightness) | 0 - 100 | 0.5 dE00 | 0.05 dE00 | 0.005 dE00 | 0.0005 dE00 |
| lab | a, b | ~±128 | 0.7 dE00 | 0.07 dE00 | 0.007 dE00 | 0.0007 dE00 |
From this table we can simply pick the column two orders of magnitude to the right of the perceptual limit - that's the safety net. dEOk's JND is 0.02, so the 0.0005 column is a very safe bet. dE00's JND is 2.0, so 0.05 is the one to go for.
So yes, lab/lch at 0dp (plain integers!) already produce sub-JND errors, but the same chaining problem hits lch chroma at 0dp. So for lab/lch it's 1dp for all channels: integers are the perceptual limit, and 1dp gives the headroom needed for chained colour maths. It's hard to know in advance whether a colour will be mixed or blended, and 1dp costs almost nothing and is uniformly safe.
What about sRGB notations like hsl or rgb?
You might be wondering if any of this matters for good old sRGB. The same range-based rule applies, just with different numbers.
rgb() channels are 0-255 integers. There's nothing to round. If you're using
the modern percentage syntax (rgb(100% 50% 0%)) or the 0-1 float form, the
channels are 0-100 or 0-1 respectively, so you'd use 1dp or 3dp. In practice
nobody writes rgb(50.284% 23.119% 80.773%). Browsers quantise to 8-bit sRGB
internally anyway, so any extra precision just evaporates.
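The 8-bit collapse is easy to demonstrate (a sketch assuming simple round-to-nearest quantisation, which is what engines effectively do):

```python
def to_8bit(fraction):
    """Quantise a 0-1 channel to the 8-bit integer a browser stores."""
    return round(fraction * 255)

# 5dp, 3dp and 2dp inputs all collapse to the same byte:
print(to_8bit(0.50284), to_8bit(0.503), to_8bit(0.50))  # all 128
```

Three channel values that differ in the third and fifth decimal places land on the same stored byte, so the extra digits were never going to render.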
hsl() and hwb() have hue on 0-360, and saturation/lightness (or
whiteness/blackness) on 0-100%. All large ranges, all 1dp. But these are
sRGB-gamut notations - the final rendered colour is clamped to 256 values per
channel. The 8-bit bottleneck swallows any sub-integer precision you might have
preserved. Integers are fine for static colours, 1dp for chained calculations.
For the gamma-encoded color() spaces - Display P3, A98 RGB, ProPhoto RGB,
Rec.2020 - all channels are 0-1. The gamma curve gives enough perceptual
uniformity that 3dp holds, same as oklab (worst-case dE00 under 0.17).
Things get a little messy talking about XYZ D50 & D65. Their CSS form uses 0-1
normalised coords, not the absolute 0-100 scale (CSS white is roughly
[0.95, 1.0, 1.09] in D65). XYZ is neither gamma-encoded nor perceptually
uniform. It needs 4dp - 3dp gives a worst-case dE00 of 2.45, above JND.
However, if you use XYZ directly in your stylesheets, I have questions.
Linear RGB (srgb-linear) is also an odd one out, and this is worth its own
section.
sRGB-linear and the perceptual non-uniformity problem
After the initial publication of this post, Allen Pestaluky helpfully pointed out that srgb-linear suffers at 3dp, and most likely needs 3 significant digits. Let's compare these 4 colours:
```css
color(srgb-linear 0.0 0.001 0.0)
/* vs */
color(srgb-linear 0.0 0.002 0.0)

color(srgb-linear 0.0 0.901 0.0)
/* vs */
color(srgb-linear 0.0 0.902 0.0)
```
Both pairs differ by exactly 0.001 in the green channel. Same space, same precision. But the dE00 is very different: 1.86 near black, 0.03 near white. That is a 60x difference in perceptual distance for the same numeric step.
The issue is that srgb-linear is, literally, linear. sRGB is gamma-encoded.
That curve compresses near-black values so small numeric steps near zero map to
large perceptual distances. Oklab bakes that correction in by design. Linear RGB
has no such correction.
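You can see the compression directly from the standard sRGB transfer function. A quick sketch comparing the same 0.001 linear-light step near black and near white:

```python
def srgb_encode(linear):
    """The standard sRGB transfer function: linear light to gamma-encoded."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# The same 0.001 linear-light step, near black vs near white:
near_black = srgb_encode(0.002) - srgb_encode(0.001)
near_white = srgb_encode(0.902) - srgb_encode(0.901)
print(near_black / near_white)  # over 20x larger once encoded
```

The encoded (roughly perceptual) size of the step is over 20 times larger near black, which is the non-uniformity the dE00 comparison above is picking up.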
So let's make another demo with the srgb-linear colour space! The demo below shows how rounding to 3dp or 4dp can vary the colour widely depending on the lightness or the position in space. For values near black, 3dp gives a visible jump.
Try keeping the point in center and simply drag the lightness slider towards black, and you'll see the markers jump around. This shows the effect of this issue:
[Interactive widget: srgb-linear picker comparing 3dp (▲) and 4dp (■) rounding against the full-precision colour; the table reports dEOk, dE00, and the CSS value for each.]
So what is the right answer for srgb-linear? Originally I claimed "3dp -
same as oklab", but that's wrong. srgb-linear needs 4dp because of its near-black values.
3dp technically clears the single-step JND threshold, but only just - and it blows through it under chaining. The extra decimal place brings the worst-case margin in line with what 3dp gives for perceptually-uniform spaces like oklab. Think of 4dp as "3dp, adjusted for the missing gamma encoding".
The other gamma-encoded color() spaces (Display P3, A98 RGB, ProPhoto RGB,
Rec.2020) are fine at 3dp. Their gamma encoding means near-black has the same
kind of compression that oklab has by design. srgb-linear is the odd one out
because you stripped that compression out.
What about edges of the colour space? Does 3dp hold uniformly?
Ah yes, I had the same question... which is why I wrote it down here and why
you're reading it I guess... anyway, near-zero chroma, near-black, near-white,
high chroma at the gamut boundary - I tested all of these. I knocked up scripts
which brute forced their way through all sorts of combinations in
oklch/lch/oklab/lab. The 3dp rule holds uniformly against standard JND. In fact,
at the extremes, rounding errors become smaller, which makes 3dp even safer.
When you think about it, the maths don't care where you are in the colour space.
A change of 0.001 in L is always 0.001 dEOk, whether L is 0.01 or 0.99.
Especially in Oklab, that's the whole point: perceptual uniformity. (The
worst-case 0.0005 dEOk per step is also below the typical game score of 0.004,
though as noted earlier the sharpest observers could theoretically detect it in
a direct comparison.)
What about cross-space conversions?
Rounding errors can propagate when converting between colour spaces, say, oklch to sRGB, or using color-mix(). The question is: does conversion amplify the error?
I tested this by scanning the entire sRGB gamut. For each colour, I converted to oklch, rounded at various precisions, converted back to oklab. Each iteration measured dEOk against the unrounded original. The results:
| Precision | Worst-case dEOk | Verdict |
|---|---|---|
| oklch 2dp L/C + 0dp h | 0.0074 | marginal (near-invisible) |
| oklch 3dp L/C + 1dp h | 0.0007 | invisible |
| oklch 4dp L/C + 2dp h | 0.0001 | invisible |
A single conversion doesn't meaningfully amplify errors. The rounding itself dominates. Nonlinear coordinate transforms (polar to rectangular, or gamma curves) add negligible noise on top. However, repeated cross-space round-trips compound. Engines don't always map directly between spaces; they rely on intermediate steps, e.g. sRGB to Linear RGB to XYZ to Lch to Oklch. The behaviour depends on whether you stay in the same colour family:
[Interactive widget: round-trip converter with controls for the intermediate space (Via) and trip count; the table reports dEOk and dE00 for full precision, 3dp (◆), 2dp (▲), 1dp (■), and 0dp (●).]
Same-family round-trips (oklch ↔ oklab) settle to a fixed point after a single step. The 2dp worst case is 0.010 dEOk - marginal but never crosses JND. This is because oklab and oklch are just rectangular and polar views of the same space; the rounding grids are aligned.
Cross-family round-trips (oklch ↔ CIE Lab) are a different story. Transforming between Oklab and CIE Lab shaves off a bit each time, converging to a fixed point that can be significantly offset from the original. Drag the lightness slider down and crank up the conversions to see the worst drift.
In practice, you wouldn't normally bounce a colour between oklch and CIE Lab 10 times, let alone 100 times. But a single cross-family conversion with aggressive rounding on both sides could introduce more error than expected. This is another reason to keep an extra decimal place in reserve.
I also tested multi-step chains that combine rounding, multiplication, hue rotation, and cross-space conversion. Relative colour syntax might chain these operations under the hood. After 5 chained operations at oklch 3dp, the worst-case dEOk is 0.0018. Below standard JND, and fine in normal viewing - though it approaches what sharp-eyed observers can detect in a direct comparison. At oklch 2dp it reaches 0.027, above JND for virtually everyone. This is the strongest argument for the extra decimal place.
Wait... what about alpha?
We've covered every colour channel, but there's one more value that gets rounded: alpha. Alpha is always 0-1 (though some implementations, csskit included, store it as a 0-100 percentage). Alpha isn't considered in dE calculations, but we can model it. We need to be a little more thoughtful though.
Alpha on a fixed background is easy to figure out: blend the colour at full-precision alpha over that background. However, alpha is commonly used over complex backgrounds, so there may be more noticeable divergence. We can approximate the worst case by blending over both black and white and taking the larger error:
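That black-and-white bracketing is straightforward to sketch (simple per-channel source-over compositing; an illustration, not csskit's implementation):

```python
def blend(channel, alpha, bg):
    """Source-over composite of one channel against an opaque background."""
    return alpha * channel + (1 - alpha) * bg

def alpha_error(channel, alpha, rounded_alpha):
    """Worst blended error over black (0) and white (1) backgrounds."""
    return max(
        abs(blend(channel, alpha, bg) - blend(channel, rounded_alpha, bg))
        for bg in (0.0, 1.0)
    )

# alpha = 0.005 rounded at 2dp becomes 0.01, doubling the opacity:
print(alpha_error(1.0, 0.005, 0.01))   # ~0.005 channel error (on black)
print(alpha_error(1.0, 0.005, 0.005))  # 0.0 when kept at 3dp
```

Because the blend is linear in alpha, the error is just |Δalpha| × |channel − background|, which is why the black/white extremes bound the worst case.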
[Interactive widget: alpha slider showing each rounding level (full, 3dp ◆, 2dp ▲, 1dp ■, 0dp ●) blended over black and over white; the table reports dEOk, dE00, and the alpha value for each.]
Play with the slider and you'll see that 2dp alpha can produce visible errors.
At alpha = 0.5 on a high-contrast colour, 2dp rounds to 0.50 which is
exact, but at alpha = 0.005 it rounds to 0.01 (doubling the opacity),
which pushes dEOk above JND. 3dp keeps the worst case well below standard JND
for any colour and any alpha value. That maps neatly to what you'd expect from the
channel range: alpha is 0-1, so it follows the same rule as oklab's L/a/b
channels - 3dp. If your format uses 0-100% for alpha, integers are
sufficient (same as lab/lch).
Do browsers optimise this stuff?
After six interactive demos and a dozen tables, you might be wondering: do browsers already do any of this? Do they pack values cleverly, or trim precision, or do anything smart at all?
No. They store f32s. That's it. That's the section.
When Firefox parses oklch(0.659 0.304 234.8), Stylo stores it as an
AbsoluteColor with three f32 components and a colour
space tag. Your decimal places are faithfully represented in 32-bit floating
point - which has about 7 significant digits for values in the 0-1 range. Far
more than you'd ever need.
Gradient stop colours are stored differently. StyleAbsoluteColor holds f32s
which are interpolated in f32 via Servo_InterpolateColor. For non-sRGB
interpolation spaces, the gradient renderer generates extra stops. This is
because WebRender only interpolates in sRGB internally.
sRGB colours are quantised down to u8, which shows why fractional values in these notations are pointless: beyond 256 levels per channel, the information is simply not retained.
I haven't dug into Chromium or WebKit in the same depth, but I'd expect similar behaviour. There's no real reason for any engine to pre-round.
Conclusions
So there we have it: 3dp for 0-1 ranges (with one notable exception in
srgb-linear), 1dp for larger ranges, and that applies uniformly to alpha too.
srgb-linear is an exceptional case as linear RGB has no gamma encoding, so
near-black values need 4dp. And XYZ D50/D65, which use 0-1 normalised coords
in CSS (not 0-100), also need 4dp. Great. I just made you read four thousand
words about colours and decimal places. Lol, lmao even.
Honestly though, this took far more effort to figure out than the word count suggests. All of this was originally an attempt to improve minification of colours in csskit. I went through several terrible solutions before realising how stupidly simple it could be.
My first instinct was to make round() smart. Give it a tolerance parameter,
compute dE, do a per-channel binary search to find the minimum decimal places
that stay within tolerance. Then I tried a "greedy backoff" approach that
started aggressive and added precision back until the dE was acceptable.
In hindsight this was silly code written by a silly person. Computing dE dominated the cost of conversions, the binary search had edge cases near channel boundaries, and the whole thing was hard to reason about. The "smart" approach gave different precision for the same colour space depending on the specific colour value, which made output unpredictable.
That's what led me down this ridiculous journey, just to try and figure out a first principles approach to this. I produced a whole variety of scripts to iterate colours and try and brute force some kind of solution, then dove into the maths and read a whole bunch. Even asked some experts in the field.
So the implementation that will land in csskit is just a static lookup from colour space to per-channel decimal places. No dE computation at minify time, no binary search, no cleverness. I've updated the css-minify-tests to include these optimisations and hopefully other minifiers can pick up this win with little effort.
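The whole strategy collapses to something like this sketch (illustrative Python with hypothetical names - csskit's actual implementation is in Rust, and these identifiers are mine, not its API):

```python
# Per-channel decimal places, in channel order (illustrative, not csskit's API):
ROUND_DP = {
    "oklch":       (3, 3, 1),   # L, C, h
    "oklab":       (3, 3, 3),   # L, a, b
    "lch":         (1, 1, 1),
    "lab":         (1, 1, 1),
    "srgb-linear": (4, 4, 4),   # extra place: no gamma compression near black
    "xyz-d65":     (4, 4, 4),
}

def minify_channels(space, channels):
    """Round each channel to its precomputed places. No dE at minify time."""
    return tuple(round(v, dp) for v, dp in zip(channels, ROUND_DP[space]))

print(minify_channels("oklch", (0.659432, 0.304219, 234.75238)))
```

A static table means the same colour space always minifies the same way - the predictability that the "smart" dE-driven approach could never offer.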
Was it worth it? I mean, no, of course not. A handful of constants that fell out of way too much research. At least we have an answer, I guess.
Special thanks goes to Jake and Lea for proof reading this, and giving much needed feedback.