Tech News

It's OK to compare floating-points for equality


NB: The title of this post is intentional clickbait. Even though I do stand by its statement, a more honest title would be something like: It's NOT OK to compare floating-points using epsilons.

You've probably heard the mantra that you must never compare floating-point numbers for exact equality, and you absolutely must use some kind of epsilon-comparison instead, like

bool approxEqual(float x, float y) { return std::fabs(x - y) < 1e-4f; }

Over the 15+ years that I've been writing code – much of it geometry, graphics, physics, and simulation code that works with floating-points daily – I've encountered only one or two cases where such an epsilon-comparison is actually a good solution. Pretty much always there is a better solution, one that either involves rewriting the code in some way or simply compares floating-points with a plain x == y. And pretty much always the epsilon solution was actually one of the worst possible options.

I'll show a bunch of examples where adding some kind of epsilon might be your first instinct, but actually a much better – and often much simpler – solution exists. But first, let's talk about floating-point numbers.


Floating-points are not a black box

The whole idea of epsilon-comparison seems to come from the general perception of floating-point numbers as some kind of random black-box machine that sometimes produces inexact results because the gods of computing force it to. In reality, it is a pretty deterministic (modulo compiler options, CPU flags, etc.) and highly standardized system.

Floating-point numbers are necessarily inexact in that they cannot represent all possible real numbers. In fact, no finite amount of memory can, because that's how maths works – there are just way too many real numbers (or even just rational numbers, for that matter). Given that we probably only want to allocate a fixed (and not just finite) amount of bits per number, we're forced to accept that only a finite set of numbers will be representable (specifically, at most \(2^{\text{bits}}\) of them), and for all others we'll have to deal with approximations.
