Posted on April 27, 2026
As I’ve written about before, I’m the author/maintainer of a Haskell library for programmable CAD, called Waterfall-CAD.
Ever since I released this in 2023, it’s bothered me that I don’t really have tests for it.
Testing a CAD library like Waterfall-CAD is difficult, because the outputs of a Waterfall-CAD program are generally 3D models, which are hard to write good test assertions about.
In 2025, I added SVG support to Waterfall-CAD, converting the images in the README.md from screenshots of a mesh viewer to vector diagrams generated directly within Haskell code.
While testing solid models seems inherently tricky, there’s an established field of “Visual Regression Testing” tools, so having SVG output seemed vastly more testable than 3D model output.
“Visual Regression Testing” is mostly used when testing UI code.
Visual regression tools work by storing a snapshot of an image generated by an application, generating a visual diff of the current behaviour against that snapshot, and failing the test if the current behaviour looks significantly different. They also usually provide a mechanism to “accept” the current behaviour, overwriting the snapshots with new outputs, as well as a mechanism to visualize the difference between the expected and actual behaviour, highlighting the parts of the snapshot that have changed.
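The “looks significantly different” check at the heart of these tools can be sketched as a pixel diff with a tolerance threshold. This is a hand-rolled illustration, not the API of any particular tool: real tools decode image formats and compare RGBA channels, often with per-pixel colour tolerances as well.

```haskell
-- Treat an image as a flat list of pixel values
-- (an illustrative simplification).
type Pixel = Int

-- Fraction of pixels that differ between the stored snapshot
-- and the current output. Images of different sizes count the
-- missing pixels as changed.
visualDiff :: [Pixel] -> [Pixel] -> Double
visualDiff snapshot current =
  fromIntegral changed / fromIntegral total
  where
    changed = length (filter id (zipWith (/=) snapshot current))
            + abs (length snapshot - length current)
    total   = max 1 (max (length snapshot) (length current))

-- Fail only when the images look "significantly different":
-- small rendering wobbles below the threshold are tolerated.
looksDifferent :: Double -> [Pixel] -> [Pixel] -> Bool
looksDifferent threshold snapshot current =
  visualDiff snapshot current > threshold

main :: IO ()
main =
  -- one pixel out of four differs
  print (visualDiff [1, 2, 3, 4] [1, 2, 9, 4]) -- prints 0.25
```

The threshold is what distinguishes this from a byte-for-byte comparison: two renders that differ only by antialiasing can still pass.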
“Golden Testing” is the name of a testing technique where the expected output of a program is stored in a file, and the program is tested by comparing its current output to that file. “Golden Testing” tools also generally provide a mechanism to “accept” the current program behaviour, overwriting the stored files with the current output.
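The compare/accept loop can be sketched in a few lines using only libraries that ship with GHC. The snapshot path and the `ACCEPT` environment variable here are illustrative conventions, not any particular framework’s API (tasty-golden, for example, uses an `--accept` command-line flag instead):

```haskell
import System.Directory (doesFileExist)
import System.Environment (lookupEnv)

-- A minimal golden-test sketch. Compares `actual` against the
-- stored snapshot at `path`; setting the ACCEPT environment
-- variable overwrites the snapshot instead of comparing.
goldenTest :: FilePath -> String -> IO Bool
goldenTest path actual = do
  accept <- lookupEnv "ACCEPT"
  exists <- doesFileExist path
  case accept of
    -- "accept" mode: record the current behaviour as the new snapshot
    Just _ -> writeFile path actual >> pure True
    Nothing
      -- first run: no snapshot yet, so create one
      | not exists -> writeFile path actual >> pure True
      -- normal run: fail if the output has drifted from the snapshot
      | otherwise -> do
          expected <- readFile path
          pure (expected == actual)

main :: IO ()
main = do
  -- "out.golden.svg" and the SVG string are illustrative;
  -- the first run records the snapshot and passes
  ok <- goldenTest "out.golden.svg" "<svg><circle r=\"5\"/></svg>"
  putStrLn (if ok then "PASS" else "FAIL")
```

A real framework would also report *where* the outputs differ; for image snapshots, that report is the visual diff.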
It seems useful to me to treat “Visual Regression Testing” as a special case of “Golden Testing” where the test files are images, and the “diff” is a visual diff, rather than a precise comparison of the binary structure of the files.