The technique of using custom notations - often referred to as “spherical cow” modeling - is a deliberate simplification strategy: practitioners intentionally suppress or abstract away major design variables and complexities. This reduction allows deeper, more focused analytical thinking about a single critical aspect of a problem, without the cognitive burden of considering all real-world factors simultaneously. The term originates from a physics joke about a physicist who, when asked to help increase milk production, begins by saying “assume a spherical cow in a vacuum” - illustrating how scientists and engineers sometimes create highly simplified models that strip away real-world complexity to make a problem mathematically or conceptually tractable. While these simplified models may seem unrealistic, they serve as powerful tools for gaining fundamental insights that would otherwise be obscured by the full complexity of reality.

Functional programming is a convenient notation, but it is only one way of dealing with the design variables presented by simple sequencer chips (“CPUs”) bolted onto mutable, random-access memory. FP, though, is only a “spherical cow”. The spherical cow of function-based programming demands that conservation of memory - a design variable - be ignored. This is a convenient suppression that leads to solving a certain class of problems. If one insists on using this very same spherical cow to solve all classes of problems, one ends up with workarounds, increasing complexity, and gotchas. Examples: callback hell, the Mars Pathfinder fiasco, syntactic baubles like async and .then, thread safety, multi-tasking, pattern matching (originally known as “parsing”), etc.

Other spherical cows exist, like Prolog, Forth, BASIC, PostScript, HTML, SVG, Ohm, string interpolation, async message passing, etc., etc. I feel that one kind of spherical cow - default async dataflow - is mostly missing from our tool belts.
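One everyday taste of that mostly-missing cow is the UNIX pipeline: each stage is an independent, concurrently running process, and data flows forward through kernel FIFOs rather than up and down a call stack. A minimal sketch (the squaring and summing stages are arbitrary choices of mine, just to show data flowing through components):

```shell
# Each stage below is a separate process, started concurrently by the shell.
# Data flows forward through FIFOs (kernel pipes); there is no shared call stack.
seq 1 5 \
  | awk '{ print $1 * $1 }' \
  | awk '{ sum += $1 } END { print sum }'   # prints 55 (1+4+9+16+25)
```

Note that nothing in the notation says “call and wait”: each stage runs as soon as input arrives, which is exactly the default-async behaviour argued for here.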
I tend to champion the underdog; hence, I keep mentioning asynchrony and continue to contrast it with the currently popular spherical cow. That doesn’t mean that I think asynchrony is the one and only, perfect spherical cow. That doesn’t mean that I don’t like functional programming. I think that there are many valuable ways to solve problems and that we should not be laser-focused on only one approach at the expense of other possibilities. Our workflows should allow us to use and combine multiple spherical cows to solve any one problem.

FP does not represent how hardware works. To be kind, we’ve spent several decades twisting hardware to make the FP spherical cow work “faster”, at the expense of exponential growth in memory usage and, some would argue, at the expense of increased fragility of software. Alan Kay, also, speaks of this effect.

Cosmology had/has the same problems as CompSci: the notation used for describing our understanding limits how we can think about this stuff. The notation is based on quill and papyrus. It was warped horribly 600 years ago by the invention of Gutenberg press technology - we didn’t even need parentheses until typesetting arrived. For some reason, we continue to treat computers (RMs in my lingo - Reprogrammable Machines) as electronic Gutenberg typesetting machines, instead of as something newer and better. Using one spherical cow to express another spherical cow - e.g. asynchronous, pure dataflow written in a C-like language - is a process of extrusion, i.e. spherical cow ** 2, not purity nor convenience of thought.

Mechanical Engineers are taught to create different views of a physical object, e.g. top, side, front. In my view, types and type-checking are just one kind of view on programs. In digital electronics, there are only two fundamental types:

- bit
- pointer to bit

(These were later optimized to byte and pointer to byte, and so on, due to hardware efficiency biases.)
Sequencing, asynchronous mevents (event + port tag), etc., are other kinds of views. Each view can have its own “language” / visualization / whatever.

In my view, “C” popularized the function-based spherical cow. C didn’t invent the concept, but C adopted it and made it appear natural. Before C, languages like Fortran and BASIC differentiated between subroutines and functions. With heaps of added complexity, this has grown into the current fad of “functional programming”. The current version of C - ANSI C - isn’t even compatible with the original version of K&R C.

UNIX pipelines gave us a taste of the productivity gains possible in using asynchronous software components. Pipelines have mostly been ignored in the programming language community due to:

- the conflation of pipelines with the heavy-weight implementation of pipelines in operating systems
- the tendency to conflate pure asynchronous dataflow with the use of synchronous functions and function-based notations
- the tendency to overlook the extreme difference between LIFO and FIFO data passing (stack-based vs. queue-based, respectively)
- the influence of textual thinking on the idea of pipelines.

UNIX processes have multiple input FDs and multiple output FDs, but are poorly served by /bin/sh textual syntax, which encourages the use of only one input and only one output. You can use fugly textual notations like “2>” and “2>&1”, but the idea deserves something better and less restrictive.
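To make the fugliness concrete, here is a minimal /bin/sh sketch (the file names are my own) routing a process’s two standard output FDs. Note how fd 1 costs nothing syntactically while fd 2 needs the numeric-prefix notation:

```shell
# A command has (at least) two output FDs: fd 1 (stdout) and fd 2 (stderr).
# /bin/sh syntax makes fd 1 effortless; fd 2 needs the awkward "2>" forms.
{ echo "normal output"; echo "error output" >&2; } > out.txt 2> err.txt
cat err.txt    # prints "error output"

# Merging fd 2 into fd 1's destination requires the even-uglier "2>&1":
{ echo "normal output"; echo "error output" >&2; } > both.txt 2>&1
```

A third output stream (fd 3, fd 4, …) is perfectly legal at the OS level, but the shell notation scales even more poorly - which is the restriction being complained about above.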
The notation “d = b + c” is easier to read than something pipeline-y. Yet, operations that are not simple sequential function calls - fan-out, feedback, concurrency - are more easily expressed in a dataflow notation.

See Also

Email: [email protected]
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas (earlier)
Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html