[Header image. Caption: "I'd say this is a setup for a joke later on in the blog post, except the joke doesn't even make sense, so I don't really know what this is. Generated by ChatGPT."]
There’s a semi-well-known adage in software development that says when you’re facing a hard code change, you should “first make the hard change easy, and then make the easy change.” In other words, refactor the code (or do whatever else you need to do) to simplify the change you’re trying to make before actually making it. This is especially sound advice if you have a good automated test suite: you can first do your refactor without changing any program behaviour, and be relatively confident that you haven’t introduced any bugs or regressions; then you can make a smaller change that actually introduces your new feature.
Today I’m going to talk about how I followed the opposite of this advice for SimKube: I first made the easy change hard, and then I made the hard change. Because I can never do anything easy. Anyways, this is going to be a fairly technical post, with a bit of a deep dive into SimKube internals and also a bunch of Rust programming techniques, so I hope you’ve had your coffee!
Who owns the owned objects?
First, a quick overview of the problem I needed to solve: in Kubernetes, resources can be owned by other resources. For example, a Deployment owns zero or more ReplicaSets, and each ReplicaSet owns zero or more Pods. Ownership is tracked by setting an OwnerReference on the owned resource; if you wanted to know which Deployment a Pod belongs to, you would follow the chain of OwnerReferences from the Pod to the ReplicaSet to the Deployment.
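For illustration, here’s roughly what this looks like on a Pod owned by a ReplicaSet (the names and uid below are made up; the field layout is the standard Kubernetes `ownerReferences` metadata):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-5d59d67564-8x2kq   # hypothetical Pod name
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet             # the type of the owning resource
    name: nginx-5d59d67564       # the ReplicaSet this Pod belongs to
    uid: 00000000-0000-0000-0000-000000000000  # placeholder uid
    controller: true             # this owner is the managing controller
    blockOwnerDeletion: true
```

The ReplicaSet, in turn, carries an OwnerReference pointing at its Deployment, which is how you can walk the whole chain.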
Recall that in SimKube, there is a component called the tracer, which lives in your production Kubernetes cluster and watches for changes to the resources in that cluster. You can then export those changes to a trace file for replay in your simulated environment. In the tracer, you can configure what resources to watch like this:
```yaml
trackedObjects:
  apps/v1.Deployment:
    podSpecTemplatePaths:
      - /spec/template
```
The above config file tells the tracer to watch for changes to Deployments in the cluster. In this case, when you replay a trace in your simulated environment, SimKube would create all of the Deployments specified in the trace file, and then the controller manager in your simulated cluster would create the ReplicaSets and Pods that are owned by those Deployments. This is all fine and good, but what would happen if you had a config file that looked like this?
```yaml
trackedObjects:
  apps/v1.Deployment:
    podSpecTemplatePaths:
      - /spec/template
  apps/v1.ReplicaSet:
    podSpecTemplatePaths:
      - /spec/template
```
Now, the tracer would record changes to all Deployments and ReplicaSets in the cluster; when we replay this trace in the simulated environment, SimKube would create all of the Deployments, as well as all of the ReplicaSets owned by those Deployments. At the same time, the controller manager in the simulation cluster would create a second copy of the ReplicaSets owned by those Deployments. Whoops! Now we have two copies of our ReplicaSets. That’s no good.
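One way to think about the fix is an ownership check at replay time: only create an object from the trace if its owner isn’t also being created from the trace, and let the simulated controllers recreate everything else. Here’s a minimal sketch of that idea in Rust (hypothetical types and function names for illustration, not SimKube’s actual implementation):

```rust
// Hypothetical sketch: skip creating a traced object if its owner is
// also being created from the trace, so the simulated controller
// manager doesn't end up producing a duplicate copy.

struct TracedObject {
    kind: String,                    // e.g. "apps/v1.ReplicaSet"
    name: String,
    owner: Option<(String, String)>, // (kind, name) from the OwnerReference
}

fn should_create(obj: &TracedObject, traced: &[TracedObject]) -> bool {
    match &obj.owner {
        // No owner: nothing else will create it, so we must.
        None => true,
        // Owned: only create it if its owner is *not* in the trace;
        // otherwise the owner's controller will recreate it for us.
        Some((kind, name)) => !traced
            .iter()
            .any(|t| &t.kind == kind && &t.name == name),
    }
}

fn main() {
    let deploy = TracedObject {
        kind: "apps/v1.Deployment".into(),
        name: "nginx".into(),
        owner: None,
    };
    let rs = TracedObject {
        kind: "apps/v1.ReplicaSet".into(),
        name: "nginx-5d59d67564".into(),
        owner: Some(("apps/v1.Deployment".into(), "nginx".into())),
    };
    let traced = vec![deploy, rs];
    // The Deployment gets created; the ReplicaSet is skipped, because
    // its owner is in the trace and will recreate it in the simulation.
    for obj in &traced {
        println!("{}: create = {}", obj.name, should_create(obj, &traced));
    }
}
```

A real version would need to match owners by uid rather than by name, and handle owners that aren’t tracked at all, but the sketch captures the core question: who owns the owned objects?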