Preparing for the .NET 10 GC
Maoni0
In .NET 9 we enabled DATAS by default. But .NET 9 is not an LTS release, so many people will be getting DATAS for the first time when they upgrade to .NET 10. This was a tough decision because GC features are usually the kind that don’t require user intervention — but DATAS is a bit different. That’s why this post is titled “preparing for” instead of just “what’s new” 😊.
If you’re using Server GC, you might notice a performance profile that differs more noticeably from what you saw in previous runtime upgrades. Memory usage may look drastically different (very likely smaller) — and that may or may not be desirable. It all depends on whether the tradeoff is noticeable, and if it is, whether it aligns with your optimization goals. I’d recommend taking at least a quick look at your application performance metrics to see if you are happy with the results of this change. Many people will absolutely welcome it — but if you are not one of them, don’t panic. I encourage you to read on to see whether it makes sense to simply turn DATAS off or if a bit of tuning could make it work in your favor.
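If you do decide that opting out is the right call, it is just a config knob. Below is a minimal sketch of one way to do it, shown as an MSBuild property; the runtimeconfig.json and environment-variable equivalents are System.GC.DynamicAdaptationMode and DOTNET_GCDynamicAdaptationMode (0 means off). Please double-check the exact knob names against the GC configuration docs for the runtime version you are on.

```xml
<!-- A minimal sketch of turning DATAS off via the project file.
     Equivalent knobs (verify against the official GC configuration docs):
       runtimeconfig.json: "System.GC.DynamicAdaptationMode": 0
       environment var:    DOTNET_GCDynamicAdaptationMode=0 -->
<PropertyGroup>
  <GarbageCollectionAdaptationMode>0</GarbageCollectionAdaptationMode>
</PropertyGroup>
```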
I’ll talk about how we generally decide which performance features to add, why DATAS is so different from typical GC features, and the tuning changes introduced since my last DATAS blog post. I’ll also share two examples of how I tuned DATAS in first-party scenarios.
If you’re mainly here to see which scenarios DATAS isn’t designed for — to help decide whether to turn it off — feel free to skip ahead to this section.
General policies for adding GC performance features
Most GC performance features — whether it’s a new GC flavor, a new mechanism that enables the GC to do something it couldn’t before, or optimizations that improve an existing mechanism — are typically lit up automatically when you upgrade to a new runtime version. We don’t require users to take action because these features are designed to improve a wide range of scenarios. In fact, that’s often why we choose to implement them: we analyze many scenarios to understand the most common problems, figure out what it would take to solve them, and then prioritize which ones to design and implement.
Of course, with any performance changes, there’s always the risk of regressions — and for a framework used by millions, you’re guaranteed to regress someone. These regressions can be especially visible in microbenchmarks, where the behavior is so extreme that even small changes can cause wild swings in results.
A recent example is the change we made in how we handle free regions for the UOH (i.e., LOH + POH) generations. We switched from a budget-based trimming policy to an age-based one because it’s more robust in general: we neither decommit memory too quickly only to have to recommit it again, nor keep extra free regions around long after they’re needed just because we keep not consuming nearly all of the UOH budgets. But this can totally change a microbenchmark: one that used to observe the process’s memory drop to a very low value after a single GC.Collect() now requires 3 GC.Collect() calls, because we have to wait for the UOH free regions to age out over 2 gen2 GCs, and the 3rd one puts them on the decommit list.
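To make that concrete, here is a hypothetical sketch (my own illustration, not an actual benchmark from the post) of the kind of microbenchmark that could be affected: it allocates and drops LOH data, then induces several GCs and prints the committed size after each one, since with age-based trimming a single induced GC may no longer be enough to see the committed size drop.

```csharp
// Hypothetical microbenchmark sketch: observe committed heap size after
// successive induced GCs. With age-based trimming of UOH free regions,
// the drop may only show up after multiple gen2 GCs rather than the first.
using System;

class UohTrimmingDemo
{
    static void Main()
    {
        // Allocate and drop a bunch of LOH-sized objects (>= 85,000 bytes each).
        for (int i = 0; i < 100; i++)
        {
            byte[] loh = new byte[1_000_000];
            GC.KeepAlive(loh);
        }

        // Induce several full GCs and report the committed size after each.
        for (int i = 1; i <= 3; i++)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            var info = GC.GetGCMemoryInfo();
            Console.WriteLine($"After GC.Collect() #{i}: committed = {info.TotalCommittedBytes:N0} bytes");
        }
    }
}
```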
But for DATAS, we knew it was, by definition, not necessarily aimed at a wide range of scenarios. As I mentioned in my last blog post, there were two specific kinds of scenarios that DATAS targeted. I’ll reiterate them here –