
Origin of the rule that swap size should be 2x of the physical memory


In short: the intent was never to fill the swap; the intent was that the system could always find a contiguous free swap range for a contiguous memory range.

Today this rule often looks weird. My laptop has only a 256 GB SSD but 32 GB of RAM. Following the rule, a quarter of my disk would be swap.

At the time, the situation was different. In 1997, a machine with 8 MB of physical RAM and a 320 MB disk was typical. Twice the RAM was only 1/20 of the disk.
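The two ratios can be checked with a quick calculation (a minimal Python sketch using the sizes mentioned above):

```python
def swap_fraction_of_disk(ram, disk):
    """Fraction of the disk consumed by the classic swap = 2x RAM rule.

    ram and disk must be in the same unit (MB, GB, ...).
    """
    return (2 * ram) / disk

# 1997: 8 MB RAM, 320 MB disk -> 16 MB swap, 1/20 of the disk
print(swap_fraction_of_disk(ram=8, disk=320))    # 0.05

# Today: 32 GB RAM, 256 GB SSD -> 64 GB swap, a quarter of the disk
print(swap_fraction_of_disk(ram=32, disk=256))   # 0.25
```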

The ratio changed because RAM grew enormously in recent years, driven by the requirements of the latest Windows versions, while disks did not grow as much, because everything moved "into the cloud". Besides that, everyone switched to SSDs, because HDDs were always suboptimal for the NTFS / Windows use cases. An SSD of the same size is still more expensive than an HDD, although the gap is closing because SSDs receive much more development investment.

The reason for the 2x was swap fragmentation. Disks were still HDDs, with significant seek time, so it was essential to write to them in consecutive chunks.

There was no goal to fill the swap; the goal was that the system could always find a consecutive empty block, so that a contiguous memory range could be written into swap.

People who did not really understand how paging and the block cache work existed even then. I even heard university sysadmins say that "we can solve the memory needs of our local server without forced compromises, like swap". I understood only decades later that he was not an expert who knew something I did not; he did not even know what I already knew at the time. Many people had the very naive idea that "swap is slow, so I turn it off". But this happened only among the "Linuxers" (including FreeBSD / Solaris etc.), because the Windows users mostly did not know what swap was at all, and also did not know how to turn it off.

My impression is also that software behaved a bit differently. At the time, if you had 8 MB of RAM and your processes used, say, 14 MB, you had a minimal block cache and used 8 MB of swap; that was slow but fine. Today it would be a nearly unusable system. My impression is that this is because today's processes are much more likely to regularly touch all their memory pages, particularly VMs and interpreters (for example, JVM garbage collection regularly walks the whole heap of a process).

The swap == 2x physical RAM rule came from the need to be able to swap out all your physical RAM, while always doing it into contiguous empty disk ranges.

To your extension: FreeBSD was special; early FreeBSD had no paging, only segmentation. As far as I remember, it could swap out only whole processes (or segments). Note that the same applies to Windows 3.x. Obviously, the need for contiguous block allocation was far more important in this case, although the 2x rule remained.
