
How memory maps (MMAP) deliver faster file access in Go


One of the slowest things you can do in an application is make system calls. They're slow because each call has to cross into the kernel, which is quite expensive. So what do you do when you need a lot of disk I/O but care about performance? One solution is to use memory maps.

Memory maps are a modern Unix mechanism that lets you take a file and make it part of your process's virtual memory. In a Unix context, "modern" means introduced in the 1980s or later. You have a file containing data, you mmap() it, and you get a pointer to where it resides in memory. Instead of seeking and reading, you just read through that pointer, adjusting the offset to reach the right data.

Performance

To show what kind of performance you can get from memory maps, I've written a little Go library that lets you read from a file either through a memory map or through a ReaderAt. The ReaderAt issues a pread(), a combined seek/read system call, while the mmap version reads straight from the mapped memory.

Random lookup (ReaderAt): 416.4 ns/op
Random lookup (mmap): 3.3 ns/op
Iteration (ReaderAt): 333.3 ns/op
Iteration (mmap): 1.3 ns/op

This almost feels like magic. When we launched Varnish Cache back in 2006, this was one of the features that made it so fast: Varnish Cache used memory maps to deliver content at blistering speeds.

Also, since you can operate directly on pointers into the mapped memory, you reduce memory pressure as well as raw latency.

The Downside of Memory Maps
