
Fast-Servers



There's a network-server programming pattern so popular that it has become the canonical approach to writing network servers:

...

This design is easy to recognise: the main loop waits for some event, then dispatches based on the file descriptor and the state that file descriptor is in. At one point it was in vogue to actually fork() so that each file descriptor could be handled by a separate process, but now "worker threads" are usually created that all perform the same task and rely on the kernel to schedule file descriptors to them.

A much better design is possible thanks to epoll and kqueue; however, most people use these "new" system calls through a wrapper like libevent, which just encourages the same slow design people have been using for over twenty years now.

The design I currently use and recommend involves two major points:

- One thread per core, pinned (via CPU affinity) to separate CPUs, each with its own epoll/kqueue fd.
- Each major state transition (accept, reader) is handled by a separate thread, and transitioning one client from one state to another involves passing the file descriptor to the epoll/kqueue fd of the other thread.

...

This design has no decision points: each thread makes simple blocking I/O calls, and the result is simple, one-page, performant servers that easily get into the 100k requests/second territory on modern systems.

Creating the thread pool
