Eyot is a new language I’m building to make offloading work to the GPU as seamless as spawning a background thread.
Eyot source code is transparently compiled for both CPU and GPU, with communication between the two handled by the runtime. Traditional GPU programming makes you handle many tasks yourself: allocating memory, compiling kernels, scheduling work, and so on. On the CPU these tasks have long been handled by a language runtime, and Eyot extends that convenience to code destined for the GPU as well.
The intended users are people working in areas that lean heavily on the GPU or other accelerators, such as game development, numerical analysis, and AI.
It is early days for Eyot: it is not ready for real work, but you can experiment with it, and if you do, I'd love to hear your thoughts. To take a simple example (available in the playground):
```
fn square(value i64) i64 {
    print_ln("square(", value, ")")
    return value * value
}

cpu fn main() {
    // 1. call it directly
    print_ln("square(2) on cpu = ", square(2))

    // 2. call it as a worker running on cpu
    let cpu_worker = cpu square
    send(cpu_worker, [i64]{ 3 })
    print_ln(receive(cpu_worker))

    // 3. call it as a worker running on the gpu
    let gpu_worker = gpu square
    send(gpu_worker, [i64]{ 4, 5, 6 })
    print_ln(receive(gpu_worker))
}
```
First, this declares the `square` function, which takes and returns a 64-bit integer. `main` then calls it in three different ways that illustrate Eyot's distinguishing feature:
1. The `square` function is called directly, as you'd expect, on the CPU.
2. A CPU worker is created from the `square` function (`let cpu_worker = cpu square`). This worker processes values sent to it with the `send` function on a background CPU thread. After squaring the number, the worker returns it through the call to `receive`.
3. This time a GPU worker is created rather than a CPU worker (`let gpu_worker = gpu square`). This causes the `square` function to be compiled as a kernel and run on the GPU; otherwise it acts identically. As you can see, Eyot's `print_ln` works GPU-side.
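If the worker model is unfamiliar, it may help to see the same pattern expressed in a mainstream language. The sketch below is an analogy only, not how Eyot is implemented: it mimics `cpu square` with a Python background thread and a pair of queues standing in for `send` and `receive` (the names `make_worker`, `inbox`, and `outbox` are my own, not Eyot's).

```python
# Analogy for Eyot's CPU worker: a function wrapped in a background
# thread that squares each batch of values sent to it.
import threading
import queue

def make_worker(fn):
    """Create a background worker applying `fn` to every item in each batch."""
    inbox, outbox = queue.Queue(), queue.Queue()

    def loop():
        while True:
            batch = inbox.get()
            if batch is None:        # sentinel to shut the worker down
                break
            outbox.put([fn(v) for v in batch])

    threading.Thread(target=loop, daemon=True).start()
    return inbox, outbox

def square(value):
    return value * value

inbox, outbox = make_worker(square)
inbox.put([3])                       # like: send(cpu_worker, [i64]{ 3 })
print(outbox.get())                  # like: print_ln(receive(cpu_worker))
inbox.put([4, 5, 6])
print(outbox.get())
inbox.put(None)                      # stop the worker
```

The Eyot version goes further than this analogy in one key way: swapping `cpu` for `gpu` retargets the same function to a GPU kernel, with no change to the send/receive code.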
Motivation
I’ve worked on many projects where shifting computation to the GPU presented an obvious path to better performance that got ignored due to the difficulty of doing so. These projects were not just in obvious areas like computer vision or game development, but also in unlikely matches for GPU programming, like desktop application development.
For example, back when I worked on Texifier, a macOS LaTeX editor, I adjusted the venerable TeX typesetting system to output polygons directly into GPU memory, rather than writing a PDF. This reduced latency far enough that we could update the output in real time. The feature was popular, but the difficulty of making it work left me questioning if the project was worth it.