Rendering a Game in Real-Time with AI

I made a game. It’s all in ASCII. I wondered if it would be possible to turn it into full motion graphics. In real time. With AI. Let me share how I did it.

Let’s start with the game. Lately, I’ve been exploring just how far I can push old-school, ASCII RPG-style game frameworks. My latest one is called “Thunder Lizard,” which procedurally generates a prehistoric island populated with dinosaurs fighting for dominance as an active volcano threatens the whole island. You can go play it if you’d like.

To render it with AI, the basic plan was to grab a frame from the game, run it through an image generation model, and replace the displayed frame with the resulting image, for every single frame. This presented a number of challenges and requirements that led me on a deep dive into the wide range of cutting-edge image generation models on offer today. But first, let me show how it turned out.
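
In code, that plan is roughly the loop below. This is a minimal sketch that assumes the game draws to a canvas; `generateFrame` and the element ids are hypothetical placeholders standing in for the real inference call, which is covered next.

```typescript
// Hypothetical stand-in for the actual model request (detailed later):
// takes a Base64-encoded frame and a prompt, returns a displayable image URL.
declare function generateFrame(frame: string, prompt: string): Promise<string>;

const game = document.getElementById("game") as HTMLCanvasElement;
const display = document.getElementById("render") as HTMLImageElement;

async function renderLoop(): Promise<void> {
  // 1. Grab the current ASCII frame as a Base64 data URL
  const frame = game.toDataURL("image/png");

  // 2. Run it through an image generation model
  const generated = await generateFrame(
    frame,
    "prehistoric island, dinosaurs, erupting volcano"
  );

  // 3. Replace the displayed frame with the result
  display.src = generated;

  // 4. Repeat for every frame
  requestAnimationFrame(renderLoop);
}

requestAnimationFrame(renderLoop);
```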

The need for speed

The main constraint for real-time AI rendering is latency. Most games run at 30 frames per second (FPS) or more, which gives you at most about 33 milliseconds per frame to do all of the following (a timing sketch follows the list):

Connect (and authenticate) with an inference provider

Transmit the prompt (including source image data)

Wait for generation to complete

Receive the new image data and display it
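
One way to sanity-check a provider against that budget is to time the whole round trip per frame. Here is a minimal sketch, reusing the hypothetical `generateFrame` from above:

```typescript
// Time the full capture -> generate -> display round trip for one frame.
// Anything consistently above ~33 ms means dropped frames at 30 FPS.
async function measureFrameLatency(frame: string): Promise<number> {
  const start = performance.now();
  await generateFrame(frame, "prehistoric island, dinosaurs, erupting volcano");
  return performance.now() - start;
}
```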

Considering that the normal latency just to load an image can be a couple hundred milliseconds, this constraint seems impossible to meet. However, fal.ai specializes in offering “lightning-fast inference capabilities” for generative media, including a few Latent Consistency Models (LCMs) that approach 100ms generation times. To further minimize latency, fal.ai also offers a WebSocket connection, which removes the connect and authenticate steps from subsequent requests. Finally, they offer the option to stream images back as Base64-encoded data for immediate, direct access.
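
Putting those pieces together, the streaming setup might look something like the sketch below. It assumes fal.ai’s JavaScript client and its realtime WebSocket API (`fal.realtime.connect`); the model id, input fields, and result shape are illustrative rather than exact, so check the model’s docs before relying on them.

```typescript
import * as fal from "@fal-ai/serverless-client";

// Authenticate once; the realtime connection is then reused for every frame.
fal.config({ credentials: "YOUR_FAL_KEY" }); // placeholder key

const display = document.getElementById("render") as HTMLImageElement;

// One persistent WebSocket connection: connect/auth costs are paid up front
// instead of on each request. The model id here is illustrative.
const connection = fal.realtime.connect("fal-ai/fast-lcm-diffusion", {
  onResult: (result: any) => {
    // With Base64 streaming the image arrives as a data URI, so it can be
    // displayed immediately without another network round trip.
    // (The result shape is an assumption.)
    display.src = result.images[0].url;
  },
  onError: (error: any) => console.error("generation failed:", error),
});

// Called once per captured game frame.
function sendFrame(frameDataUrl: string): void {
  connection.send({
    prompt: "prehistoric island, dinosaurs, erupting volcano",
    image_url: frameDataUrl,
    sync_mode: true, // stream the image back inline as Base64 (assumption)
  });
}
```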
