On Tuesday, Nvidia announced it will begin taking orders for the DGX Spark, a $4,000 desktop AI computer that wraps one petaflop of computing performance and 128GB of unified memory into a form factor small enough to sit on a desk. Its biggest selling point is likely that large pool of integrated memory, which can hold larger AI models than consumer GPUs can.

Nvidia will begin taking orders for the DGX Spark on Wednesday, October 15, through its website, with systems also available from manufacturing partners and select US retail stores.

The DGX Spark, which Nvidia previewed as "Project DIGITS" in January and formally named in May, represents Nvidia's attempt to create a new category of desktop workstation built specifically for AI development.

With the Spark, Nvidia seeks to address a problem facing some AI developers: Many AI tasks exceed the memory and software capabilities of standard PCs and workstations (more on that below), forcing those developers to shift their work to cloud services or data centers. However, the actual market for a desktop AI workstation remains uncertain, particularly given the upfront cost versus cloud alternatives, which allow developers to pay as they go.

Nvidia's Spark reportedly includes enough memory to run larger-than-typical AI models locally: It can run models with up to 200 billion parameters and fine-tune models containing up to 70 billion parameters without requiring remote infrastructure. Potential uses include running larger open-weights language models and media synthesis models such as AI image generators. According to Nvidia, users can customize Black Forest Labs' Flux.1 models for image generation, build vision search and summarization agents using Nvidia's Cosmos Reason vision language model, or create chatbots using the Qwen3 model optimized for the DGX Spark platform.

Big memory in a tiny box

Nvidia has squeezed a lot into a 2.65-lb. box that measures 5.91 x 5.91 x 1.99 inches and uses 240 watts of power. The system runs on Nvidia's GB10 Grace Blackwell Superchip, includes ConnectX-7 200 Gb/s networking, and uses NVLink-C2C technology that provides five times the bandwidth of PCIe Gen 5. It also includes the aforementioned 128GB of unified memory, which is shared between system and GPU tasks.
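For developers, that unified pool means a large open-weights model can be loaded much as it would be on any other Linux workstation. The sketch below is illustrative only, assuming a standard Hugging Face transformers setup; the model ID, precision, and prompt are examples, not an Nvidia-documented DGX Spark workflow.

```python
# Illustrative only: loading an open-weights chat model locally with
# Hugging Face transformers. Model ID, dtype, and prompt are assumptions
# for the sake of example, not Nvidia's documented DGX Spark workflow.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-32B"  # example open-weights model; any local chat model works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 16-bit weights roughly halve memory use vs. FP32
    device_map="auto",           # let the library place weights in available GPU/system memory
)

messages = [{"role": "user", "content": "Explain unified memory in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

On a machine with a single shared memory pool, the point of a setup like this is that the model's weights do not have to fit within a discrete GPU's separate VRAM, which is the usual ceiling on consumer cards.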