Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: diffusion

Nvidia Is Not Happy With the GAIN AI Act, Says As Much

In a move drawing considerable attention across the tech industry, Nvidia Corporation has publicly critiqued the recently proposed GAIN AI Act, emphasizing its potential to stifle competition in the rapidly evolving artificial intelligence sector. The GAIN AI Act, which stands for the Guaranteeing Access and Innovation for National Artificial Intelligence Act, was introduced as part of the U.S. National Defense Authorization Act, with the goal of ensuring that the United States is the dominant…

The Diffusion Dilemma

On the sun-baked plains of the American Midwest in 1892, a revolution was loudly sputtering to life: the tractor, an engine that signaled the end of the era of animal power and the beginning of the age of machine power. This machine was not just a piece of equipment; the tractor was a manifestation of an exponential shift in energy density, from animal metabolism to coal burning, empowered by discoveries in thermodynamics. But diffusion of the tractor, screeching across the horizon, took much longer…

The Hidden Ingredients Behind AI’s Creativity

The original version of this story appeared in Quanta Magazine. We were once promised self-driving cars and robot maids. Instead, we’ve seen the rise of artificial intelligence systems that can beat us in chess, analyze huge reams of text, and compose sonnets. This has been one of the great surprises of the modern era: physical tasks that are easy for humans turn out to be very difficult for robots, while algorithms are increasingly able to mimic our intellect. Another surprise that has long perplexed researchers…

The Pixel 10 Pro’s 100x zoom is Google’s most controversial use of AI yet — here’s why

Google loves AI, and it’s doubled down on the tech with every new Pixel generation. But this year’s Pixel 10 Pro and Pro XL take things to another level, introducing a diffusion model to upscale images from the phone’s conservative 5x optical zoom into telescopic-length 100x photos. Google is no stranger to computational photography or AI-assisted imaging; features like Add Me and Astrophotography mode laid the groundwork for its ongoing evolution. However, the introduction of diffusion models…
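
Google has not published the Pixel’s on-device pipeline, but the general technique it describes, diffusion-based super-resolution, can be sketched with an open-source stand-in. The model name and API below come from Hugging Face’s diffusers library, not from Google, and the input filename is hypothetical:

```python
# A rough open-source stand-in for the idea described above: Google's
# on-device pipeline is unpublished, so this uses diffusers' Stable
# Diffusion x4 upscaler to illustrate diffusion-based super-resolution.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("5x_zoom_crop.png").convert("RGB")  # hypothetical input crop

# The diffusion model invents plausible high-frequency detail the optics
# never captured -- which is exactly why the feature is controversial.
upscaled = pipe(prompt="a distant building, sharp photo", image=low_res).images[0]
upscaled.save("upscaled_4x.png")
```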

The (Unfinished) PDE Coffee Table Book

THE (UNFINISHED) PDE COFFEE TABLE BOOK. Lloyd N. Trefethen and Kristine Embree, editors. Unpublished, 2001. During 2000–2001, a group project based at Oxford University was begun to write this book. The vision was 100 two-page spreads, each one giving exactly the most useful possible starting information about a different partial differential equation, with beautiful color illustrations. Many people at Oxford and around the world contributed drafts, which were then extensively rewritten and edited…
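
For a concrete sense of what such a spread might open with (an illustration of the format, not an excerpt from the book), consider the heat equation and its well-known fundamental solution:

```latex
% Illustrative only -- not from the book itself: the kind of "starting
% information" a spread might lead with, here for the heat equation.
\[
  u_t = \Delta u, \qquad u(x,0) = u_0(x),
\]
\[
  u(x,t) = \frac{1}{(4\pi t)^{d/2}} \int_{\mathbb{R}^d}
           e^{-|x-y|^2/(4t)}\, u_0(y)\, dy .
\]
```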

Apple just released a weirdly interesting coding language model

Apple quietly dropped a new AI model on Hugging Face with an interesting twist. Instead of writing code the way traditional LLMs generate text (left to right, top to bottom), it can also write out of order and improve multiple chunks at once. The result is faster code generation, at a performance that rivals top open-source coding models. Here’s how it works. The nerdy bits: here are some (overly simplified, in the name of efficiency) concepts that are important to understand before we can move on…
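
The model itself is Apple’s, but the any-order idea can be shown with a toy sketch: start from a fully masked sequence and repeatedly commit the positions a model is most confident about, several at a time. Everything below (the mock scorer, the vocabulary) is hypothetical scaffolding, not Apple’s code:

```python
# Toy sketch of any-order ("diffusion-style") decoding, NOT Apple's code:
# start fully masked, then repeatedly commit the positions where a
# (here: mocked) model is most confident, several per step, in any order.
import random

MASK = "_"
VOCAB = list("abcdefgh")

def mock_predict(tokens):
    """Stand-in for a real model: returns (position, token, confidence)
    guesses for every still-masked position."""
    return [(i, random.choice(VOCAB), random.random())
            for i, t in enumerate(tokens) if t == MASK]

def decode(length=16, per_step=4):
    tokens = [MASK] * length
    while MASK in tokens:
        guesses = mock_predict(tokens)
        # Commit the highest-confidence guesses -- possibly in distant
        # chunks at once, unlike left-to-right autoregression.
        for i, tok, _ in sorted(guesses, key=lambda g: -g[2])[:per_step]:
            tokens[i] = tok
        print("".join(tokens))
    return "".join(tokens)

decode()
```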

Researchers Uncover Hidden Ingredients Behind AI Creativity

We were once promised self-driving cars and robot maids. Instead, we’ve seen the rise of artificial intelligence systems that can beat us in chess, analyze huge reams of text and compose sonnets. This has been one of the great surprises of the modern era: physical tasks that are easy for humans turn out to be very difficult for robots, while algorithms are increasingly able to mimic our intellect. Another surprise that has long perplexed researchers is those algorithms’ knack for their own…

4Real-Video-V2: Feedforward Reconstruction for 4D Scene Generation

Snap Inc. and KAUST. 4Real-Video-V2 is capable of computing a 4D spatio-temporal grid of video frames and 3D Gaussian particles for each time step using a feed-forward architecture. Its architecture has two main components: a 4D video diffusion model and a feedforward reconstruction model. This represents a major upgrade over 4Real-Video, introducing a new 4D video diffusion model architecture that adds no additional parameters to the base video model…
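
The paper’s code isn’t shown here, but the 4D output it describes, a viewpoints-by-timesteps grid of frames plus per-timestep Gaussian particles, can be sketched as a data structure. All shapes and field names below are illustrative assumptions, not the authors’ actual API:

```python
# Hypothetical shapes only -- a sketch of the 4D output described above:
# a (viewpoints x timesteps) grid of frames, plus a set of 3D Gaussian
# particles (position, scale, rotation, opacity, color) per time step.
from dataclasses import dataclass
import torch

@dataclass
class GaussianSet:
    means: torch.Tensor      # (N, 3)  particle centers
    scales: torch.Tensor     # (N, 3)  per-axis extents
    rotations: torch.Tensor  # (N, 4)  quaternions
    opacities: torch.Tensor  # (N, 1)
    colors: torch.Tensor     # (N, 3)

@dataclass
class FourDScene:
    frames: torch.Tensor          # (V, T, 3, H, W); frames[v][t] = view v, time t
    gaussians: list[GaussianSet]  # one GaussianSet per time step, len T

V, T, H, W, N = 4, 8, 256, 256, 10_000
scene = FourDScene(
    frames=torch.zeros(V, T, 3, H, W),
    gaussians=[
        GaussianSet(means=torch.zeros(N, 3), scales=torch.ones(N, 3),
                    rotations=torch.zeros(N, 4), opacities=torch.ones(N, 1),
                    colors=torch.zeros(N, 3))
        for _ in range(T)
    ],
)
```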

I have reimplemented Stable Diffusion 3.5 from scratch in pure PyTorch

miniDiffusion is a reimplementation of the Stable Diffusion 3.5 model in pure PyTorch with minimal dependencies. It’s designed for educational, experimental, and hacking purposes, made with the mindset of having the least amount of code necessary to recreate Stable Diffusion 3.5 from scratch: only ~2,800 lines spanning from the VAE to the DiT to the train and dataset scripts. Files: the main Stable Diffusion model code is located in dit.py, dit_components.py, and attention.py.
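
As a rough map of how those files fit together at sampling time, here is a schematic denoising loop. The class and method names are illustrative assumptions, not miniDiffusion’s actual API; check dit.py and the VAE code in the repo for the real ones. (SD 3.5 samples with a rectified-flow objective over 16-channel latents, which is what the loop mimics.)

```python
# Schematic of how a DiT and a VAE compose at sampling time. Names here
# (dit, vae, vae.decode) are illustrative, not miniDiffusion's real API.
import torch

def sample(dit, vae, text_emb, steps=28, latent_shape=(1, 16, 64, 64)):
    """Rectified-flow-style sampling sketch: start from pure noise, let
    the DiT predict a velocity, and Euler-integrate from t=1 down to t=0."""
    z = torch.randn(latent_shape)                   # pure-noise latent
    ts = torch.linspace(1.0, 0.0, steps + 1)
    for i in range(steps):
        t, dt = ts[i], ts[i + 1] - ts[i]
        v = dit(z, t.expand(z.shape[0]), text_emb)  # predicted velocity
        z = z + v * dt                              # Euler step (dt < 0)
    return vae.decode(z)                            # latents -> pixels
```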

Beyond GPT architecture: Why Google’s Diffusion approach could reshape LLM deployment

Last month, along with a comprehensive suite of new AI tools and innovations, Google DeepMind unveiled Gemini Diffusion. This experimental research model uses a diffusion-based approach to generate text. Traditionally, large language models (LLMs) like GPT and Gemini itself have relied on autoregression, a step-by-step approach where each token is generated in sequence, conditioned on the ones that came before it…
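
For contrast with the diffusion-style decoding sketched earlier, here is autoregression in its most minimal form, using GPT-2 via Hugging Face transformers as a stand-in (nothing Gemini-specific):

```python
# Plain autoregressive decoding: one token per step, each conditioned on
# everything before it. GPT-2 stands in for any autoregressive LLM here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("Diffusion models generate text by", return_tensors="pt").input_ids
for _ in range(20):                                 # 20 sequential steps
    logits = model(ids).logits[:, -1, :]            # distribution over next token
    next_id = logits.argmax(dim=-1, keepdim=True)   # greedy choice
    ids = torch.cat([ids, next_id], dim=-1)         # append and repeat
print(tok.decode(ids[0]))
```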