Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: 270M

Google releases pint-size Gemma open AI model

Big tech has spent the last few years creating ever-larger AI models, leveraging rack after rack of expensive GPUs to provide generative AI as a cloud service. But tiny AI matters, too. Google has announced a tiny version of its Gemma open model designed to run on local devices. Google says the new Gemma 3 270M can be tuned in a snap and maintains robust performance despite its small footprint. Google released its first Gemma 3 open models earlier this year, featuring between 1 billion and 27 billion parameters.
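For readers who want to try the model on a local machine, below is a minimal inference sketch using the Hugging Face transformers library. The checkpoint name google/gemma-3-270m-it and the chat-template usage are assumptions based on how earlier Gemma releases were published, not details from the articles above.

```python
# Minimal local-inference sketch for Gemma 3 270M (checkpoint name is assumed).
# Requires: pip install transformers torch, plus accepting the Gemma license
# on the Hugging Face Hub for the assumed checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m-it"  # assumed instruction-tuned checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # small enough for CPU RAM

# Build a chat-style prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "List three on-device uses for a tiny LLM."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```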

Google unveils ultra-small and efficient open source AI model Gemma 3 270M that can run on smartphones

Google's DeepMind AI research team has unveiled a new open source AI model today, Gemma 3 270M. As its name would suggest, this is a 270-million-parameter model, far smaller than the 70 billion or more parameters of many frontier LLMs (parameters being the number of internal settings governing the model's behavior).

Gemma 3 270M: Compact model for hyper-efficient AI

Today, we're adding a new, highly specialized tool to the Gemma 3 toolkit: Gemma 3 270M, a compact, 270-million-parameter model designed from the ground up for task-specific fine-tuning, with strong instruction-following and text structuring capabilities already trained in. The last few months have been an exciting time for the Gemma family of open models. We introduced Gemma 3 and Gemma 3 QAT, delivering state-of-the-art performance for single cloud and desktop accelerators.
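To make the "task-specific fine-tuning" claim concrete, here is a hedged sketch of a small LoRA fine-tune using the Hugging Face transformers, peft, and datasets libraries. The base checkpoint name google/gemma-3-270m, the target modules, and the toy dataset are illustrative assumptions, not Google's published recipe.

```python
# Illustrative LoRA fine-tuning sketch for a tiny, task-specific Gemma tune.
# Requires: pip install transformers peft datasets torch (names/data are assumptions).
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_id = "google/gemma-3-270m"  # assumed base (pre-trained) checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach small LoRA adapters so only a fraction of the weights are trained.
lora_config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)

# Tiny illustrative dataset: text-structuring examples for one narrow task.
examples = [
    {"text": "Extract the product: 'I bought the Pixel 9 yesterday.' -> Pixel 9"},
    {"text": "Extract the product: 'My Nest thermostat stopped syncing.' -> Nest thermostat"},
]
dataset = Dataset.from_list(examples).map(
    lambda e: tokenizer(e["text"], truncation=True, max_length=256)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gemma-270m-tuned",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

With only 270M parameters plus LoRA adapters, a run like this fits on a single consumer GPU or even CPU, which is the point of a model this size.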
