Google’s DeepMind AI research team today unveiled a new open-source AI model, Gemma 3 270M.
As its name would suggest, this is a 270-million-parameter model — far smaller than the 70 billion or more parameters of many frontier LLMs (parameters being the number of internal settings governing the model’s behavior).
While more parameters generally translate to a larger and more powerful model, Google’s focus here is nearly the opposite: high efficiency. The aim is a model small enough to run directly on smartphones and other local hardware, without an internet connection, as demonstrated in internal tests on a Pixel 9 Pro SoC.
Yet the model can still handle complex, domain-specific tasks and can be fine-tuned in mere minutes to fit an enterprise or indie developer’s needs.
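For developers who want to kick the tires, that small footprint means the model can be loaded with standard tooling on modest hardware. The sketch below is a minimal example, assuming the instruction-tuned checkpoint is published on Hugging Face under the google/gemma-3-270m-it identifier, and uses the Transformers library to run a quick local prompt:

```python
# Minimal sketch: load Gemma 3 270M locally and run a short chat-style prompt.
# The model ID below is an assumption based on Google's Gemma naming
# convention; adjust it to the identifier actually published on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m-it"  # assumed instruction-tuned variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # small enough for CPU

# Format a single user turn with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Summarize: the invoice total looks wrong."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a short reply and print only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

From a starting point like this, a developer could swap in their own dataset and a standard fine-tuning loop to specialize the model for a narrow task.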
On the social network X, Google DeepMind Staff AI Developer Relations Engineer Omar Sanseviero added that Gemma 3 270M can also run directly in a user’s web browser, on a Raspberry Pi, and “in your toaster,” underscoring its ability to operate on very lightweight hardware.