
Liquid AI wants to give smartphones small, fast AI that can see with new LFM2-VL model



Liquid AI has released LFM2-VL, a new generation of vision-language foundation models designed for efficient deployment across a wide range of hardware — from smartphones and laptops to wearables and embedded systems.

The models promise low-latency performance, strong accuracy, and flexibility for real-world applications.

LFM2-VL builds on the company’s existing LFM2 architecture, extending it to multimodal processing: the models accept both text and image inputs, at variable resolutions.

According to Liquid AI, the models deliver up to twice the GPU inference speed of comparable vision-language models, while maintaining competitive performance on common benchmarks.
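For readers who want to try the models, here is a minimal inference sketch using the Hugging Face transformers library. The repo ID, prompt format, and processor behavior are assumptions for illustration; the article does not specify how the open weights are distributed.

```python
# A minimal sketch, not an official example. The repo ID below is an
# assumption; check Liquid AI's release page for the actual weights.
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "LiquidAI/LFM2-VL-1.6B"  # assumed Hugging Face repo ID

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID)

image = Image.open("photo.jpg")  # any local test image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))
```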


“Efficiency is our product,” wrote Liquid AI co-founder and CEO Ramin Hasani in a post on X announcing the new model family:

meet LFM2-VL: an efficient Liquid vision-language model for the device class. open weights, 440M & 1.6B, up to 2× faster on GPU with competitive accuracy, Native 512×512, smart patching for big images.
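The “Native 512×512, smart patching” detail suggests a tiling scheme: images at or below the native resolution are encoded directly, while larger images are broken into 512×512 patches. Liquid AI’s exact scheme isn’t described in the excerpt, but a minimal sketch of the general idea, assuming simple non-overlapping tiles, looks like this:

```python
# Illustrative sketch of tile-based patching for large images. The exact
# scheme Liquid AI uses is not detailed in the article; this only shows
# the general idea behind a native-resolution-plus-patches approach.
from PIL import Image

PATCH = 512  # native resolution per the announcement

def to_patches(image: Image.Image) -> list[Image.Image]:
    """Return the image itself if it fits natively, else 512x512 tiles."""
    w, h = image.size
    if w <= PATCH and h <= PATCH:
        return [image]  # native path: no patching needed
    patches = []
    for top in range(0, h, PATCH):
        for left in range(0, w, PATCH):
            # Edge tiles may be smaller than 512x512.
            patches.append(image.crop((left, top,
                                       min(left + PATCH, w),
                                       min(top + PATCH, h))))
    return patches
```

Each patch can then be encoded separately by the vision tower, which keeps per-patch compute bounded regardless of the input image size.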
