ZDNET's key takeaways

- Brilliant Labs has announced a partnership with Liquid AI.
- Liquid AI makes vision-language foundation models.
- These models will be available in Brilliant Labs' products.

Smart glasses are often regarded as the best form factor for AI because they can feed an assistant everything you see, moment to moment. That assistance is only as good as the glasses' ability to accurately interpret the visual content they capture, however.

Also: Snap's next smart glasses get a major OS overhaul to rival Meta Ray-Bans

Enter the Brilliant Labs and Liquid AI partnership.

LFM2-VL series

Brilliant Labs, founded by ex-Apple employee Bobak Tavangar, specializes in AI smart glasses; it launched its most recent Halo AI glasses in July. Through this new partnership with Liquid AI, Brilliant Labs will integrate Liquid AI's vision-language foundation models into its products, starting with the Halo AI glasses.

Also: The best AI chatbots of 2025: ChatGPT, Copilot, and notable alternatives

MIT-born Liquid AI is a foundation model company that has developed Liquid Foundation Models (LFMs). Its LFM2-VL series can take text and images of various resolutions and transform them into what the company says are "detailed, accurate, and creative description of the scenes provided by a camera sensor with millisecond latency."

Agentic experience

Through the agreement, Brilliant Labs will license both current and future multimodal LFMs to improve how its AI glasses understand the scenes fed to them. Accurate scene interpretation is especially important for the Halo AI glasses, which feature a long-term agentic memory that builds a personalized knowledge base for the user and draws on that life context to answer future questions.

Also: Your next job interviewer could be an AI agent - here's why that's a good thing

For the agentic experience to be helpful, the glasses need to accurately identify the day's events, so that when a user asks about an earlier moment, the answer is useful and matches the user's lived experience.
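For the curious, here is roughly what asking a vision-language model like LFM2-VL to describe a camera frame could look like in code. This is a minimal sketch, not Brilliant Labs' actual integration: the Hugging Face model identifier and the use of the standard transformers image-text-to-text chat API are assumptions, so check Liquid AI's own model cards for the real details.

```python
# A minimal sketch of querying a vision-language model for a scene
# description. Assumes LFM2-VL is published on Hugging Face and follows
# the standard transformers image-text-to-text chat API.
from transformers import AutoModelForImageTextToText, AutoProcessor
from transformers.image_utils import load_image

MODEL_ID = "LiquidAI/LFM2-VL-450M"  # assumed identifier; verify on Liquid AI's model cards

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, device_map="auto")

# Stand-in for a frame captured by the glasses' camera sensor.
frame = load_image("https://example.com/camera_frame.jpg")  # hypothetical URL

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": frame},
            {"type": "text", "text": "Describe this scene in detail."},
        ],
    }
]

# Format the image-plus-text prompt the way the model expects.
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate and print the model's scene description.
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

In a product like the Halo glasses, a description generated this way would presumably be stored in the agentic memory so later questions ("where did I leave my keys this morning?") can be answered from it.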