A few months ago, Apple released FastVLM, a Vision Language Model (VLM) that offers near-instant, high-resolution image processing. Now you can take it for a spin, provided you have an Apple Silicon-powered Mac. Here’s how.
When we first covered FastVLM, we explained how it leverages MLX, Apple’s own open machine learning framework designed for Apple Silicon, to deliver video captioning up to 85 times faster while being more than three times smaller than comparable models.
Since then, Apple has continued to develop the project, which is now available on Hugging Face as well as GitHub. On Hugging Face, you can load the lighter version, FastVLM-0.5B, right in your browser and check it out for yourself.
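If you’d rather script things than use the browser demo, the same checkpoint can, in principle, be pulled down and loaded locally. Here’s a minimal sketch using Hugging Face transformers; the repo id and the need for trust_remote_code are my assumptions about how the checkpoint is published, not details from Apple, so check the model card before relying on them.

```python
# Minimal sketch: loading the FastVLM-0.5B checkpoint locally with
# Hugging Face transformers. Assumptions (not from the article): the
# repo id "apple/FastVLM-0.5B" and that the checkpoint ships custom
# modeling code, hence trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "apple/FastVLM-0.5B"  # assumed repo id; check Apple's Hugging Face page

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,  # runs the modeling code bundled with the checkpoint
)
print(model.config.model_type)  # sanity check that the weights loaded
```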
Depending on your hardware, the browser demo may take a while to load; it took a couple of minutes on my 16GB M2 Pro MacBook Pro. But once it loaded, the model accurately described my appearance, the room behind me, my changing expressions, and objects I brought into view.
In the bottom-left corner, you can adjust the prompt the model uses as it updates the caption live, or you can pick from a few suggestions, such as the ones below (after the list, there’s a sketch of how you might script the same kind of prompting locally):
Describe what you see in one sentence.
What is the color of my shirt?
Identify any text or written content visible.
What emotions or actions are being portrayed?
Name the object I am holding in my hand.
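Since FastVLM builds on MLX, a natural way to experiment with prompts like these outside the browser is the community mlx-vlm package (pip install mlx-vlm). This is a rough sketch under stated assumptions: the checkpoint id below is hypothetical, as I haven’t verified that an MLX conversion of FastVLM-0.5B exists, and the calls follow mlx-vlm’s documented usage, which may shift between versions.

```python
# Rough sketch: prompting a vision model on-device with the community
# mlx-vlm package. The checkpoint id below is hypothetical; substitute
# a real MLX conversion of FastVLM if one is available.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

MODEL_PATH = "mlx-community/FastVLM-0.5B"  # hypothetical repo id

model, processor = load(MODEL_PATH)
config = load_config(MODEL_PATH)

images = ["webcam_frame.jpg"]  # any local image, e.g. a saved webcam frame
prompt = "Describe what you see in one sentence."  # one of the demo's suggestions

# Wrap the plain prompt in the model's chat template before generating.
formatted = apply_chat_template(processor, config, prompt, num_images=len(images))
output = generate(model, processor, formatted, images, verbose=False)
print(output)
```

The package also ships a command-line entry point (python -m mlx_vlm.generate) if you’d rather not write Python at all.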