While this is not the day Apple will release a revamped, LLM-powered Siri, today’s updates bring multiple welcome additions to the Apple Intelligence feature set. Here are some highlights.
During WWDC25, Apple made sure to (almost passive-aggressively) highlight all the Apple Intelligence features it had already released, as it tried to counter the perception that it has fallen behind on AI.
It then proceeded to announce new Apple Intelligence features, showcasing only the ones it was confident it could deliver in the first wave of betas, or shortly after.
Here are some of the coolest Apple Intelligence features you can use as soon as you update your Apple Intelligence-compatible devices starting today.
Foundation Models Framework
This is by far the most significant change in today’s updates. Now, apps can plug directly into Apple’s on-device model, which means they can offer AI features without having to rely on external APIs or even the web.
For users, this means more private AI-based features, since prompts never leave the device. For developers, it means a faster and more streamlined experience, since they can tap into the same on-device power Apple uses for its own AI offerings.
For now, developers won’t have access to Apple’s cloud-based model. This is on-device only. Still, the fact that any developer will get to adopt AI features with no extra cost should make for extremely interesting features and use cases.
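For developers curious what adoption looks like, here is a minimal Swift sketch using the new Foundation Models framework. The framework and the calls shown (SystemLanguageModel, LanguageModelSession, respond(to:)) are Apple's; the summarize helper and its instructions are just an illustrative example, not a reference implementation.

```swift
import FoundationModels

// Minimal sketch: ask Apple's on-device model to summarize a note.
// Requires an Apple Intelligence-enabled device running this year's OS releases.
func summarize(_ note: String) async throws -> String? {
    // The system model may be unavailable (ineligible device,
    // Apple Intelligence turned off, or the model still downloading).
    guard case .available = SystemLanguageModel.default.availability else {
        return nil
    }

    // A session keeps context across turns; instructions steer the model.
    let session = LanguageModelSession(
        instructions: "Summarize the user's note in one sentence."
    )

    // The prompt is processed entirely on device; nothing is sent to a server.
    let response = try await session.respond(to: note)
    return response.content
}
```

Because the request never leaves the device, a call like this works offline and incurs no per-request cost.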
Shortcuts and Automations
Shortcuts and automation play a big part in today’s updates, particularly on macOS Tahoe 26.
Now, you can bake Apple Intelligence right into your shortcuts, which in practice means including Writing Tools, Image Playground, and even Apple’s own models or ChatGPT, as steps in your workflow to get things done faster.
If you’ve never dabbled in automation before, this may feel overwhelming. But I highly recommend you poke around and try to find ways to bring AI-powered automation into your workflow. Depending on your line of work, this may truly change how you get things done.
Live Translation
Although the headline feature is Live Translation with AirPods, which automatically translates what someone is saying to you in another language, Live Translation also extends to Messages, Phone, and FaceTime.
In Messages, the Live Translation feature automatically translates incoming and outgoing text messages, while during a FaceTime call, it displays live translated captions on screen. When it comes to phone calls, you’ll hear spoken translations, much like the Live Translation with AirPods feature.
Here’s Apple’s fine print on which languages are compatible with which features:
Live Translation with AirPods works on AirPods 4 with Active Noise Cancellation or AirPods Pro 2 and later with the latest firmware when paired with an Apple Intelligence–enabled iPhone, and supports English (U.S., UK), French (France), German, Portuguese (Brazil), and Spanish (Spain).
Live Translation in Messages is available in Chinese (Simplified), English (UK, U.S.), French (France), German, Italian, Japanese, Korean, Portuguese (Brazil), and Spanish (Spain) when Apple Intelligence is enabled on a compatible iPhone, iPad, or Mac, as well as on Apple Watch Series 9 and later and Apple Watch Ultra 2 and later when paired with an Apple Intelligence–enabled iPhone.
Live Translation in Phone and FaceTime is available for one-on-one calls in English (UK, U.S.), French (France), German, Portuguese (Brazil), and Spanish (Spain) when Apple Intelligence is enabled on a compatible iPhone, iPad, or Mac.
And speaking of phone calls, Apple Intelligence now provides voicemail summaries, which will appear inline with your missed calls.
Visual Intelligence
Apple has extended its visual search feature beyond the camera: it now also works with what's on your screen.
This means that when you take a screenshot, iOS will recognize the content and let you highlight a specific portion, search for similar images, or even ask ChatGPT about what appears in the screenshot.
Visual Intelligence also extracts dates, times, and locations from screenshots, and proactively suggests actions such as adding events to your calendar, provided the text in the image is in English.
Genmoji and Image Playground
Now, you can combine two different emoji (and, optionally, a text description) to create a new Genmoji.
You can also add expressions, or modify personal attributes such as hairstyle and facial hair for Genmoji based on people from your Photos library.
In Image Playground, the biggest news is that you can now use OpenAI's image generator, either picking from preset styles such as Watercolor and Oil or sending a text prompt describing what you'd like created.
You can also use Image Playground to create Messages backgrounds, as well as Genmoji, without having to leave the app.
9to5Mac’s take
It is very obvious that Apple is struggling to get a handle on Siri when it comes to AI. From multiple high-profile departures to Tim Cook's pep talk following Apple's financial results, the company is clearly in trouble on that front.
That said, some of today's additions to its Apple Intelligence offerings are genuinely well thought out and well implemented, and may really help users get things done faster and better. I've been using Apple Intelligence with my shortcuts, and it has really improved parts of my workflow, something that, until very recently, felt out of reach if I wanted to keep things native.
Apple still has a long, long way to go, but the set of AI-powered features it is releasing today is a good one, and it may be worth checking out, even if you've dismissed Apple Intelligence in the past.