Artificial intelligence has obviously been the industry craze for the past couple of years. While large language models are incredibly capable, they’ve always come with a bit of a compromise: privacy. Ultimately, you’re still using a model hosted in the cloud, and all of your conversations are stored on someone else’s server.
Apple’s always had privacy at the core of its products – so in the months leading up to the debut of Apple’s AI features, many people wondered: how would Apple handle it? Today, we delve into exactly how.
On-device models
This one is a bit obvious, but Apple Intelligence features are powered by on-device models. That’s why the device requirements are so strict. On the iPhone, you can only use Apple Intelligence on an iPhone 15 Pro or any iPhone 16 model. On the iPad and Mac side, it’s a bit more lenient: any device with an M1 chip or later is supported.
Either way, that device requirement isn’t strict for no reason: running large language models on device takes a lot of processing power and a lot of memory. Specifically, it requires 8GB of unified memory, which Apple only began shipping with the iPhone 15 Pro.
With on-device models, your requests don’t leave your phone. Granted, not every Apple Intelligence request runs on-device. Apple also has Private Cloud Compute for heavier requests, which I’ll explain in a bit. That said, for the most part, current Apple Intelligence features, such as notification summaries and Genmoji, lean heavily on the chip in your device. This keeps your requests and personal information local.
Apple also expanded the capabilities of its on-device models this year at WWDC25, allowing developers to build features on top of them through the Foundation Models framework. This could deter developers from passing your private data to providers like OpenAI or Google Gemini, though the major caveat is that Apple Intelligence is only available on a small number of devices. A rough sketch of what this looks like for developers is below.
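To give a sense of how little is involved for developers, here’s a minimal Swift sketch using the Foundation Models framework Apple introduced at WWDC25. The API names used here (SystemLanguageModel, LanguageModelSession, respond(to:)) are drawn from Apple’s developer materials, but treat the specifics as illustrative rather than authoritative.

```swift
import FoundationModels

// A minimal sketch: summarize a note entirely on-device.
// Assumes an Apple Intelligence–capable device running the latest OS.
func summarizeNote(_ note: String) async throws -> String? {
    // The system model's availability reflects whether Apple Intelligence
    // is enabled on this hardware, so the feature can be hidden gracefully.
    guard case .available = SystemLanguageModel.default.availability else {
        return nil
    }

    // A session wraps a conversation with the on-device foundation model;
    // instructions steer its behavior for every prompt in the session.
    let session = LanguageModelSession(
        instructions: "Summarize the user's note in one short sentence."
    )

    // The prompt is processed locally; nothing is sent to a third-party provider.
    let response = try await session.respond(to: note)
    return response.content
}
```

The key point for privacy is that the prompt and the user’s text stay on the device: the developer never has to ship that data to an external API just to get a summary.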
Private Cloud Compute
As alluded to earlier, Apple also has a cloud solution for handling Apple Intelligence requests. It wasn’t used much during iOS 18, but it’s being leaned on a fair bit more in iOS 26. For example, you can now use the Shortcuts app to send prompts to Apple’s models, including the cloud models.