A few months ago, Apple hosted the Workshop on Privacy-Preserving Machine Learning, which featured presentations and discussions on privacy, security, and other key areas in responsible machine learning development. Now, it has made the presentations public. Here are three highlights.
As it did recently with the presentations from the 2024 Workshop on Human-Centered Machine Learning, Apple published a post on its Machine Learning Research blog with a few videos and a long list of studies and papers presented during the two-day hybrid event, held on March 20–21, 2025.
Quick note on differential privacy
Interestingly, most (if not all) of the papers touch on differential privacy, which for the past few years has been Apple’s preferred method for protecting user data sent to its servers (despite some criticism).
Put simply, differential privacy adds statistical noise to user data before it’s uploaded, so that even if the data is intercepted or analyzed, individual records can’t be confidently traced back to a specific person.
Here’s how Apple frames it:
The differential privacy technology used by Apple is rooted in the idea that statistical noise that is slightly biased can mask a user’s individual data before it is shared with Apple. If many people are submitting the same data, the noise that has been added can average out over large numbers of data points, and Apple can see meaningful information emerge.
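To make that averaging-out idea concrete, here is a minimal Python sketch of randomized response, a classic local differential privacy mechanism in the same spirit (the function names, the 75% truth probability, and the simulated rates are illustrative assumptions, not Apple’s actual parameters). No individual report can be trusted, but because the noise bias is known, the true rate can be recovered from a large crowd:

```python
import random

def randomized_response(true_bit: bool, p_truth: float = 0.75) -> bool:
    # Report the truth with probability p_truth; otherwise flip the bit.
    # The slight bias toward the truth is what lets aggregates survive.
    return true_bit if random.random() < p_truth else not true_bit

def estimate_rate(reports, p_truth: float = 0.75) -> float:
    # Invert the known noise bias to recover the population-level rate.
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p_truth)) / (2.0 * p_truth - 1.0)

# Simulate 10,000 users, 30% of whom truly have some attribute.
# Only the noisy bits ever leave each "device".
users = [random.random() < 0.30 for _ in range(10_000)]
reports = [randomized_response(u) for u in users]
print(f"Estimated rate: {estimate_rate(reports):.3f}")  # roughly 0.30
```

Lowering the truth probability gives each user stronger deniability but makes the aggregate estimate noisier; that trade-off between individual privacy and collective accuracy is exactly what differential privacy formalizes.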
Three studies Apple showcased from the event
This study, published on March 14, builds on an earlier study from 2010; Rothblum co-authored both.
While the 2010 study investigated a way to keep information private even if an analytics system or server was compromised, this new study applies that idea to personal devices.
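For intuition about how privacy can survive a compromised server, here is a toy Python sketch of a counter whose internal state is noisy from the moment it is created, so a one-time snapshot of its memory reveals little about any single contribution. This is a textbook-style illustration of the general concept, not the construction from either paper; the class name and the epsilon value are illustrative assumptions:

```python
import random

def laplace(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

class PanPrivateCounter:
    """Toy counter whose stored state is noisy from initialization, so a
    single snapshot of memory (e.g., a breached analytics server) is
    differentially private with respect to any one user's contribution."""

    def __init__(self, epsilon: float = 0.5):
        # Fold the noise into the state up front rather than at output time.
        self.state = laplace(1.0 / epsilon)

    def increment(self) -> None:
        self.state += 1.0  # each user contributes at most 1

    def estimate(self) -> float:
        return self.state  # unbiased estimate of the true count

counter = PanPrivateCounter()
for _ in range(1_000):
    counter.increment()
print(f"Estimated count: {counter.estimate():.0f}")  # roughly 1,000
```

Note the limitation: this toy only protects against a single snapshot, since an attacker who reads the state twice can subtract the readings. Handling repeated intrusions is part of what makes the research Apple highlighted harder than it looks.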