Today, Apple published the list of studies it will present at the 39th annual Conference on Neural Information Processing Systems (NeurIPS) in San Diego. Here are the details.
This year’s NeurIPS is set to take place in San Diego from December 2 to 7, with a satellite event running in Mexico City from November 30 to December 5.
At the San Diego event, Apple will present multiple papers, including “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,” which drew criticism from industry researchers earlier this year.
In addition to its slate of research presentations, Apple is also sponsoring multiple affinity groups, including Women in Machine Learning, LatinX in AI, and Queer in AI, all of which will feature Apple employees at their events.
Apple’s presentations will cover a variety of topics related to machine learning research, including privacy, the strengths and limitations of reasoning models, innovative approaches to generative AI, and more. Here is the full list of studies that Apple will present at NeurIPS, some of which 9to5Mac has covered in the past:
At the event, Apple will also have a booth (#1103) where attendees will be able to try live demos of several of the company’s machine learning initiatives, including:
MLX – an open source array framework designed for Apple silicon that enables fast and flexible ML and scientific computing on Apple hardware. The framework is optimized for Apple silicon’s unified memory architecture and leverages both the CPU and GPU (a minimal usage sketch appears after this list). Visitors will be able to experience two MLX demos:
- Image generation with a large diffusion model on an iPad Pro with the M5 chip
- Distributed compute with MLX and Apple silicon: text and code generation with a 1-trillion-parameter model running in Xcode on a cluster of four Mac Studios equipped with M3 Ultra chips, each with 512GB of unified memory
FastVLM – a family of mobile-friendly vision language models, built using MLX. These models use a mix of CNN and Transformer architectures for vision encoding, designed specifically for processing high-resolution images, and the hybrid design targets a strong balance between accuracy and speed (a toy sketch of this pattern follows the MLX example below). Visitors will get to experience a real-time visual question-and-answer demo on an iPhone 17 Pro Max.
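For readers unfamiliar with MLX, here is a minimal sketch of what its Python API looks like, using only core operations from the open source framework. The array sizes are arbitrary and the snippet is illustrative; it is not one of Apple's booth demos:

```python
import mlx.core as mx

# Arrays live in Apple silicon's unified memory, so the same buffers
# are visible to both the CPU and the GPU without copies.
a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# Operations are lazy: this builds a computation graph rather than
# running immediately.
c = a @ b + 1.0

# Force evaluation of the graph (printing or indexing would also trigger it).
mx.eval(c)

# The same operation can be dispatched to a specific device via a stream argument.
d = mx.matmul(a, b, stream=mx.cpu)
mx.eval(d)

print(c.shape, d.shape)
```

Because MLX builds a lazy computation graph over unified memory, the same arrays can be evaluated on the CPU or the GPU without copying data between them.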
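FastVLM's actual architecture is detailed in Apple's paper and open source release. Purely as an illustration of the CNN-plus-Transformer hybrid pattern described above, here is a toy vision encoder written with MLX's neural network API (mlx.nn). Every module name and layer size here is invented for the example and does not reflect FastVLM's real design:

```python
import mlx.core as mx
import mlx.nn as nn


class ToyHybridVisionEncoder(nn.Module):
    """Toy CNN + Transformer vision encoder (illustrative only, not FastVLM)."""

    def __init__(self, dims: int = 256, num_heads: int = 8):
        super().__init__()
        # A convolutional stem aggressively downsamples the high-resolution
        # input, so the attention stage sees far fewer tokens.
        self.stem = nn.Sequential(
            nn.Conv2d(3, dims // 2, kernel_size=4, stride=4),
            nn.GELU(),
            nn.Conv2d(dims // 2, dims, kernel_size=2, stride=2),
        )
        # A single self-attention block then mixes information globally
        # across the remaining patch tokens.
        self.norm = nn.LayerNorm(dims)
        self.attn = nn.MultiHeadAttention(dims, num_heads)
        self.proj = nn.Linear(dims, dims)

    def __call__(self, images: mx.array) -> mx.array:
        # MLX convolutions expect NHWC (channels-last) input.
        x = self.stem(images)                 # (B, H/8, W/8, dims)
        B, H, W, C = x.shape
        tokens = x.reshape(B, H * W, C)       # flatten the spatial grid into tokens
        y = self.norm(tokens)
        tokens = tokens + self.attn(y, y, y)  # residual self-attention
        return self.proj(tokens)              # visual tokens for a language decoder


encoder = ToyHybridVisionEncoder()
image = mx.random.normal((1, 256, 256, 3))    # dummy high-resolution-ish input
tokens = encoder(image)
print(tokens.shape)                           # (1, 1024, 256)
```

The idea the toy captures is that a convolutional stem shrinks a high-resolution image into a small grid of tokens before any attention runs, which is what keeps hybrid vision encoders fast enough for on-device use.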
To learn more about Apple’s presence at NeurIPS, follow this link.