Artificial intelligence is consuming enormous amounts of electricity in the United States. According to the International Energy Agency, AI systems and data centers used about 415 terawatt-hours of power in 2024, roughly 10% of the country's total electricity generation, and demand is projected to double by 2030.
This rapid growth has raised concerns about sustainability. In response, researchers at a School of Engineering have built a proof-of-concept AI system designed to be far more efficient: their approach could cut energy use by a factor of up to 100 while also improving performance on tasks.
A Hybrid Approach Called Neuro-Symbolic AI
The research comes from the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor. His team is developing neuro-symbolic AI, which combines traditional neural networks with symbolic reasoning. This method mirrors how people approach problems by breaking them into steps and categories.
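The hybrid idea can be made concrete with a toy sketch. This is purely illustrative and not the researchers' actual system: a stand-in "neural" perception step reduces raw input to discrete facts, and a symbolic step then applies explicit rules to those facts, the way a person reasons in steps and categories.

```python
# Illustrative sketch of the neuro-symbolic split; all names are hypothetical.

def neural_perception(pixels):
    """Stand-in for a neural network: maps raw input to symbolic facts."""
    # A real system would run a vision model; here we just threshold brightness.
    facts = set()
    if sum(pixels) / len(pixels) > 0.5:
        facts.add("block_visible")
    return facts

def symbolic_reasoner(facts):
    """Applies explicit if-then rules instead of learned weights."""
    if "block_visible" in facts:
        return "grasp_block"
    return "search_scene"

# Once perception has been reduced to discrete facts, the reasoning step is
# cheap and interpretable: no large matrix computations are needed.
action = symbolic_reasoner(neural_perception([0.9, 0.8, 0.7]))
```

The energy argument rests on this division of labor: the expensive neural network handles only perception, while the downstream reasoning runs as lightweight rule application.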
The work will be presented at the International Conference on Robotics and Automation (ICRA) in Vienna in May and will appear in the conference proceedings.
Teaching Robots to See, Understand, and Act
Rather than familiar large language models (LLMs) such as ChatGPT and Gemini, the team focuses on AI systems used in robotics, known as vision-language-action (VLA) models. These models extend LLM capabilities by incorporating vision and physical movement.
VLA models take in visual data from cameras and instructions from language, then translate that information into real-world actions. For example, they can control a robot's wheels, arms, or fingers to complete a task.
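As a rough sketch of that input-to-action mapping, the hypothetical function below fuses a camera-frame summary and a language instruction into a single motor command. In a real VLA model this mapping is a large neural network; the names and structure here are illustrative assumptions only.

```python
# Hypothetical VLA-style policy sketch; not the team's actual model.
from dataclasses import dataclass

@dataclass
class MotorCommand:
    joint: str            # which actuator to move (wheel, arm, finger, ...)
    delta_radians: float  # how far to move it

def vla_policy(frame_summary: dict, instruction: str) -> MotorCommand:
    """Toy stand-in for a learned vision-language-action mapping."""
    if "pick up" in instruction and frame_summary.get("object_in_view"):
        # Drive the arm toward the detected object.
        return MotorCommand(joint="gripper_arm", delta_radians=0.15)
    # Otherwise keep panning the camera to search the scene.
    return MotorCommand(joint="camera_pan", delta_radians=0.05)

cmd = vla_policy({"object_in_view": True}, "pick up the red block")
```

The point of the sketch is the interface, not the logic: vision and language come in, and a low-level physical action comes out.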
Why Traditional AI Struggles With Simple Tasks
Conventional VLA systems rely heavily on data and trial-and-error learning. If a robot is asked to stack blocks into a tower, it must first analyze the scene, identify each block, and determine how to place them correctly.
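The three steps in the block-stacking example can be written out as a pipeline. This decomposition is an illustrative assumption (the function names are hypothetical); a conventional VLA system must learn the whole mapping end-to-end from data, which is part of why it is so costly.

```python
# Illustrative step-by-step decomposition of the block-stacking task.

def analyze_scene(frame):
    """Segment the camera frame into candidate objects (stubbed here)."""
    return [{"id": i, "size": s} for i, s in enumerate(frame)]

def identify_blocks(objects):
    """Filter detections down to stackable blocks."""
    return [o for o in objects if o["size"] > 0]

def plan_placements(blocks):
    """Order blocks largest-first so the tower is stable."""
    return sorted(blocks, key=lambda b: b["size"], reverse=True)

order = plan_placements(identify_blocks(analyze_scene([2, 5, 3])))
sizes = [b["size"] for b in order]  # largest block goes on the bottom
```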