Will computers ever match or surpass human-level intelligence — and, if so, how? When the Association for the Advancement of Artificial Intelligence (AAAI), based in Washington DC, asked its members earlier this year whether neural networks — the current star of artificial-intelligence systems — alone will be enough to hit this goal, the vast majority said no. Instead, most said, a heavy dose of an older kind of AI will be needed to get these systems up to par: symbolic AI.
Sometimes called ‘good old-fashioned AI’, symbolic AI is based on formal rules and an encoding of the logical relationships between concepts1. Mathematics is symbolic, for example, as are ‘if–then’ statements and computer coding languages such as Python, along with flow charts or Venn diagrams that map how, say, cats, mammals and animals are conceptually related. Decades ago, symbolic systems were an early front-runner in the AI effort. However, in the early 2010s, they were vastly outpaced by more-flexible neural networks. These machine-learning models excel at learning from vast amounts of data, and underlie large language models (LLMs), as well as chatbots such as ChatGPT.
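To make that idea concrete, here is a minimal sketch of the kind of rule-based reasoning the article describes, written in Python: a handful of hand-written facts plus a single 'if-then' rule from which new conclusions are derived. The facts, the rule and the function names are illustrative assumptions, not taken from any particular symbolic system.

```python
# Explicit, human-readable facts: (subject, relation, object) triples.
facts = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def infer(facts):
    """Apply one if-then rule until nothing new follows:
    if X is_a Y and Y is_a Z, then X is_a Z (transitivity)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(derived):
            for (y2, r2, z) in list(derived):
                if r1 == "is_a" and r2 == "is_a" and y == y2:
                    conclusion = (x, "is_a", z)
                    if conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
    return derived

# The system has never been told that a cat is an animal;
# it derives that conclusion from the rule, and the derivation is inspectable.
print(("cat", "is_a", "animal") in infer(facts))  # True
```

Every conclusion such a system reaches can be traced back to the facts and the rule that produced it, which is what makes symbolic approaches transparent in a way that trained neural networks are not.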
Now, however, the computer-science community is pushing hard for a better and bolder melding of the old and the new. ‘Neurosymbolic AI’ has become the hottest buzzword in town. Brandon Colelough, a computer scientist at the University of Maryland in College Park, has charted the meteoric rise of the concept in academic papers (see ‘Going up and up’). These reveal a spike of interest in neurosymbolic AI that started in around 2021 and shows no sign of slowing down2.
Plenty of researchers are heralding the trend as an escape from what they see as an unhealthy monopoly of neural networks in AI research, and expect the shift to deliver smarter and more reliable AI.
A better melding of these two strategies could lead to artificial general intelligence (AGI): AI that can reason and generalize its knowledge from one situation to another as well as humans do. It might also be useful for high-risk applications, such as military or medical decision-making, says Colelough. Because symbolic AI is transparent and understandable to humans, he says, it doesn’t suffer from the ‘black box’ syndrome that can make neural networks hard to trust.
[Chart 'Going up and up': the number of academic papers on neurosymbolic AI, rising sharply from around 2021. Source: updated from ref. 2]
There are already good examples of neurosymbolic AI, including Google DeepMind’s AlphaGeometry, a system reported last year3 that can reliably solve maths Olympiad problems — questions aimed at talented secondary-school students. But working out how best to combine neural networks and symbolic AI into an all-purpose system is a formidable challenge.
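AlphaGeometry reportedly pairs a neural language model that suggests steps with a symbolic engine that checks them. The toy sketch below shows only that general propose-and-verify pattern: the random 'proposer', the arithmetic 'verifier' and all names are stand-ins invented for illustration, not the AlphaGeometry code.

```python
import random

def neural_propose(problem, n=5):
    # Stand-in for a trained neural model: blindly guesses candidate answers.
    return [random.randint(-10, 10) for _ in range(n)]

def symbolic_verify(problem, candidate):
    # Stand-in for a symbolic checker: tests the candidate exactly
    # against a transparent rule (here, candidate^2 == a^2 + b^2).
    a, b = problem
    return candidate * candidate == a * a + b * b

def solve(problem, rounds=100):
    """Propose-and-verify loop: keep asking the proposer for candidates
    until the symbolic checker accepts one."""
    for _ in range(rounds):
        for cand in neural_propose(problem):
            if symbolic_verify(problem, cand):
                return cand
    return None  # no verified answer found within the budget

print(solve((3, 4)))  # almost always 5 or -5: an answer the checker has certified
```

The division of labour is the point: the neural part supplies guesses that would be hard to enumerate by hand, and the symbolic part guarantees that whatever is returned actually satisfies the stated rules.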
“You’re really architecting this kind of two-headed beast,” says computer scientist William Regli, also at the University of Maryland.
War of words