This AI Can Beat You At Rock-Paper-Scissors

Rock-paper-scissors is usually a game of psychology, reverse psychology, reverse-reverse psychology, and chance. But what if a computer could understand you well enough to win every time? A team at Hokkaido University and the TDK Corporation (of cassette-tape fame), both based in Japan, has designed a chip that can do just that.

Okay, the chip does not read your mind. It uses a sensor placed on your wrist to measure your motion, and learns which motions represent paper, scissors, or rock. The amazing thing is, once it’s trained on your particular gestures, the chip can run the calculation predicting what you’ll do in the time it takes you to say “shoot,” allowing it to defeat you in real time.

The technique behind this feat is called reservoir computing, a machine-learning method that uses a complex dynamical system to extract meaningful features from time-series data. The idea dates back to the 1990s, and the growth of artificial intelligence has brought renewed interest in it, thanks to its comparatively low power requirements and its potential for fast training and inference.
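
To make that concrete, below is a minimal software sketch of one common form of reservoir computing, an echo state network: a fixed, randomly wired recurrent network is driven by a time series, and only a simple linear readout is trained. Everything here, from the reservoir size to the toy sine-wave prediction task, is an illustrative assumption, not the team's design.

```python
import numpy as np

# Minimal echo-state-network sketch (toy sizes, not the paper's values).
rng = np.random.default_rng(0)
n_reservoir = 100

# The input and recurrent weights are random and fixed -- never trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, 1))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable

def run_reservoir(inputs):
    """Drive the reservoir with a time series and record its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.array([u]) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave from its history.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
states = run_reservoir(u[:-1])
targets = u[1:]

# Training touches only the linear readout: one ridge-regression solve.
ridge = 1e-6
W_out = np.linalg.solve(
    states.T @ states + ridge * np.eye(n_reservoir),
    states.T @ targets,
)
print("train MSE:", np.mean((states @ W_out - targets) ** 2))
```

Because the reservoir's internal weights never change, training collapses to a single linear fit, which is where the approach's low power draw and fast training come from.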

Power consumption was the research team’s first target, says Tomoyuki Sasaki, section head and senior manager at TDK, who worked on the device. “The second target is the latency issue. In the case of the edge AI, latency is a huge problem.”

To minimize the energy use and latency of their setup, the team developed a CMOS hardware implementation of an analog reservoir computing circuit. The team presented a demo at the Combined Exhibition of Advanced Technologies conference in Chiba, Japan, in October, and is presenting its paper at the International Conference on Rebooting Computing in San Diego, California, this week.

What is reservoir computing?

A reservoir computer is best understood in contrast to traditional neural networks, the basic architecture underlying much of AI today.

A neural network consists of artificial neurons arranged in layers. Each layer can be thought of as a column of neurons, with each neuron in one column connected to every neuron in the next column via weighted artificial synapses. Data enters the first column and propagates from left to right, layer by layer, until it reaches the final column.
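
In code, that left-to-right flow is just a chain of matrix multiplications, each followed by a nonlinearity. The sketch below uses arbitrary toy layer sizes of our own choosing:

```python
import numpy as np

# A toy feedforward pass: data flows strictly forward, with no loops.
rng = np.random.default_rng(1)
sizes = [4, 8, 8, 3]  # input, two hidden "columns," output (illustrative)

# One weight matrix of artificial synapses per pair of adjacent layers.
weights = [rng.normal(0, 0.5, size=(m, n)) for m, n in zip(sizes, sizes[1:])]

def forward(x):
    """Propagate one input through every layer, front to back."""
    for W in weights:
        # Weighted sums through the synapses, then a nonlinearity; no
        # connection ever points backward or sideways.
        x = np.tanh(x @ W)
    return x

print(forward(rng.normal(size=(1, 4))))  # activity of the final column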

During training, the output of the final layer is compared with the correct answer, and that error is used to adjust the weights of all the synapses, this time working backward layer by layer in a process called backpropagation.
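
Here is a hedged sketch of that backward pass, rebuilding the same toy network so the example stands alone; the squared-error loss, tanh derivative, and learning rate are all illustrative assumptions, not details from the article.

```python
import numpy as np

# Toy backpropagation with a squared-error loss (illustrative throughout).
rng = np.random.default_rng(1)
sizes = [4, 8, 8, 3]
weights = [rng.normal(0, 0.5, size=(m, n)) for m, n in zip(sizes, sizes[1:])]

def forward(x):
    """Forward pass that keeps every layer's activations for the backward pass."""
    acts = [x]
    for W in weights:
        x = np.tanh(x @ W)
        acts.append(x)
    return acts

def backprop_step(x, y_true, lr=0.1):
    """One training step: compare the output to the answer, then walk backward."""
    acts = forward(x)
    delta = (acts[-1] - y_true) * (1 - acts[-1] ** 2)  # output error * tanh'
    for i in reversed(range(len(weights))):
        grad = acts[i].T @ delta              # this layer's weight adjustments
        if i > 0:
            # Push the error one layer back toward the input.
            delta = (delta @ weights[i].T) * (1 - acts[i] ** 2)
        weights[i] -= lr * grad               # every synapse gets updated

x = rng.normal(size=(1, 4))
y = np.array([[0.5, -0.2, 0.1]])
for _ in range(200):
    backprop_step(x, y)
print(forward(x)[-1])  # the output drifts toward y as weights are adjusted
```

Note that every weight matrix in every layer is touched on every step; scale that to the billions of weights in modern models and the time and power costs described below follow.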

This setup has two important features. First, the data only travels one way—forward. There are no loops. Second, all of the weights connecting any pair of neurons are adjusted during the training process. This architecture has proven extremely effective and flexible, but it is also costly; adjusting what sometimes ends up being billions of weights takes both time and power.
