An anonymous reader quotes a report from Ars Technica: The nervous system does an astonishing job of tracking sensory information, and does so using signals that would drive many computer scientists insane: a noisy stream of activity spikes that may be transmitted to hundreds of additional neurons, where they are integrated with similar spike trains coming from still other neurons. Now, researchers have used spiking circuitry to build an artificial robotic skin, adopting some of the principles of how signals from our sensory neurons are transmitted and integrated. While the system relies on a few decidedly non-neural features, it has the advantage that chips already exist that can run neural networks using spiking signals, which would allow this system to integrate smoothly with energy-efficient hardware running AI-based control software. [...]

These trains of spikes can convey information in four ways: through the shape of an individual pulse, through its magnitude, through the length of the spike, and through the frequency of the spikes. Spike frequency is the most commonly used means of conveying information in biological systems, and the researchers use it to convey the pressure experienced by a sensor. The remaining forms of information are used to create something akin to a bar code that identifies which sensor a reading came from. In addition to registering pressure, the researchers had each sensor send an "I'm still here" signal at regular intervals; failure to receive it would be an indication that something has gone wrong with that sensor.

The spiking signals allow the next layer of the system to identify any pressure being experienced by the skin, as well as where it originated. This layer can also do basic evaluation of the sensory input: "Pressure-initiated raw pulses from the pulse generator accumulated in the signal cache center until a predefined pain threshold is surpassed, activating a pain signal." This allows the equivalent of basic reflex reactions that don't involve higher-level control systems. For example, the researchers set up a robotic arm covered with their artificial skin and got it to move whenever it experienced pressure intense enough to cause damage. The second layer also combines and filters signals from the skin before sending the information on to the arm's controller, which is the equivalent of the brain in this setup. Through that pathway, the same system caused a robotic face to change expressions based on how much pressure the arm was sensing. [...]

The skin is designed to be assembled from a collection of segments that snap together using magnetic interlocks. These automatically link up any necessary wiring, and each segment of skin broadcasts a unique identity code. So, if the system identifies damage, it's relatively easy for an operator to pop out the damaged segment, replace it with fresh hardware, and then update the data that links the new segment's ID to its location. The researchers call their development a neuromorphic robotic e-skin, or NRE-skin. "Neuromorphic" as a term is a bit vague, with some people using it to mean technology that directly follows the principles used by the nervous system. That's definitely not this skin. Instead, it uses "neuromorphic" far more loosely, with the operation of the nervous system serving as an inspiration for the design. The findings have been published in the journal PNAS.
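The article doesn't spell out how the encoding works at the signal level, but rate coding (pressure as spike frequency) combined with a per-spike identity pattern is straightforward to sketch. The Python example below is purely illustrative: the function names, the `max_rate` parameter, and the pulse-width "bar code" scheme are assumptions, not details from the paper.

```python
import numpy as np

def encode_reading(pressure, sensor_barcode, window=0.1, max_rate=200.0):
    """Illustrative rate coding: map a normalized pressure value (0-1)
    to a spike train whose frequency is proportional to pressure.

    sensor_barcode is a hypothetical per-sensor bit pattern; here each
    spike's pulse width carries one bit (wide = 1, narrow = 0), loosely
    mimicking the "bar code" identity scheme described in the article.
    """
    rate = pressure * max_rate                 # spikes per second
    n_spikes = int(rate * window)              # spikes in this window
    # Evenly spaced spike times across the window (a regular rate code)
    times = np.linspace(0, window, n_spikes, endpoint=False)
    # Cycle the barcode bits across successive spikes
    widths = [2e-4 if sensor_barcode[i % len(sensor_barcode)] else 1e-4
              for i in range(n_spikes)]
    return list(zip(times, widths))            # (time, pulse width) pairs

def decode(spikes, window=0.1, max_rate=200.0):
    """Recover pressure from spike frequency and identity bits from widths."""
    pressure = (len(spikes) / window) / max_rate
    bits = [1 if width > 1.5e-4 else 0 for _, width in spikes]
    return pressure, bits

spikes = encode_reading(0.5, sensor_barcode=[1, 0, 1, 1])
print(decode(spikes))  # ~0.5 pressure plus the repeating identity bits
```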
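The "I'm still here" signal is essentially a heartbeat/watchdog pattern. Here's a minimal sketch of how a receiving layer might flag sensors whose keep-alive pulses stop arriving; the class, segment IDs, and timing constants are invented for illustration.

```python
import time

HEARTBEAT_PERIOD = 0.05          # assumed interval between keep-alive pulses
TIMEOUT = 3 * HEARTBEAT_PERIOD   # tolerate a few missed beats before flagging

class SensorWatchdog:
    """Track the last heartbeat from each sensor ID and report dropouts."""

    def __init__(self):
        self.last_seen = {}

    def heartbeat(self, sensor_id):
        self.last_seen[sensor_id] = time.monotonic()

    def failed_sensors(self):
        now = time.monotonic()
        return [sid for sid, t in self.last_seen.items() if now - t > TIMEOUT]

watchdog = SensorWatchdog()
watchdog.heartbeat("segment-A3")
time.sleep(0.2)                    # no further beats arrive
print(watchdog.failed_sensors())   # ['segment-A3']
```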
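The quoted pain mechanism, in which raw pulses accumulate in a "signal cache center" until a predefined threshold trips a pain signal, behaves like a leaky accumulator. The leak factor, threshold value, and reflex callback below are assumptions for illustration; the paper's exact dynamics aren't given in the article.

```python
class PainAccumulator:
    """Leaky accumulator: pressure pulses build up until a predefined
    pain threshold is crossed, firing a reflex locally without involving
    the higher-level controller. Parameters are illustrative."""

    def __init__(self, threshold=5.0, leak=0.9, on_pain=None):
        self.level = 0.0
        self.threshold = threshold
        self.leak = leak            # fraction of the cached signal retained each tick
        self.on_pain = on_pain or (lambda: print("pain signal -> reflex: move arm"))

    def tick(self, pulse_magnitude=0.0):
        self.level = self.level * self.leak + pulse_magnitude
        if self.level >= self.threshold:
            self.on_pain()          # reflex fires without the "brain"
            self.level = 0.0        # reset after the pain signal

acc = PainAccumulator()
for _ in range(20):
    acc.tick(pulse_magnitude=1.0)   # sustained pressure eventually trips the reflex
```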
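Finally, the hot-swap workflow, where an operator replaces a damaged segment and re-links the new segment's broadcast ID to its physical location, amounts to updating a small registry. A hypothetical sketch (the class, IDs, and coordinate scheme are made up):

```python
class SkinMap:
    """Map broadcast segment IDs to physical slots on the robot, so a
    swapped-in replacement segment can take over the old segment's slot.
    Illustrative only; the paper's actual data model isn't described."""

    def __init__(self):
        self.location_of = {}               # segment ID -> (row, col) slot

    def register(self, segment_id, location):
        self.location_of[segment_id] = location

    def replace(self, old_id, new_id):
        # Drop the damaged segment and bind its slot to the new segment's ID.
        self.location_of[new_id] = self.location_of.pop(old_id)

skin = SkinMap()
skin.register("seg-0042", location=(3, 1))
skin.replace("seg-0042", "seg-0117")        # after snapping in the new segment
print(skin.location_of)                     # {'seg-0117': (3, 1)}
```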
Read more of this story at Slashdot.