The aim of the research was to build on studies showing how computer science and artificial intelligence can take brain research in new directions. The study demonstrates how a self-learning algorithm can decode human brain signals, as measured by an electroencephalogram (EEG). The research falls within the field of brain-machine interfaces, which establish a direct communication pathway between an enhanced or wired brain and an external device. Such research is directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
What is interesting about the research is that, whereas other studies have attempted to decode what happens when a person thinks about walking and then proceeds to walk, the new study focuses on hand and foot movements that the test subjects merely imagined. Information about how a brain signal changes as a person imagines making a gesture was collected and used to develop an algorithm, which was later incorporated into an artificial intelligence system.
According to lead researcher Professor Robin Tibor Schirrmeister: “Our software is based on brain-inspired models that have proven to be most helpful to decode various natural signals such as phonetic sounds. The system learns to recognize and differentiate between certain behavioral patterns from various movements as it goes along.”
This works because the algorithm models the connections between nerve cells, in which electrical signals arriving at the synapses are directed from a cell's protuberances (the dendrites) to the cell's core and back again.
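The idea of learning to tell imagined movements apart from EEG can be illustrated with a toy sketch. The example below is not the researchers' actual network (the published work uses deep convolutional neural networks); it is a minimal, self-contained stand-in on synthetic data, using a fixed temporal filter followed by log-variance pooling and a nearest-class-mean classifier. All names, the 10 Hz rhythm, and the data itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "EEG": trials x channels x time samples.
# Class 0 carries an extra 10 Hz rhythm on channel 0; class 1 on channel 1.
fs, n_trials, n_chans, n_times = 100, 40, 4, 200
t = np.arange(n_times) / fs
X = rng.normal(0.0, 1.0, (2 * n_trials, n_chans, n_times))
y = np.repeat([0, 1], n_trials)
for i, label in enumerate(y):
    X[i, label] += 3 * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))

def features(trial, kernel):
    """Temporal convolution per channel, then log-variance pooling --
    the filter-then-pool pipeline that an EEG classifier can learn."""
    filtered = np.array([np.convolve(ch, kernel, mode="valid") for ch in trial])
    return np.log(filtered.var(axis=1) + 1e-8)

# A fixed 10 Hz sinusoidal kernel stands in for a learned temporal filter.
kernel = np.sin(2 * np.pi * 10 * np.arange(20) / fs)

F = np.array([features(trial, kernel) for trial in X])

# Nearest-class-mean classifier on the log-variance features.
means = np.array([F[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((F[:, None, :] - means[None]) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In a real system, the temporal filter, the spatial weighting across channels, and the classifier would all be learned jointly from recorded EEG rather than fixed by hand; the sketch only shows why band-specific variance per channel is a useful signature of imagined movement.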
The developed algorithm was found to work rapidly, and it has been used in several tests to assess predetermined brain signal characteristics. The aim was to deepen understanding of the diverse intersections between human and machine and to better develop artificial intelligence for medical science, particularly for interpreting brain scans. For example, the algorithm could be developed to assist with the early detection of epileptic seizures or to improve communication options for severely paralyzed patients. This is the type of technology likely to appeal to a range of medical device technology companies as well as academic research centers.
The research is published in the journal Human Brain Mapping, under the title “Deep learning with convolutional neural networks for EEG decoding and visualization.”