History of a fusion: brain and machine

The artifice of creating something derives essentially from the animal capacity to transform the world. We are not unique in this. Birds build nests with branches, beavers raise dams around their burrows with logs and mud, spiders weave the void with very fine threads. These transformations configure a new material space, one that endows each species with extended capacities. Nests, beaver dams, and spider webs are inventions that these species exploit for their own evolutionary benefit. Humans began by copying such ideas, but took things much further.

Today we travel the planet in planes and boats, we dive deeper than our lungs allow, and we transform energy to carry it wherever we need it. At some point we saw the opportunity to remedy the ills that afflict us. So we put lenses in front of our eyes to correct vision, implant teeth in our jaws, develop titanium and ceramic hip replacements, and insert pacemakers into our hearts. Seen this way, merging with machines does not seem such a big step. Except, perhaps, when it involves the organ that has made all this creation possible.

It is said that the brain is the most complex, the most challenging organ; the ‘summit’ of a long process that nature began millions of years ago. Understanding it is understanding ourselves. The brain endows us with the ability to receive signals and interpret them, to integrate them into our memory register, to project ourselves mentally, and to act. It is so special that nature ended up keeping it inside a hard, well-sealed case. There is something frightening about the idea of opening the Pandora’s box of our minds.

Understanding the way the brain operates has always seduced us. The first attempts to copy it came from logic and mathematics, which provided the basic tools to conceptualize reasoning. For example, propositional logic, a type of formal logic, allows knowledge to be represented as statements of the form “if A, then B”. These statements can be combined mathematically using logical operators (AND and OR, for example) to build more sophisticated knowledge of the form “if A and B, then C”.

A first step in the invention of artificial reasoning was the automation of logical operators with cogwheels. To implement an AND operator, an output gear rotated only if input gears A and B both did; in an OR operator, it rotated if at least one of the two did. From these first computers until today, the exercise of imitating our most complex organ has been dizzying. Vacuum tubes, transistors, integrated circuits, and microprocessors have allowed the successive development of increasingly compact and powerful computers. With them, mathematical operations gained speed and precision, their use became widespread, and they invaded all areas of life. Yet at the beginning of the 21st century they were still something else entirely, nothing like the complexity of even the simplest biological brains, such as that of a worm or a mouse.
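The cogwheel logic described above can be sketched in a few lines of code: each “gear” becomes a Boolean function, and composing them yields the rule “if A and B, then C” (the function names are illustrative, not from the original machines).

```python
def AND(a: bool, b: bool) -> bool:
    # Output gear turns only if both input gears turn.
    return a and b

def OR(a: bool, b: bool) -> bool:
    # Output gear turns if at least one input gear turns.
    return a or b

def infer_c(a: bool, b: bool) -> bool:
    # "If A and B, then C": C holds exactly when A AND B holds.
    return AND(a, b)

# Truth table for the composed rule.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", infer_c(a, b))
```

However simple, this is the same composition of operators that the first mechanical computers performed with rotating gears.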

An unstoppable imitation game

Classical computers need commands to execute very specific functions; they follow these instructions to the letter, without any ability to adapt or learn from experience. The big turn came with the development of artificial intelligence. In the 1950s and 1960s, researchers began to explore the concept of neural networks: interrelated mathematical equations that mimic the structure of brain circuits. However, due to the limited power and storage capacity of the computers of the time, progress was slow and machine learning remained largely a theoretical concept, confined primarily to research centers.

Around the 1990s, the imitation game began to scale unstoppably. Algorithms were developed that could analyze large amounts of data, allowing machines to recognize patterns and make predictions. Techniques such as cluster analysis, decision trees, and support vector machines enabled a variety of applications. These machine learning strategies identify trends in the data through an iterative comparison process (“if A and B, then C”) that reallocates data points among groups until the groups are as well separated as possible.
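As an illustrative sketch of that iterative reallocation idea (not an algorithm from the article itself), here is a minimal one-dimensional k-means: each point is repeatedly assigned to its nearest group centre, and each centre then moves to the mean of its group, until the groups settle as far apart as the data allows.

```python
def kmeans_1d(points, centres, iterations=10):
    groups = [[] for _ in centres]
    for _ in range(iterations):
        # Assignment step: each point joins the nearest centre's group.
        groups = [[] for _ in centres]
        for p in points:
            nearest = min(range(len(centres)), key=lambda i: abs(p - centres[i]))
            groups[nearest].append(p)
        # Update step: each centre moves to the mean of its group.
        centres = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centres)]
    return centres, groups

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centres, groups = kmeans_1d(data, centres=[0.0, 5.0])
print(centres)   # one centre settles near 1.0, the other near 9.0
```

The same assign-then-update loop, generalized to many dimensions and many groups, underlies much of the pattern recognition this paragraph describes.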

In parallel, the neurosciences were revealing some of the secrets of the brain. How do we represent information? How do we handle it to decide? How do we learn? The new computing and storage power catapulted the development of neuro-inspired algorithms, such as multiple neural processing layers, reinforcement learning, or self-training during sleep-like phases, which have turned the most modern computers into powerful artificial brains. To achieve this, different neural architectures are trained to recognize previously labeled patterns, which they do by modifying the weights of their connections in a learning process. These new machines are capable not only of recognizing text, voice, or images, but also of beating us at chess and other games in which the number of possibilities and strategies is difficult to conceptualize.
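A toy version of that weight-modifying training loop can make it concrete: a single artificial neuron adjusts its connection weights whenever its prediction disagrees with a labeled pattern (here it learns the AND pattern; the data and names are illustrative, not from the article).

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - predicted
            # Learning = modifying the weights in proportion to the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Labeled patterns: the output should be 1 only when both inputs are 1.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])
```

Modern deep networks stack millions of such units in many layers, but the principle, adjusting connection weights against labeled examples, is the one described above.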

Nowadays no one is afraid to talk to devices; communicating with machines feels natural. But what about merging with them? Until now, our inventions have remained separate from our bodies, interacting through interfaces that let us write or dictate what we want.

Brain-machine interfaces are a type of technology that seeks to control electronic devices using one’s own brain activity. At the core of these interfaces is the algorithm that decodes brain activity. The first attempts performed very simple tasks, such as moving a cursor, and were based on electroencephalography signals recorded on the scalp. It soon became clear that accurately decoding movement intention from external brain activity alone was difficult.

States of attention and fatigue, however, can be read in a general way from changes in the rhythms of the electroencephalogram. When we relax with our eyes closed, activity in the visual cortex at the back of the head oscillates at eight to fourteen cycles per second. This rhythm, known as alpha, can help determine a person’s level of cognitive alertness. It is not just about wanting to move a finger; you may also have to pay attention to it. When algorithms are trained to integrate information about mental states (relaxation, anxiety, fatigue) with signals from other external sensors (eye movement, orientation), controllability improves substantially. This has, for example, made it possible to select sequences of letters on a computer screen more reliably by exploiting sustained attention.
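A hedged illustration of that rhythm reading, using synthetic data rather than a real EEG pipeline: we generate a 10-cycle-per-second oscillation like the alpha rhythm, then scan frequencies with a plain discrete Fourier transform to check whether the dominant rhythm falls in the 8–14 Hz alpha band (the sampling rate and band limits are assumptions for the sketch).

```python
import math

RATE = 128          # samples per second (assumed)
DURATION = 2        # seconds of signal
N = RATE * DURATION

# Synthetic "relaxed" signal: a pure 10 Hz oscillation, as in the alpha rhythm.
signal = [math.sin(2 * math.pi * 10 * n / RATE) for n in range(N)]

def power_at(freq_hz, samples):
    # Signal power at one frequency, via the discrete Fourier transform.
    re = sum(s * math.cos(2 * math.pi * freq_hz * n / RATE)
             for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq_hz * n / RATE)
             for n, s in enumerate(samples))
    return (re * re + im * im) / len(samples)

# Find the dominant frequency between 1 and 30 Hz.
dominant = max(range(1, 31), key=lambda f: power_at(f, signal))
in_alpha_band = 8 <= dominant <= 14
print(dominant, in_alpha_band)
```

Real interfaces apply the same band-power idea to noisy multi-channel recordings, often combined with the other sensor signals mentioned above.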

The development of brain-machine interfaces has gone hand in hand with the desire to help people with severe motor disabilities interact with the world around them. In many of these cases, it is possible to access the interior of the brain, using intracranial recordings of the regions of the motor cortex responsible for natural movement. These interfaces work by detecting and translating neural activity into commands that can control other devices, such as a robotic arm, in very precise ways. But these more invasive applications involve piercing the skull, with the risks that this entails. Today we know that the body-brain axis can help us understand much better what we have in mind; it may not be necessary to open the box completely.

The closed loop

The neurosciences are enabling significant progress in the design of new brain-machine interfaces. A brain connected directly to a machine can learn to move external devices using the same mechanisms with which we learn to walk. This process requires a great deal of training and calibration to ensure that the system is accurate and reliable. For this reason, work continues on understanding the neural code: reconstructing how we represent information and our intentions, how we access memory, and how we prioritize decisions in a space of open options.

When we directly connect neural activity to external devices using new-generation computers, the brain closes the loop between intention and action. This reinforcement operates in two equivalent neural circuits: the natural one, which learns through biological mechanisms, and the artificial one, which learns through new neuro-inspired algorithms. This assembly, this fusion between the model and its copy, seeks to establish communication in the most efficient way possible, speaking a common language: the one that operates within our gray matter, and that we keep trying to copy with increasingly refined technologies.

Brain-machine fusion is not so different from that between the pacemaker and the heart, but it is more frightening. It calls into question what has until now remained hidden beneath the skull: the origin of everything we have managed to build. It removes certainties and tacit agreements, shaking the security of a world operated at our will. The time ahead will bring profound debates about who we are, and will change everything we know. It is hard to imagine how far this fusion will take us, just as it was hard to anticipate how far we could go with spinning gears simulating a comparison between A and B.

A hybrid world

The next few years will see spectacular advances in brain-machine fusion. Its performance and accuracy will improve with algorithms that communicate more efficiently with the brain. This basic knowledge can only be obtained through experimentation with animals and humans, accessing the activity of multiple neurons together with precise information on their connectivity and on the mechanisms of learning and memory.

New interfaces are moving beyond the realm of research and becoming accessible to the general public. At stake in this transition is the readjustment and sustainable integration of a technology that should help us live better. Simple devices such as EEG headsets already exist, and we can expect more products and applications in the future, such as integration with virtual and augmented reality to create immersive experiences in education or cultural entertainment.

The clinical applications of the new interfaces will be revolutionary. At the current rate, an enormous impact on medicine is expected, providing new ways of diagnosing and treating neurological disorders such as epilepsy, stroke, or brain metastases, among others.

With the use of these technologies, new ethical problems arise. In a world hybridized with machines, the human being must remain at the center, and the pursuit of the common good the only objective. From the nest we made to shelter ourselves, imitating birds, to the dam built to hold back the waters like beavers, we have traveled a gigantic distance. We continue to build a transformed world, in which we seek to weave, with very fine threads, a fabric that connects our mind with the most complex machines we have invented.
