Brain-Computer Technology Is Accelerating. Will We Soon Be Typing With Our Minds?

There is much excitement surrounding the field of brain-computer interfaces (BCI).

Take, for example, recent headline-grabbing announcements from Neuralink, founded by Elon Musk, which has the long-term goal of helping to “secure humanity’s future as a civilization relative to AI.”

Then, there is Facebook’s development of wearable technology that hopes to achieve “hands-free communication without saying a word.”

Could this actually happen?  

While there is no guarantee that telepathy will ever exist, equally, there is no guarantee that it will not. Meanwhile, companies and organizations are making tremendous advances, and we can expect more effective and widespread use of BCIs as they become more sophisticated. Hands-free control of computers and data entry using the brain alone would represent a turning point for a number of industries, and is seen as probable, not improbable.

Let’s go back to basics for a minute…

What are brain-computer interfaces? 

BCIs, also known as neural interfaces, connect the brain or nervous system to equipment such as digital devices or IT systems. Interfaces placed inside the brain or body are known as internal, invasive, or implanted technologies, as opposed to external, non-invasive, or wearable devices.

As immersive technology continues to advance, we will see the interfaces we use to link the physical and digital worlds virtually disappear. Currently, we associate these experiences with cumbersome headsets or interactive touchscreens. In the future, our environments, clothes, or even contact lenses will be the gateway to a new reality. It’s also possible that our very brainwaves could provide the commands needed to let AI-driven immersive systems know what we want. 

However, BCIs are not just the future—they are the here and now. In fact, the basic building blocks of neural interfaces have been around for years (we already have brain-controlled artificial limbs) and with the amount of investment that neural technology is receiving, innovation is accelerating. 

Interfaces and medicine

Neural interfaces are already widely used in medicine. The cochlear implant, worn by 400,000 people worldwide, is the most extensively used internal interface today; it allows users to experience hearing despite damage to parts of the cochlea or inner ear. Other sensory implants, such as retinal and vestibular implants, are at a much earlier stage of development.

On the external side, one of the most mature interfaces, Functional Electrical Stimulation (FES), has been around since the 1960s and helps people recover motor function. Outside of medicine, external interfaces are increasingly being used to play games, control equipment, and enhance memory, concentration, and physical performance. Internal devices, by contrast, have not been used for non-medical purposes outside of research.

Electroencephalography (EEG) sensors

Beyond medicine, a number of forward-thinking companies are already looking at BCIs, and the gaming world is making significant advances. For example, Valve is exploring the use of brain-computer interfaces to create adaptive gameplay that responds to the emotions or ability of the player. This could be achieved by placing electroencephalography (EEG) sensors in VR headsets.

EEG sensors record brain signals and are one of the most widely used external interfaces. Historically, they have been used in medicine and research, but other industries are now showing interest. For example, automotive companies have already used EEG to analyze indicators of drowsy driving, such as eye-blink rate and yawning, since the electric fields produced by brain activity are a highly effective physiological indicator for assessing vigilance states.
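As a rough illustration of how such a vigilance indicator might be derived (this is a generic sketch, not any automaker's or vendor's actual method; the sampling rate, window length, frequency bands, and the theta/beta ratio itself are assumptions chosen for illustration), a simple drowsiness score can be estimated from the balance of slow-wave to fast-wave power in a short EEG window:

```python
# Generic sketch of an EEG-based drowsiness indicator (illustrative only, not a vendor's method).
# Assumes a single-channel EEG window sampled at 256 Hz; bands and ratio are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed)

def band_power(freqs, psd, low, high):
    """Integrate power spectral density over a frequency band."""
    mask = (freqs >= low) & (freqs < high)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

def drowsiness_index(eeg_window):
    """Ratio of slow (theta) to fast (beta) activity; higher values suggest lower vigilance."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2)
    theta = band_power(freqs, psd, 4.0, 8.0)    # 4-8 Hz, associated with drowsiness
    beta = band_power(freqs, psd, 13.0, 30.0)   # 13-30 Hz, associated with alertness
    return theta / (beta + 1e-12)

# Example: ten seconds of synthetic noise stands in for a real EEG recording.
window = np.random.randn(FS * 10)
print(f"theta/beta index: {drowsiness_index(window):.2f}")
```

In practice, any such index would need to be calibrated per driver and combined with other cues, such as blink rate, before triggering an alert.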

Brain-computer interfaces for the transportation and AEC industries

Trimble and Neurable have partnered to explore the use of brain-computer interfaces for the transportation and AEC industries. The two companies share a vision of using neurotechnology to support digital transformation by providing a bi-directional feedback loop, driving increased safety and productivity. 

Trimble and Neurable will leverage biosignals, such as brain activity combined with eye-tracking data, to improve training efficiency, driver safety, and the safety of high-risk front-line workers, as well as provide insights that augment the benefits of simulation and design evaluation.

Speaking with Aviad Almagor, senior director of Mixed Reality and BCI at Trimble, I learned that the company explores the use of biofeedback to identify and capture client experience during the design evaluation workflow. “The multimodal biofeedback approach fuses virtual reality (VR), electroencephalogram (EEG) and eye-tracking to provide insight into human response and enrich designers’ understanding of the potential impact of their work. The suggested solution enables quantification of the experience as part of an evidence-based design workflow.”
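To make "quantification of the experience" concrete, here is a purely hypothetical sketch of how fused metrics could be rolled into a per-region score during a VR design review. This is not Trimble's or Neurable's implementation; the metric names, weights, and data are invented for illustration.

```python
# Hypothetical fusion of an EEG-derived engagement value with eye-tracking dwell time
# into a single design-evaluation score; all names, weights, and values are invented.
from dataclasses import dataclass

@dataclass
class RegionSample:
    region: str            # area of the virtual design the occupant looked at
    eeg_engagement: float  # normalized 0-1 engagement value from an EEG pipeline (assumed)
    dwell_seconds: float   # total gaze dwell time from eye tracking

def experience_score(sample: RegionSample, max_dwell: float = 30.0) -> float:
    """Blend engagement and normalized dwell time into a single 0-1 score."""
    dwell_norm = min(sample.dwell_seconds / max_dwell, 1.0)
    return 0.6 * sample.eeg_engagement + 0.4 * dwell_norm  # weights are illustrative

samples = [
    RegionSample("lobby", eeg_engagement=0.82, dwell_seconds=21.0),
    RegionSample("stairwell", eeg_engagement=0.35, dwell_seconds=6.5),
]
for s in samples:
    print(f"{s.region}: {experience_score(s):.2f}")
```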

For Trimble, the benefits of using BCI are clear, “Using BCI as part of an evidence-based design process can help designers better understand the impact of their work, define target experiences, and design for affordances to support the occupants’ productivity and wellbeing.” 

Looking to the future, Almagor said, “The disruption we should be preparing for is fusion-based; integration of ubiquitous computing, XR, BCI, and AI. This disruption will completely merge digital with the physical, transform the nature of our experiences and the way we perceive and interact with the world.”

Even though BCIs are progressing rapidly, the brain is a formidable thing to tackle, and many implants require open-brain surgery. Then there are the ethical questions: access to people’s thoughts could be an abuse of human rights, and there is already evidence that neural interfaces could be hacked.

In its neural interfaces report, iHuman, the Royal Society covers current and future applications and explores the potential benefits and risks of these technologies. It proposes that the public be given a clear voice in shaping how the technology is used and regulated over the coming years.

An independent public dialogue exercise commissioned by the Royal Society and conducted by Hopkins Van Mil found strong support for neural interfaces when they enable patients to recover something lost to injury or a medical condition, but less support when the technology is used to enhance functions such as memory, concentration, or physical skills in healthy people.

Despite the potential issues, the development of interfaces that let us combine the intricacies of human thought with the processing might of AI is an amazing advancement for humankind. We just need to make sure the technology is used in the right way, that controls are in place, and that a regulatory framework manages its impact.

This post was written by Solomon Rogers and originally appeared on Forbes.com.