Translating Thoughts using AI

by Siha Hoque


Artificial intelligence is the engineering of computer systems to replicate the human thinking process - essentially, making machines that can behave like humans. You may have heard of some examples, such as AI chess opponents. AI holds a lot of potential in our society today, particularly in medicine; it is likely to bring large advances in the treatment of injuries and illnesses, and to improve the quality of life of those affected by them.

One of the most recent of these developments is an AI device that can quite literally read minds. Scientists at the University of Technology Sydney have trained an AI model that can roughly reconstruct a continuous stream of text from the silent thoughts of a person listening to a story or reading one in their head. It is called the DeWave model.

This device’s capabilities rely on the electrical impulses within our bodies. Our nerves constantly use electricity to communicate with one another, allowing us to be mobile, thinking, feeling organisms. Our cells generate this electricity using ions of elements found within our bodies, such as sodium and calcium, each carrying a specific charge. Cells control the flow of these ions in and out through their membranes; generally, more positive ions are kept on the outside of a cell and more negative ones on the inside, creating an imbalance of charge across the membrane. The flow is controlled by proteins known as ion channels, which form small openings in the membrane that positive ions can pass through when the cell is stimulated. The resulting change in charge inside the cell can trigger an action potential - an electrical pulse. These action potentials, fired in specific patterns, produce our movements and thoughts.
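To get a feel for this, here is a tiny toy sketch in Python of a cell that "fires" once enough positive charge has flowed in to push it past a threshold. The numbers are invented for illustration and are not real physiological values.

```python
# Toy model of a neuron firing: positive charge flows in through ion
# channels when the cell is stimulated, and once the inside of the cell
# becomes positive enough, an action potential (electrical pulse) fires.
# All numbers here are illustrative, not real physiological values.

resting_potential = -70.0   # inside of the cell is negative at rest (mV)
threshold = -55.0           # voltage at which an action potential fires (mV)

def stimulate(voltage, charge_in):
    """Let positive ions in, raising the voltage inside the cell."""
    return voltage + charge_in

voltage = resting_potential
for pulse in [5.0, 4.0, 8.0]:           # three small stimulations
    voltage = stimulate(voltage, pulse)
    if voltage >= threshold:
        print(f"Action potential fired at {voltage:.1f} mV!")
        voltage = resting_potential     # cell resets after firing
    else:
        print(f"Voltage now {voltage:.1f} mV - below threshold, no pulse yet")
```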


A key component of the DeWave model is a cap of sorts that monitors the electrical activity of the brain by recording an electroencephalogram, or EEG for short. During an EEG test, many electrodes - small metal discs - are placed on the scalp; they pick up the electrical signals produced by the brain cells' activity, which are then amplified and plotted as waves on a graph. There is a normal range for the waves that appear on the graph, which indicates good health, but EEGs are often used to test for disorders such as epilepsy. During a seizure, a burst of abnormal electrical signals interrupts the normal ones in the brain; on an EEG graph, this shows up as prominent spikes in the otherwise fairly uniform waves.
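As a rough illustration of what spotting those spikes might look like in software, here is a short Python sketch that flags samples sitting far outside the normal range of a made-up recording - a crude stand-in for what clinicians read off the graph.

```python
import numpy as np

# Illustrative only: a fake one-second "EEG" trace with a few abnormal
# spikes added, and a simple check that flags samples far outside the
# normal range of the recording.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 10.0, size=256)   # normal background activity (microvolts)
signal[[60, 130, 200]] += 120.0            # three abnormal spikes

mean, std = signal.mean(), signal.std()
spikes = np.where(np.abs(signal - mean) > 4 * std)[0]
print("Possible abnormal spikes at samples:", spikes)
```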

An AI model is a program that can receive and analyse data from large collections of examples and recognise specific patterns within them. After being 'trained' in this way, a model can make predictions when prompted, using the patterns it has learned to draw conclusions about data it has not necessarily seen before. These data sets have to be very wide and varied for the model to be accurate, as biased data can result in biased predictions from the AI. In the case of the DeWave model, the AI is trained to spot patterns in brainwaves and the stimuli they correspond to.
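The idea of learning patterns from labelled examples and then predicting an unseen one can be shown at a very small scale. The Python sketch below uses invented numbers and labels; real brainwave data would have thousands of measurements rather than two.

```python
import numpy as np

# Each row is a (made-up) pair of measurements; the label says what the
# person was looking at when it was recorded.
training_data = np.array([[1.0, 0.9], [1.2, 1.1],    # examples of "cat"
                          [5.0, 4.8], [5.3, 5.1]])   # examples of "dog"
labels = np.array(["cat", "cat", "dog", "dog"])

# "Training": work out the average pattern for each label.
centroids = {lbl: training_data[labels == lbl].mean(axis=0)
             for lbl in np.unique(labels)}

# "Prediction": a new, unseen measurement is matched to the closest
# average pattern - the same idea, at a tiny scale, as recognising
# which brainwave pattern a thought belongs to.
new_sample = np.array([4.9, 5.0])
prediction = min(centroids, key=lambda lbl: np.linalg.norm(new_sample - centroids[lbl]))
print("Predicted label:", prediction)   # -> "dog"
```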


To have their thoughts decoded, participants silently read passages while wearing a snug cap of sensors that recorded their brain's electrical activity as an EEG. The AI model, DeWave, then converts the EEG waves into a code, which can be matched to words and phrases. It translates from brain waves to text, having been trained to recognise which combinations of these signals correspond to which words. It currently achieves around 60% accuracy, and the scientists intend to keep training it until it reaches 90%.
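The inner workings of DeWave are far more sophisticated than this, but the general idea - chopping a recording into chunks, matching each chunk to the nearest entry in a learned 'codebook' of patterns, and mapping code sequences to words - can be sketched roughly as follows. Everything here, from the codebook to the word list, is invented for illustration.

```python
import numpy as np

# Rough sketch of EEG-to-text decoding: split the recording into chunks,
# match each chunk to the nearest entry in a learned "codebook" of
# patterns, then map the resulting code sequence to words. The codebook,
# the word table and the signal are all invented; the real DeWave model
# learns these from data.

rng = np.random.default_rng(1)
codebook = rng.normal(size=(4, 8))         # 4 learned patterns, 8 samples each
code_to_word = {0: "the", 1: "cat", 2: "sat", 3: "down"}

def decode(eeg, codebook, code_to_word, chunk=8):
    words = []
    for start in range(0, len(eeg) - chunk + 1, chunk):
        segment = eeg[start:start + chunk]
        # pick the codebook pattern closest to this chunk of signal
        code = int(np.argmin(np.linalg.norm(codebook - segment, axis=1)))
        words.append(code_to_word[code])
    return " ".join(words)

# Fake "brain recording": the patterns for "the cat sat" with noise on top.
fake_eeg = np.concatenate([codebook[0], codebook[1], codebook[2]]) + rng.normal(0, 0.1, 24)
print(decode(fake_eeg, codebook, code_to_word))   # likely prints: "the cat sat"
```

In the real model, the codebook and its mapping to words are not written out by hand like this; they are learned from large amounts of paired brainwave and text data.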


Another type of AI with the potential to 'read minds' is MinD-Vis, developed by researchers in Singapore and currently in the testing phase. MinD-Vis interprets the brain activity of a person looking at an image and recreates the image they saw. It uses an MRI (magnetic resonance imaging) scan to produce an image of brain activity. MRI works by generating a large magnetic field, which causes the atoms in an organ to align with it. A brief pulse of radio waves knocks them out of alignment for a short time; as they return to alignment they release energy, which can be translated into an image. Participants are given an MRI scan while looking at over 1,000 images. Their brain activity is recorded and sent to an AI encoder, which translates the specific patterns it picks up into a form that an image-generating AI model called Stable Diffusion can understand. Stable Diffusion then recreates the image the brain activity corresponded to. The result is not an exact copy of the original image, but it depicts the same objects and events.
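At a very high level, this pipeline has two stages: an encoder that turns brain-scan activity into a compact pattern, and an image generator (Stable Diffusion in the real system) that turns that pattern into a picture. The Python sketch below uses placeholder functions to show the shape of that pipeline; it is not the actual MinD-Vis code.

```python
import numpy as np

# Hypothetical sketch of the two-stage idea: an encoder compresses
# brain-scan activity into a short feature vector, and an image generator
# turns those features into a picture. Both functions are placeholders.

def encode_brain_activity(scan: np.ndarray) -> np.ndarray:
    """Stand-in for the trained encoder: compress the scan into a short
    vector of features the image generator can understand."""
    return scan.reshape(16, -1).mean(axis=1)   # crude 16-number summary

def generate_image(features: np.ndarray) -> np.ndarray:
    """Stand-in for the image generator: in reality this would be a
    diffusion model conditioned on the features; here we just return a
    small grid of pixels derived from them."""
    return np.outer(features, features)

fake_scan = np.random.default_rng(2).normal(size=4096)   # pretend MRI activity
features = encode_brain_activity(fake_scan)
image = generate_image(features)
print("Feature vector:", np.round(features, 2))
print("Generated 'image' shape:", image.shape)            # (16, 16)
```

In the real system, both stages are large neural networks trained on the participants' recorded scans, rather than the simple averaging used here.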

Many people are left unable to speak after sustaining paralysing injuries. The DeWave model and other translators like it that are being developed could restore their ability to communicate almost normally with others; they are relatively non-invasive and have the potential to become far more accurate.
