Scientists Use AI to Read Mouse’s Brain and Reconstruct Movie Clip It’s Watching

Scientists used artificial intelligence (AI) to decode a mouse’s brain signals as it was watching a film and accurately reproduce the movie clip it was seeing.

A team of researchers from École Polytechnique Fédérale de Lausanne (EPFL) developed an AI tool that can interpret a rodent's brain signals in real time and then reconstruct the video that the mouse is watching.

The scientists' machine-learning algorithm, named "CEBRA," was trained to map neural activity to specific frames in videos. The algorithm could then predict and reconstruct the movie clip that a mouse is looking at.

In a video shared by EPFL, the scientists reveal how a mouse was shown a 1960s black-and-white film clip of a man running to a car and opening its trunk.

A separate screen shows what CEBRA thinks the mouse is looking at, and the AI's reconstructed footage is almost identical to the original clip, although the video does glitch intermittently.

From Data into a Film

In a study published in Nature today, the scientists reveal that they measured and recorded the rodents' brain activity using electrode probes inserted into the visual cortex region of their brains, as well as optical probes in mice genetically engineered so that their neurons glow green when firing and transmitting information.

The researchers trained CEBRA using movies watched by mice and their real-time brain activity. Using this data, CEBRA learned which brain signals are associated with which frames of a particular movie.
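In code, that training step looks roughly like the sketch below, which assumes the open-source cebra Python package published by the EPFL team. The array shapes, hyperparameter values, and the random placeholder data are illustrative assumptions, not the settings used in the study.

```python
# A minimal sketch of the training step, assuming the open-source "cebra"
# package (pip install cebra). Shapes, hyperparameters, and the random
# placeholder data below are illustrative, not the study's actual settings.
import numpy as np
import cebra

# Placeholder neural recording: rows are time bins, columns are neurons.
neural_activity = np.random.rand(9000, 120).astype(np.float32)

# For each time bin, the index of the movie frame on screen at that moment
# (here: 900 frames, each shown for 10 consecutive time bins).
frame_index = np.repeat(np.arange(900), 10).astype(np.float32).reshape(-1, 1)

# CEBRA learns a low-dimensional embedding in which activity recorded during
# the same part of the movie ends up close together.
model = cebra.CEBRA(
    output_dimension=8,        # embedding size (assumed value)
    batch_size=512,
    max_iterations=2000,       # kept small for illustration
    conditional="time_delta",  # use the auxiliary label to shape the embedding
)

# Fit with the frame index as a continuous auxiliary label, then project the
# training activity into the learned embedding space.
model.fit(neural_activity, frame_index)
embedding = model.transform(neural_activity)
print(embedding.shape)  # (9000, 8)
```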

The machine-learning algorithm was then given new brain activity it had not encountered before, from a mouse watching a slightly different version of the movie clip.

From that, CEBRA was able to predict, in real time, which frame the mouse had been watching, and the researchers turned that data into a film of its own.
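Conceptually, that decoding step amounts to projecting the unseen brain activity into the learned embedding and looking up the nearest training frames. The sketch below continues from the training sketch above; the k-nearest-neighbour decoder and the variable names are illustrative stand-ins rather than the researchers' exact pipeline.

```python
# Continues from the training sketch above. A k-nearest-neighbour lookup is
# used here as a simple stand-in decoder; the EPFL pipeline may differ.
from sklearn.neighbors import KNeighborsClassifier

# Held-out brain activity the model has never seen, from a repeat viewing.
new_activity = np.random.rand(9000, 120).astype(np.float32)

# Project the new activity into the embedding space learned during training.
new_embedding = model.transform(new_activity)

# Assign each new time bin the frame index of its nearest training neighbours.
decoder = KNeighborsClassifier(n_neighbors=5)
decoder.fit(embedding, frame_index.ravel().astype(int))
predicted_frames = decoder.predict(new_embedding)

# "Turning the data into a film" is then just replaying the predicted frames
# of the original movie in time order, e.g.:
# reconstructed_clip = movie_frames[predicted_frames]  # (time, height, width)
```

When the decoded frame indices follow the true playback order, the stitched-together video closely resembles the clip the mouse actually saw, which is what the EPFL demonstration video shows.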

This is not the first time researchers have decoded brain signals to generate images. Last month, PetaPixel reported on researchers at Osaka University in Japan who were able to reconstruct high-resolution and highly accurate images from brain activity by using the popular Stable Diffusion model.

Meanwhile, scientists at Radboud University in the Netherlands developed “mind-reading” technology that can translate a person’s brainwaves into photographic images.


Image credits: Header photo via YouTube/EPFL