An ordinary act for a bird, a big step for science. Scientists managed to recreate a bird’s song solely by reading its brain activity. The vocalizations were reproduced by a machine in the original pitch, volume and timbre. The work could be a first step towards vocal prostheses for people who have lost the ability to speak. The results were published in the scientific journal Cell on June 16th.
A simple neural network
According to the study summary, songbirds – birds with harmonious, melodic calls – are able to learn and sing complex songs. The scientists behind the study found that a model of the birds’ vocal organ can synthesize the song the animals normally produce from just a few control parameters. That is how the team synthesized song driven by the bird’s neurons: with a simple neural network that maps neural activity onto those parameters.
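To make the idea concrete, here is a minimal, purely illustrative sketch of that mapping: a tiny feedforward network that turns a short window of neural firing rates into a few control parameters (say, air-sac pressure and syringeal tension). All sizes, names and the random weights are assumptions for illustration, not the study’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper):
N_NEURONS, WINDOW, HIDDEN, N_PARAMS = 32, 5, 16, 2

# Random weights stand in for a trained model.
W1 = rng.normal(scale=0.1, size=(HIDDEN, N_NEURONS * WINDOW))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_PARAMS, HIDDEN))
b2 = np.zeros(N_PARAMS)

def control_params(rate_window: np.ndarray) -> np.ndarray:
    """Map a (N_NEURONS, WINDOW) block of firing rates to control parameters."""
    x = rate_window.reshape(-1)      # flatten the time window
    h = np.tanh(W1 @ x + b1)         # one hidden layer: "simple" by design
    return W2 @ h + b2               # e.g. [pressure, tension]

# Fake spike counts standing in for recorded neural activity.
rates = rng.poisson(lam=3.0, size=(N_NEURONS, WINDOW)).astype(float)
params = control_params(rates)
print(params.shape)  # (2,)
```

The point of such a small model is that it is cheap enough to run in real time, which matters for a prosthesis.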
Graphical summary of the songbird experiment, published in the journal Cell. Source: Cell/Reproduction
In a conversation on the discussion forum Reddit, the study’s second author, posting under the nickname “brokenarcher”, explained details of the article. He was responsible for the architecture of the neural network and said that, although his team’s findings are promising, applying the technology to human beings involves a much higher level of complexity.
Brain-machine interfaces (BMIs) hold promise for people with impaired motor function and serve as powerful tools for studying the learning of motor skills. But, the scientists said, speech prostheses still lack a comparable animal model and remain more limited in terms of neural interface technology, brain coverage and behavioral study design.
Songbirds were chosen as a model for complex vocal behavior because, the team said, their songs share a number of unique similarities with human speech. The study drew on what is known about the multiple mechanisms and circuits underlying the learning, execution and maintenance of vocal motor skill, and on evidence that the biomechanics of song production in birds is similar to that of human speech and of some non-human primates.
How the study was conducted
The bird used in the study was the zebra finch. Source: Pixabay
The team developed a birdsong synthesizer driven by mappings of neural activity, using simple computational methods that can run in real time. Activity was recorded from electrode arrays implanted in the birds’ premotor nucleus. A generative biomechanical model of the vocal organ (the syrinx) then allowed the synthesis of vocalizations matching the bird’s own song.
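The synthesis step can be pictured with a toy example: time-varying control parameters (a pressure envelope and a tension sweep) drive a simple oscillator standing in for the biomechanical model of the syrinx. The real model integrates nonlinear differential equations; the linear tension-to-frequency mapping and all numbers below are assumptions for illustration only.

```python
import numpy as np

SR = 22050                                    # sample rate, Hz
DUR = 0.5                                     # duration, seconds
t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)

# Toy control trajectories (hypothetical, not the model's real inputs):
pressure = np.hanning(t.size)                 # 0..1 amplitude envelope
tension = np.linspace(0.2, 1.0, t.size)       # arbitrary units

# Assumed linear mapping from tension to fundamental frequency.
f0 = 800.0 + 2200.0 * tension                 # Hz, roughly songbird-like range
phase = 2 * np.pi * np.cumsum(f0) / SR        # integrate frequency into phase
waveform = pressure * np.sin(phase)           # louder where pressure is high

print(waveform.shape)  # (11025,)
```

A real-time prosthesis would feed the network’s output from each neural window into a model like this, sample block by sample block, instead of using precomputed trajectories.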
According to the article, the results provide a proof of concept that complex, high-dimensional natural behaviors can be synthesized directly from ongoing neural activity; i.e., the study could inspire similar prosthetic approaches in other species, leveraging knowledge of the peripheral systems and the structures involved in producing the behavior.
How to replicate the study in humans?
In the case of the birds, neural activity was recorded by electrodes implanted directly in their brains, which – according to the author – is highly invasive and would never be approved for human trials. The scientist believes that, for humans, it would be necessary to use electroencephalography (EEG) or another non-invasive technology to map the brain dynamics of speech.
The researcher also noted that birdsong is relatively simple compared to human speech, especially in the species used in the study, the zebra finch, whose song, he said, follows a highly stereotyped pattern. The team is now working on bird species with more complex songs.