We often think of sleep as a chance to switch off from the outside world, leaving us blissfully ignorant of anything going on around us. But neuroscience research has shown this is a fantasy – we still monitor the environment and respond to particular sounds while we’re sleeping (at least in some stages of sleep) – a fact that will be unsurprising to anyone who has woken up after hearing someone say their name.
Now a study published in Nature Human Behaviour has revealed more about the brain’s surprisingly sophisticated levels of engagement with the outside world during sleep. Not only does the sleeping brain respond to certain words or sounds – it can even select between competing signals, prioritising the one that is more informative.
For obvious reasons it’s a challenge for researchers to figure out what people are paying attention to while they’re asleep. Guillaume Legendre at the École Normale Supérieure in Paris and his colleagues overcame this problem by looking at the changing patterns of their volunteers’ brainwaves using EEG (electroencephalography, which uses scalp electrodes to record the brain’s electrical activity).
The team recruited 24 French participants to complete a series of listening tasks while awake and asleep. In the initial part of the experiment, the team recorded the participants’ EEG signal while they listened to one-minute-long excerpts of speech. Some of these excerpts were taken from real news reports, stories, movies, and Wikipedia entries. Others were from passages of so-called “Jabberwocky” text, which had normal sentence structure but with content that was gibberish (as in Lewis Carroll’s nonsense poem of the same name: “’Twas brillig, and the slithy toves / Did gyre and gimble in the wabe…”).
The team then trained a computer algorithm to learn how the participants’ brainwaves varied according to the audio they’d heard. After this training, the researchers could present the algorithm with a participant’s brainwave recording and it could reconstruct the audio signal that they’d been listening to at the time.
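This kind of "stimulus reconstruction" decoding is often done with a simple linear model. Below is a minimal illustrative sketch, not the authors' actual pipeline: it trains a ridge-regression decoder mapping simulated multichannel EEG to a speech envelope, then scores how well the reconstruction matches held-out data. All the data here are synthetic stand-ins, and the channel count, regularisation strength, and correlation score are assumptions for illustration only.

```python
# Minimal sketch of linear stimulus reconstruction (NOT the study's code):
# learn weights mapping EEG channels -> speech envelope, then correlate the
# reconstructed envelope with the real one on held-out samples.
import numpy as np

rng = np.random.default_rng(0)

def train_decoder(eeg, envelope, alpha=1.0):
    """Closed-form ridge regression: weights from EEG channels to envelope."""
    n_ch = eeg.shape[1]
    return np.linalg.solve(eeg.T @ eeg + alpha * np.eye(n_ch), eeg.T @ envelope)

def reconstruct(eeg, weights):
    """Apply the learned weights to new EEG to recover an envelope estimate."""
    return eeg @ weights

# Synthetic stand-in data: 2000 time samples, 16 "EEG channels" whose
# activity partly tracks a hypothetical speech envelope, plus noise.
n_samples, n_ch = 2000, 16
envelope = rng.standard_normal(n_samples)
mixing = rng.standard_normal(n_ch)
eeg = np.outer(envelope, mixing) + 2.0 * rng.standard_normal((n_samples, n_ch))

# Train on the first 1500 samples, evaluate on the remaining 500.
w = train_decoder(eeg[:1500], envelope[:1500])
recon = reconstruct(eeg[1500:], w)
score = np.corrcoef(recon, envelope[1500:])[0, 1]
print(f"reconstruction correlation on held-out data: {score:.2f}")
```

The key idea is the same as in the study: once the decoder is trained, you can feed it a brainwave recording alone and ask how well the audio it reconstructs matches each candidate stimulus.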
Next, to see how the brain responded to two competing inputs, the researchers analysed the EEG brainwave data when participants heard both the real and nonsense texts simultaneously, one playing in each ear (this started while they were awake and were focused on the meaningful speech, and it continued after they fell asleep).
The researchers’ computer algorithm was able to reconstruct both of the audio signals from the participants’ brainwaves, suggesting that they were processing both forms of speech even while asleep. And crucially, during both wakefulness and sleep, the reconstruction was better for the signal from the meaningful text than the nonsense text, suggesting that the brain had been “amplifying” the meaningful story in some way so that it left clearer traces in the EEG data.
When participants were awake, the meaningful text was reconstructed better in 60.6 per cent of trials; during sleep, this figure dropped to 52.4 per cent. But even this smaller amplification effect was still significantly above the 50 per cent that would be expected by chance if the brain had no longer been able to prioritise the meaningful speech.
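Whether a figure like 52.4 per cent is meaningfully above the 50 per cent chance level depends on how many trials it is based on. The sketch below runs a one-sided exact binomial test of that kind of comparison; the trial count is a hypothetical number chosen for illustration, since the article does not report how many trials the study analysed.

```python
# Illustrative one-sided exact binomial test: is 52.4% of trials favouring
# the meaningful text above the 50% chance level? Trial count is hypothetical.
from math import lgamma, log, exp

def binom_p_one_sided(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), summed in log space for stability."""
    def log_pmf(i):
        return (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                + i * log(p) + (n - i) * log(1 - p))
    return sum(exp(log_pmf(i)) for i in range(k, n + 1))

n_trials = 2000                     # hypothetical total number of sleep trials
k_better = round(0.524 * n_trials)  # 52.4% reconstructed better for real text
p_value = binom_p_one_sided(k_better, n_trials)
print(f"{k_better}/{n_trials} trials above chance, one-sided p = {p_value:.4f}")
```

With this many trials the small effect is statistically significant; with only a few hundred trials the same 52.4 per cent would not be, which is why the trial count matters as much as the percentage itself.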