双语新闻:大脑处理声音的方式,取决于音量以及是否专注

英语作文    发布时间:2023-06-07  

We now have a good explanation for how our brain keeps track of a conversation while we are in a loud, crowded room, a discovery that could improve hearing aids.

对于当我们身处嘈杂、拥挤的房间时大脑如何追踪一段谈话,我们现在有了一个很好的解释,这一发现或有助于改进助听器。

The general idea for speech perception is that only the voice of the person you are paying attention to gets processed by the brain, says Vinay Raghavan at Columbia University in New York. “But my issue with that idea is that when someone shouts in a crowded place, we don’t ignore it because we’re focused on the person we’re talking to, we still pick it up.”

纽约哥伦比亚大学(Columbia University)的维奈·拉加万(Vinay Raghavan)说,关于言语感知的一般观点是,只有你正在关注的那个人的声音才会被大脑处理。“但我对这个观点的疑问在于,当有人在拥挤的地方大喊时,我们并不会因为正专注于与自己交谈的人就忽略它,我们仍然会听到它。”

To better understand how we process multiple voices, Raghavan and his colleagues implanted electrodes into the brains of seven people to monitor the organ’s activity while they underwent surgery for epilepsy. The participants, who were awake throughout the surgery, listened to a 30-minute audio clip of two voices.

为了更好地理解我们是如何处理多个声音的,拉加万和他的同事在七名参与者接受癫痫手术期间,将电极植入他们的大脑,以监测大脑的活动。参与者在整个手术过程中都保持清醒,他们聆听了一段30分钟、包含两个人声音的音频片段。

During the half-hour period, the participants were repeatedly asked to change their focus between the two voices, one of which belonged to a man and the other to a woman. The voices spoke over each other and were largely the same volume, but, at various points in the clip, one was louder than the other, mimicking the changing volumes of background conversations in a crowded space.

在这半小时里,参与者被反复要求在两个声音之间切换注意力,其中一个声音来自男性,另一个来自女性。两个声音相互交叠,音量大体相同,但在音频片段的不同位置,其中一个声音会比另一个更响,以模拟拥挤空间中背景对话音量的变化。
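
The stimulus design described above can be pictured with a short sketch. The snippet below is a toy illustration in Python, not the researchers' actual stimulus code: it mixes two placeholder talker signals with time-varying gains so that one voice is occasionally louder than the other. The sample rate, segment times, and gain values are all assumptions made for the example.

```python
# Toy sketch of a two-talker mixture with time-varying relative volume.
# Everything here (placeholder signals, sample rate, boost segments, gains)
# is an illustrative assumption; only the overall idea comes from the article.

import numpy as np

fs = 16000                      # assumed audio sample rate in Hz
duration_s = 60                 # short placeholder; the study used a 30-minute clip
n = fs * duration_s

rng = np.random.default_rng(42)
male_voice = 0.1 * rng.standard_normal(n)    # stand-in for the male talker's audio
female_voice = 0.1 * rng.standard_normal(n)  # stand-in for the female talker's audio

# Gains start equal, then each talker is briefly boosted, mimicking the
# shifting loudness of background conversations in a crowded room.
gain_male = np.ones(n)
gain_female = np.ones(n)
gain_male[10 * fs : 20 * fs] = 2.0      # male talker louder from 10 s to 20 s
gain_female[35 * fs : 45 * fs] = 2.0    # female talker louder from 35 s to 45 s

mixture = gain_male * male_voice + gain_female * female_voice
```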

The team then used this brain activity data to produce a model that predicted how the brain processes the quieter and louder voices and how that might differ depending on which voice the participant was asked to focus on.

然后,研究小组利用这些大脑活动数据建立了一个模型,用来预测大脑如何处理较安静和较响亮的声音,以及这种处理会如何随参与者被要求关注的声音不同而变化。
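
To make the modelling idea concrete, here is a minimal sketch of one common approach to this kind of analysis: a regularized linear encoding model that maps a talker's loudness envelope, at several time lags, onto a recorded neural signal and scores how well that talker is encoded. This is an illustration under stated assumptions, not the team's published pipeline; the synthetic data and parameter values are invented for the example.

```python
# Minimal encoding-model sketch: regress a "neural" signal onto time-lagged
# copies of each talker's loudness envelope and compare held-out accuracy.
# All data below are synthetic and all parameters are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 100                        # assumed sampling rate (Hz) of envelopes and neural signal
n = 30 * 60 * fs                # 30 minutes of samples, matching the clip length

env_attended = rng.random(n)    # loudness envelope of the attended talker (synthetic)
env_unattended = rng.random(n)  # loudness envelope of the unattended talker (synthetic)

# Fake neural channel that mostly follows the attended talker, plus noise.
neural = 1.0 * env_attended + 0.3 * env_unattended + 0.5 * rng.standard_normal(n)

def lagged(x, max_lag):
    """Stack time-lagged copies of x (lags 0..max_lag samples) as features."""
    X = np.column_stack([np.roll(x, lag) for lag in range(max_lag + 1)])
    X[:max_lag] = 0.0           # zero rows whose history wrapped around
    return X

def encoding_score(envelope, neural, max_lag=20):
    """Fit a ridge encoding model; return held-out prediction accuracy (r)."""
    X = lagged(envelope, max_lag)
    X_tr, X_te, y_tr, y_te = train_test_split(X, neural, test_size=0.25, shuffle=False)
    pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
    return np.corrcoef(pred, y_te)[0, 1]

print("attended envelope score:  ", encoding_score(env_attended, neural))
print("unattended envelope score:", encoding_score(env_unattended, neural))
```

In this toy setup the attended talker's envelope predicts the signal better than the unattended one, which is the kind of contrast an encoding model of this sort is used to quantify.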

The researchers found that the louder of the two voices was encoded by both the primary auditory cortex, which is thought to be responsible for the conscious perception of sound, and the secondary auditory cortex, responsible for more complex sound processing, even if the participant was told not to focus on the louder voice.

研究人员发现,即使参与者被告知不要把注意力集中在更大的声音上,两种声音中的较大声音也是由初级听觉皮层和次级听觉皮层编码的,初级听觉皮层被认为负责有意识地感知声音,而次级听觉皮层负责更复杂的声音处理。

“This is the first study to show using neuroscience that your brain does encode speech that you’re not paying attention to,” says Raghavan. “It opens the door to understanding how your brain processes things you’re not paying attention to.”

拉加万说:“这是第一项用神经科学证明大脑确实会对你并未加以关注的言语进行编码的研究。”“它为理解大脑如何处理你没有关注的事物打开了一扇门。”

The researchers found that the quieter voice was only processed by the brain, also in the primary and secondary cortices, if they asked the participants to focus on that voice. It then took the brain about 95 milliseconds longer to process this voice as speech compared with when the participants were asked to focus on the louder voice.

研究人员发现,只有当参与者被要求关注较安静的声音时,大脑才会处理这个声音,同样是在初级和次级听觉皮层中。此时,与参与者被要求关注较响的声音时相比,大脑将这个声音作为语音来处理所需的时间要长约95毫秒。
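
One simple way to picture the latency comparison reported here is to estimate, for each attention condition, the lag at which a speech envelope best lines up with the neural response, and then compare the two lags. The sketch below does this with a cross-correlation over synthetic signals; the built-in delays of 150 ms and 245 ms are made-up values chosen only so that the toy example reproduces a difference of about 95 milliseconds, and nothing in it reflects the study's actual analysis.

```python
# Toy latency comparison: find the lag at which an envelope best predicts a
# simulated neural response, for two attention conditions, and take the
# difference. The delays used here are invented for illustration only.

import numpy as np

rng = np.random.default_rng(1)
fs = 1000                           # assumed sampling rate in Hz (1 sample = 1 ms)
n = 60 * fs                         # one minute of synthetic data

def simulate_response(envelope, delay_ms, noise=0.5):
    """Neural response = delayed copy of the envelope plus noise (toy model)."""
    delayed = np.roll(envelope, delay_ms)
    delayed[:delay_ms] = 0.0
    return delayed + noise * rng.standard_normal(len(envelope))

def peak_lag_ms(envelope, response, max_lag_ms=400):
    """Lag (in ms) at which the envelope correlates best with the response."""
    m = len(envelope)
    lags = np.arange(max_lag_ms + 1)
    corrs = [np.corrcoef(envelope[: m - lag], response[lag:])[0, 1] for lag in lags]
    return int(lags[int(np.argmax(corrs))])

envelope = rng.random(n)
resp_attend_loud = simulate_response(envelope, delay_ms=150)   # attending the louder voice
resp_attend_quiet = simulate_response(envelope, delay_ms=245)  # attending the quieter voice

extra = peak_lag_ms(envelope, resp_attend_quiet) - peak_lag_ms(envelope, resp_attend_loud)
print(f"extra delay when attending the quieter voice: ~{extra} ms")
```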

“The findings suggest that the brain likely uses different mechanisms for encoding and representing these two different volumes of voices when there is a background conversation ongoing,” says Raghavan.

拉加万说:“研究结果表明,当有背景对话在进行时,大脑可能会使用不同的机制来编码和表征这两种不同音量的声音。”