Across 60 cultures, songs sung in similar social contexts have shared musical features.
No matter where you are, a bop is a bop. Whether a melody makes people get up and dance, soothes their sadness, helps them fall in love, or lulls them to sleep, similar rhythms and tones make music a universal language, as the saying goes. Now, there might be science to back it up.
To better understand the similarities in music that could provide insight into its biological roots, a team of researchers focused on music with lyrics. They started by looking at ethnographic descriptions of music in 315 cultures worldwide, all of which featured vocal music, before analyzing musical recordings from 60 well-documented cultures, according to a study published in the journal Science.
W. Tecumseh Fitch, a cognitive biologist at the University of Vienna who was not involved in the study, writes in a commentary that accompanied the research in Science: "The authors find that not only is music universal (in the sense of existing in all sampled cultures) but also that similar songs are used in similar contexts around the world."
“Music is something that has bedeviled anthropologists and biologists since Darwin,” Luke Glowacki, an anthropologist at Pennsylvania State University and a co-author on the paper, tells the Wall Street Journal’s Robert Lee Hotz. “If there were no underlying principles of the human mind, there would not be these regularities.”
Basically, the team found that humans share a “musical grammar,” explains the study’s lead author Samuel Mehr, a psychologist at Harvard University. He tells Jim Daley at Scientific American, “music is built from similar, simple building blocks the world over.”
The team used a combination of methods—including machine learning, expert musicologists and 30,000 amateur listeners from the United States and India—to analyze a public database of music. In one part of the study, online amateur listeners were asked to categorize random music samples as lullabies, dance songs, healing songs, or love songs. Dance songs were the easiest to identify. In other parts of the study, the music samples were annotated by listeners and transcribed onto a musical staff, the standard form of Western musical notation. When these annotations were fed to a machine-learning classifier, it distinguished the different kinds of songs at least two-thirds of the time.
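To see how annotated features can let a computer sort songs by social context, here is a minimal sketch—not the study's actual pipeline—using a toy nearest-centroid classifier. The feature names (tempo, pitch range) and all numeric values are invented for illustration:

```python
# Toy sketch (not the researchers' actual method): classify songs into
# context categories from simple numeric features, analogous to how the
# study fed listener annotations to a classifier.
# Each training song is a hypothetical (tempo_bpm, pitch_range_semitones) pair.
TRAINING = {
    "dance":   [(128.0, 7.0), (120.0, 9.0), (132.0, 8.0)],
    "lullaby": [(60.0, 4.0), (66.0, 5.0), (58.0, 3.0)],
}

def centroid(points):
    """Mean feature vector of a list of (tempo, pitch range) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(song):
    """Assign the label whose centroid is closest in Euclidean distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(song, c)) ** 0.5
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify((125.0, 8.0)))  # fast, wide-range song -> "dance"
print(classify((62.0, 4.0)))   # slow, narrow-range song -> "lullaby"
```

The point of the sketch is only that once songs are reduced to shared measurable features, even a very simple model can separate contexts—which is why the study's two-thirds accuracy suggests real cross-cultural regularities rather than an artifact of any one culture's music.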
Critics have questioned the use of machine-learning algorithms and Western notation, citing the biases inherent in both.