Nearly 200 years ago, Henry Wadsworth Longfellow asserted that “music is the universal language of mankind.” In recent years, an international team of researchers has explored music from around the world to determine whether this is true or just a cliché – and a Princetonian has crunched the numbers.
“Our paper finds universal patterns in vocal music – both in the social contexts in which it occurs and in the aural structure of the song itself,” said Dean Knox, a researcher in computational social science and assistant professor of politics at Princeton. “These patterns are consistent across hundreds of small-scale societies. Among other results, we show that machine-learning techniques can reliably recognize the social function of a song – e.g., dance songs, healing songs, love songs, lullabies – without knowing anything about the culture or region that produced the song, based solely on patterns learned from other societies.” Their paper appears today in the journal Science.
The research team included psychologists, anthropologists, biologists, musicians, linguists and other experts from 11 institutions on three continents, including Harvard University, Victoria University of Wellington in New Zealand, the Eastman School of Music at the University of Rochester, the Max Planck Institute for Empirical Aesthetics in Germany, and McGill University in Canada, as well as two political scientists brought on to handle the extraordinary data set: Knox and his graduate school roommate, Christopher Lucas, now an assistant professor of political science at Washington University in St. Louis.
“You might not imagine political scientists being part of a team analyzing music – but here, that’s one of our main jobs,” Lucas said. “Everything we do can be analyzed as data. Twenty years ago in political science, you studied polls and votes. Today we study how politicians speak, audio recordings, how people express emotion. … The tools we have developed to study this in politics also pack a lot of punch when applied elsewhere.”
The 19-person team set out to answer big questions: Is music a cultural universal? If so, which musical qualities overlap across disparate societies? If not, why does it seem so ubiquitous?
To answer these questions, they needed a data set of unprecedented breadth and depth. Over a five-year period, the team hunted down hundreds of recordings in libraries and in the private collections of scientists half a world away.
“We’re so used to being able to find any piece of music we like on the Internet,” said Samuel Mehr, a senior researcher at the Harvard Music Lab and first author of the paper. “But there are thousands and thousands of recordings buried in archives that cannot be accessed online. We weren’t sure what we were going to find: At one point, we came across an unusual-looking call number, asked a Harvard librarian for help, and 20 minutes later she wheeled out a cart with about 20 cases of reel-to-reel recordings of traditional Celtic music.”
The research team ultimately examined 315 societies across the planet, all but six of which are documented in ethnographic materials indexed by the Human Relations Area Files. They collected around 5,000 song descriptions from a subset of 60 cultures spanning 30 distinct geographic regions. For the discography, they collected 118 songs from a total of 86 cultures, again covering 30 geographic regions.
Their deep dive into song led to the Natural History of Song (NHS) Ethnography, for which they coded dozens of variables. Researchers recorded details about singers and audience members, the time of day, the duration of singing, the presence of instruments, and more for thousands of song passages in the ethnographic corpus. The discography was analyzed in four different ways: machine summaries, listener ratings, expert annotations and expert transcriptions.
“It’s a huge project,” Lucas said. “Exceptionally difficult. The data collection was enormous. For the analysis, roughly 10,000 lines of code were written. Dean and I were in there, doing the actual statistical analysis of the paper.” Knox and Lucas compiled and analyzed the data, including nearly 500,000 words gleaned from song descriptions across these 315 societies, coding them so that each society had a median of nearly 50 songs to examine.
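The actual pipeline behind those 10,000 lines of code is far more elaborate, but the basic bookkeeping it describes – grouping song descriptions by society, then summarizing counts such as a per-society median – can be sketched in a few lines. This is a purely illustrative toy: the societies, descriptions and counts below are invented, not the study's data.

```python
from collections import defaultdict
from statistics import median

# Hypothetical (society, song_description) records standing in for the
# ~500,000 words of ethnographic song descriptions in the real corpus.
records = [
    ("Society A", "A lullaby sung at dusk to an infant."),
    ("Society A", "A healing song performed over a patient."),
    ("Society B", "A dance song accompanied by drums."),
    ("Society B", "A love song exchanged between courting partners."),
    ("Society B", "A mourning song sung during a funeral procession."),
]

# Group the descriptions by society.
by_society = defaultdict(list)
for society, description in records:
    by_society[society].append(description)

# Summarize: number of coded songs per society, and the median across societies.
songs_per_society = {s: len(descs) for s, descs in by_society.items()}
median_songs = median(songs_per_society.values())
print(songs_per_society)
print(median_songs)
```

In the toy data above, Society A contributes two descriptions and Society B three, so the per-society median is 2.5; the study's equivalent figure was a median of nearly 50 songs per society.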
Their big questions led to one big answer: Music permeates social life in a similar way all over the world.
“Academics have made broad claims about the universality of music and music-related behavior, but those claims are incredibly difficult to test,” Knox said. “We were working with unstructured audio recordings and textual ethnographic descriptions of songs, not the tidy numerical data most analysts are used to. The question is: how can we empirically evaluate these ideas with complex, messy data? For example, every ethnographer has their own biases about what to describe, so we had to think carefully about the right way to use their accounts to draw statistical conclusions about patterns in music.”
They found that across societies, music is associated with behaviors such as infant care (lullabies), healing, dancing, love, mourning, war, processions and ritual – and that these behaviors do not vary much from one society to another. They also found that songs serving these universal functions tend to share musical characteristics, which Knox and Lucas were able to train a computer to recognize.
“One of our most surprising results is that machines, which have no knowledge of human psychology or music theory, can be trained to recognize lullabies, healing songs, love songs and dance songs, even in cultures they’ve never seen before,” Knox said.
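The key idea in that claim – train on every culture except one, then predict song function in the held-out culture – can be sketched with a toy nearest-centroid classifier. Everything here is an assumption for illustration: the two features (tempo, pitch range), the values and the cultures are invented, and the authors' actual models and features were far richer.

```python
import math
from collections import defaultdict

# Toy feature vectors (tempo, pitch range) with song-function labels,
# grouped by culture. All values are invented for illustration.
data = {
    "Culture A": [((0.20, 0.10), "lullaby"), ((0.90, 0.80), "dance")],
    "Culture B": [((0.30, 0.20), "lullaby"), ((0.80, 0.90), "dance")],
    "Culture C": [((0.25, 0.15), "lullaby"), ((0.85, 0.85), "dance")],
}

def centroids(songs):
    """Mean feature vector for each song function in the training set."""
    groups = defaultdict(list)
    for features, label in songs:
        groups[label].append(features)
    return {
        label: tuple(sum(dim) / len(vecs) for dim in zip(*vecs))
        for label, vecs in groups.items()
    }

def classify(features, cents):
    """Assign the function whose centroid is nearest (Euclidean distance)."""
    return min(cents, key=lambda label: math.dist(features, cents[label]))

# Leave-one-culture-out evaluation: the model never sees the test culture.
correct = total = 0
for held_out in data:
    train = [s for c, songs in data.items() if c != held_out for s in songs]
    cents = centroids(train)
    for features, label in data[held_out]:
        correct += classify(features, cents) == label
        total += 1

print(f"held-out accuracy: {correct}/{total}")
```

Because the held-out culture contributes nothing to training, any accuracy above chance reflects regularities shared across cultures rather than culture-specific cues – the logic behind the study's result.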
For Mehr, the study is a first step towards unlocking the rules governing “musical grammar”. This idea has circulated among music theorists, linguists and music psychologists for decades, but has never been demonstrated across cultures.
“In music theory, tonality is often assumed to be an invention of Western music, but our data raises the controversial possibility that it may be a universal feature of music,” Mehr said. “It raises pressing questions about the structure behind music everywhere – and if and how our minds are designed to make music.”
For Knox, the most exciting part is that machines are learning to decode emotion and tone. “Chris Lucas and I have successfully trained machines to identify even complex human concepts like skepticism, and to learn how and why humans use them the way they do,” he said. “We are getting closer and closer to being able to characterize and analyze human communication in all its richness.”
“Universality and diversity in human song,” by Samuel A. Mehr, Manvir Singh, Dean Knox, Daniel M. Ketter, Daniel Pickens-Jones, S. Atwood, Christopher Lucas, Nori Jacoby, Alena A. Egner, Erin J. Hopkins, Rhea M. Howard, Joshua K. Hartshorne, Mariela V. Jennings, Jan Simson, Constance M. Bainbridge, Steven Pinker, Timothy J. O’Donnell, Max M. Krasnow and Luke Glowacki, appears in the November 22 issue of the journal Science (DOI: 10.1126/science.aax0868). This study was funded in part by the Harvard Data Science Initiative, the National Institutes of Health Director’s Early Independence Award (DP5OD024566), the National Science Foundation Graduate Research Fellowship Program, the Natural Sciences and Engineering Research Council of Canada, and the Microsoft Research Postdoctoral Fellowship Program.
This article includes contributions from Jed Gottlieb at Harvard University, Chuck Finder at Washington University in St. Louis and Liz Fuller-Wright at Princeton University.