For the past couple of months the Echo Nest blog has been running a series of posts about the changing attributes of popular music over time: danceability, energy, tempo, loudness, acousticness, valence, mechanism (how rigidly a song's tempo stays consistent), organism (a combination of acoustic instrumentation and non-rigid human tempo-fluidity) and bounciness (spikiness and dynamic variation vs atmospheric density).  

The series was based on what we refer to as my "research". But what this really means is that one night it occurred to me that I had this pile of code for analyzing the aggregate acoustic characteristics of genres (one byproduct of which is Every Noise at Once), and that if I snipped off the part at the beginning that picked sets of songs by genre, I could feed other sets of songs into it and get other analytical slices through the vast Echo Nest database of songs.  

Slicing them by year was pretty obviously interesting. But it was only a few minutes of easy work to also get slices by popularity (which we call hotness) and country, so I did those, too.  

The obvious question about any of these analyses is: how sure are we that they actually mean anything? The question maybe seems less pressing in cases where the math indicates a trend that we already suspect exists from our regular experience of the world, like songs increasing in drum-machineyness over the decades since the invention of the drum machine. It hangs more heavily over non-trend assertions like our claims that danceability, valence and tempo have stayed relatively constant over time.  

The answer is that I'm pretty sure. And the reason is this table:  
 

| #  | metric        | genre | year  | hotness | country | random |
|----|---------------|-------|-------|---------|---------|--------|
| 1  | danceability  | 0.840 | 0.106 | 0.112   | 0.279   | 0.038  |
| 2  | energy        | 0.882 | 0.552 | 0.112   | 0.200   | 0.016  |
| 3  | tempo         | 0.321 | 0.140 | 0.053   | 0.111   | 0.020  |
| 4  | loudness      | 0.897 | 0.474 | 0.125   | 0.183   | 0.020  |
| 5  | acousticness  | 0.958 | 0.713 | 0.161   | 0.242   | 0.019  |
| 6  | valence       | 0.601 | 0.102 | 0.075   | 0.334   | 0.030  |
| 7  | dynamic range | 0.932 | 0.332 | 0.175   | 0.347   | 0.031  |
| 8  | flatness      | 0.681 | 0.319 | 0.143   | 0.168   | 0.020  |
| 9  | beat strength | 0.819 | 0.201 | 0.129   | 0.282   | 0.038  |
| 10 | mechanism     | 0.870 | 0.636 | 0.111   | 0.199   | 0.024  |
| 11 | organism      | 0.961 | 0.746 | 0.137   | 0.225   | 0.023  |
| 12 | bounciness    | 0.911 | 0.274 | 0.159   | 0.329   | 0.033  |
 

This shows the 12 audio metrics I have been measuring, across the four different analytical song-slices and a fifth one in which I fed the same process similarly sized sets of songs that were actually selected at random. The numbers are discrimination scores*, with higher numbers indicating that a given metric is better at explaining the differences within that kind of slice. So the high 0.840 for danceability/genre means that, on the whole, genres tend toward internal consistency in danceability, and that the set of genres covers a relatively wide range of danceability values. The low 0.106 for danceability/year, on the other hand, means that the songs from a given year don't tend to exhibit any consistent pattern of danceability, and the aggregate scores don't vary much from year to year.  

There's like a small picaresque novel in this table, but here are some of the main things I think it suggests:  

- All of these metrics are useful for describing and categorizing musical styles. I picked these 12 for that purpose in the first place, of course, so that's not particularly surprising, but it's confirmation that I haven't been totally wasting my time with something irrelevant.  

- Tempo is by far the weakest of these metrics for characterizing genres. There are a few reliably slower (new age, classical) and faster (trance, house) genres, of course, but there are a lot of genres that allow both fast and slow songs with equanimity, and the average tempos for most genres hover around the average tempo for music in general.  

- The metrics that show the most discriminatory power over time are organism, acousticness and mechanism. This is kind of 2.5 insights, not 3, since my "organism" score is actually a Euclidean combination of acousticness and (inverse) mechanism; there's a sketch of that combination in code after this list. It's not news that popular music has gotten more electrified and quantized over time, of course, but it's minor news that I got computers to reach this conclusion through math, and maybe also news that none of the other trends in music are as quantifiably dramatic.  

- The subjective impressions that music is getting louder and more energetic over time (these two aren't wholly independent, either, but loudness is only a small component of energy) are supported by numbers.  

- None of these metrics show any particularly notable correlation with hotness. That is, popular songs don't tend to be reliably one way or another along any of these dimensions. Which doesn't by any means prove that there isn't a secret formula for the predictable production of pop hits, but it at least doesn't seem to involve a linear combination of these 12 factors.  

- The music produced by people from a country tends to be less distinct on aggregate than the music produced within a genre. That is, maybe, the communities we choose tend to have even stronger identities than the communities into which we are born.  

- That last thing said, there is still discernible variation between and among countries. The most discriminatory metric here turns out to be bounciness, which I'm pleased to see since that's one I sort of made up myself. For the record, the bounciest countries are Jamaica, the Dominican Republic, Senegal, Ghana and Mali, and the least bouncy are Finland, Sweden and Norway. Although Denmark is bouncier than Viet Nam, and Russia is bouncier than Guatemala, so it's not strictly a function of latitude.  

- All the scores for random slices are far lower than the scores for non-random slices. In fact, even the lowest real score (0.053 for tempo/hotness, indicating that popular songs are by and large no faster or slower than unknown or unpopular songs) is higher than the highest random score. So that gives us a baseline confidence that both the metrics and the non-random slices are registering genuine variation. And helps explain why efficient music discovery is a more complicated problem than "pick a random song that exists and see if you like it".  
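
Since a couple of those bullets lean on it, here is roughly what the organism combination looks like in code. The substantive part is just the Euclidean combination of acousticness with inverted mechanism, which is all the text above actually claims; treating both inputs as 0–1 scales and normalizing by √2 are assumptions of this sketch, not necessarily the exact scaling behind the published numbers.

```python
import math

def organism(acousticness, mechanism):
    """Euclidean combination of acousticness and inverted mechanism.

    Assumes both inputs are on 0-1 scales. The 1/sqrt(2) normalization,
    which keeps the output on a 0-1 scale, is a guess at the scaling,
    not necessarily what produced the table above.
    """
    return math.hypot(acousticness, 1.0 - mechanism) / math.sqrt(2)
```

A fully acoustic song with a loose human tempo scores near 1; a fully electronic, rigidly quantized one scores near 0.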
 

* For those of you who care, here's how these discrimination scores work. For each metric, and for each bucket within a slice, I calculated both the mean and the standard deviation across a few thousand relevant songs from Echo Nest data (we have a lot of data, including detailed audio analysis for something like 250 million tracks). The genre slice has more than 700 genres, I used the 64 years from 1950 to 2013, hotness was done in 80 buckets, there were 97 countries for which we had a critical mass of data, and the random slice used 26 random buckets. I then took the standard deviation of the bucket averages within the slice (so a wider spread of averages across the genres/years/whatever means more discrimination) and divided it by the average of the bucket standard deviations (so tighter ranges of values within each genre/year/whatever means more discrimination). I feel like I probably didn't invent this, but I couldn't easily find any evidence of its previous existence online.  
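
In concrete terms, that calculation is a minimal sketch like the following, assuming each slice is just a mapping from bucket label to the list of per-song values of one metric; the Python is illustrative, not the actual pipeline, and the choice of sample vs population standard deviation is a detail of the sketch.

```python
import statistics

def discrimination_score(buckets):
    """Between-bucket spread divided by within-bucket spread.

    buckets: dict mapping a bucket label (a genre, a year, a hotness
    band, a country, or a random bucket) to the list of per-song values
    of one metric for the songs in that bucket.
    """
    # Standard deviation of the bucket averages: a wider spread of
    # averages across the genres/years/whatever means more discrimination.
    between = statistics.stdev(
        statistics.mean(values) for values in buckets.values()
    )
    # Average of the bucket standard deviations: tighter ranges of values
    # within each genre/year/whatever means more discrimination.
    within = statistics.mean(
        statistics.stdev(values) for values in buckets.values()
    )
    return between / within

# Toy check: tight, well-separated buckets score high; buckets that all
# look like the overall population score near zero, which is what the
# random column in the table provides as a baseline.
print(discrimination_score({"a": [0.10, 0.12, 0.11], "b": [0.88, 0.90, 0.91]}))
print(discrimination_score({"a": [0.10, 0.50, 0.90], "b": [0.20, 0.45, 0.85]}))
```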

[And because it seems sad to have a thing about music without some actual music to listen to, I leave you with The Sound of Everything, my autogenerated genre-overview playlist, which has one song for every genre we track, in alphabetical order by genre-name. This grows and changes as the genre map expands, but as of 7 November 2013 it has 756 songs for a total running time of 55:29:28. And that's just one song per genre. Almost any one of these genres could consume your entire life. (Almost. There are some exceptions.)]  

[As of 20 May 2016 it has 1454 songs and lasts 105 hours.]