25 October 2007 to 26 September 2007
There are no dead-ends in data. Everything connects to something, and if anybody tells you otherwise, you should suspect them of hoping you won't figure out the connection they've omitted.  

But we have suffered, for most of humanity's life with data, without good tools for really recognizing the connectivity of all things. We have cut down trees of wood and used them to make trees of data. Trees are full of dead-ends, of narrower and narrower branches. Books are mostly trees. Documents are usually trees. Spreadsheets tend to be trees. Speeches are trees. Trees say "We built 5 solar-power plants this year", and you either trust them, or else you break off the end of the branch and go looking for some other tree this branch could have come from. Trees are ways of telling stories that yearn constantly to end, of telling stories you can circumscribe, and saw through, and burn.  

And stories you can cut down and burn are Evil's favorite medium. Selective partial information is ignorance's fastest friend. They built 5 solar plants, they say. Is that right? And how big were they? And where? And why did they say "we built", past-tense, not "we are now running"? And how many of last year's 5 did they close this year? And how many shoddy coal plants did they bolt together elsewhere, while the PR people were shining the sun in our eyes? There are statements, and then there are facts, and then there is Truth; and Truth is always tied up in the connections.  

Not that we haven't ever tried to fix this, of course. Indices help. Footnotes help. Dictionaries and encyclopedias and catalogs help. Librarians help. Archivists and critics and contrarians and journalists help. Anything helps that lets assertions carry their context, and makes conclusions act always also as beginnings. Human diligence can weave the branches back together a little, knit the trees back into a semblance of the original web of knowledge. But it takes so much effort just to keep from losing what we already knew, effort stolen from time to learn new things, from making connections we didn't already throw away.  

The Web helps, too, by giving us in some big ways the best tools for connection that we've ever had. Now your assertions can be packaged with their context, at least loosely and sometimes, if you make the effort. Now unsupported conclusions can be, if nothing else, terms for the next Google or Wikipedia search. This is more than we had before. It is a little harder for Evil to hide now, harder to lie and get away with it, harder to control the angle from which you don't see the half-truth's frayed ends.  

But these are all still ultimately tenuous triumphs of constant human vigilance. The machines don't care what we say. The machines do not fact-check or cross-reference, of their own volition, and only barely help us when we try to do the work ourselves. The Web ought to be a web, a graph, but mostly it's just more trees. Mostly, any direction you crawl, you keep ending up on the narrowest branches, listening for the crack. All the paths of Truth may exist somewhere, but that doesn't mean you can follow them from any particular here to any specific there.  

And even if linking were thoroughly ubiquitous, and most of the Web weren't SQL dumps occasionally fogged in by tag clouds, this would still be far from enough. The links alone are nowhere near enough, and believing they are is selling out this revolution before it has deposed anything, before it has done much more than make some posters. It is not enough for individual assertions to carry their context. It is not enough for our vocabulary of connection to be reduced to "see also", even if that temporarily seems like an expansion. It is not enough to link the self-aggrandizing press-release about solar plants to the company's web site, and hope you can find their SEC filings under Investor Relations somewhere. It is not enough to link the press-release to the filings, or for your blog-post about their operations in China to make Digg for six hours, or to take down one company or expose one lie. We've built a system that fountains half-truths at an unprecedented speed, and it is nowhere near enough to complete the half-truths one at a time.  

The real revolution in information consists of two fundamental changes, neither of which has really begun yet in anything like the pervasive way it must:  

1. The standard tools and methods for representing and presenting information must understand that everything connects, that "information" is mostly, or maybe exactly, those connections. As easy as it once became to print a document, and easier than it has become to put up web-pages and query-forms and database results-lists, it must become to describe and create and share and augment sets of data in which every connection, from every point in every direction, is inherently present and plainly evident. Not better tools for making links, but tools that understand that the links are already inextricably everywhere.  

2. The standard tools for exploring and consuming and analyzing connected information must move far beyond dealing with the connections one at a time. It is not enough to look up the company that built those plants. It is not enough to look up each of their yearly financial reports, one by one, for however many years you have patience to click. It's time to let the machines actually help us. They've been sitting around mostly wasting their time ever since toasters started flying, and we can't afford that any more. We need to be able to ask "What are the breakdowns of spending by plant-type for all companies that have built solar plants?", and have the machines go do all the clicking and collating and collecting. Otherwise our fancy digital web-pages might as well be illuminated manuscripts in bibliographers' crypts for all the good they do us. Linked pages must give way to linked data even more sweepingly and transformationally than shelved documents have given way to linked pages.  
 

And because we can't afford to wait until machines learn to understand human languages, we will have to begin by speaking to them like machines, like we aren't just hoping they'll magically become us. We will have to shift some of our attention, at least some of us some of the time, from writing sentences to binding fields to actually modeling data, and to modeling the tools for modeling data. From Google to Wikipedia to Freebase, from search terms to query languages to exploration languages, from multimedia to interactive to semantic, from commerce to community to evolving insight. We have not freed ourselves from the tyranny of expertise, we've freed expertise from the obscurity of stacks. Escaping from trees is not escaping from structure, it is freeing structure. It is bringing alive what has been petrified.  

There are no dead-ends in knowledge. Everything we know connects, by definition. We connect it by knowing. We connect. This is what we do, and thus what we must do better, and what we must train and allow our tools to help us do, and the only way Truth ever defeats Evil. Connecting matters. Truths, tools, links, schemata, graph alignment, ontology, semantics, inference: these things matter. The internet matters. This is why the internet matters.
It is possible for even the most preternaturally precocious child to actually miss a diaper from one tenth of an inch away.
Here is a gift, of unspecified value, to the field of set-comparison math: The Empath Coefficient, an alternate measure of the alignment between two sets. Conceptually this is intended as a rough proxy for measuring the degree to which the unseen or impractical-to-measure motivation behind the membership of set A also informs the membership of set B, but the math is what it is, so the next time you find yourself comparing the Cosine, Dice and Tanimoto coefficients, looking for something faster than TF-IDF to make some sense of your world, here's another thing to try. This is the one I used in empath, my recent similarity-analysis of heavy-metal bands, if you want to see lots of examples of it in action.  

At its base, the Empath Coefficient is an asymmetric measure, based on the idea that in a data distribution with some elements that appear in many sets and some that appear in only a few, it is not very interesting to discover that everything is "similar" to the most-popular things. E.g., "People who bought Some Dermatological Diseases of the Domesticated Turtle also bought Harry Potter and the...". In the Empath calculation, then, the size of the Harry Potter set (the one you're comparing) affects the similarity more than the size of the Turtle set (the one you're trying to learn about). I have arrived at a 1:3 weighting through experimenting with a small number of data-sets, and do not pretend to offer any abstract mathematical justification for this ratio, so if you want to parameterize the second-set weight and call that the Npath Coefficient, go ahead.  

Where the Dice Coefficient, then, divides the size of the overlap by the average size of the two sets (call A the size of the first set, B the size of the second set, and V the size of the overlap):

V/((A+B)/2)
or
2V/(A+B)

the core of the Empath Coefficient adjusts this to:

V/((A+3B)/4)
or
4V/(A+3B)

By itself, though, that calculation will still be uninformatively dominated by small overlaps between small sets, so I further discount the similarities based on the overlap size. Like this:

(1-1/(V+1)) * V/((A+3B)/4)
or
4V(1-1/(V+1))/(A+3B)

So if the overlap size (V) is only 1, the core score is multiplied by 1/2 [1-1/(1+1)], if it's 2 the core score is multiplied by 2/3 [1-1/(2+1)], etc. And then, for good measure, I parameterize the whole thing to allow the assertion of a minimum overlap size, M, which goes into the adjustment numerator like this:

4V(1-M/(V+1))/(A+3B)

This way the sample-size penalties are automatically calibrated to the threshold, and below the threshold the scores actually go negative. You can obviously overlay a threshold on the other coefficients in pre- or post-processing, but I think it's much cooler to have the math just take care of it.  



I also sometimes use another simpler asymmetric calculation, the Subset Coefficient, which produces very similar rankings to Empath's for any given A against various Bs (especially if the sets are all large):

(V-1)/B

The concept here is that we take A as stipulated, and then compare B to A's subset of B, again deducting points for small sample-sizes. The biggest disadvantage of Subset is that scores for As of different sizes are not calibrated against each other, so comparing A1/B1 similarity to A2/B2 similarity won't necessarily give you useful results. But sometimes you don't care about that.  

This is the one I used for calculating artist clusters from 2006 music-poll data, where cross-calibration was inane to worry about because the data was so limited to begin with.  



Here, then, are the formulae for all five of these coefficients:  

Cosine: V/sqrt(AB)
Dice: 2V/(A+B)
Tanimoto: V/(A+B-V)
Subset: (V-1)/B
Empath: 4V(1-M/(V+1))/(A+3B)  
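
If it helps to see the arithmetic as code, here is a minimal Python sketch of all five (the function names, and defaulting M to 1, are just choices for this illustration, not anything built into empath):

import math

def cosine(a, b, v):
    # V / sqrt(A*B)
    return v / math.sqrt(a * b)

def dice(a, b, v):
    # 2V / (A+B)
    return 2 * v / (a + b)

def tanimoto(a, b, v):
    # V / (A+B-V)
    return v / (a + b - v)

def subset(a, b, v):
    # (V-1) / B; asymmetric, B is the set being compared
    return (v - 1) / b

def empath(a, b, v, m=1):
    # 4V(1-M/(V+1)) / (A+3B); asymmetric, B weighted 3x,
    # and scores go negative once the overlap V falls below the minimum M
    return 4 * v * (1 - m / (v + 1)) / (a + 3 * b)

# e.g. row 5 of the examples below: A=10, B=5, V=3
# empath(10, 5, 3) -> 0.36, subset(10, 5, 3) -> 0.4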

And here are some example scores and ranks:  

 #    A    B    V   Dice  Rank  Tanimoto  Rank  Cosine  Rank  Subset  Rank  Empath  Rank
 1  100  100  100  1.000     1     1.000     1   1.000     1   0.990     1   0.990     1
 2   10   10   10  1.000     1     1.000     1   1.000     1   0.900     2   0.909     2
 3   10   10    5  0.500     3     0.333     3   0.500     5   0.400     4   0.417     4
 4   10   10    2  0.200    12     0.111    12   0.200    12   0.100    11   0.133    12
 5   10    5    3  0.400     6     0.250     6   0.424     6   0.400     5   0.360     5
 6    5   10    3  0.400     6     0.250     6   0.424     6   0.200     8   0.257     8
 7   10    5    2  0.267    10     0.154    10   0.283    10   0.200     7   0.213    10
 8    5   10    2  0.267    10     0.154    10   0.283    10   0.100    11   0.152    11
 9    6    6    2  0.333     9     0.200     9   0.333     9   0.167     9   0.222     9
10    6    4    2  0.400     6     0.250     6   0.408     8   0.250     6   0.296     6
11    6    2    2  0.500     3     0.333     3   0.577     3   0.500     3   0.444     3
12    2    6    2  0.500     3     0.333     3   0.577     3   0.167     9   0.267     7
 

A few things to note:  

- In 1 & 2, notice that Dice, Tanimoto and Cosine all produce 1.0 scores for congruent sets, no matter what their size. Subset and Empath only approach 1, and give higher scores to larger sets. The idea is that the larger the two sets are, the more unlikely it is that they coincide by chance.  

- 5 & 6, 7 & 8 and 11 & 12 are reversed pairs, so you can see how the two asymmetric calculations handle them.  

- Empath produces the finest granularity of scores, by far, including no ties even within this limited set of examples. Whether this is good or bad for any particular data-set of yours is up to you to decide.  

- Since all of these work with only the set and overlap sizes, none of them take into account the significance of two sets overlapping at some specific element. If you want to probability-weight, to say that sharing a seldom-shared element is worth more than sharing an often-shared element, then look up term frequency -- inverse document frequency, and plan to spend more calculation cycles. Sometimes you need this. (I used tf-idf for comparing music-poll voters, where the set of set-sizes was so small that without taking into account the popularity/obscurity of the albums on which voters overlapped, you couldn't get any interesting numbers at all.)  



There may or may not be some clear mathematical way to assess the fitness of each of these various measurements for a given data-set, based on its connectedness and distribution, but at any rate I am not going to provide one. If you actually have data with overlapping sets whose similarity you're trying to measure, I suggest trying all five, and examining their implications for some corner of your data you personally understand, where you yourself can meta-evaluate the scores and rankings that the math produces. I do not contend that my equations produce more-objective truths than the other ones; only that the stories they tell me about things I know are plausible, and the stories they have told me about things I didn't know have usually proven to be interesting.
My favorite moment in the New England Revolution's 3-2 defeat of FC Dallas for the 2007 U.S. Open Cup is not unheralded rookie Wells Thompson beating semi-heralded international Adrian Serioux to the ball and slipping it past almost-semi-heralded international Dario Sala for what ends up being the game-winning goal. It is not once-unheralded-rookie Pat Noonan's no-look back-flick into Thompson's trailing run. It is not once-unheralded-rookie Taylor Twellman's pinpoint cross to Noonan's feet. It is before that, as Twellman and Noonan work forward, and Noonan's pass back towards Twellman goes a little wide. As he veers to chase it down, Twellman gives one of his little chugging acceleration moves, a tiny but unmistakable physical manifestation of his personality. I've watched him do this for years. I can recognize him out of the corner of my eye on a tiny TV across a crowded room, just from how he runs. Just from how he steps as he runs.  

This is the Revolution's first trophy, after losing three MLS Cups and one prior Open Cup, all in overtime or worse. They are my team because I live here, that's how sports fandom mainly works. But I care about them, not just support them, because Steve Nicol runs the team with deliberate atavistic moral clarity. The Revs develop players. Of the 12 players who appeared in this victory, 8 were Revolution draft picks, 1 was a Revolution discovery player, 2 of the remaining 3 were acquired in trades before Nicol took over, and the last one (Matt Reis) was acquired in an off-season trade before Nicol's first season even started. Of the 5 other field-players on the bench last night, even, 3 are Nicol draft-picks and the other 2 are Revs discoveries. As is Shalrie Joseph, suspended for a red-card picked up in the semi-final during an altercation with, ironically enough, yet another Revolution draft-pick now playing in the USL. Even the Revs' misfortunes are products of their own dedication.  

Arguably other teams, using more opportunistic methods, have acquired better players. Several of them have acquired more trophies. But none of them, I think, are more coherently themselves. None of them can hold up a trophy and know that they earned it, as a self-contained organization, this completely.  

Sometimes, as a sports fan, you get to be happy. Much more rarely, you get to be proud.


It is a small, powerful thing to rescue small truths hidden in seas of numbers. It is an even smaller and unfathomably deeper joy to hover, enraptured, in the countless endless instants between when some tiny thing happens to her for the first time, and when she shrieks with the joy of a universe expanding.
The Deciblog just published Justin Foley's reply to my implication that he botched his analysis of first letters of heavy-metal band names. [Read those if you want the rest of this to make any sense, not that I'm saying you need to want that...]  

Foley cc'ed a bunch of other people in the actual email, and in an ensuing thread that got well underway before I noticed it in my spam filter (which wouldn't have happened if I'd had the good sense to put all Southern Lord label personnel in my Address Book proactively), someone beat me to taking statistical issue with Foley's idea that my 50,000+ EM-derived sample-size was "too large", but agreed that in the abstract some sort of weighting scheme could account for the idea that Metallica earns M more points than some unknown band called The Austerity Program earns for A (or, in Foley's original analysis, T).  

To all of which I said:  



Weighting is easy. Let's say that a band only counts if somebody has actually bothered to write a review of one of their releases, and we'll weight them by the number of releases that have reviews. This method counts 6778 of EM's artists, who have 14057 releases between them.  

Here are the percentages from the whole sample (All), the smaller sample unweighted (SU), and the smaller sample weighted (SW):

? All SU SW
# 0.3 0.4 0.3
A 9.1 9.8 9.8
B 5.9 6.2 6.2
C 6.3 6.4 6.0
D 8.9 8.1 8.3
E 4.9 4.6 4.3
F 3.6 3.7 3.1
G 3.0 3.5 3.4
H 3.9 3.9 3.7
I 3.7 3.5 3.7
J 0.6 0.6 0.8
K 2.2 2.3 2.6
L 3.1 3.0 2.8
M 7.4 6.8 8.2
N 4.2 4.0 4.0
O 2.3 2.4 2.6
P 4.0 3.7 3.6
Q 0.2 0.2 0.3
R 3.3 2.9 3.2
S 10.8 10.6 10.5
T 4.5 4.6 4.4
U 1.3 1.3 1.2
V 2.7 2.8 2.8
W 2.7 3.3 2.8
X 0.3 0.4 0.4
Y 0.2 0.3 0.3
Z 0.7 0.7 0.5


As you see, both restricting the sample and weighting do make small differences in the percentages, but S still wins, and D is still only in third.  
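
(In case the shape of the weighting is unclear, here is a rough Python sketch of the idea, not the actual tool; assume, hypothetically, a list of (band_name, reviewed_release_count) pairs extracted from the EM data:)

from collections import Counter

def first_letter_percentages(bands):
    # bands: iterable of (name, reviewed_release_count) pairs
    weights = Counter()
    total = 0
    for name, releases in bands:
        first = name[0].upper()
        if not first.isalpha():
            first = '#'             # lump digits and punctuation together
        weights[first] += releases  # weight each band by its reviewed releases
        total += releases
    return {letter: 100.0 * count / total for letter, count in sorted(weights.items())}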

It's also easy to rigorously calculate the most metal of all names, in essentially exactly the way [FH] suggests. Using only the smaller sample, we can build up the name by at each position taking the most common letter (again weighting each band name by the number of reviewed releases) among the names which match what we have so far, working towards a goal length obtained in the same weighted-average fashion. This produces this incremental search result:  

goal length: 10
searching: [] 6778 partial matches
searching: [s] 718 partial matches
searching: [sa] 122 partial matches
searching: [sac] 23 partial matches
searching: [sacr] 23 partial matches
searching: [sacri] 9 partial matches
searching: [sacrif] 5 partial matches
searching: [sacrifi] 5 partial matches
searching: [sacrific] 5 partial matches
searching: [sacrifici] 4 partial matches
searching: [sacrificia] 3 partial matches  

I submit that when Daree Eeee and the mighty Sacrificia tour together, Daree Eeee will be going on first, and carrying their own mangy amps off the stage when they're done with their 3 crappy songs...  

glenn  
 

PS: I most definitely did not type in any numbers by hand.
PPS: Excel is a fine tool for lots of things. Not *these* things, though.
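
The incremental search in that email is just a greedy weighted prefix walk, by the way; here is a rough Python sketch of the idea (not the actual code; again assume hypothetical (band_name, reviewed_release_count) pairs):

def most_metal_name(bands):
    # bands: iterable of (name, reviewed_release_count) pairs
    names = [(name.lower(), weight) for name, weight in bands]
    total = sum(weight for _, weight in names)
    # the goal length is the weighted average of the name lengths
    goal = round(sum(len(name) * weight for name, weight in names) / total)
    prefix = ''
    while len(prefix) < goal:
        counts = {}
        for name, weight in names:
            if name.startswith(prefix) and len(name) > len(prefix):
                counts[name[len(prefix)]] = counts.get(name[len(prefix)], 0) + weight
        if not counts:
            break
        prefix += max(counts, key=counts.get)  # most common (weighted) next letter
    return prefix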



The aforementioned FH then clarified the less-rigorous most-metal algorithm he had in mind, which was also easy to produce:  



It's more or less just as easy to do it that way, considering only the weighted likelihood of a given letter at a given position with a given preceding character.  

searching: [] 6778 candidates
searching: [s] 718 candidates
searching: [sa] 1109 candidates
searching: [sar] 870 candidates
searching: [sara] 533 candidates
searching: [saran] 450 candidates
searching: [saran ] 521 candidates
searching: [saran o] 419 candidates
searching: [saran or] 271 candidates
searching: [saran ore] 270 candidates
searching: [saran orer] 182 candidates  

I think Saran Orer get a guitar tech and some sandwiches, and go on after Daree Eeee, but they're still playing for people who are there to hail Sacrificia.
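
That variant builds the name one letter at a time from weighted position-plus-preceding-character counts instead of full prefixes; a rough sketch, under the same hypothetical input:

def most_metal_name_by_position(bands, goal=10):
    # at each position, pick the most common (weighted) letter among all names
    # that have any letter there following the previously chosen character
    names = [(name.lower(), weight) for name, weight in bands]
    prefix = ''
    while len(prefix) < goal:
        pos, prev = len(prefix), prefix[-1:]
        counts = {}
        for name, weight in names:
            if len(name) > pos and (pos == 0 or name[pos - 1] == prev):
                counts[name[pos]] = counts.get(name[pos], 0) + weight
        if not counts:
            break
        prefix += max(counts, key=counts.get)
    return prefix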



I hope everything is clear now, as I'm way overdue to get back to posting pictures of my daughter...  
 

[Discussion, if you can bear the thought, here on vF.]
And for completeness, here are the top bands by average rating across all releases, counting only the bands that have reviews from at least 10 different reviewers.  

 #  Artist                 Reviewers  Average  Spread
 1  Repulsion                     11    96.43   3.064
 2  Esoteric (UK)                 13    96.2    3.544
 3  Gorguts                       15    95.43   4.03
 4  Lykathea Aflame               12    94.6    4.363
 5  Atheist                       12    94.56   4.272
 6  Solitude Aeturnus             10    93.9    5.485
 7  Sacramentum                   11    93.7    6.067
 8  Disembowelment                10    93.43   3.959
 9  Deeds of Flesh                12    93.33   5.375
10  The Axis of Perdition         10    93.33   4.607
11  Martyr (Can)                  11    93.29   4.399
12  Cult of Luna                  10    92.56   6.735
13  Persuader                     11    92.4    5.886
14  Katharsis (Ger)               10    92.0    5.715
15  Novembre                      12    91.86   5.436
16  Vintersorg                    21    91.67   6.968
17  Demilich                      15    91.58   7.522
18  Saint Vitus                   13    91.36   6.526
19  Belphegor (Aut)               15    90.94   6.571
20  Manticora                     11    90.64   7.889
21  Windir                        21    90.64   6.986
22  Agent Steel                   15    90.2    6.002
23  Negură Bunget                 10    90.17  10.123
24  Pentagram (US)                10    90.13   6.827
25  Maudlin of the Well           11    90.08   8.558
26  Deströyer 666                 15    90.07   7.676
 

Vintersorg and Windir are the only bands to get an average above 90 with 20 or more reviewers. So clearly those are the greatest bands in all of heavy metal.  

The worst metal band in the world is Apocalypse, who got an average review of 7.0 from 14 reviewers. Dishonorable mention to Six Feet Under, the only band with at least 4 releases and 10 reviewers who averaged below 50 (49.91 from 47 reviewers).
And here are the 25 least consistent:  

 #  Artist               Spread  Average
 1  Sepultura            25.022    67.55
 2  In Flames            21.621    60.74
 3  Megadeth             19.333    71.54
 4  Krieg                18.455    71.19
 5  Deicide              18.007    74.15
 6  Deathspell Omega     17.925    83.93
 7  Metallica            17.863    69.16
 8  Virgin Steele        17.747    74.5
 9  Six Feet Under (US)  17.601    55.15
10  Dissection (Swe)     17.275    70.67
11  Sentenced            16.626    66.79
12  Moonspell            16.312    78.28
13  Nuclear Assault      16.234    72.04
14  Mayhem (Nor)         15.754    67.69
15  Machine Head (US)    15.453    55.06
16  Within Temptation    15.379    61.25
17  Slayer (US)          14.537    73.97
18  Children of Bodom    13.943    77.63
19  Black Label Society  13.937    75.05
20  Pantera              13.921    69.97
21  Celtic Frost         13.717    73.4
22  Cannibal Corpse      13.476    74.89
23  Motörhead            13.319    78.23
24  Danzig               13.037    80.0
25  Pain of Salvation    12.934    87.77
 

Most of these follow the "great once, crap now" pattern (I think we can now officially call this "Sepulturding"), which makes one wonder whether developing a fan-base is really worth the bother in the end. Deathspell Omega deserve a special note: if they'd had the sense to release Infernal Battles under a different name, their other 4 albums would give them a standard deviation of 1.66 on an average of 92.86, and we could have a very obscure statistical argument over whether that means they are in fact even greater than Fates Warning.
My analytical tools make various otherwise-elusive questions easy to answer, so while I'm playing with heavy-metal data, here's another thing I wondered about: which bands have the narrowest and widest ranges of ratings? To answer this meaningfully I counted only releases that have 4 or more reviews, and only bands that have 4 or more of these releases and at least 10 different reviewers. For these I then averaged the ratings for each such release, and ran standard deviations on the sets of averages. So a low standard deviation means there's some consensus that the quality of the band's output is consistent. High means consensus that the quality varies widely.  
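
In code terms the filtering and the spread calculation look roughly like this (a sketch of the idea, not my actual tools; assume a hypothetical mapping from each band to a list of (ratings, reviewer_ids) pairs, one per release):

from statistics import mean, stdev

def consistency(bands):
    # bands: dict of band -> list of (ratings, reviewer_ids) pairs, one per release
    results = {}
    for band, releases in bands.items():
        rated = [(ratings, reviewers) for ratings, reviewers in releases
                 if len(ratings) >= 4]            # only releases with 4+ reviews
        all_reviewers = set()
        for _, reviewers in rated:
            all_reviewers |= set(reviewers)
        if len(rated) >= 4 and len(all_reviewers) >= 10:   # 4+ such releases, 10+ reviewers
            release_averages = [mean(ratings) for ratings, _ in rated]
            results[band] = (stdev(release_averages),      # "Spread"
                             mean(release_averages))       # "Average"
    return results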

Here are the 25 most consistent. "Spread" is the standard deviation, "Average" is the average rating of the releases used in the calculation.  

 #  Artist            Spread  Average
 1  Coroner            0.908    88.21
 2  Helstar            1.455    90.54
 3  Moonsorrow         1.676    89.98
 4  Dark Angel (US)    1.767    82.15
 5  Candlemass         1.842    89.78
 6  Lamb of God        1.845    68.5
 7  Obituary           2.004    85.32
 8  Type O Negative    2.035    89.16
 9  Accept             2.193    88.06
10  Agent Steel        2.479    90.49
11  Fates Warning      2.531    93.36
12  Alice in Chains    2.538    88.83
13  Iron Savior        3.025    88.25
14  Falconer           3.083    84.42
15  Therion (Swe)      3.159    90.38
16  Sodom              3.294    83.4
17  Kamelot            3.463    90.52
18  Gorgoroth          3.496    84.71
19  Judas Iscariot     3.602    89.03
20  Bolt Thrower       3.652    88.31
21  Suffocation (US)   3.701    86.48
22  Angra              3.758    88.63
23  Enslaved (Nor)     3.926    88.85
24  Vader              4.162    85.78
25  Bal-Sagoth         4.249    89.9
 

I sense a hastily-assembled cash-in Coroner boxset in our future. I think this also means that Fates Warning is the most consistently great band in all of heavy metal. So now we know. And Lamb of God gets some sort of weird prize for being the most consistently mediocre.
Site contents published by glenn mcdonald under a Creative Commons BY/NC/ND License except where otherwise noted.