The History of Music according to the Compact Disc
where a solitary listener reaches uncharted depths of his collection
~
1. One's Own Library – The New Year's Sound
2. More Jazz Masterpieces, More Discography Confusion
3. Organizing Your Jazz Listening with Cook and Morton's Penguin Guide to Jazz
4. Keep It All Unsunken – Summer Stasis
5. Numbered Ratings, Horrorshow
6. Prioritizing Obsessive Listmaking
~
Numbered Ratings, Horrorshow
When I was a teenager, I read the Spin Alternative Record Guide frequently. The way the book worked: ahead of the essay written by one of many contributors about a selected artist was a list of that artist's albums, each rated on a scale of 1 to 10. At that point in my life, I was familiar enough with certain artists' music to respond to these ratings. The Smiths' The Queen Is Dead gets a 10, but Meat Is Murder only a 2? Bold move, Rob Sheffield! With or without such strong takes, the ratings provided a modicum of guidance when exploring an artist's work. I realized, for example, that certain listeners really dislike the later Black Flag albums. Or I noticed that Talking Heads are considered more consistent, more worthy of close attention, compared to, say, their peers Blondie, whose Autoamerican is cruelly given a 3 rating. Three out of 10? That is bad. Rob Sheffield again. To be fair, Sheffield probably enjoyed these challenges, while Jeff Salamon got the easier job of pointing out that Talking Heads kick ass.
The ratings that seemed unjustifiably harsh are not the only parts of the book permanently embossed on my brain. Other reviews provided clues to my young, untutored self about the ideas and attitudes that characterized the listening-lifestyle subcultures called Punk, Indie, Goth, Industrial, etc. For example, Simon Reynolds clutching his pearls about Nick Cave's supposed misogyny, one of many examples I have come across over the years of British Punk-era critics prefiguring "cancel culture" and its inability to distinguish between a speaker-narrator and the artist himself. Another thing that stuck: Jim Greer's curt summation (not endorsement), in his Echo and the Bunnymen entry, of American distaste for much British Rock of this era: "nothing U. K. succeeding the first two Wire albums or the Buzzcocks' Singles Going Steady was worth your American dollars." I loved anything British for a brief part of my teenage-hood, so Greer's exaggerated take corresponded to a harsh reality that I lived every day, surrounded as I was by lunkheads who did not like Suede or Aphex Twin.
The book represented such a diverse selection of personal opinions, critical stances, and historical interpretations that any teenager with intense, yet fleeting, fervor about his cultural preferences could love-hate it. That is, it provided fodder for debate, helped novice listeners feel engaged with the "Alternative Rock" moment that we were living through. Appropriately, the copy that I owned, though never given a prominent place in my bookshelves, got remarkably battered and soiled from regular use, moving from my parents' home where it had been left behind for several years to apartments and shared rental houses.
Over the years, a curious, yet unsurprising, way that the book influenced me is that I felt impelled to rank albums on the same 1-10 scale. When I did so, the images in my mind were the ratings as presented on the pages of the Spin guide, if not the exact ratings. But I kept this practice private: I would quietly, in my head, rate the albums of an artist that I was especially familiar with. I have never written such ratings down, with the sole exception of randomly contributing to the amateur-review behemoth Rate Your Music.
The Spin guide was influential in other, more educational, ways. It covered more than just singer-oriented music, including a significant number of avant-garde Jazz/ experimental artists; this proved to be crucial for me. Growing up in Athens, Georgia, I had met artists and music enthusiasts who had emphasized the significance of the likes of Ornette Coleman, Sun Ra, European free improvisation, John Zorn and the New York "Downtown" scene, and similar groupings. The related entries in the Spin guide were thus like kindling to a fire. Over time, I bristled at the imposition of the "alternative" designation to divide, say, John Coltrane (not included) from artists with less public recognition. This distinction, though explained well by Eric Weisbard in the book's introduction, eventually became a nuisance to me (probably about the umpteenth time I met some doofus in love with Captain Beefheart but greeting any mention of Frank Zappa with awkward silence). But of course no-one was going to claim that the Spin guide was a good primer on Jazz. The guide's inclusion of experimental music reflected genuine interest, while still, to an extent, being mere window dressing. So one did not worry too much about its exclusion of Evan Parker or Keith Jarrett or whomever.
Indeed, the book's numbered ratings for Jazz and similar musics do not quite grab the reader's attention like they do for Rock and other vocal-oriented artists. Accolades, noticeably high or low, for an obscure album merely distract new, uncertain listeners trying to make sense out of an unwieldy discography, whereas a contrary opinion about a Rock album that is well known, easily accessible, and connected to cliquish social scenes in turn easily incites strong emotions. Numbered ratings bring out these emotional responses even more quickly; that is, I see Meat Is Murder get a 2, immediately want to jump to its defense, and dismiss the review (without noticing who wrote it) without reading it closely. That is why I know by heart certain ratings that Rock artists received in the book, but I would not be able to tell you much about what any given contributor had to say about Negativland or Nusrat Fateh Ali Khan. These are, in other words, the tactics of attention-grabbing headlines in tabloid newspapers and of clickbait in the digital realm; as argued in ‘Click Crit’, such tactics are part and parcel of lists of the best-ever albums or songs, as is the Jazz/ experimental tokenism noted above. The Spin guide, with the numbered ratings for individual albums presented first, impels the reader to consider the ratings—disembodied numbers—before reading. Regular reviews in periodicals that sport numbered ratings are not much better in this regard; even if the rating is put at the end of the review, the reader obviously is going to look at it first.
The sensationalistic aspect of numbered ratings is only the most blatant of several problems with them. Of course, the application of ratings to artistic works is awkward, putting a square peg in a round hole; that is a given. The review can only hope the reader acknowledges the difficulty of this task that the reviewer has taken on, generally at the behest of editors and publishers with dollar signs in their eyes. What has become more annoying about most rating systems for me, though, is that they cannot account for a wide range of quality without making the average rating quite low. If I were to give Q: Are We Not Men? A: We Are Devo! a 10, I might give the next three Devo albums each a 7 or 8. Except that, if I were to rate the later albums first, I might want to give the debut album a 12 or 20. Alas, there is no 12 or 20 rating, or 50 or 80. Ratings also fail to capture significant differences in quality among the tracks of an album, unless the reviewer were to rate each track separately, a task hardly ever attempted. If, in a review of Metal Box, the triple 12-inch 45 that served as the second album by John Lydon's Public Image Ltd., the writer emphasizes that three tracks (‘Memories’, ‘Swan Lake’—originally titled ‘Death Disco’, and ‘Poptones’) stand apart as perhaps the greatest-ever sequence of tracks within an album, a mesmerizing melange of otherworldly sounds, especially those made by guitarist Keith Levene, and gut-wrenching vocalizing by Lydon, but then the writer has to give a tawdry rating to the whole album, which is quite a mess (a loveable mess, but still...), the result is unsatisfactory. Do you give it a 10 (or "five stars," the musicians reduced to kindergarteners) to highlight the significance of those three tracks, or an honest 7 reflecting the album's overall quality?
The tendency in recent decades to rate albums more precisely, going to the tenths or hundredths place after the decimal, spurred by Pitchfork's tenths-place ratings and websites calculating the numerical averages of ratings from multiple reviewers and user-generated reviews (such as at the aforementioned Rate Your Music), is supposedly a solution to the clunky imprecision of a whole-number 1-10 scale (or school-style letter grades favored by only a few critics). Yes, it is a solution of sorts; but it reeks of the tongue-in-cheek irony of Nineties Indie scenes even as it has become mainstream, so that few now find the practice of rating a work of art as a 7.6 out of 10 as being crass or gauche, though it is. At the same time, to be fair, no-one seems to care too much about the entire issue; at Rate Your Music, the user can rate on a scale of a half-star to five stars, with half-increments allowed (that is, a 10-point scale); only the averages are allowed greater hundredths-place precision. Are the users there clamoring to be allowed to give ratings to the tenths or hundredths place? For the sake of our sanity, let's assume not. Pitchfork's tenths-place ratings, more simply put, go to an extreme no-one asked for. Strictly speaking, since their scale allows for a rating of 0, and proceeds upward at one-tenth increments to 10, there are 101 potential ratings, an absurd system for a reviewer to use, assuming that the reviewers take the system seriously—likely a wrong assumption.
How then to rate albums without the cheeky pseudo-accuracy of tenths-place ratings or the general sensationalism of numbered ratings? Should someone not getting paid to give numeric ratings even try to take the task seriously? One amateur critic, John McFerrin, certainly does. At his site Reviews of Music, he uses a 17-point rating scale with the differences in quality between the 17 points not necessarily being equal. As suggested by my potential Devo ratings above, I like the direction McFerrin is taking us. It is a bit too mathematical for my tastes, but it at least gets us closer to a rating system that allows for greater gradations in quality.
On the other hand, I am wary of the dominant tendency in contemporary mainstream culture toward high accolades, which McFerrin's system, by his admission, seems to be part of. Plenty of commentators have been lamenting the turn away from strong critical engagement with art (either positive or negative), seen in music in the fawning attention devoted to stars like Taylor Swift or Beyoncé, and to K-Pop music, but also the weird thing that the All-Music Guide does (because being weird is what they do) of giving a large number of albums four or four-and-one-half stars. How can so many albums be so good? What is the point of all the other ratings? See, for example: Parquet Courts: two albums get four-and-one-half stars, three albums get four. The Hold Steady: only one out of nine albums gets less than four stars. Protomartyr: one out of seven albums gets less than four stars. For many older artists, All-Music reviewers seem to give themselves permission to give low ratings when the critics' consensus is already negative: say, Talking Heads' True Stories and Naked. The further one goes back, the lower the ratings can go, especially, again, if older critics did the heavy lifting for them. Poor Whitesnake, with all those two-star ratings. I guess we are just lucky not to be at Amazon, where the average user ratings for all sorts of products always seem to amount to four and one-half. Yes, all four and one-half, everything at the "everything store," in our Twenty-First-Century-America white-and-greige hell.
Before we get too ahead of ourselves, though, there are two good reasons why a particular reviewer, or even a publication or reference guide, would tend toward positive reviews. The biggest reason is that we all want to write about what inspires us. Needless to say, plenty of us in the age of "social media" may have the mistaken impression that harsh, negative takes on an issue inspire us. The result? The online Troll culture of message-board pile-ons, "what-about-ism," and in general histrionic screeds driven by misinformation, petty grievances, or a self-proclaimed mission to defend a class of victims for whom the defender has conveniently chosen to speak instead of said victims speaking. Indeed, mainstream culture circa 2025 gives us little confidence in the common man (ourselves) with regard to serious political and social divisions, let alone criticism about music and movies, as the latter requires putting aside our pet peeves and favored well-worn parroted cant. In addition, and more obviously, because internet media made sound recordings a cheap entertainment option for consumers, the guidance provided by negative reviews is no longer considered necessary. (Snarky mental vomit from Trolls? That, apparently, is.)
For good reason, then, writers might avoid bad reviews entirely, in favor of work that encourages readers to become fellow listeners. That is, we not only want to write about music we like but also to find others to enjoy it with us. A critic known only as "Gabbie," at the blog New Bands for Old Heads, avoids album reviews because apparently she wrote so many of them that she got burnt out. Her piece ‘Album Scores Are Meaningless’ relates her story and provides a good overview of the overall silliness of numeric ratings. John McFerrin's rating system is weighted towards higher-scoring, positive reviews precisely because as an amateur reviewer he writes about music in his own collection that he (appropriately enough) likes to listen to.
As already noted, I have taken a course similar to Gabbie's, without having ever written tons of traditional reviews. I may be an amateur writer, but I support elitism with regard to all forms of art. Books and classical music do not get numeric ratings, so in my view neither should Jazz and popular music—or movies or video games or whatever. Put simply: numeric ratings are stupid, we all seem to know they are stupid, so let's not act stupidly by doing this stupid thing. We will not entirely succeed in doing so, of course, but the goal with regard to numeric ratings should be something like this: friends are out drinking or getting high and, in their blissful commingling of opinions and attitudes, decide to rank the studio albums of King Gizzard and the Lizard Wizard—yes, all of them!— on a scale of 1 to whatever number the band is currently at; that is, if there are 50 albums, the best album gets a score of 50. In other words, they are ranking the albums in order, but in a pointed way, making fun of that band's ridiculous prolificacy and suggesting that, as much as they like the band, their worst albums might deservedly get tagged with a single-digit numeric rating on such a wide scale. Harsh. They take notes on some napkins because the conversation gets frenzied, reaches unexpected depths; and in the morning some consider documenting the conversation, using the soiled napkins. But in the end all involved realize that the conversation should remain as it was, a real-time interaction, its participants likely sloppily spouting opinions that would ultimately embarrass them.
But what would I do if I experimented with giving numeric ratings? To go back to the Devo example noted above, I would use a complex system comparable to McFerrin's: perhaps a 20-point scale. Such a system not only encourages the writer to focus on music that he likes but also allows for mid-level rankings not to be seen as rejection slips or put-downs. We have imprinted on our minds that a 6 is good or at least adequate and thus, as with McFerrin's 17-point scale, the reviewer has a lot of room to maneuver, able to state that a particular album or song, as good as it might be, is plainly, sadly insignificant. That brings me to the second major reason why reviewers are likely to avoid negative reviews: many or most of the kind of persons who write about music are nice, and do not want to point out to musicians that the music they have made is, most likely, irrelevant in any grand scheme of things. Of course, popular-music critics do not have a reputation for being nice. But that is unfair. The few exceptions prove the rule; for every egomaniac like Robert Christgau or Jann Wenner, there are five or ten or more critics similar to, say, Richard Williams or Richie Unterberger, not to mention plenty of academics and amateur writers who are not exactly rendering verdicts on the quality of music, but rather studying it and helping us understand it better.
The average popular-music critic over the years, writing for magazines and newspapers, perhaps writing a few books, is a fan. And fans want to get to know the musicians; they might even want to hang out with the musicians. But before all that—here's the rub—they already knew a lot of musicians, not famous, just run-of-the-mill strivers hoping for more success and greater artistry and generally failing to achieve those goals. When these critics review music at the regional or local level, made by their acquaintances and friends, they often find something nice to say, because if they compared the music to... the Beatles... Duke Ellington... Ludwig van Beethoven?—the result would not be pretty, socially speaking.
Thus, on one hand, I want a rating scale that allows for greater gradation in quality. On the other, I want to make clear that a rating of 5 or 6 (even out of 20) is a good rating. More low ratings, not more bad reviews. In other words, celebrate that the artist achieved a 6; after all, the reviewer has probably not achieved that. To put it more pointedly, the reviewer's review, as a literary work, probably does not warrant a 4 out of 20, if the standard for a 20 is something like Oscar Wilde's ‘The Critic as Artist’ or Susan Sontag's ‘Against Interpretation’.
An embrace of middling scores requires a rejection of a letter-grade system. We all know from our school days that a C is not good; in this era of grade inflation, a B isn't even good. Another mononymic amateur reviewer, "Burch," at his blog The Crooked Wanderer, explains (inadequately) his adoption of a letter-grade system at the post ‘Making the Grade’. He adds the top-level S grade, common in Japan. With plus or minus levels for each of the grades D, C, B, and A, plus F and S, we have a 14-point system. No explanation, though, as to why he does not merely use the ratings 1-14 or 0-13.
The obvious rejoinder here is that local musicians playing Indie Rock thirty years too late, being reviewed by a writer whom they run into on a regular basis at shows and record shops and such, would still see that the reviewer does not think too highly of them if (going back to my original example) Devo's debut album gets a very high score on a complex rating scale (say, a 16 out of 20) but their album only got a 7. That is a good point. But, again, if the reviewer makes clear that a 6 is a good score, what else can he do? Go back to our societal norm of critics complaining about the lack of negative reviews while not giving out many negative reviews themselves? Instead, he could engage with the music itself, not personalities or styles, be negative without being mean, and, if he cannot avoid numbered ratings, those emboldened snippets of text screaming so loudly at us, then at least take the bite out of them and make the reader realize, as with McFerrin's system, that the number might not mean exactly what it initially seems to mean.
The other rejoinder, going back to the fractional ratings so common now, is: why not keep a 10-point scale, but allow for half-number ratings, so that you still have 20 rating points, from one-half to 10? Why not, indeed. But also: why? I simply do not countenance the notion of there being any mathematical precision behind the ratings. One reviewer will tend to have a big gap between a 7 and an 8, another between a 6 and a 7. A 20-point scale at least keeps the base-10 system while, again, allowing for the gradations in quality among albums that have been apparent, and observed—heard, felt, studied—for decades now. Would a 20-point scale be necessary for ratings of individual songs? Is a 10-point scale even necessary, instead of five points? Good questions.
Finally, while I operate almost entirely in the world of amateur criticism, I have been close enough to those in "the industry," or merely read enough about it, that I find a good reviewer, writing and promoting his work like it's 2000 and all involved are still "making bank," to be refreshing. An interview with Anthony Fantano, of Needle Drop fame, offers a clear-sighted, straight-to-the-point defense of numeric ratings and highlights the importance of bad reviews. As a video star, he presumably does make money; and, in an age when zealous devotees of certain music artists have been known to harass reviewers, he has survived having a reputation for being negative.
To end, an example of albums rated on a 20-point scale: The Byrds. These ratings of course accompany the series of essays about the Byrds already featured at Rockissue; fuller accounts of the opinions offered below are to be found there. Any album that gets a 6 is potentially good or significant enough for me to warrant inclusion in the Rock Annual lists (and any album that gets a 7 definitely warrants inclusion there). As explained (I hope) in the Byrds and other Rockissue essays, a track being "album glue" is not a bad thing; such a track is good, it presents an artist doing what that artist does and doing it well; but it is not the artist's exemplary work. For now, judging from my list of the 123 "Sexiest Albums Alive" covering the peak of album-oriented Rock (1965-1997) at ‘Click Crit’, you can see that, in my listening experience, not many albums get a 14 rating or higher; The Notorious Byrd Brothers is one of the 123 and gets a 14. That said, there are probably hundreds upon hundreds of albums of Jazz music, as well as varied traditional and Classical musics, "Experimental" music, and works of similar broad categorizations, that rank so highly.
Mr. Tambourine Man 7
As I claim in my Byrds pieces, commentators and fans who have turned their attention to this album cannot seem to accept that the album featuring one of the most influential sound recordings of modern music, if not human history, ‘Mr. Tambourine Man’, is, beyond this track, a bit of a disappointment, though it has some fine material, especially that composed by Gene Clark. Obviously, the first version of the Byrds had a unique, captivating sound: McGuinn's 12-string guitar and the hazy, calming sound of his harmonies with Clark and David Crosby are both quintessential "Sixties" artifacts. But good material? Not so much.
Turn! Turn! Turn! 5
As with many bands' second albums, not so much a sophomore slump as a sophomore rush job. The song ‘Turn! Turn! Turn!’, brilliant and iconic as it is, plus the pretty and poignant John F. Kennedy tribute ‘He Was a Friend of Mine’, cannot keep the rest of the album from being little more than filler.
Fifth Dimension 7
The highlights: McGuinn's greatest song, ‘Eight Miles High’; an upbeat rocker, ‘Mr. Spaceman’; and two lovely Folk-Rock pieces, ‘Wild Mountain Thyme’ and ‘I Come and Stand at Every Door’. The rest, on the other hand, are mediocre at best.
Younger than Yesterday 8
The six tracks of side A are masterful, together forming a major turning point in Sixties Rock. Unfortunately, the B side, beyond a cover of ‘My Back Pages’, is a let-down, with some intriguing, awkward experimentation on ‘Mind Garden’ plus three filler tracks.
The Notorious Byrd Brothers 14
So... mathematically speaking, this album is 75 percent better than any other Byrds album? Yes. Exactly. Thank you.
Sweetheart of the Rodeo 8
Despite all the retrospective hoopla, this album is only a minor success. The recording process was a mess and one hears it in the final product despite several high points.
Dr. Byrds and Mr. Hyde 4
The uneasy beginning of the Byrds Mark II, a relatively stable line-up across a four-year span. This album is under-rated, featuring as it does adventurous pieces like ‘Child of the Universe’ and ‘King Apathy III’, but there is also plenty of filler and only one stand-out track, a cover of ‘This Wheel's on Fire’.
Ballad of Easy Rider 3
Only two essential songs, the titular track and ‘Jesus Is Just Alright’, the latter a successful curio-experiment; then, a lot of boring fluff, as the new version of the band continued to struggle.
(Untitled) 8
The Byrds' big double album. Half live, half studio; the former featuring two exclusive songs (‘Lover of the Bayou’ and a cover of ‘Positively 4th Street’), quick run-throughs of a few major Byrds cuts, and a side-long jam version of ‘Eight Miles High’. The studio half features one bona fide Byrds classic, ‘Chestnut Mare’, and generally avoids the awkwardness of other studio recordings made by the late Byrds. That said, it is ultimately a rather plain album, doing its job well but as such feeling too streamlined.
Byrdmaniax 3
Of these 11 tracks, some are McGuinn album glue (track nos. 1, 2, 3) and Clarence White album glue (nos. 8, 9, 11)—without an album to glue together! Stylistically jumpy, the rest of the material is mediocre at best, with one outright dud (no. 6); a forgettable album that could have been abandoned.
Farther Along 3
Of these 11 tracks, two (nos. 5, 10) are exemplars of an alternate history of this era of the Byrds, in which the band would have been led by White instead of McGuinn; three or four arguably are "album glue" (nos. 2, 3, 11, and maybe 4, but also maybe not 2), again with no album to glue together; five (nos. 1, 6, 7, 8, 9) are bad or forgettable-mediocre.
Additional takes on numbered ratings include ‘What's in a Number?’ by "NoNonsenseHomo" at Infinite State Machine; ‘To Score or Not to Score’ by Mark Yoder at Afterglow; ‘The Problem with Ranking Music in 2017’ by Geoffrey Himes at Paste; ‘No, There Weren't Only 8 Bad Albums in the Last 4 Years’ by Wren Greaves at Consequence; ‘Where Are the Negative Music Reviews? How the Lack of Negativity Is Killing Journalism’ by Aaron Cooper at Bearded Gentlemen Music (yes, that is an actual name of a publication; not as bad as Coke Machine Glow?); and an episode of the Australian radio program The Minefield called ‘Is It Wrong to 'Rank' Works of Art?’. Finally, since in the end (to repeat) numeric ratings are stupid, perhaps the system presented by another writer, George Starostin at his web site Only Solitaire, is both stupid-er and better at the same time: five different parameters, Value, Adequacy, Listenability, Uniqueness, and Emotionality, are each rated on a scale of 1-5.
--
Online: Christian Death; Deerhoof; Hoodoo Gurus; Joe Jackson; Howard Jones; Mastodon; The Motels; Pharoah Sanders; Robert Wilkins; Urge Overkill.
Online: The Cure's Songs of a Lost World; Freddy Cannon; Jack DeJohnette; Girls against Boys; Guided by Voices; Jane's Addiction; Dr. John; Hermeto Pascoal; Professor Longhair; Queen; Squeeze; Emahoy Tseque-Maryam Guebrou; selections from the 14-volume Complete Motown Singles series; and more Todd Rundgren and Ryuichi Sakamoto. And some old C. D.-R.s... Guided by Voices - Alien Lanes and Under the Bushes under the Stars; and Royal Trux, 1992 self-titled album and Thank You.
Online: the Boo Radleys; Deep Purple; Fontaines D. C.; Peter Green; Herbie Hancock; House of Love; Waylon Jennings; Stan Kenton; Aimee Mann; M. G. M. T., The Springfields, Johnny Tillotson; U. S. Maple, Minako Yoshida, and the new super-deluxe version of Bruce Springsteen's Nebraska.
–Justin J. Kaw, November 2025