Fun With Wine Numbers
What percent of wines should be rated 90 points or more? What percent should be rated less than 80 points?
Should we expect a certain percentage of wines to land in the top ranks? Should we assume a certain percentage of wines from any given vintage will be rated lower in quality? Sure, we should expect this. But determining what percentage should be rated high and what percentage low is nearly impossible, particularly since wines are not, and should not be, rated on a curve.
This is a game I'm playing. So if you are not like me and don't enjoy games, you should probably stop reading here…
…OK, for those of you who like games: what I was wondering is how the premier wine rating magazine answered the questions above. The Wine Spectator has been rating wines for quite some time. I looked only at CA Chardonnay ratings, vintages 1990–2007, tallying the number of wines rated, what percent scored 90 points or better, and what percent scored less than 80 points.
WHAT THE HECK HAPPENED IN THE 1994 AND 1995 VINTAGES? Did CA Chard all of a sudden get better? Or did the reviewers at the Wine Spectator have some sort of epiphany in 1996 and 1997, when they reviewed these vintages? According to the Wine Spectator rating system, there is an exponential jump in the quality of CA Chardonnay in these two vintages. I mean, we are talking a HUGE jump in quality. The number of wines that scored 90 or above increased significantly, while the number that scored less than 80 points fell precipitously.
And the interesting thing is that this is not a one-time jump. Beginning with these two vintages, the Wine Spectator consistently scored CA Chardonnay higher than in previous years.
The 1990 to 1994 vintages averaged 12% of the wines scoring less than 80 points. The 2004 through 2007 vintages averaged 2% of the wines scoring below 80 points.
I honestly don't know what to make of this. It looks like point inflation taking place over the course of two years. On the other hand, it may just be that CA Chardonnays got a heck of a lot better over the course of two years. In any case… numbers are interesting.
CA CHARDS REVIEWED BY THE WINE SPECTATOR
Vintage   Total Reviewed   % 90 Points +   % Under 80 Points
2007          271               37               .04
2006          439               26               2
2005          421               26               3
2004          292               40               4
2003          304               23               4
2002          364               31               4
2001          424               27               5
2000          422               21               4
1999          401               26               5
1998          416               10               4
1997          518               36               2
1996          569               36               3
1995          540               41               4
1994          500               26               4
1993          492               15               8
1992          485               15               9
1991          451               14              16
1990          453               14              14
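As a quick arithmetic check on the averages quoted above, here is a minimal Python sketch using the under-80 column from the table. Two assumptions are made: the ".04" entry for 2007 is taken at face value as 0.04%, and a simple unweighted mean is used, since the post doesn't say how its averages were computed.

```python
# Under-80-point percentages, transcribed from the table above.
# The 2007 entry ".04" is assumed to mean 0.04%.
under_80 = {
    2007: 0.04, 2006: 2, 2005: 3, 2004: 4,
    1994: 4, 1993: 8, 1992: 9, 1991: 16, 1990: 14,
}

early = [under_80[y] for y in range(1990, 1995)]   # 1990-1994 vintages
late = [under_80[y] for y in range(2004, 2008)]    # 2004-2007 vintages

print(round(sum(early) / len(early), 1))  # unweighted mean -> 10.2
print(round(sum(late) / len(late), 1))    # unweighted mean -> 2.3
```

An unweighted mean of the listed values gives roughly 10% for 1990–1994, a bit under the 12% quoted, so the original figure may rest on a different subset or a count-weighted average; the 2004–2007 figure does round to the quoted 2%.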
I remember a recent article where the major critics copped to bumping ratings as a way of competing with each other. It makes sense: if I give a wine a high score, the producer will use that in their promo material, thus validating my authority.
By “recent” I mean within the last two years.
*2004 – a short, hot year, which shows in CA wines; early ripeners were said to reflect this in particular
*2005 and 2006 – cooler and wetter years, which also show in the wines – and in the scores (if we are to accept that critics reward size and extraction)
*2007 – a very dry and fairly cool year, resulting in very spotty quality, which sort of blows my theory here – unless you consider that these are Chard ratings, and Chard is a cool-climate variety; to get it to the profile typical of CA, you'd have to hang it out longer…
curious what others think on this
Grade inflation, in very large measure, but not exclusively.
The late Jerry Mead said it best: "My editors" (meaning his newspaper column editors) "like high scores, because high scores mean that people pay attention rather than turning the page." He eventually got sufficiently concerned about his scoring that he invented a dual scoring system in which he gave one score for quality and another for value.
It is no secret that some pubs rank higher than others. Some readers like it one way and some another. Back in the days when AOL had a vibrant wine discussion board, this topic came up and many of the very vocal collector types argued that it was a change in wine quality and that wine is generally better now than it was in the 1970s and 1980s so higher scores were the natural order of things.
So, agreeing that grade inflation exists, there is more to it than that.
There are many more wines coming from good growing areas today than there used to be, and there are many more single-vineyard designates from those areas. So, when Hyde Vineyard Chardonnay goes out to half a dozen makers from Ramey to Patz and Hall to HdV, etc, all in relatively limited lot sizes, one should not be surprised by those good results.
Secondly, when CGCW started publication, we used to average two to three percent fatally flawed wines. Today, that number is under one percent. Admittedly, as the number of fancy wines to review has grown over three decades, we no longer review the equivalent of Two Buck Chuck and its inexpensive cronies. Still, it is true that wine quality has certainly improved.
Now as to 35% over 90 points, I just cringe at those numbers because it means two things to me personally. The first is that if so many wines rate so high, then the scores no longer have the meaning they should have.
And the second is that I am in competition with these folks and even accepting that they have many more readers than I do, my rag is getting marginalized by the fact that our scores are substantially lower than theirs and thus we do not seem to create as much general excitement. Fortunately, we have maintained a base of loyal readers who understand our methodology, but it is just not in the cards that we can look at ourselves and accept 35+ % wines recommended as if they were manna from heaven.
I just counted up our October issue with Pinot Noir and Chardonnay–20% at 90 and above. There are those who might argue that we are too stingy. The wineries often say so.
I always wonder what the distribution of scores from 80-100 looks like, since that’s really the only range that matters in publication. A straight line going down with the fewest wines scoring 100? An exponential curve with even fewer scoring 100? Or a bell curve with similar quantities of 80 and 100 scores? Just a rough look seems to suggest a bell curve with a peak that’s been moving past 90 for the past 20 years.
Biting my tongue.
It certainly does look like a bit of inflation, but I think the weather had a huge part in this, as did the winemakers' objectives and techniques.
2007 was a terrific year overall for CA Sauv Blancs, so to see a low percentage of CA Chards under 80 points is not a surprise. The folks over at Merry Edwards thought that year was near ideal (much the same as they feel 2009 has been).
I think 2008 will be a true indicator. It was a tougher year than 2007; hopefully the scores are honest enough to reflect this (which I feel they will be).
The real inflation has to be in those early years represented. It may also simply be that CA vintners have purposely structured their Chards to suit Jim Laube's palate preference.
If this is the case and you agree with Jim’s palate, then you have been a happy camper!
Didn’t anyone read the research of the climatologist at So. Oregon College? The trend to higher scores is because of global warming. Personally, I see more of a correlation with the date Parker declared discovery of his first 100-point wine and the Wine Spectator’s similar discovery, though of a different wine, a few months later. People don’t read these publications to find out what NOT to buy.
I think there are so many wines submitted that the magazines can’t print every review without killing a lot of trees, so they only review a fraction of the wines. I know many winemakers and importers who submit wines and never see a review. So when deciding what to publish (web or in print), they are biased towards mentioning the better wines.
As submissions have gone up, the scores have gone up, since the editors have weeded out the poor wines they don’t wish to talk about.
Not grade inflation, but an editorial decision and it is theirs to make.
What about the fact that the total number of wines reviewed around the jump is substantially higher than in the years leading up to it?
See this wine blog circa 2017:
“The Wine Spectator prefers modern wine styles”
See this from The Gray Report wine blog:
“Grade inflation at a glance: a look at Robert Parker’s 1987 Wine Buyer’s Guide”
And see these from the Wine Gourd wine blog:
“Poor correlation among critics’ quality scores”
“How large is between-critic variation in quality scores?”