My New Puppy’s Kibble: 75 Points
Blake Gray did a stellar job in today’s SF Chronicle wine section profiling the 100 Point rating system. One part of the story really made me think. I’ve considered this before, but it deserves highlighting.
Certain varietals and wines simply don’t appear to be in contention for a 100-point rating. They include Sauvignon Blanc, Chenin Blanc, Beaujolais, Rosé. You know, those wines that simply can’t, without a tremendous amount of magic applied, ever be truly powerful and over-the-top.
This is not an indictment of the 100 point scale. It is rather an indictment of nearly every wine reviewer in the world as well as the simplicity of the American palate and mind.
I defy anyone to make a cogent argument for the propriety of the phenomenon that is American Roses never attaining 100 points or, for that matter, as far as I know, any rosé ever getting 100 points.
Please…somebody…please explain why it is proper that this has never occurred. Has there really never been a great rosé? Has there never been a rosé with impeccable balance, well-defined aromas and flavors, etc.?
I don’t see how it is possible to argue with the reviewers who give 95-100 points to the most powerful wines that hit their palates. We are talking about subjective evaluation that includes a personal opinion as to what makes a wine great. However, that personal criterion for greatness is as objectively authoritative as my new puppy’s apparent view that chopped steak is a greater meal than the kibble that sits untouched.
I say wines should be judged only on their merits as compared to others in their varietal category–otherwise, what’s the point of tasting the Chardonnays separately from the Cabernet? Why not just mix ’em all together!? It doesn’t make sense to say that a rose shouldn’t get 98 points because it’s “not as good” as a 98-point Cabernet. The two should never even be compared. I think consumers are smart enough to understand that. Aren’t they?
No argument from me that the 100-point system is flawed, perhaps irredeemably. Anyone in the wine biz knows it’s devolved into a 10-point system at best (90 – 100), and wines scoring over 95 are either impossible to acquire or hugely expensive or (most commonly) both. But why are there no 100-point rosés? Why not ask why are there no 100-point Siegerrebes? Why no 100-point Baco Noirs? Just because a wine is best of class does not qualify it for the 100-point club. It must compete at a quality level with other 100 point wines. Rosés, no matter how good, do not – with the exception of great rosé Champagne. For the record, after a decade of scoring wines, I have never gone higher than 98 points for anything.
“American Roses”? – I prefer English roses from David Austin.
When wines are judged in wine competitions, every category has a shot at a gold medal. Should the Sauv Blanc wines be limited to winning silver medals and the Beaujolais be limited to bronze?
If wines are rated against their peers (as Parker claims for HIS 100-point system), then there is no reason why the best wine should not receive a high score. A Rosé does not compete against a Cab in the WA, or WS for that matter.
However, Blake Gray’s article has several problems that I can see. One is that he seems to assume that Parker’s online site has all the scores for all the wines reviewed (e.g. “The lowest score possible is 50, but he has not given a score below 70 in the past seven years.”). However, if you ask eRPSupport they will tell you that in recent years only recommended wines appear on the list (i.e. those that score 84/85 and above). So for Australia, for example, you only see the scores for approx. 30% of the wines that Parker tastes; some 70% are not recommended, that is, they receive less than 85 points. You will find a few exceptions to this but only a few.
But Gray’s biggest problem, along with so many other writers on this subject, is that he credits Parker with the creation of the 100 point scoring system. Dan Murphy, Australian wine journalist, vigneron, and wine judge, began using 100 points to score wines a quarter of a century before Robert Parker, Jr came along.
Mike
“Just because a wine is best of class does not qualify it for the 100-point club. It must compete at a quality level with other 100 point wines. Rosés, no matter how good, do not – with the exception of great rosé Champagne.”
Well, Greg really hits it on the head and leads me to ask two questions:
1. Why shouldn’t a wine that is best in its class (varietal or style) receive 100 points?
2. What is it about the nature of rosé that prevents it from competing with other 100-point wines?
I’ve been in winemaking 30+ years, and I lost ALL regard for the potential utility of a 100 pt scale about 15 yrs ago, when the WS reviewed a Chenin Blanc (not a wine I made) as “a perfect Chenin Blanc” in the text and then proceeded to give it 75 pts!
Hi Tom
Isn’t the intent of the 100-point system that it is one person’s opinion? Have we not all drunk a wine rated 90+ that tasted like *%^&*? Wines have to be in different classes. It would be like saying “wow, that is an awesome Hyundai” and comparing it to a Lexus. It may be a great summertime wine (a rosé), but leave it on the shelf for twenty years and see how it compares to a nice Cab!
Ron
I misspoke – ‘utility’ does occur, for marketing; ‘rationality’ is the word I should have used.
Ron, in the American psyche, after 10+ years of getting grades in school, I do not think the implied basis is “one person’s opinion”. Perhaps that is the reviewer’s stated position – but we’ve been trained to believe in absolute answers, and it carries over here as well. As for cars, if one drives a car for reasons other than getting from one place to another, then the comparison is apt. For myself, I’ve gone through a lot of evolution in tasting / evaluation over the years, and the bottom line is that all tasting is situational and, as such, a perfect wine may intrude at any time. Carpe diem!
Tom
“But Gray’s biggest problem, along with so many other writers on this subject, is that he credits Parker with the creation of the 100 point scoring system. Dan Murphy, Australian wine journalist, vigneron, and wine judge, began using 100 points to score wines a quarter of a century before Robert Parker, Jr came along.
Mike ”
I think the point being made, and it’s a fair and valid one, is that Parker was the first to popularize the 100 point scale to a large audience, and was followed up by the Wine Spectator, which probably was even more responsible for popularizing it.
All this debate is very encouraging to people in the industry who care about wine evolving. Numbers helped us; now they stunt us. I think the SF Chronicle article got that message across. The real villains now, in my opinion, are retailers who parrot the ratings for E-Z selling and marketers who pluck the highest score they get from wherever they can. In truth, there are now only two scores: 90 or not. I think this is a sign that we are close to a tipping point where people — as in real drinkers — will be empowered to ignore scores and pursue their own tastes based on firsthand advice from mentors, peers, waiters, (good) retailers, educators, etc. Magazines – buying guides in particular – are getting very tired. Blogs are on the rise. In fact, I was very disappointed that Blake Gray made a factual error and lumped Alder Yarrow of Vinography in with the 100-pointers; yes, Alder uses numbers, but it’s his own system, which separates him as a leader, not a follower. FYI, for those pining for a critic who rates wines only within genre, thus avoiding all the 90-point this vs 90-point that debate, note that Jancis Robinson has always taken this path. As for me, I continue to insist that all wines deserve 88 points. Done.
“I think the point being made, and it’s a fair and valid one, is that Parker was the first to popularize the 100 point scale to a large audience, and was followed up by the Wine Spectator, which probably was even more responsible for popularizing it.
Tom”
If that were the case, then all these writers would use the phrase “first to popularize”. But they don’t. Instead they use:
1) “created by wine critic Robert M. Parker Jr.” – W. Blake Gray.
2) “The man credited with having invented the 100 point scale is Robert Parker,” – Paul Gregutt.
3) “as the critic who conceived the 100-point system, Parker has a special obligation” – Mike Steinberger.
4) “introduced the 100-point system to the wine world in 1978” – Gary Rivlin.
What is true is that Parker’s 100-point system is quite rudimentary (especially compared to Dan Murphy’s complex system, which had small differences for different wine styles) and so quite easily understood. I think the question that needs to be posed, but would be difficult to answer, is whether Parker needed the 100-point system to achieve success. I’m pretty sure that Parker would argue that his success does not stem from the 100-point system, but from his palate.
Mike
No point system is perfect. I use one that averages the scores (including my own). I frequently tell people that if four wine professionals rate a wine Very Good to Excellent, guess what? It is probably a pretty good wine and worth the investment. Your own palate will be the judge of whether you deem it Very Good or Excellent. My system just tries to narrow the wines down to those that are most likely going to satisfy my readers. Cheers!
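In case a concrete illustration helps, here is a minimal sketch of that sort of consensus score, assuming a simple average over the panel’s 100-point scores. The critic names, the numbers, and the Very Good / Excellent cutoffs below are hypothetical placeholders, not the actual system described above.

```python
# Minimal sketch: average several professionals' 100-point scores with my own.
# All names, scores, and the Very Good (90) / Excellent (95) cutoffs are
# hypothetical placeholders for illustration only.

def consensus(scores):
    """Average a list of 100-point scores."""
    return sum(scores) / len(scores)

panel = {"Critic A": 91, "Critic B": 93, "Critic C": 90, "Critic D": 92}
my_score = 92

avg = consensus(list(panel.values()) + [my_score])

if avg >= 95:
    verdict = "Excellent"
elif avg >= 90:
    verdict = "Very Good"
else:
    verdict = "Worth a closer look before buying"

print(f"Consensus score: {avg:.1f} ({verdict})")
```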
The 100 pt system is absurd for its false precision. Even among seasoned palates, tasting the same wine a week later will result in bestowing a different number on a given wine.
But the Chronicle errs in the other direction with its four-star (****) system. If they added another star à la Decanter – though Decanter doesn’t award half stars – they would provide better guidance to the consumer.
So in a five-star evaluation, for example, a **** wine can be viewed as equivalent to 90-92 points, ****1/2 = 92-94 pts, and ***** = 95 and above. Or, going the other way, ***1/2 = 87-89 pts, *** = 85-87, and so forth. The margin of error seems appropriate to me.
As for relative vs. absolute value, scorers might adopt a variation of the Jerry Mead approach. Rather than have the second score stand for value as it did in the Meadian format, it could stand for rank among peers (much like the awarding of medals in some competitions). So a rosé that would rank lower on an absolute scale than a Cab or Pinot could garner a second, higher score when compared to other rosés: ***/****1/2.
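For what it’s worth, a minimal sketch of the five-star-to-points equivalences proposed above, together with that two-part “absolute / among peers” notation. The point boundaries follow the comment; the sample rosé and the helper function are invented for illustration.

```python
# Sketch of the proposed five-star-to-points equivalences plus a two-part
# "absolute / among-peers" score (e.g. ***/****1/2). The point boundaries
# follow the comment above; the sample wine and function name are made up.

STAR_TO_POINTS = {
    3.0: (85, 87),
    3.5: (87, 89),
    4.0: (90, 92),
    4.5: (92, 94),
    5.0: (95, 100),
}

def describe(absolute_stars, peer_stars):
    lo, hi = STAR_TO_POINTS[absolute_stars]
    return (f"{absolute_stars} stars overall (~{lo}-{hi} pts), "
            f"{peer_stars} stars against its peers")

# A hypothetical rosé: modest on an absolute scale, near the top of its class.
print(describe(3.0, 4.5))
# -> 3.0 stars overall (~85-87 pts), 4.5 stars against its peers
```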
Take a look at Claude Kolm’s Fine Wine Review. He uses a 100-point scale, but translates it into letter grades. In Burgundy, for example, a score of over 91 for a Bourgogne Rouge is an A+ and 88-90 is an A. For a Grand Cru, 89 is a B-. In addition, when rating finished wines, he assigns both a number and a letter grade. So a perfect Muscadet might only get 88 points, but it would also get an A+ to show what a perfect example it was. The flaw is that because it contains actual information, it is a bit less simple, and takes a reader who pays attention.
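A minimal sketch of that kind of context-dependent grading, in case it helps. The Bourgogne Rouge reference points (A+ above 91, A for 88-90) and the Grand Cru one (89 as a B-) come from the comment above; every other cutoff, and the function itself, is invented for illustration and is not Kolm’s actual method.

```python
# Sketch of a context-dependent letter grade layered on a 100-point score.
# Only the Bourgogne Rouge (A+ above 91, A for 88-90) and Grand Cru (89 = B-)
# reference points come from the comment; every other cutoff is invented.

GRADE_CUTOFFS = {
    "Bourgogne Rouge": [(92, "A+"), (88, "A"), (85, "B+"), (82, "B")],
    "Grand Cru":       [(97, "A+"), (94, "A"), (91, "B+"), (88, "B-")],
}

def grade(category, points):
    """Return the letter grade for a score, judged within its category."""
    for cutoff, letter in GRADE_CUTOFFS[category]:
        if points >= cutoff:
            return letter
    return "C"

# The same 89-point score reads very differently in each context.
print(89, grade("Bourgogne Rouge", 89))  # -> 89 A
print(89, grade("Grand Cru", 89))        # -> 89 B-
```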
Keith:
So the letter grade that Claude submits is contextual, while the number grade is “global”? In other words, “While this Bourgogne Rouge can never reach the heights of a Grand Cru, it is an outstanding example of the heights a Bourgogne Rouge can reach.”
Is that a correct interpretation?