Implications For All Involved
A variety of academic studies have determined quite convincingly that non-experts, and even those with no particular interest in wine, don’t like the same kinds of wines that the "experts" and those with formal wine training tend to like.
The corollary is that the ratings of wine experts and wine critics may have little value for those who are not trained in wine.
The most recent confirmation of this comes from a working paper published at the website of the American Association of Wine Economists entitled, "Do More Expensive Wines Taste Better? Evidence from a Large Sample of Blind Tastings."
The basic finding of this working paper is that the average person prefers less expensive wines, while the experienced wine drinker (called an "expert" in the paper) tends to prefer wines that are more expensive. The study included blind tastings by more than 6,000 individuals.
At the end of the working paper, the following questions are posed: "is the difference between the ratings of experts and non-experts due to an acquired taste? Or is it due to an innate ability, which is correlated with self selection into wine training?"
Both excellent questions.
I think the implications of this and similar studies with similar findings are immense, yet I’m not sure I’ve even come close to wrapping my mind around their meaning. Last week I played with the notion that there can be no such thing as objective quality in wine, and that any criterion for quality set down by experts or non-experts alike is merely an assertion of preference, or at best a tradition that is capable of changing over time, and has.
But there is something else to be considered here. Is it possible that a large percentage of those who eventually find themselves to be experts on wine, or taken by wine in general, are also much more likely to be part of the 25% of the population called "supertasters"? This has to be considered. Recently Dan Berger, in an article at Appellation America, took a much closer look at the "genetics" behind wine preference. I sense that what Dan might be on to and what the researchers behind this newest study are confirming might just need to meet up in the middle.
Something else to consider given these findings is the real-world role of the wine critic. Is it over the top to suggest that daily newspapers and general-readership magazines would be better off not reviewing wines at all, but rather providing more general-interest or business-related wine stories?
Finally, this. Those of us who are interested in wine rarely, very rarely, drink a wine without knowing its provenance, including the producer, the appellation and the price. And whether we say so or not, I believe we assume a strong correlation between price and quality. This leads me to conclude that if we see very similar styles of wine being produced at the higher price points, we may be in danger of cementing in place that style as the style equated with "quality". The implications of this possibility are important to consider.
Perhaps, Tom, but at least I know one person who doesn’t live by your final paragraph.
The way I see the subject, there are those who like meat and potatoes, those who like a meal at the Ronald clown shop, those who think others who appreciate fine food are elitists, and those who appreciate fine food. It’s the same thing with wine, provided you take the view, which I do, that wine is food.
Wine critics are like politicians: the more we write about them (or vote for them) the more we encourage them to continue gaming us.
I agree with you that it’s nearly impossible to place an objective measure of quality on wine, but it’s not entirely impossible. Objectively, nobody wants high VA in their wine, except those with a jones for a certain Cali cult…so there’s a measure of quality that can be agreed upon, except of course by those cult people.
Tom, I’d like to comment on the question of wine reviews in newspapers/general interest magazines. I don’t think that wine “reviews” are wrong for these publications, it’s HOW they are reviewed. So many critics fall into the 100 point trap, and this does not serve the reader well. What should appear are reviews that talk about the region the wine is from, the varietal(s), and the resulting typical flavors – at least what the reviewer is tasting (and maybe there should always be more than one reviewer in this kind of article). The problem is more in the scoring device than the tasting notes themselves, don’t you think?
The implications of this work are only as immense as the correlations it reports – which are small, at best.
I encourage everyone to read the paper carefully and think things out. This is not as sensational as it would appear. Essentially, both groups were nearly equally split in their appraisals of the low and high price wine categories. (If I may, I have posted two images on my root directory to facilitate the understanding of this study and its results: http://www.redwinebuzz.com/a1.PNG and http://www.redwinebuzz.com/a3.PNG)
I’ve seen Dan’s article and have commented there, so I defer to that discussion for my opinion on that subject.
The only statement here that I agree with wholeheartedly is that “if we see very similar styles of wines being produced at the higher price categories, we may be in danger of cementing in place that style of wine as the style that is equated with “quality”.”
I think that has already happened.
But the fact that the correlation between price and quality was very small indicates that there were many different wines of different styles presented to many tasters with, presumably, different *preferences*.
The questions I have are: what types of wines made up these categories? If we separated the high-priced CA cult "monster wines" from the traditional Old World wines, would we see different results? What if we compared how those who rated the cult wines highly rated wines of a different style in the low price category? And so on…
(If the second image gives you a 404 error, delete the ")" at the end of the URL. Sorry.)
Mark, you’re right. The problem lies more with the scoring than the tasting notes. There’s a large group of so-called wine lovers who no longer trust their own tastebuds, and those cherry pickers feel that if the price is high and the scores are even higher, then they are going to like the wines and, even more important, impress their friends. Personally, I am burnt out on that kind of thinking.
Here’s an interesting question: If the process of “cherry picking” did not work for the “cherry pickers”, would they continue to cherry pick?
Now, what constitutes “working for” is a good question. Perhaps it’s enough for these folks that they have the wines that folks are talking about. Perhaps the fact that the wines are rated high and others want them somehow make them taste good.
I don’t know.
“Now, what constitutes “working for” is a good question. Perhaps it’s enough for these folks that they have the wines that folks are talking about. Perhaps the fact that the wines are rated high and others want them somehow make them taste good.”
Tom, after many years of dealing with consumers, I vote for both “perhapses.”
Double blind tastings often enlighten greatly!
I still consider, and probably always will consider, the whole wine critic industry to be like that Vegas thing about the house owning the odds.
Tom, that is a good question. If the point system were eliminated, perhaps the cherry pickers would find another gauge or tool to gather the wines they feel will bring them happiness, success and popularity among their peers.
Re the 100-point system, I think the key issue here for consumers is safety. In what can be a very confusing world of wine, many folks are insecure (even us pros sometimes, right?), and if a wine has a big score, who can speak against it? "Well, it may be a big alcoholic monster that overpowers any food it gets near, but ****** gave it a 93!" Maybe Thomas is right: we should begin a movement for consumers to have fun with their own blind tastings…