Is Wine Fake?

Scott Alexander

Wine commands wealth, prestige, and attention from aficionados. How much of what they admire is in their heads?

Your classiest friend invites you to dinner. They take out a bottle of Chardonnay that costs more than your last vacation and pour each of you a drink. They sip from their glass. “Ah,” they say. “1973. An excellent vintage. Notes of avocado, gingko and strontium.” You’re not sure what to do. You mumble something about how you can really taste the strontium. But internally, you wonder: Is wine fake?

A vocal group of skeptics thinks it might be. The most eloquent summary of their position is The Guardian’s “Wine-Tasting: It’s Junk Science,” which highlights several concerning experiments:

In 2001 Frédéric Brochet of the University of Bordeaux asked 54 wine experts to test two glasses of wine – one red, one white. Using the typical language of tasters, the panel described the red as “jammy” and commented on its crushed red fruit.

The critics failed to spot that both wines were from the same bottle. The only difference was that one had been coloured red with a flavourless dye.

And:

In 2011 Professor Richard Wiseman, a psychologist (and former professional magician) at Hertfordshire University invited 578 people to comment on a range of red and white wines, varying from £3.49 for a claret to £30 for champagne, and tasted blind. People could tell the difference between wines under £5 and those above £10 only 53% of the time for whites and only 47% of the time for reds. Overall they would have been just as successful flipping a coin to guess.
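Taken at face value, 53% might sound like weak-but-real discrimination rather than pure chance. A quick sanity check shows why it isn’t: on a few hundred trials, 53% is statistically indistinguishable from coin-flipping. (This sketch assumes, hypothetically, that roughly half of the 578 tasters judged the whites; the study doesn’t give the exact split.)

```python
# Exact two-sided binomial test, built from scratch with the standard library:
# how surprising is a 53% hit rate if tasters were really guessing at p = 0.5?
from math import comb

def two_sided_binom_p(k, n, p=0.5):
    """P(an outcome at least as improbable as k) under Binomial(n, p)."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk + 1e-12)

n = 289              # hypothetical: half of the 578 tasters judged whites
k = round(0.53 * n)  # a 53% success rate, about 153 of 289
print(f"two-sided p = {two_sided_binom_p(k, n):.2f}")  # well above 0.05
```

With these assumed numbers the p-value comes out far above any conventional significance threshold, which is exactly the “just as successful flipping a coin” conclusion.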

Wikipedia broadly agrees, saying:

Some blinded trials among wine consumers have indicated that people can find nothing in a wine’s aroma or taste to distinguish between ordinary and pricey brands. Academic research on blinded wine tastings has also cast doubt on the ability of professional tasters to judge wines consistently.

But I recently watched the documentary Somm, about expert wine-tasters trying to pass the Master Sommelier examination. As part of their test, they have to blind-taste six wines and, for each, identify the grape variety, the year it was produced, and tasting notes (e.g., “aged orange peel” or “hints of berry”). Then they need to identify where the wine was grown: certainly in broad categories like country or region, but ideally down to the particular vineyard. Most candidates — 92% — fail the examination. But some pass. And the criteria are so strict that random guessing alone can’t explain the few successes.

So what’s going on? How come some experts can’t distinguish red and white wines, and others can tell that it’s a 1951 Riesling from the Seine River Valley? If you can detect aged orange peel, why can’t you tell a $3 bottle from a $30 one?

In Vino Veritas

All of those things in Somm — grape varieties, country of origin and so on — probably aren’t fake.

The most convincing evidence for this is “Supertasters Among the Dreaming Spires,” from 1843 magazine (also summarized in The Economist). Here a journalist follows the Oxford and Cambridge competitive wine-tasting teams as they prepare for their annual competition. The Master Sommelier examination has never made its results public to journalists or scientists — but the Oxbridge contest did, confirming that some of these wine tasters are pretty good.

Top scorers were able to identify grape varieties and countries for four of the six wines. In general, tasters who did well on the reds also did well on the whites, suggesting a consistent talent. And most tasters failed on the same wines (e.g., the Grenache and Friulano), suggesting those were genuinely harder than others.

Results of the Oxford–Cambridge Varsity blind-tasting match, February 15, 2017.
Source: Economist.com

If the Oxbridge results are true, how come Brochet’s experts couldn’t distinguish red and white wine? A closer look at the original study suggests three possible problems.

First, the experts weren’t exactly experts. They were, in the grand tradition of studies everywhere, undergraduates at the researchers’ university. Their only claim to expertise was their course of study in enology, apparently something you can specialize in if you go to the University of Bordeaux. Still, the study doesn’t say how many years they’d been studying, or whether their studies necessarily involved wine appreciation as opposed to just how to grow grapes or run a restaurant.

Second, the subjects were never asked whether the wine was red or white. They were given a list of descriptors, some of which were typical of red wine, others of white wine, and asked to assign them to one of the wines. (They also had the option to pick descriptors of their own choosing, but it’s not clear if any did.) Maybe their thought process was something like “neither of these tastes red, exactly, but I’ve got to assign the red wine descriptors to one of them, and the one on the right is obviously a red wine because it’s red colored, so I’ll assign it to that one.”

Third, even if you find neither of these exculpatory, tricking people just works really well in general. Based on the theory of predictive coding, our brains first figure out what sensory stimuli should be, then see if there’s any way they can shoehorn actual stimuli to the the expected pattern. If they can’t, then the brain will just register the the real sensation, but as long as it’s pretty close they’ll just return the the prediction. For example, did you notice that the word “the” was duplicated three times in this paragraph? Your brain was expecting to read a single word “the,” just as it always has before, and when you’re reading quickly, the mild deviation from expected stimuli wasn’t enough to raise any alarms.

Or consider the famous Pepsi Challenge: Pepsi asked consumers to blind-taste-test Pepsi vs. Coke; most preferred Pepsi. But Coke maintains its high market share partly because when people are asked to nonblindly taste Coke and Pepsi (as they always do in the real world) people prefer Coke. Think of it as the brain combining two sources of input to make a final taste perception: the actual taste of the two sodas and a preconceived notion (probably based on great marketing) that Coke should taste better. In the same way, wine tasters given some decoy evidence (the color of the wine) combine that evidence with the real taste sensations in order to produce a conscious perception of what the wine tastes like. That doesn’t necessarily mean the same tasters would get it wrong if they weren’t being tricked.

Pineau et al. 1 conducted a taste test that removed some of these issues; they asked students to rank the berry tastes (a typical red wine flavor) of various wines while blinded to (but not deceived about) whether they were red or white. They were able to do much better than chance (p<0.001).

The Price Is Wrong

Just because wine experts can judge the characteristics of wine doesn’t mean we should care about their assessments of quality. Most of the research I found showed no blind preference for more expensive wines over cheaper ones.

Here my favorite study is Goldstein et al., 2 “Do More Expensive Wines Taste Better? Evidence From a Large Sample of Blind Tastings.” They look at 6,175 tastings from 17 wine tasting events and find that, among ordinary people (nonexperts), “the correlation between price and overall rating is small and negative, suggesting that individuals on average enjoy more expensive wines slightly less.” But experts might prefer more expensive wine; the study found that if wine A cost 10 times more than wine B, experts on average ranked it seven points higher on a 100-point scale. However, this effect was not quite statistically significant, and all that the authors can say with certainty is that experts don’t dislike more expensive wine the same way normal people do.

Harrar et al. 3 have a study in Flavour, which was somehow a real journal until 2017, investigating novice and expert ratings of seven sparkling wines. Somewhat contrary to the point I made above, everyone (including experts) did poorly in identifying which wines were made of mostly red vs. white grapes (although most of the wines were mixed, which might make it a harder problem than just distinguishing pure reds from pure whites). More relevant to the current question, they didn’t consistently prefer the most expensive champagne (£400) to the least expensive (£18).

Robert Hodgson 4 takes a slightly different approach and studies consistency among judges at wine competitions. If wine quality is real and identifiable, experts should be able to reliably judge identical samples of wine as identically good. In a series of studies, he shows they are okay at this. During competitions where wines are typically judged at between 80 and 100 points, blinded judges given the same wine twice rated on average about four points apart — in the language of wine tasting, the difference between “Silver−” and “Silver+”. Only 10% of judges were “consistently consistent” within a medal range, i.e., they never (in four tries) gave a wine “Silver” on one tasting and “Bronze” or “Gold” the next. Another 10% of judges were extremely inconsistent, giving wine Gold during one tasting and Bronze (or worse) during another. Most of the time, they were just a bit off. Judges were most consistent at the bottom of the range — they always agreed terrible wines were terrible — and least consistent near the top.
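To get a feel for how a four-point average spread translates into medal churn, here is a toy Monte Carlo sketch. The medal cutoffs and the Gaussian rating noise are my illustrative assumptions, not Hodgson’s actual data:

```python
# Toy simulation: judges whose only flaw is modest random noise still hand
# the same wine different medals on repeat tastings surprisingly often.
import random

random.seed(0)
CUTS = [(93, "Gold"), (86, "Silver"), (80, "Bronze")]  # hypothetical bands

def medal(score):
    for cut, name in CUTS:
        if score >= cut:
            return name
    return "No award"

sigma = 3.545  # chosen so the expected gap between two ratings is ~4 points
trials = 100_000
mismatch = 0
for _ in range(trials):
    true_score = random.uniform(80, 100)          # the wine's "real" quality
    a = medal(true_score + random.gauss(0, sigma))  # first tasting
    b = medal(true_score + random.gauss(0, sigma))  # second tasting
    mismatch += (a != b)
print(f"same wine, different medal: {mismatch / trials:.0%}")
```

Under these assumptions, the same wine lands in different medal bands a large share of the time, which is roughly the pattern of inconsistency Hodgson reports.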

In another study, Hodgson 5 looks at wines entered in at least three competitions. Of those that won Gold in one, 84% received no award (i.e., neither Gold, Silver, nor Bronze) in at least one other. “Thus, many wines that are viewed as extraordinarily good at some competitions are viewed as below average at others.”

And here, too, a little bit of trickery can overwhelm whatever real stimuli people are getting. Lewis et al. 6 put wine in relabeled bottles, so that drinkers think a cheap wine is expensive or vice versa. They find that even people who had completed a course on wine tasting (so not quite “experts,” but not exactly ordinary people either) gave judgments corresponding to the price and prestige of the labeled wine, not to the real wine inside the bottles.

So experienced tasters generally can’t agree on which wines are better than others, or identify pricier wines as tasting better. Does this mean that wine is fake? Consider some taste we all understand very well, like pizza — not even fancy European pizza, just normal pizza that normal people like. I prefer Detroit pizza, tolerate New York pizza, and can’t stand Chicago pizza. Your tastes might be the opposite. Does this mean there’s no real difference between pizza types? Or that one of us is lying, or faking our love of pizza, or otherwise culpable?

I’ll make one more confession — sometimes I prefer pizza from the greasy pizza joint down the street to pizza with exotic cheeses from a fancy Italian restaurant that costs twice as much. Does this mean the fancy Italian restaurant is a fraud? Or that the exotic cheeses don’t really taste different from regular cheddar and mozzarella?

There can be objectively bad pizza — burnt, cold, mushy — but there isn’t really any objective best pizza. Fancier and more complicated pizzas can be more expensive, not because they’re better, but because they’re more interesting. Maybe wine is the same way.

Notes on Notes

What about the tasting notes — the part where experts say a wine tastes like aged orange peel or avocado or whatever?

There aren’t many studies that investigate this claim directly. But their claims make sense on a chemical level. Fermentation produces hundreds of different compounds, many of which are volatile (i.e., evaporate easily and can be smelled), and we naturally round chemicals off to other plants or foods that contain them.

When people say a wine has citrus notes, that might mean it has 9-carbon alcohols somewhere in its chemical soup. If they say chocolate, 5-carbon aldehydes; if mint, 5-carbon ketones.

(Do wines ever have 6-carbon carboxylic acids, or 10-carbon alkanes — i.e., goats, armpits or jet fuel? I am not a wine chemist and cannot answer this question. But one of the experts interviewed on Somm mentioned that a common tasting note is cat urine, but that in polite company you’re supposed to refer to it by the code phrase “blackcurrant bud.” Maybe one of those things wine experts say is code for “smells like a goat,” I don’t know.)

Scientists use gas chromatography to investigate these compounds in wine and sometimes understand them on quite a deep level. For example, from “Grape-Derived Fruity Volatile Thiols: Adjusting Sauvignon Blanc Aroma and Flavor Complexity”:

Three main volatile thiols are responsible for the tropical fruit nuances in wines. They are 3MH (3-mercaptohexan-1-ol), 3MHA (3-mercaptohexyl acetate) and 4MMP (4-mercapto-4-methylpentan-2-one). The smell is quite potent (or “punchy,” as the Kiwis say) at higher concentrations, and descriptors used include tropical fruit, passionfruit, grapefruit, guava, gooseberry, box tree, tomato leaf and black currant. Perception thresholds for 4MMP, 3MH and 3MHA in model wine are 0.8 ng/L, 60 ng/L and 4.2 ng/L, respectively.

These numbers don’t necessarily carry over to wines, where aromas exactly at the perception threshold might be overwhelmed by other flavors, but since some wines can have thousands or tens of thousands of nanograms per liter of these chemicals, it makes sense that some people can detect them. A few studies are able to observe this detection empirically. Prida and Chatonnet 7 found that experts rated wines with more furanic acid compounds as smelling oakier. And Tesfaye et al. 8 found good inter-rater reliability in expert tasting notes of wine vinegars.

Weil, 9 writing in the Journal of Wine Economics (another real journal!) finds that ordinary people can’t match wines to descriptions of their tasting notes at a better-than-chance level. I think the best explanation of this discrepancy is that experts can consistently detect these notes, but ordinary people can’t.

The Judgment of Paris

Until the 1970s, everyone knew French wines were the best in the world. Wine seller Steven Spurrier challenged the top French experts to a blind taste test of French vs. Californian wines. According to CNN:

The finest French wines were up against upstarts from California. At the time, this didn’t even seem like a fair contest — France made the world’s best wines and Napa Valley was not yet on the map — so the result was believed to be obvious.

Instead, the greatest underdog tale in wine history was about to unfold. Californian wines scored big with the judges and won in both the red and white categories, beating legendary chateaux and domaines from Bordeaux and Burgundy.

The only journalist in attendance, George M. Taber of Time magazine, later wrote in his article that “the unthinkable happened,” and in an allusion to Greek mythology called the event “The Judgment of Paris,” and thus it would forever be known.

“The unthinkable” is, if anything, underselling it. One judge, horrified, demanded her scorecard back. The tasting turned California’s Napa Valley from a nowhere backwater into one of the world’s top wine regions.

I bring this up because, well, the deliberately provocative title of this article was “Is Wine Fake?” Obviously wine is not fake: There is certainly a real drink made from fermented grapes. The real question at issue is whether wine expertise is fake. And that ties this question in with the general debate on the nature of expertise. There are many people who think many kinds of expertise are fake, and many other people pushing back against them; maybe wine is just one more front in this grander war.

And it would seem that wine expertise is real. With enough training (Master Sommelier candidates typically need 10 years of experience) people really can learn to identify wines by taste. Although ordinary people do not prefer more expensive to less expensive wine, some experts do, at least if we are willing to bend the statistical significance rules a little. And although ordinary people cannot agree on tasting notes, experts often can.

But although wine experts really do know more than you and I, the world of wine is insane. People spend thousands of dollars for fancy wine that they enjoy no more than $10 plonk from the corner store. Vintners obsess over wine contests that are probably mostly chance. False beliefs, like the superiority of French wine, get enshrined as unquestioned truths.

All the oenophiles and expert tasters of the 1960s and ’70s got one of the most basic questions in their field wrong. Why? Maybe patriotism: Most of the wine industry was in France, and they didn’t want to consider that other countries might be as good as they were. Maybe conformity: If nobody else was taking Californian wines seriously, why should you? Or maybe a self-perpetuating cycle, where if any expert had made a deep study of Californian wines, they would have been able to realize they were very good, but nobody thought such a study was worth it.

Wine is not fake. Wine experts aren’t fake either, but they believe some strange things, are far from infallible, and need challenges and blinded trials to be kept honest. How far beyond wine you want to apply this is left as an exercise for the reader.

  1. Bénédicte Pineau et al., “Olfactory Specificity of Red- and Black-Berry Fruit Aromas in Red Wines and Contribution to the Red Bordeaux Wine Concept,” OENO One 44, no. 1 (2010).
  2. Robin Goldstein et al., “Do More Expensive Wines Taste Better? Evidence from a Large Sample of Blind Tastings,” Journal of Wine Economics 3, no. 1 (2008): 1–9.
  3. Vanessa Harrar et al., “Grape Expectations: How the Proportion of White Grape in Champagne Affects the Ratings of Experts and Social Drinkers in a Blind Tasting,” Flavour 2, no. 1 (December 2013): 25.
  4. Robert Hodgson, “An Examination of Judge Reliability at a Major U.S. Wine Competition,” Journal of Wine Economics 3, no. 2 (2008): 105–13.
  5. Robert Hodgson, “An Analysis of the Concordance Among 13 U.S. Wine Competitions,” Journal of Wine Economics 4, no. 1 (2009): 1–9.
  6. Geoffrey Lewis et al., “The Impact of Setting on Wine Tasting Experiments: Do Blind Tastings Reflect the Real-Life Enjoyment of Wine?” International Journal of Wine Business Research 31, no. 4 (2019): 578–90.
  7. Andrei Prida and Pascal Chatonnet, “Impact of Oak-Derived Compounds on the Olfactory Perception of Barrel-Aged Wines,” American Journal of Enology and Viticulture 61, no. 3 (2010): 408.
  8. Wendu Tesfaye et al, “Descriptive Sensory Analysis of Wine Vinegar: Tasting Procedure and Reliability of New Attributes,” Journal of Sensory Studies 25, no. 2 (2010): 216–30.
  9. Roman Weil, “Debunking Critics’ Wine Words: Can Amateurs Distinguish the Smell of Asphalt from the Taste of Cherries?” Journal of Wine Economics 2, no. 2 (2007): 136–44.

Scott Alexander is a writer and psychiatrist based in Oakland, California. He blogs at astralcodexten.substack.com.

Published November 2022

Have something to say? Email us at letters@asteriskmag.com.
