Why Wine Experts Disagree: The Subjectivity of Tasting

Sommy Team

Founder & Wine Educator

April 29, 2026

11 min read

TL;DR

Wine experts disagree because tasting is part objective — measurable acid, tannin, sweetness — and part subjective. Genetic differences in taste-bud sensitivity, decades of palate calibration, school-of-thought traditions like WSET versus Parker, regional aesthetics, and point inflation cycles all shift scores. Two trained experts can rate the same wine 88 and 96 and both be right.

Two sommeliers holding identical glasses of red wine, scoring sheets in front of them showing different numbers

Why Wine Experts Disagree, in One Paragraph

The phrase why wine experts disagree captures one of the most misunderstood truths in wine criticism: tasting is partly objective and partly subjective, and trained palates split along the seam. Acid, tannin, residual sugar, and alcohol are measurable in a lab — experts almost always agree on those structural facts. Preference, threshold sensitivity, and style framework are not measurable — experts diverge sharply there. Genetic differences in TAS2R bitter receptors mean roughly twenty-five percent of the population are supertasters experiencing bitterness at much lower concentrations. Cultural training shapes what counts as "good" — French sommeliers value restraint, US critics often score for power. Style preferences, oxidative versus fresh, oaked versus unoaked, are aesthetic stances dressed as judgments. Decades of expertise plus repeated exposure gradually shift palates. Two well-trained experts can score the same wine 88 and 96 — both right within their own frameworks.

Two sommeliers tasting from identical glasses of the same wine, each writing different scores on their sheets

What Wine Experts Actually Agree On

Before unpacking the disagreement, it helps to be precise about the consensus. Wine experts agree on more than the headlines suggest.

The objective layer of tasting is largely settled. Trained tasters across schools and continents converge on:

  • Structural measurements — sweetness, acidity, tannin, body, and alcohol on a one-to-five scale match closely across experienced palates. The numbers are anchored to physical compounds in the wine.
  • Faults — cork taint, oxidation, brettanomyces, volatile acidity, and reduction are flagged by nearly every trained taster. Recognizing flaws is the easiest part of expert agreement. Our piece on how to identify wine faults by smell breaks down the specific markers.
  • Typicity — whether a Sauvignon Blanc smells like Sauvignon Blanc, whether a Burgundy reads as Pinot Noir from the Côte d'Or. Pattern recognition trained over thousands of bottles produces tight agreement.

Where experts diverge is the subjective layer — the quality conclusion, the score, the "is this wine excellent" judgment. That layer is where genetics, training, and aesthetics enter the room.

Reason 1: Genetic Taste Differences Are Real

Two tasters at the same flight are not running the same instrument. The hardware itself differs.

Supertasters, Tasters, and Non-Tasters

Roughly twenty-five percent of the population are supertasters — people with denser taste-bud distribution on the tongue and stronger sensitivity to bitter compounds. The genetic basis sits in a family of receptor genes called TAS2R, which detect bitterness from compounds like PROP and PTC. Supertasters experience bitter at much lower concentrations than the average drinker.

Another fifty percent are average tasters. The remaining twenty-five percent are non-tasters — they need higher concentrations to register bitterness at all.

Many wine professionals are supertasters. The sensitivity that drove them toward the field also shapes how they score wine. A grippy young tannic red can read as structured and balanced to a non-taster expert, and as harsh and astringent to a supertaster expert. Neither is wrong. Their hardware is calibrated differently.

A diagram showing three palate-sensitivity profiles labeled supertaster, average taster, and non-taster, with overlapping bell curves of bitter detection thresholds

Threshold Differences for Aroma Compounds

Beyond taste, smell sensitivity also varies genetically. The compound rotundone, responsible for the black-pepper note in Syrah and Grüner Veltliner, is undetectable to about twenty percent of people regardless of training. They cannot smell it at any concentration. A Syrah that reads as classically peppery to one reviewer is mute on that note for another.

The same applies to specific thiols, pyrazines, and esters. Two tasters describing a Sauvignon Blanc might genuinely smell different bouquets — not because one is paying attention and the other is not, but because their olfactory receptor genes differ. Our piece on retronasal smell in wine covers how aroma actually reaches the brain.

Reason 2: Palate Calibration Drifts Over Time

Even within one taster, the instrument changes. A reviewer at twenty-five and the same reviewer at fifty-five are not the same palate.

Sensitivity Shifts with Age

Most people lose smell discrimination noticeably by their fifties — the olfactory receptors regenerate more slowly, and detection thresholds creep upward. Bitter sensitivity often increases with age, while sweetness perception can soften. The trajectory differs from person to person, but its direction is universal.

A critic who scored a famous wine at thirty was a different sensory instrument than the one who scores its current vintage at sixty. The score in print is a snapshot of one moment in a moving career. Our explainer on how age affects palate covers what physically shifts and when.

Training Reshapes the Brain

Repeated exposure to specific styles trains the palate. After ten thousand glasses of Burgundy, a reviewer's reference point for what Pinot Noir "should taste like" tightens around what they have actually tasted. Their internal benchmark for excellence is built from their personal exposure curve.

This is why a critic who specializes in Italian wine sometimes scores Bordeaux differently than a Bordeaux specialist would — not from incompetence, but from a different reference library. The same dynamic affects how to evaluate wine quality across regions.

Fatigue and Recovery Cycles

Within any given session, palate fatigue dulls discrimination after thirty to forty wines, regardless of mitigation. A reviewer who tastes 150 wines on Tuesday morning and 30 wines on Wednesday morning will produce slightly different scores for the same bottle on each day. Our piece on wine palate fatigue explores the dulling curve in detail.

Reason 3: Schools of Thought Disagree by Design

Wine training is not one tradition. It is several, each with its own rubric, vocabulary, and aesthetic.

The WSET Tradition

The Wine & Spirit Education Trust dominates Anglophone wine training. Its Systematic Approach to Tasting walks through appearance, nose, palate, and conclusions in a fixed order. The quality conclusion uses a six-step ladder from Faulty to Outstanding, anchored by typicity, balance, intensity, length, and complexity.

WSET-trained reviewers tend to reward restraint — wines that show their grape and origin clearly without overworking themselves. Concentration is a virtue only if balance survives it. A polished reviewer who came up through Diploma and Master of Wine scores against this template by reflex. Our breakdown of the WSET systematic tasting approach walks through the full rubric.

The Parker School

Robert Parker's 100-point scale, popularized in the 1980s, came with an embedded aesthetic — power, ripeness, concentration, hedonic impact. A Parker-style reviewer rewards big wines that announce themselves immediately. The American Wine Spectator and Wine Enthusiast traditions inherit much of this DNA.

A wine that reads as outstanding to a Parker-school reviewer can read as overworked or out of balance to a WSET-trained one. The same bottle. Different schools. Different scores. The mechanics of the system itself are covered in our piece on the 100-point wine scale.

The French AOC Tradition

French wine evaluation centers on typicity: does the wine taste of its appellation? A Sancerre should taste like Sancerre. A Châteauneuf-du-Pape should read as Châteauneuf-du-Pape. Power and concentration matter less than place. A French sommelier may downgrade a technically impressive wine that does not taste of where it came from.

A New World expert trained on varietal expression often disagrees, because the grape is the reference, not the appellation.

A grid showing three rating frameworks side by side — WSET, Parker, and French AOC — with the same wine receiving different point allocations under each

Reason 4: Regional Aesthetic Preferences

Even within one school, regional aesthetics shape preference.

  • Old World vs New World — European reviewers often weight elegance, restraint, savory complexity. American and Australian reviewers often weight ripeness and power. Our piece on new world vs old world tasting style covers the divide in depth.
  • Oxidative vs reductive — Some traditions reward gentle oxidation (Sherry, white Rioja, vin jaune). Others reward bright reductive freshness (modern white Burgundy, Marlborough Sauvignon Blanc). The same nose can score high or low depending on the reviewer's tradition.
  • Oaked vs unoaked — A heavily oaked Chardonnay rewarded in California is sometimes penalized in Chablis, even if the wine is technically clean and well-balanced.

These are aesthetic stances dressed as judgments. Both can be defended. Both produce different scores for the same wine.

Reason 5: Point Inflation Cycles Distort Comparison

The 100-point scale was originally meant to use the full range. It does not anymore.

Across major publications, average scores have climbed roughly five points since the early 1990s. A wine that scored 88 then often scores 92 to 93 today. Multiple forces drive the inflation: better winemaking on average, retailer pressure for shelf-talker scores of 90 and above, competition between publications for the highest average, and a sliding rubric that quietly moved the bar.

This means two reviewers calibrated at different moments in the inflation cycle disagree even when they would otherwise agree. A reviewer trained in the 1990s, scoring strictly, might give a wine 89. A reviewer trained in the 2020s, scoring against modern norms, might give the same wine 93. Both are honest applications of their internal calibration. The cycle just shifted under them.
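The drift described above can be made concrete with a toy model. The linear offset, the 85-point anchor, and the mapping from an internal judgment to a published score are all illustrative assumptions, not a published conversion — the point is only that the same judgment expressed through rubrics anchored in different decades produces the 89-versus-93 split:

```python
# Toy model of calibration drift: the same internal quality judgment,
# published through rubrics anchored in different eras.
# The 85-point anchor and linear offset are assumptions for illustration.
def published_score(quality: int, era_offset: int) -> int:
    """Map a hypothetical internal 0-10 quality judgment to a 100-point score."""
    base = 85 + quality          # hypothetical rubric anchor
    return min(100, base + era_offset)

wine_quality = 4                 # same wine, same internal judgment
print(published_score(wine_quality, era_offset=0))  # 1990s-calibrated critic -> 89
print(published_score(wine_quality, era_offset=4))  # 2020s-calibrated critic -> 93
```

Neither critic is dishonest; the offset between their anchors accounts for the entire disagreement.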

A timeline sketched on parchment showing how wine scores from the same publication have drifted upward by five points between 1990 and the present

Reason 6: When Consensus Does Emerge

Disagreement is the headline, but expert consensus is real and predictable in two scenarios.

Faulted Wines Pull Universal Agreement

A corked wine smells corked to almost every trained palate at concentrations above the threshold. An oxidized wine reads as oxidized. A volatile wine reads as volatile. Faults are the floor where expert agreement is essentially total. If three reviewers reject a bottle for cork taint, the bottle is corked.

This is one reason competition medals function reasonably well as a fault filter — getting a silver or higher means the panel agreed the wine was clean and well-made, even if they would have disagreed about whether to score it 87 or 91. Our walk-through of wine judging criteria covers exactly how panels handle disagreement.

Exceptional Vintages Concentrate Agreement

A genuinely outstanding wine — clean fruit, balanced structure, long finish, classical typicity — is harder to argue against than a borderline one. The 88-to-94 zone is where two experts most often diverge by three to five points. The 96-and-above zone is where agreement tightens, because excellence on every axis is easier to recognize than to dispute.

The middle of the scale is the noisy zone. The extremes are where consensus lives.

How to Use Expert Disagreement to Your Advantage

If two trained experts can score the same bottle 88 and 96, the practical move is not to average their scores — it is to find the expert whose calibration matches your palate.

A workable approach:

  1. Build your own scoring journal. A simple one-to-ten scale based on whether you would buy the bottle again. After thirty entries, you will see clear patterns. The piece on wine tasting journal tips covers a workable template.
  2. Compare your scores against three or four critics. For wines you have rated, look up what the critics gave them. Track agreement over twenty bottles.
  3. Follow the critic whose ratings best predict your enjoyment. That reviewer's palate is calibrated closest to yours. Their next 92 is more likely to land for you than the highest score on the shelf.
  4. Treat aggregate scores skeptically. Averaging two disagreeing experts produces a number that matches neither of their actual judgments. The signal is in alignment, not averaging.

The Sommy app's tasting flow is structured around this idea — every session captures a personal score on a consistent rubric, so the calibration data accumulates over time. Browse sommy.wine for a walk-through of how the journal builds out across bottles.

What This Means for Beginners

If you are new to wine, expert disagreement can feel discouraging — if professionals split by five points on the same bottle, what hope does an amateur palate have? The truth runs the other way.

The first lesson is that there is no single "correct" judgment hiding behind the experts. They genuinely see different things. Your job is not to match their ratings — it is to develop your own palate and learn which experts predict your enjoyment.

The second lesson is that the objective layer is teachable. You can learn to taste sweetness, acidity, tannin, body, and alcohol on a one-to-five scale within a few months of structured practice. That is the layer the experts agree on. Sommy's beginner course walks through this exact rubric. The piece on understanding tannins, acidity, and body covers the structural skeleton.

The third lesson is that the subjective layer is yours to own. Your preferences are not wrong. A drinker who prefers crisp unoaked whites is not less sophisticated than one who prefers buttery oak-aged Chardonnay. They are calibrated differently, like the experts. The skill is knowing where you sit and finding wines and critics aligned with that.

The Bottom Line

Wine experts disagree because tasting sits across an objective and a subjective layer. The objective layer — structure, faults, typicity — produces tight agreement. The subjective layer — quality, score, hedonic judgment — produces three-to-five-point spreads even between equally qualified critics.

Genetic differences in taste-bud sensitivity, decades of palate calibration, school-of-thought traditions, regional aesthetics, and point inflation cycles all push the subjective layer in different directions. None of them is the wrong answer. The expert who scored a wine 88 and the expert who scored it 96 are both reading honestly against their own internal rubric.

The practical takeaway is not to dismiss expert opinion — it is to find the experts whose calibration matches yours, and to build your own reference points alongside theirs. The Sommy app makes that process structured without making it clinical, so the patterns in your own journal compound across every bottle you taste.

Frequently Asked Questions

Why do two qualified wine experts score the same wine differently?

Each expert carries a unique genetic sensitivity to bitter, tannin, and acid, layered with years of training in a particular school of thought. A reviewer trained on bold New World reds rewards concentration. A reviewer trained on cool-climate elegance rewards restraint. Both score honestly against their internal rubric. A three-to-five-point spread between two qualified critics on the same bottle is the norm, not the exception.

Are wine experts actually genetically different from regular drinkers?

Some are. Around twenty-five percent of the population are supertasters with denser taste-bud distribution and stronger TAS2R bitter-receptor sensitivity. Many become wine professionals because they detect subtle structure earlier than peers. The flip side is that supertasters often penalize tannin and bitter compounds more harshly than non-tasters, so a wine that reads as grippy and balanced to one expert can read as harsh to another.

What is the difference between the WSET school and the Parker school?

WSET trains tasters to evaluate against typicity, balance, length, and intensity using a fixed Systematic Approach to Tasting rubric — restraint and structure are rewarded. The Parker school, popularized by the 100-point scale in the 1980s, prioritizes power, ripeness, concentration, and hedonic impact. The same wine can earn very good marks from a WSET-trained reviewer and outstanding marks from a Parker-trained one without either being wrong.

Has wine criticism become more or less consistent over time?

Less consistent in absolute scoring, more consistent in identifying faults. Average published scores have climbed roughly five points since the early 1990s due to point inflation, sliding rubrics, and retailer pressure. But experts converge tightly on whether a wine is corked, oxidized, or volatile — agreement on faults is near-universal, while agreement on quality keeps loosening as the scale compresses upward.

When do wine experts actually agree?

Faulted wines pull near-universal consensus — corked, oxidized, brett-affected, or volatile bottles get flagged by almost every trained palate. Exceptional vintages also concentrate agreement, because clean fruit, balanced structure, and long finish are easier to recognize than to argue against. Style preference and point allocation diverge most in the middle — the 88-to-94 range is where two experts most often disagree by three to five points.

Should I trust a wine reviewer whose palate matches mine?

Yes, more than any aggregate score. After tracking thirty bottles in your own journal, compare your one-to-ten scores against three or four critics. The reviewer whose ratings most consistently predict your own enjoyment is the one calibrated to your palate. Following that critic produces better results than chasing the highest published score, because expert disagreement is not solved by averaging — it is solved by alignment.

Does palate drift mean an expert from 2008 cannot be trusted today?

Their old reviews are still useful as historical reference points, but the same critic at sixty tastes differently than at thirty. Smell discrimination softens in most people by the fifties, bitter sensitivity often climbs, and style preferences shift after decades of exposure. A reviewer's calibration is a moving target, which is why their recent reviews predict your experience with a current bottle better than their archival ones.

Get the free Wine 101 course

Start learning to taste wine like a pro with structured lessons and AI-guided practice.

wine-tasting, wine-scoring, sensory-training, wine-criticism, wine-basics

The Sommy Team is building the world's most approachable wine education app, helping beginners develop real tasting skills through structured courses and AI-guided practice.
