The flaws of the scores

Richard wonders why wine scores are so enduring when they are so obviously fallible. 

Last month, I tasted around 50 Condrieus from the 2017 vintage (see my notes here). I scored one particular wine at a measly 14.5, one of the lowest possible scores, and another one at 17.5 – one of only four to score that high for the appellation across the vintage. 

The trouble was that it was the same wine. 

The first time, it was the 13th wine tasted blind in a line-up of 47 Condrieus, organised by promotional body Inter Rhône. The second time, it was at a relaxed dinner the following evening, with the winemaker himself sitting at our table – I spit thee not.

In less than 48 hours, the same wine went from being one of the very worst Condrieus of the vintage to one of the very best.

In the first instance, the 14.5 was given in the equivalent of laboratory conditions: a large, simple, brightly lit room with plain white tables featuring built-in stainless-steel sinks. The sort of room where you’d be content to have your appendix removed. Each wine was poured blind and had about three minutes to be tasted, scored and written up, alongside dozens of its peers. I noted that it was flabby, overcooked and dull.

The second time was in a quiet, comfortable restaurant, after a couple of restorative (small) beers, at the beginning of a pleasant dinner that involved tasting 10 or so wines presented by the winemaker in person. This time round I found it fresh, defined and precise.

There are many potential explanations for the discrepancy, none of which is either satisfying or provable.

In the tasting room, it was early morning, conditions were conducive to complete concentration, and I was entirely sober. At the dinner, it was late evening, I was scribbling my notes in a notebook perched on the edge of a table, and I was two small beers and a half-glass of Condrieu less than sober. At the time, I didn’t realise that I had tasted the same wine the day before and scored it so differently. And the winemaker was sitting right next to me.

In the real world, it is the restaurant scenario that most closely resembles the way wine is meant to be drunk. Unless you buy wine to drink in entirely sterile conditions (perhaps you’re an appendix surgeon), would you rather trust a dispassionate three-minute judgement or a longer, more contextualised and emotive consideration?

Then of course there is a host of other factors that affect how a wine might perform. Perhaps the first bottle was suffering from mild TCA or oxidation or any of the other factors that constitute bottle variation, and my assessment was surely affected by the Condrieus I tasted immediately before. I checked with Decanter’s Rhône critic Matt Walls, who was tasting with me all week, and he scored the wine at 94 points on both occasions – although at the dinner, he was aware of having scored it highly the previous day.

So it seems the most likely explanation for my variation in scores is good old-fashioned human fallibility. I was evidently wrong once – but which of those scores is the right one?

The wine was Pierre Gaillard’s L’Octroi, incidentally, and, for the record, the final score I settled on was 17.5, for two reasons: that I prefer to give the benefit of the doubt, and that I generally like Gaillard’s wines. So that’s one way of answering the question.

The better answer is that wine scores are not constant. Tasting wine is not a mathematical equation. It does not produce a reproducible result even for the same palate, let alone a different one with a different set of personal preferences.

The problem is compounded by the recurrent assumption that scores are absolute. Headlines were recently grabbed (in wine trade publications, at least) by the latest batch of Bordeaux and Napa reds to be awarded 100 points by the Wine Advocate. But these scores are not a factual, definite thing – in reality, if those same wines were tasted again, blind, by the same critic, the scores would almost certainly be different. But then again, I would say that.

Meanwhile, in China a new scoring system has been created, sanctioned by the state itself, intended to judge wines on criteria that are more relevant to Chinese consumers. Quite what that involves isn’t entirely clear – but a number is still allocated to the wine (out of 10), which will inevitably lead to the same issues.

Scores have become an inescapable feature of wine. They are by no means without value, and can be useful as a comparative tool, yet they are blatantly flawed. The only way to diminish their influence would be if we all stopped believing in them, like with fairies and Santa. And while a world without fairies and Santa and wine scores might be more realistic, it sounds a lot less fun too.