BJCP to the Rescue?
The BJCP, or Beer Judge Certification Program, was created back in 1985 with the purpose of (among other things) developing standardized tools, methods, and processes for the structured evaluation, ranking, and feedback of beer, mead, and cider.
The vast majority of homebrew competitions in the U.S. are “BJCP sanctioned,” meaning that they follow certain BJCP practices and style guidelines when judging beer.
To become a BJCP judge, one must pass both an online entrance exam and a tasting exam.
So does the BJCP exam weed out supertasters?
Maybe. Maybe not.
As mentioned, there are only two parts to the BJCP exam: the entrance exam and the tasting exam. Passing the entrance portion is essentially a matter of demonstrating that you can sufficiently recall the BJCP study material, so it would neither identify nor weed out supertasters.
Assuming you pass the entrance exam, you proceed to the tasting exam. The tasting portion focuses on evaluating six beers according to the BJCP style guidelines, pointing out flaws, and giving feedback, which often includes suggestions on how to correct a detected flaw.
In theory, if a supertaster were to get weeded out, it would most likely happen at the tasting exam. However, depending on how intensely the supertaster perceives certain flavors, and whether those flavors present themselves in any of the beers being evaluated, the supertaster may slip by. What’s more, depending on the individual, a supertaster may be able to pick out any number of off-flavors more easily and accurately, which may in fact boost his or her score. And simply including a bitter beer in the taste test lineup doesn’t necessarily screen out supertasters either: some supertasters reportedly enjoy IPAs, which suggests that “cultural and environmental factors have at least as much or more influence on food preferences [as] genetics”.
But here’s the best part about BJCP-sanctioned beer competitions: since it’s not required that a beer be judged by an actual BJCP judge, there is an even greater chance that a supertaster has avoided any screening process whatsoever and is sitting squarely at the judging table next to another judge of unknown credentials.
The situation for commercial beer competitions such as the Great American Beer Festival (GABF) is even worse. These beers are judged according to the GABF style guidelines by industry professionals, not by BJCP judges or any other otherwise qualified judges (although some might happen by chance to be BJCP judges), so the likelihood of a supertaster sitting at the judging table is potentially higher.
But do we really want to weed out supertasters, or simply account for them?
Confronting the Supertaster Effect
Currently, the BJCP doesn’t formally attempt to identify supertasters. However, when a particular beer is being judged and the scores vary too much, the judges will usually self-police, discussing why the score should be higher or lower and adjusting accordingly. The problem with this method is that if the people judging a given beer were all supertasters, then no score correction may be made. And unlike mass public beer rating systems like RateBeer or BeerAdvocate, beer judging competitions rely on a much smaller sample of “judges” (often only two people per beer), and are therefore more vulnerable to the supertaster effect.
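The self-policing step described above can be sketched as a simple spread check: flag a beer for discussion when its judges’ scores sit too far apart. This is a hypothetical illustration, not BJCP tooling; the 7-point threshold reflects common BJCP consensus guidance, and the function names are invented.

```python
def needs_discussion(scores, max_spread=7):
    """Return True when the judges' scores differ by more than max_spread."""
    return max(scores) - min(scores) > max_spread

def consensus(scores):
    """A naive consensus: the rounded average of the judges' scores."""
    return round(sum(scores) / len(scores))

table = [38, 29]  # two judges, nine points apart
if needs_discussion(table):
    # In a real competition the judges would talk it out and adjust;
    # here we simply average as a stand-in for that discussion.
    print(consensus(table))
```

In practice the discussion matters more than the arithmetic, but the sketch shows why a two-person table is fragile: with only two scores, one supertaster shifts the average a long way.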
Here are a few possible solutions:
The PROP Solution: If supertasters were identified as part of the judge certification process, then the scores generated by a supertaster could be adjusted according to some predetermined schema. Using the PROP test to identify supertasters is probably the easiest and most common approach, and at about $5 per test, it’s cost-effective as well.
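As a rough illustration of what such a predetermined adjustment schema might look like, the sketch below pulls a flagged supertaster’s score partway toward the panel mean to damp exaggerated perceptions. The 0.5 weight and the function name are invented for illustration; nothing like this is actual BJCP policy.

```python
def adjusted_score(judge_score, panel_mean, is_supertaster, weight=0.5):
    """Move a supertaster's score toward the panel mean; leave others alone."""
    if not is_supertaster:
        return judge_score
    # A weight of 0.5 splits the difference between judge and panel.
    return judge_score + weight * (panel_mean - judge_score)

# A PROP-identified supertaster scores a pale ale 24 against a panel mean of 36:
print(adjusted_score(24, 36, True))   # -> 30.0
```

Any real schema would need validation against panels of known taster status; this only shows the mechanical shape of the idea.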
Certainly the PROP test, and even the very definition of “supertaster”, have been criticized for only identifying bitter/sweet hypersensitive supertasters and not more general supertasters who are sensitive to other tastes like sourness, saltiness, and umami.
Another critique stems from the original definition, under which supertasters were identified as having more taste buds than others; further research, however, demonstrated that there is no correlation between the density of taste buds and being a supertaster. This might suggest that being a supertaster isn’t simply a matter of how many taste buds one has, but rather of whether the taster has the gene (or genes) that cause the taste buds to perceive PROP as intensely bitter. Still, this is more a critique of a rigid definition than of the use of PROP as an effective way to identify supertasters.
It’s also worth noting that other, non-PROP methods have been used to identify people with supertaster-like attributes, which is why some have suggested the term “hypergeusia” for people with a “broadly tuned heightened taste response”. Others point out that about 5% of non-tasters actually perceive PROP as bitter, meaning that the PROP test produces false positives roughly 5% of the time.
All critiques of the PROP test notwithstanding, no other widely adopted, practical field test has been developed that offers the same cost-benefit and ease of use that the PROP test does in identifying supertasters or those with a heightened taste response.
The Taste-Correspondence Solution: In a perfect world, all beer judges, supertasters or not, would already have a wide breadth of academic and practical tasting experience with whichever style of beer they are judging. This is not always the case. As such, we can attempt to adjust for the supertaster’s relatively exaggerated perception of some tastes by requiring that all prospective judges sample a number of predetermined commercial examples (calibration beers) that epitomize the style of beer being judged, use those exemplar beers as the ideal standard, and submit corresponding tasting notes. So even though a supertaster might personally find an American pale ale to be intolerably bitter, as long as the pale ale being judged shares common characteristics with a range of predetermined and documented exemplar beers, the relatively exaggerated perception is mitigated.
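The calibration idea above can be sketched as a range check: instead of scoring bitterness against a judge’s personal tolerance, perceived attributes are compared to ranges derived from the exemplar beers. The attribute names, scale, and ranges here are invented for illustration.

```python
EXEMPLAR_RANGE = {            # built from tasting notes on calibration beers
    "bitterness": (6, 9),     # perceived intensity on a 1-10 scale
    "hop_aroma": (5, 9),
    "malt_sweetness": (2, 5),
}

def within_style(perceived):
    """True if every perceived attribute falls inside the exemplar range."""
    return all(lo <= perceived[attr] <= hi
               for attr, (lo, hi) in EXEMPLAR_RANGE.items())

# A supertaster may personally find the bitterness intolerable, yet still
# record it as 8/10 relative to the exemplars -- and the beer passes.
print(within_style({"bitterness": 8, "hop_aroma": 7, "malt_sweetness": 3}))
```

The point of the design is that the exemplars, not the judge’s own hedonic reaction, define the yardstick, which is exactly what mitigates the exaggerated perception.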
The BJCP does provide a list of good commercial examples of the various beer styles, but it does not require proof that any such beers have been sampled. Arguably, it would be advisable for prospective judges to have actually sampled good examples of all beer styles, but failing to do so would not necessarily prevent anyone from passing the tasting component of the exam.
One possible problem with this approach is that the main focus of such a competition would seem to be determining how closely a given beer corresponds to predetermined ideal commercial examples, not whether the beer being judged is intrinsically superior. In other words, we are assuming that the ideal commercial example is the absolute best example of the style (however that is to be determined), and therefore even if we are presented with a beer that better represents the style than the commercial example does, we may never score it as such. The other obvious problem is placing a supertaster in a category that doesn’t have a predetermined exemplar, such as the Specialty or Fruit Beer categories. The possible solution there is to exclude supertasters from such categories.
The Continued Training Solution: As with sports, being able to perform at peak condition as a beer judge requires continual training in the form of carefully evaluating beer on a regular basis. And while the supertaster phenomenon may be purely genetic, studies have suggested that nurture can compensate for some of the tasting genes one is dealt.
Although we haven’t discussed the sense of smell much with respect to supertasters, aroma is certainly a critical factor in judging beer [aroma is the second most heavily weighted category affecting a beer’s score on the BJCP beer scoresheet, right after flavor]. With respect to smell and genetics, at least one study reported that explicit sensory training improved the olfactory sensitivity of wine experts, while others have pointed out that after repeated practice smelling various aromas, even people with a reduced ability to pick out certain odors (odor-specific hyposmia) showed improved sensitivity to those aromas.
Even if no similar study of improved taste acuity exists, there are arguably just as many if not more aromas found in beer as in wine, so such aroma training should benefit beer judges as well. And if we assume that the results of the smell-improvement studies carry over to taste, then repeated practice tasting certain flavors, or even styles of beer, should improve taste acuity. To take it a step further, I’d advocate not just repeated practice (weekly or biweekly) tasting and smelling beer, but careful repeated practice.
By “careful” I mean that focused attention is paid while tasting, noting at least the factors spelled out on the BJCP Beer Scoresheet. You don’t need to use the actual BJCP scoresheet itself, as long as you are noting (writing down) what you detect, perhaps in conjunction with a beer flavor wheel.
A Quick Note about Non-Tasters
We’ve spent a lot of time talking about supertasters, who again are said to make up about 25% of the population. “Normal”, or “medium”, tasters make up 50%, but seeing as “non-tasters” make up the other 25%, should we be concerned about non-taster beer judges? “Non-tasters”, by the way, are generally defined as those who almost never perceive PROP as bitter at all (and never perceive a related compound called PTC as bitter) and who have relatively few taste buds (although this is being reconsidered), and who therefore have a relatively muted sense of taste. A hard-and-fast intuitive response would look something like this: with respect to homebrew competitions, non-tasters would more likely be weeded out via the tasting portion of the BJCP exam, which suggests non-tasters represent only a small percentage of judges in homebrew competitions. Similarly, we probably wouldn’t find many non-tasters at the commercial beer judging table, as that group is made up of industry professionals (presumably brewers), and the market has arguably already reduced their numbers.
Will the BJCP or any other organization that confers some sort of fancy tasting title to people ever institute some method to account for the supertaster phenomenon? Only time will tell.
In the meantime, the next time you sense something a little off with your score sheets, you might just have a supertaster to thank.
Like this blog? Well, thanks- you’re far too kind.
Want to read more beer-inspired thoughts? Come back any time, friend us on Facebook, or follow us on Twitter.
Or feel free to drop me a line at: email@example.com
Hi, I’m Dan: Beer Editor for Beer Syndicate, Beer and Drinking Blogger, Gold Medal-Winning Homebrewer, Beer Reviewer, AHA Member, Beer Judge, Shameless Beer Promoter, and Beer Traveler. Interests? Beer.
1. Menosky, Joe, and Ronald D. Moore. “In Theory.” Star Trek: The Next Generation. 1 June 1991. Television.
2. Garofalo, Peter. How to Judge Beer. N.p.: www.BJCP.org, n.d. PDF.
3. Bartoshuk, Linda M., Valerie B. Duffy, and Inglis J. Miller. “PTC/PROP Tasting: Anatomy, Psychophysics, and Sex Effects.” Physiology & Behavior 56.6 (1994): 1165-171. Web.
4. Prescott, J., J. Soo, H. Campbell, and C. Roberts. “Responses of PROP Taster Groups to Variations in Sensory Qualities within Foods and Beverages.” Physiology & Behavior.U.S. National Library of Medicine, 15 Sept. 2004. Web. 31 July 2015.
5. Reed, Danielle R. “Birth of a New Breed of Supertaster | Chemical Senses | Oxford Academic.” OUP Academic. Oxford University Press, 18 June 2008. Web. 31 July 2015.
6. Rox, Philippa. “Why Taste Is All in the Senses.” BBC News. BBC, 09 Dec. 2012. Web. 31 July 2015.
7. Bachmanov, Alexander A., and Gary K. Beauchamp. “Taste Receptor Genes.” Annual Review of Nutrition. U.S. National Library of Medicine, 19 Apr. 2007. Web. 31 July 2015.
8. Tempere, S., E. Cuzange, J. C. Bougeant, G. De Revel, and G. Sicard. “Explicit Sensory Training Improves the Olfactory Sensitivity of Wine Experts.” SpringerLink. Springer-Verlag, 20 Jan. 2012. Web. 31 July 2015.
9. Beer Judge Certification Program (BJCP). N.p., n.d. Web. 31 July 2015.
10. Catanzaro, D., E. C. Chesbro, and A. J. Velkey. “Relationship between Food Preferences and PROP Taster Status of College Students.” Appetite. U.S. National Library of Medicine, Sept. 2013. Web. 31 July 2015.
11. Schmitt, Diane M. “Guest Post: Supertasting- Fact, Fiction, or Something In-between?” Science Meets Food. Science Meets Food, 10 Mar. 2015. Web. 31 July 2015.
12. “Sanctioned Competition Requirements.” BJCP. N.p., n.d. Web. 31 July 2015.
13. Association, Brewers. “Competition Information.” Great American Beer Festival. N.p., n.d. Web. 31 July 2015.
14. “Supertaster Test.” Supertaster Test. N.p., n.d. Web. 31 July 2015.
15. Hayes, John E., and Russell SJ Keast. “Two Decades of Supertasting: Where Do We Stand?” Physiology & Behavior. U.S. National Library of Medicine, 24 Oct. 2011. Web. 31 July 2015.
16. Garneau, Nicole L., Tiffany M. Nuessle, Meghan M. Sloan, Stephanie A. Santorico, Bridget C. Coughlin, and John E. Hayes. “Crowdsourcing Taste Research: Genetic and Phenotypic Predictors of Bitter Taste Perception as a Model.” Frontiers. Frontiers, 07 Apr. 2014. Web. 31 July 2015.
17. Steinberger, Mike. “Am I a Supertaster?” Slate Magazine. N.p., 20 June 2007. Web. 31 July 2015.
18. Beauchamp, Gary K., and Julie A. Mennella. “Flavor Perception in Human Infants: Development and Functional Significance.” Digestion. S. Karger AG, 10 Mar. 2011. Web. 31 July 2015.
Good post and some very good ideas.
Some additional points:
The BJCP Beer Tasting exam was designed before the concept of “supertasters” existed, and before the results of groundbreaking research into the genetics of smell, taste, and “flavor” were widely known. As such, the tasting exam is inherently flawed.
This is because 40% of the exam (20% for scoring accuracy, and 20% for perceptions) is based on comparison to proctors’ perceptions (the proctors are usually 2, sometimes 3, judges of National or higher rank). That means that there is a substantial random element to the exam, because examinees don’t know who the proctors will be, and because the odds of having exactly the same genetics of smell and taste between the proctors and a particular examinee are tiny.
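To make the random element concrete, here is a toy sketch of the scoring-accuracy component: the examinee’s six beer scores are compared against the proctor consensus, beer by beer. The 7-point agreement window, the sample numbers, and the function itself are assumptions for illustration; the real BJCP exam rubric is more nuanced.

```python
def scoring_accuracy(examinee, proctor_consensus):
    """Fraction of beers the examinee scored within 7 points of the proctors."""
    hits = sum(abs(e - p) <= 7 for e, p in zip(examinee, proctor_consensus))
    return hits / len(examinee)

examinee = [32, 40, 25, 38, 29, 35]   # examinee's six scores
proctors = [30, 31, 27, 37, 36, 34]   # proctor consensus for the same beers
print(scoring_accuracy(examinee, proctors))  # five of six within 7 points
```

The same examinee sheet graded against a different pair of proctors would yield a different accuracy figure, which is exactly the randomness being described.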
While experience and training are a huge aid in evening out variations in scoring and in calibrating perceptions, there will always be some variation due to genetics and to the effects of nurture and experience. (For example, certain aromas can trigger intense emotional memories, for good or for ill, which in turn affect perceptions: if you once got badly sick from eating cherries, you might develop an instinctive aversion to them, as well as to aromas and flavors reminiscent of cherries.)
Depending on who’s proctoring and who’s taking the exam, examinees might be randomly rewarded or punished for having similar or variant perceptions to those of the proctors.
This means that there is a sort of “founder effect” for the sensory traits expected for a high level BJCP judge, not just based on beer tasting experience, but also genetics. That is, people who did well enough on the BJCP exams in the early days of the program that they advanced to National or higher rank, and subsequently went on to proctor a number of exams, act as unintentional “gatekeepers” for subsequent examinees.
But, because there are hundreds of different proctors for various exams, depending on who the proctors are and what beers and faults are presented on the exam, in some cases the unintentional biases of the proctors reward people who are highly sensitive to a particular compound or class of compounds (e.g., VDK, bittering agents, sweetness). In other cases, they hurt them.
So, the BJCP doesn’t necessarily reward or discourage supertasters, so much as it selects first for people whose perceptions are most similar to those of existing senior judges in the program, and secondarily for people with the most common perceptions of compounds typically found in beer.
Over time, it is likely that the sensory expectations for BJCP judges will drift towards the most common perceptions found within the beer judge population. This will gradually discourage people who are outliers in terms of sensory perceptions (since they won’t score as well on the exams, and might be discouraged from judging if they constantly find their perceptions to be at odds with those of other judges). But, it will also discourage brewers who are also outliers (in that their beers won’t win in competition as often, and won’t be as popular with the public).
Unfortunately, since the population of highly-ranked beer judges is overwhelmingly middle-aged, white males of European descent, there might be unintentional discrimination against females and other races, since there are some differences in perceptions of smell and taste (hence “flavor”) between the sexes, between different ethnic groups, and to a lesser extent, younger and older people.
This is at odds with best industry practices, where you want your sensory panel to resemble a cross section of your consumers, and people in the general population. In a craft beer setting, analysis by BJCP-trained judges should only be part of quality control and product testing. You also need actual sensory analysis panels, as well as field trials. Informally, I’ve found that when people are given a choice between multiple beers, “the best beer kicks first”. This rule of thumb works well at venues like beer festivals or private parties where it’s otherwise not practical to keep track of consumer acceptance.
I speak as a National-level BJCP judge, who has proctored a number of tasting exams, and who graded nearly 150 beer tasting exams, as an experienced homebrewer, and as a student of sensory analysis and sensory perception.