In the world of wine reviews, evocative writing is key. Consider the following: “While the nose is a bit closed, the palate of this off-dry Riesling is chock full of juicy white grapefruit and tangerine flavors. It’s not a deeply concentrated wine, but it’s balanced neatly by a strike of lemon-lime acidity that lingers on the finish.”
Reading the description, you can almost feel the cool glass sweating in your hand and taste a burst of citrus on your tongue. But the author of this review never had that experience—because the author was a piece of software.
An interdisciplinary group of researchers developed an artificial intelligence algorithm capable of writing reviews for wine and beer that are largely indistinguishable from those penned by a human critic. The scientists recently released their results in the International Journal of Research in Marketing.
The team hopes this program will be able to help beer and wine producers aggregate large numbers of reviews or give human reviewers a template to work from. The researchers say their approach could even be expanded to reviews of other “experiential” products, such as coffee or cars. But some experts warn that this type of application has potential for misuse.
Theoretically, the algorithm could have produced reviews about anything. A couple of key features made beer and wine particularly interesting to the researchers, though. For one thing, “it was just a very unique data set,” says computer engineer Keith Carlson of Dartmouth College, who co-developed the algorithm used in the study. Wine and beer reviews also make a great template for AI-generated text, he explains, because their descriptions contain a lot of specific variables, such as growing region, grape or wheat variety, fermentation style and year of production. Also, these reviews tend to rely on a limited vocabulary. “People talk about wine in the same way, using the same set of words,” Carlson says. For example, connoisseurs might routinely toss around adjectives such as “oaky,” “floral” or “dry.”
Carlson and his co-authors trained their program on a decade’s worth of professional reviews—about 125,000 total—scraped from the magazine Wine Enthusiast. They also used nearly 143,000 beer reviews from the Web site RateBeer. The algorithm processed these human-written analyses to learn the general structure and style of a review. In order to generate its own reviews, the AI was given a specific wine’s or beer’s details, such as winery or brewery name, style, alcohol percentage and price point. Based on these parameters, the AI found existing reviews for that beverage, pulled out the most frequently used adjectives and used them to write its own description.
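The aggregation step the article describes — gathering a beverage’s existing reviews, tallying its most common descriptors and folding them into a fresh write-up — can be sketched in a few lines. This is a toy illustration of that idea only, not the study’s actual model; the descriptor list, function names and sentence template are all assumptions made here for demonstration.

```python
from collections import Counter

# Hypothetical vocabulary of tasting terms; the real system learned its
# vocabulary from roughly 268,000 professional reviews.
DESCRIPTORS = {"oaky", "floral", "dry", "juicy", "crisp", "tart", "citrusy"}

def top_descriptors(reviews, k=3):
    """Tally known descriptors across a beverage's reviews, most frequent first."""
    counts = Counter(
        word
        for review in reviews
        for word in review.lower().replace(",", " ").split()
        if word in DESCRIPTORS
    )
    return [word for word, _ in counts.most_common(k)]

def draft_review(name, reviews):
    """Fold the most frequent descriptors into a templated one-line review."""
    return f"{name} is {', '.join(top_descriptors(reviews))}."

reviews = [
    "Crisp and dry, with a floral nose.",
    "Dry, citrusy, and crisp on the finish.",
    "A dry, oaky palate.",
]
print(draft_review("This Riesling", reviews))  # This Riesling is dry, crisp, floral.
```

A frequency tally obviously cannot capture the structure and flow the researchers’ trained model learned, but it shows why a limited, repetitive vocabulary makes these reviews such a tractable target.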
To test the program’s performance, the team selected one human-written and one AI-generated review for each of 300 different wines, as well as 10 human reviews and one AI review for each of 69 beers. Then they asked a group of human test subjects to read both machine-generated and human-written reviews and checked whether the subjects could distinguish which was which. In most cases, they could not. “We were a little bit surprised,” Carlson says.
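The logic of that test is simple: if raters asked to pick out the machine-written review guess correctly only about half the time, the two kinds of review are effectively indistinguishable. A minimal sketch, with made-up guesses and labels purely for illustration:

```python
def accuracy(guesses, truths):
    """Fraction of trials where the rater correctly identified the review's author."""
    return sum(g == t for g, t in zip(guesses, truths)) / len(truths)

# Hypothetical data: which review each rater flagged as AI vs. the truth.
guesses = ["ai", "human", "ai", "human", "ai", "ai"]
truths  = ["human", "human", "ai", "ai", "ai", "human"]
print(accuracy(guesses, truths))  # 0.5 -> no better than a coin flip
```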
Although the algorithm seemed to do well at collecting many reviews and condensing them into a single, cohesive description, it has some significant limitations. For instance, it may not be able to accurately predict the flavor profile of a beverage that has not been sampled by human taste buds and described by human writers. “The model cannot taste wine or beer,” says Praveen Kopalle, a marketing specialist at Dartmouth and a co-author of the study. “It only understands binary 0’s and 1’s.” Kopalle adds that his team would like to test the algorithm’s predictive potential in the future—to have it guess what an as-yet-unreviewed wine would taste like, then compare its description to that of a human reviewer. But for now, at least in the beer and wine realm, human reviewers are still essential.
Language-generation AI is not new, and similar software has already been used to produce recommendations for online reviewing platforms. But some sites allow users to screen out machine-generated reviews—and one reason is that this kind of language generation can have a dark side. A review-writing AI could, for example, be used to synthetically amplify positive reviews and drown out negative ones, or vice versa. “An online product review has the ability to really change people’s opinion,” notes Ben Zhao, a machine learning and cybersecurity expert at the University of Chicago, who was not involved in the new study. Using this type of software, someone with bad intentions “could completely trash a competitor and destroy their business financially,” Zhao says. But Kopalle and Carlson see more potential for good than harm in developing review-generating software, especially for small business owners who may not have adequate time or grasp of English to write product descriptions themselves.
We already live in a world shaped by algorithms, from Spotify recommendations to search engine results to traffic lights. The best we can do is proceed with caution, Zhao says. “I think humans are incredibly easy to manipulate in many ways,” he says. “It’s just a question of needing to identify the difference between correct uses and misuses.”