“Emotional AI” Might Sound Good, but It Could Have Some Troubling Consequences




Perhaps you’re familiar with Data from Star Trek: The Next Generation, an android endowed with advanced artificial intelligence but no feelings—he’s incapable of feeling joy or sadness. Yet Data aspires to more. He wants to be a person! So his creator embarks on a multiseason quest to develop the “emotion chip” that would fulfill that dream.

As you watch the show, it’s hard not to wonder about the end point of this quest. What would Data do first? Comfort a grieving person? Share a fellow crewmate’s joy? Laugh at a joke? Make a joke? Machine learning has already produced software that can process human emotions, reading microexpressions better than humans can and generally cataloguing what may be going on inside a person just from scanning his or her face.

And right out of the gate, advertisers and marketers have jumped on this technology. For example, Coca-Cola has hired a company called Affectiva, which markets emotion-recognition software, to fine-tune ads. As usual, money is driving this not-so-noble quest: research shows that ads that trigger strong emotional reactions are better at getting us to spend than ads using rational or informational approaches. Emotion recognition can also, in principle, be used for pricing and marketing in ways that simply weren’t possible before. As you stand before that vending machine, how thirsty do you look? Prices may change accordingly. Hungry? Hot dogs may get more expensive.

This technology will almost certainly be used along with facial-recognition algorithms. As you step into a store, cameras could capture your countenance, identify you and pull up your data. The salesperson might get discreet tips on how to get you to purchase that sweater—Appeal to your ego? Capitalize on your insecurities? Offer accessories and matching pieces?—while coupons customized to lure you start flashing on your phone. Do the databases know you have a job interview tomorrow? Okay, here’s a coupon for that blazer or tie. Are you flagged as someone who shops but doesn’t buy or has limited finances? You may be ignored or even tailed suspiciously.

One potential, and almost inevitable, use of emotion-recognition software will be to identify people who have “undesirable” behaviors. As usual, the first applications will likely be about security. At a recent Taylor Swift concert, for example, facial recognition was reportedly used to try to spot potential troublemakers. The software is already being deployed in U.S. airports, and it’s only a matter of time before it starts doing more than identifying known security risks or stalkers. Who’s too nervous? Who’s acting guilty?

In more authoritarian countries, this software may turn to identifying malcontents. In China, an app pushed by the Communist Party has more than 100 million registered users—the most downloaded app in Apple’s digital store in the nation. In a country already known for digital surveillance and a “social credit system” that rewards and punishes based on behavior the party favors or frowns on, it’s not surprising that so many people have downloaded an app that the New York Times describes as “devoted to promoting President Xi Jinping.” Soon people in China may not even be able to roll their eyes while they use the app: the phone’s camera could gauge their vivacity and happiness as they read Xi’s latest quotes, then deduct points for those who appear less than fully enthusiastic.

It’s not just China: the European Union is piloting a sort of “virtual agent” at its borders that will use what some have called an “AI lie detector.” Similar systems are being deployed by the U.S. government. How long before companies start measuring whether customer service agents are smiling enough? It may seem like a giant leap from selling soda to enforcing emotional compliance, and there can certainly be some positive uses for these technologies. But the people pushing them tend to accentuate the positive and downplay the potential downside. Remember Facebook’s feel-good early days?

If Data had ever been able to feel human emotions, he might have been surprised by how important greed and power are in human societies—and “emotional AI,” unless properly regulated, could be a key tool for social control. That should give us all unhappy faces.

Zeynep Tufekci is an associate professor at the University of North Carolina whose research revolves around how technology, science and society interact.

This article was published with the title “‘Emotional AI’ Sounds Appealing” in Scientific American Magazine Vol. 321 No. 1, p. 86.
doi:10.1038/scientificamerican0719-86
