Call your computer program a “bot” and people are going to make certain assumptions, many of them negative. Twitterbots have become notorious over the past few years for their propensity to remove the human element from the microblogging service—automatically generating posts, following users and retweeting messages. Microsoft’s Tay, touted as artificially intelligent, proved anything but last month after users turned it into a trash-talking chatbot, prompting the company to quickly take it offline. Over the past decade “bot” has also become synonymous with a zombie computer that hackers hijack and use to attack other computers.
So what to think of Facebook’s new plan to unleash its version of “chatbots” on its extremely popular Messenger service? Should the company be worried that its AI effort to better cash in on Messenger’s more than 900 million users worldwide could go awry?
Facebook launched a beta version of its Messenger Platform with bots this week at the company’s F8 conference for software developers. Depending on how their makers write them, these bots can be used to automatically deliver subscription content such as weather and traffic reports or help Messenger users complete business transactions—providing things like sales receipts, shipping notifications and automated customer service.
In the most basic sense, a bot is an app that automates some function. Artificially intelligent bots—such as those Facebook wants its advertisers to build—gather data about users’ preferences, such as the things they search for, the sites they visit and the services they use. The bots are written to apply some reasoning related to that information and then generate a response. CNN and The Wall Street Journal are already on board with Facebook, having created chatbots that push news reports to users based on their preferences. Other chatbots deliver online shopping, weather and other services tailored to individual users. Facebook, Microsoft and other companies are scrambling to build bots that better analyze and “understand” the information they obtain so they can offer some valuable service in return—the key to having people want to interact with bots on a regular basis.
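The gather-reason-respond loop described above can be sketched as a toy rule-based bot. To be clear, every name and rule here is invented for illustration; this is not Facebook's actual bot API, just a minimal sketch of the pattern:

```python
# Toy sketch of the gather -> reason -> respond loop a chatbot follows.
# The preference data and the response rule are invented for illustration.

def gather_preferences(user_history):
    """Count topic mentions in a user's recent activity."""
    prefs = {}
    for topic in user_history:
        prefs[topic] = prefs.get(topic, 0) + 1
    return prefs

def respond(prefs):
    """Apply a simple rule: lead with the user's most-mentioned topic."""
    if not prefs:
        return "What are you interested in today?"
    top_topic = max(prefs, key=prefs.get)
    return f"Here are today's top stories about {top_topic}."

history = ["weather", "sports", "weather", "traffic"]
print(respond(gather_preferences(history)))
# -> Here are today's top stories about weather.
```

Real chatbots replace the hand-written rule with learned models, but the shape of the loop—collect signals, reason over them, emit a tailored reply—stays the same.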
Chat software is nothing new, of course. Retailers, insurance providers and other companies have long offered this option on their sites when they sense that visitors could use a hand completing a sale or some other transaction. Facebook wants to move these interactions to its platform and have companies pay the social media giant for the privilege of reaching its vast collective of users.
Despite the reputation some bots have garnered, Facebook’s chatbots are not a big risk for the company, says Brad Hayes, a postdoctoral associate in the Massachusetts Institute of Technology’s Interactive Robotics Group, whose work focuses on human–robot teaming. As the creator of DeepDrumpf, the infamous Twitterbot that produces fake Donald Trump tweets by emulating the Republican presidential candidate’s word choices and speech patterns, Hayes has plenty of experience tinkering with these programs on a big stage. The AI Twitter page @DeepDrumpf has more than 21,000 followers despite having posted only about 170 tweets. Hayes also runs a Twitterbot for Democratic presidential hopeful Bernie Sanders known as @DeepLearnBern. “If nothing else, it’s going to be a fantastic learning experience for Facebook,” he says of the company’s foray into chatbots. “If this kind of thing fails, they’re still going to get a lot out of it because very few people can do this at the level and scale they’re doing. And the fact that they’re trying to make money using this suggests they’ll put a lot more effort into it in terms of making sure it serves its intended purpose.”
Microsoft’s biggest problem with Tay was that it had no filter—the bot digested disparaging comments that degraded women and extolled Nazism, to cite just two examples—and then regurgitated that content as offensive tweets. The company allowed the bot to accept whatever came in, and ended up having to publicly apologize. Tay learned bad things from bad information and responded to it—just as it was designed to. “This is a fairly important lesson that companies and their developers should take to heart,” Hayes says. “Given that data tends to be the most valuable asset for any kind of artificial intelligence–oriented endeavor, there’s a huge temptation to turn to the world at large to collect that data because it’s free and available in large quantities. The problem with Microsoft’s chatbot is that it wasn’t getting the information that they wanted and did nothing to try to figure that out.”
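The safeguard Tay lacked—screening crowd-sourced input before the bot learns from it—can be sketched as a simple blocklist filter. The blocklist contents and thresholds here are placeholders, not Microsoft's (or anyone's) actual moderation system:

```python
# Minimal sketch of filtering crowd-sourced training data before a bot
# learns from it. The blocklist tokens are placeholders for illustration.

BLOCKLIST = {"slur1", "slur2", "propaganda"}

def is_acceptable(message):
    """Reject any message containing a blocklisted token."""
    tokens = set(message.lower().split())
    return tokens.isdisjoint(BLOCKLIST)

def filter_training_data(messages):
    """Keep only messages that pass the filter."""
    return [m for m in messages if is_acceptable(m)]

incoming = ["nice weather today", "slur1 something hateful", "how are you"]
print(filter_training_data(incoming))
# -> ['nice weather today', 'how are you']
```

Production systems use learned classifiers rather than word lists, but the principle is the one Hayes describes: decide what information you want the bot to absorb, and check incoming data against that standard before it shapes the bot's behavior.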
Facebook promises to police the bots on its Messenger platform by issuing guidelines and tools to developers building them, and by reviewing the bots before allowing them on the site. Facebook is also providing a “bot engine” to make it easier to build the software agents for Messenger. The bot engine comes courtesy of Wit.ai, a start-up Facebook bought in January 2015. Wit.ai had launched 18 months earlier with the plan of offering plug-in code that let software developers easily build speech-recognition capabilities into their products. Facebook has pivoted the technology to enable developers of varying skill levels to create bots that can be used as part of Messenger.
A chatbot is a good vehicle for doing crowdsourced machine learning because it can gather data about interacting with people, and then use that information to improve its own capabilities. But chatbots are not the intelligent systems that programmers aspire to—they are simply a means of developing those intelligent systems, says Jaime Carbonell, director of the Language Technologies Institute in Carnegie Mellon University’s School of Computer Science. Microsoft and Facebook are creating a public face for bots, but the technology could be more useful as a means of training larger, more complex artificial intelligence systems.
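Carbonell's point—that the chatbot is less the intelligent system itself than a front end for collecting the data that trains one—can be sketched as an interaction logger. The class and record format below are invented for illustration:

```python
# Sketch of a chatbot used as a data-collection front end: each exchange
# is logged so a larger model can later be trained on the corpus.
# The class name and record format are invented for illustration.

import json

class LoggingBot:
    def __init__(self):
        self.corpus = []  # accumulates (user input, bot reply) pairs

    def reply(self, user_input):
        # A trivial canned reply; a real bot would do far more.
        answer = "Thanks, noted: " + user_input
        self.corpus.append({"user": user_input, "bot": answer})
        return answer

    def export_corpus(self):
        """Serialize logged exchanges for a downstream training job."""
        return json.dumps(self.corpus)

bot = LoggingBot()
bot.reply("How do I file a car insurance claim?")
print(len(bot.corpus))  # -> 1 logged exchange
```

Even a shallow bot like this one accumulates exactly the resource Carbonell describes: a record of what people ask and how they react, which a more ambitious system can later learn from.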
Bots can gather information, for example, about questions people tend to have on a particular subject or when confronted with a particular situation, such as dealing with a car insurance claim or customer service problem. These programs are then able to analyze how people respond to the information they are given. Ultimately, Carbonell says, “We’re trying to perfect communication between machines and humans, to the point where people can interact with automated systems to get, for example, important medical or financial information that’s specific to their needs.”