In their latest iteration, Apple’s popular AirPods wireless earbuds let you activate Siri, Apple’s AI assistant, simply by saying, “Hey, Siri,” just as you can with your iPhone. With the original AirPods, a physical tap on one AirPod would bring up Siri, but the voice command is simpler. And it takes us one step closer to a world where we can talk to our AIs and they to us anywhere, anytime.

It’s a technology we’ve been anticipating for decades. From the Enterprise computer on the original Star Trek (1966–1969) to HAL 9000 in 2001: A Space Odyssey (1968) to Samantha in Spike Jonze’s Her (2013), science fiction has shown us all manner of disembodied AI helpmates who can answer our questions, carry out our orders, or even provide emotional intimacy.

With the emergence of AIs like Siri, Google Assistant, Amazon’s Alexa and Microsoft’s Cortana, the idea is now a lot less fictional. I’d genuinely miss Alexa if I couldn’t ask her to supply weather forecasts, keep my shopping list, control the lights in my house, and play podcasts and radio.

But AI assistants aren’t yet omnipresent, and they aren’t all that smart. Their arrival in our ear canals, plus some stunning recent progress in AI research, will change all that. In Silicon Valley, Google and OpenAI, a nonprofit research company, have been racing to apply advances in an area called unsupervised learning. Their latest language models, trained on vast troves of existing text from the Web, can generate coherent, humanlike responses in question-answering and text-completion tasks. Within a couple of years these models will make AI assistants dramatically more capable and talkative.

And that means it’s time to ask whether we really want AIs whispering in our ears all day—and if so, what conditions and controls we’d like to see implemented alongside them.

In last month’s Ventures column, I looked at the ways Facebook’s seemingly benign plan to connect people with one another went off the rails, resulting in a system of mass surveillance and manipulation. The same thing could happen with AI assistants if we don’t insist on basic protections in advance. Let me suggest a few:

Privacy. Inevitably the smarts of our AIs will reside in the cloud, on servers owned by tech giants such as Amazon, Apple, Google and Microsoft. So our interactions with AIs should be encrypted end to end—unreadable even by the companies—and the records should be automatically deleted after a short period.

Transparency. AI providers must be up front about how they are handling our data, how customer behavior feeds back into improvements in the system, and how they are making money, without burying the details in unreadable, 50-page end-user license agreements.

Security and reliability. We will engage with our AI assistants in our homes, vehicles and workplaces across numerous Wi-Fi and (soon) 5G networks. We will be relying on them for advice, suggestions and answers; at the same time, we will be giving them real-world tasks such as monitoring the performance of our appliances and the safety of our homes. We will need high availability, and every link in the communications chain must be hackerproof.

Trustworthiness. The same unsupervised learning algorithms that generate coherent conversation could be co-opted to generate fake or misleading content, which is part of the reason OpenAI is not yet releasing its powerful new language models to the outside world. When we ask our AIs for answers, we’ll need assurances that they are drawing on accurate data from trusted sources.

Autonomy. AI assistants should exist to give us more agency over our lives, not less. It would be a disaster for everyone if they morphed into vehicles for selling us things, stealing our attention or stoking our anxieties.

If the giant AI providers are allowed to self-regulate in these areas, the result will surely be more Facebook-style fiascoes. The push for protections will have to come from us, the users, and our representatives in government. After all, no one wants “Hey, Siri” to turn into “Bye, Siri.”