The Twitter account’s display name read “National Weather Service.” The avatar was the National Weather Service (NWS) logo, and the handle was “@NWSGOV.” Crucially, the name was followed by the blue check mark that had been used to confirm an account was run by the person or organization it indicated. Only by clicking over to @NWSGOV’s full profile could one see that it had just joined Twitter—and that the biography field noted it was a parody of the NWS, whose real account is @NWS.

The emergence of this and other realistic-looking spoof accounts of companies, politicians and celebrities was a predictable—and predicted—outcome of a change to Twitter’s long-standing “verified” feature that was hastily made this month by the company’s new owner, billionaire Elon Musk. Under the new program, any user could receive a blue check mark for any account simply by paying a monthly $8 fee.

The fake NWS account, along with other rapid changes and wild uncertainty about the future direction of the social media site, set off a wave of concern among weather forecasters, emergency managers and those who study crisis communications. Many of them have voiced worries that an effective tool for quickly disseminating accurate, up-to-date public information during weather events and other emergencies could quickly be riddled with misinformation that could put people in danger. Many fear this potentially lifesaving platform could become unusable or ultimately disappear.

“That kind of filled out this thought that myself and many others had when the plan for this new verification system was rolled out: What happens when somebody pretends to be a government agency or an account that provides lifesaving information to the public?” says Samantha Montano, an assistant professor of emergency management at Massachusetts Maritime Academy. “What could the repercussions of that be?”

When floodwaters are rising or a tornado is bearing down, time is of the essence in getting accurate information to those in harm’s way. Among social media sites, Twitter is uniquely situated to meet those needs, emergency management experts say. It has a relatively simple interface and presents each new post in a linear timeline that updates in real time. “Twitter is, for better or for worse, one of our best ways to get information out during an emergency,” says Kate Hutton, an emergency manager in Seattle, who has used Twitter for official communications since 2015. “It’s a bullhorn that you can use.”

Though only an estimated 22 percent of U.S. adults use Twitter, its reach extends well beyond them. Users often share screenshots of tweets on other social media sites; some send tweets to their contacts via text or e-mail. “We found that Twitter can be a really, really useful platform, especially during disaster-type events,” says Robert Prestley, a scientist at the National Center for Atmospheric Research, who studies how weather information sources use social media. “It is someplace where you can go and get information that is being updated on a somewhat constant basis,” which is especially important during situations with rapidly changing conditions.

Emergency managers and forecasters have limited alternatives for disseminating information quickly and widely. Alerts appear on local television channels, but they require someone to be watching TV. Emergency alerts can also be sent to cell phones, but their loud noises are considered intrusive—so officials tend to use them sparingly to avoid recipients disabling them. “We have redundancy in how we send warnings to the public and where we post information,” Montano says. “But Twitter is uniquely situated to help information spread quickly.”

Twitter has also been somewhat useful in giving authorities up-to-date on-the-ground information during unfolding emergencies. It can be used to crowdsource what streets are flooding in a storm, for example. During Hurricane Harvey in 2017, when the 911 system became overwhelmed, some of those stranded by floodwaters tweeted at emergency services.

Twitter itself has touted its usefulness and concerted efforts to improve in this area. In a blog post dated October 13 (two weeks before Musk took over), the company proclaimed it “has become a critical communication tool for responding to natural disasters” and that it has a “longstanding commitment to working alongside global partners and developers to share important information, provide real-time updates, facilitate relief efforts” and combat misinformation.

There have, of course, been growing pains. Hutton cites the case of Southern California’s 2017 Thomas Fire, which was then the largest wildfire in the state’s recorded history. One of the Twitter hashtags used during the event was awash in random, often unrelated tweets, drowning out official sources, she says. Issues such as these prompted Twitter to verify official government accounts—and to make sure its algorithms elevated them. The company also manually curated news alerts and other aggregation features during emergencies, says former Twitter employee Tom Tarantino, who worked with emergency managers during his time there. Additionally, Twitter introduced various policies to curb the spread of misinformation and to respond to violations. These measures ranged from a warning message appended to a tweet to the suspension of an account.

The blue check was a crucial aspect of Twitter’s efforts to ensure correct information was getting out during crises, including the COVID pandemic. After Musk took over, the sudden rollout of the $8-per-month “Blue Verified” program immediately sowed confusion as fake accounts emerged.

Initially, at least some legacy verified accounts received a second label: a check mark and the word “Official” written in gray below the account name. But this feature was halted on the same day it was rolled out, November 9. It has since reemerged, though it appears to be applied unevenly. The Weather Channel and the Department of Homeland Security both have it, but as of the time of publication, the National Weather Service does not. “If you’re looking for coherence, it just doesn’t quite exist yet,” says a current Twitter employee who asked to remain anonymous for fear of retaliation. “We’re just iterating live.” Neither Twitter nor Musk replied to e-mailed and tweeted requests for comment on the criteria used for this label or to questions about how the company plans to avoid impersonators and the spread of misinformation. Twitter product management director Esther Crawford said in a tweet before the initial rollout of the “Official” designation that it would apply to “government accounts, commercial companies, business partners, major media outlets, publishers and some public figures.” Technology news website the Verge reported that Twitter plans to impose waiting periods for signing up for Twitter Blue (a subscription package that includes Blue Verified). The report also said that if an account changes its name, its check mark will be removed until Twitter approves that new name. But these measures would still leave open possibilities for impersonation.

Though Twitter removed the spoof accounts that popped up after the Blue Verified launch fairly quickly, many had already been screenshotted and shared widely. Companies, including pharmaceutical manufacturer Eli Lilly, also had to send out tweets countering information shared in the fake accounts. “I think that in the hour it took for Eli Lilly to correct that tweet and say, ‘That wasn’t us,’ that’s an hour that we generally don’t have in emergency management,” Hutton says.

If any updated version of Blue Verified doesn’t adequately label trusted sources, people scrolling through Twitter could see information from an account with a blue check mark that is inaccurate or even recommends a detrimental action—such as telling people to evacuate when they should be sheltering in place. “It’s going to cost people time, which ultimately costs them lives and injury and property during an emergency,” Hutton says. Prestley says research has shown that people often do check other sources for confirmation. But any added steps needed to verify information can delay taking action. “The sooner that people can take action, obviously, the better,” he says.

The spoof accounts that did pop up under Blue Verified largely seemed intended as humor or as a way to expose problems inherent in the new program. But “it doesn’t matter if you’re intending harm or not. There is harm caused by these actions because you sow confusion at a time when there’s already mass confusion,” the current Twitter employee says. Hutton and others have raised concerns that once the novelty of creating fake accounts wears off—and people become less vigilant about double-checking sources—more dedicated bad actors could eventually exploit that space if there is no way to distinguish Blue Verified accounts from authoritative sources of information.

People inside Twitter “have been trying to communicate with [Musk] and share concerns,” the current Twitter employee says. “But the reality is that he is limited in his willingness to engage with those people and take those concerns seriously and act on them.” Wealthy people like Musk have far more resources than others to protect themselves from extreme events, Hutton says. “When you’re insulated from consequence, as many billionaires are, I think it’s easy to wave off a lot of these concerns” and not realize how “dangerous and even possibly deadly” some of these issues can be for more vulnerable groups during an emergency.

Also of concern to emergency managers and forecasters are the impacts of the massive staff layoffs at Twitter following Musk’s takeover. Dedicated teams had previously created news alerts and other curated products that emphasized credible sources. But “those teams do not exist anymore” after the layoffs, says Tarantino, the former employee. Gone, too, are large parts of the trust and safety teams and other people responsible for content moderation, as well as many of the engineers responsible for keeping the site running smoothly. Notably, problems with the two-factor authentication function (which helps prevent identity theft) kept some users from logging on to their accounts on November 14. Hutton notes the possibility of an emergency manager being locked out of their account by such a glitch during a crisis. “It’s just unfortunate that, I think, a platform that has been woven into the fabric of what we do as society these days, that rug is being pulled out very quickly in terms of trustworthiness,” Hutton says.

Such instability not only raises security and clarity concerns—it could also drive people away from Twitter altogether. And if enough users leave the site, it will become less effective for emergency managers to maintain a presence on Twitter. If people do leave in droves or if Twitter otherwise ceases to function, “that would be a pretty tremendous loss to our ability to communicate during these types of events,” Prestley says.

Emergency managers have few alternatives in the social media world because it would take several other apps to replicate what Twitter can do, Montano and others say. This approach “spreads out where people are getting information, spreads out where we have to be posting information,” Montano says. “It just makes everything more complex at a time where you don’t necessarily want more complexity.” Also, local emergency management offices have limited staff and time to maintain multiple social media presences, Hutton adds. “Depending on what direction Twitter goes here,” Montano says, “there is potential for some huge gaps in how emergency management unfolds.”

Tarantino advises users, particularly those who represent authoritative sources, to continue to maintain their Twitter accounts in order to fill the site with as much trustworthy information as possible. Abandoning accounts leaves a vacuum for bad actors to fill, he says. Hutton advises people to use Twitter’s list feature to round up accounts they currently know and trust, making it easier to sort good information from bad. She also encourages people to sign up for emergency alerts from their local jurisdiction.

“Disasters are relatively inevitable, unfortunately,” Hutton says. “The next time something big happens, especially a no-notice sort of a thing” such as an earthquake or a tornado, “if we are in our current state of affairs with social media, I think it’s going to be very, very confusing and chaotic—more so than it needs to be.”