On Jan. 5, 2021, the lawyers and specialists on Twitter’s safety policy team, which set rules about violent content, were bracing for a day of brutality in Washington. In the weeks since President Donald Trump had tweeted a call for his supporters to gather in the nation’s capital for a protest he promised would be “wild,” the site had erupted with pledges of political vengeance and plans for a military-style assault.
“I am very concerned about what happens tomorrow, especially given what we have been seeing,” said one member of the team, Anika Collier Navaroli, in a video call, the details of which are reported here for the first time. “For months we have been allowing folks to maintain and say on the platform that they’re locked and loaded, that they’re ready to shoot people, that they’re ready to commit violence.”
Some participants in the call pushed the company to adopt a tougher position, arguing that moderators should be able to remove what they called “coded incitements to violence” — messages, such as “locked and loaded,” that could be read as threats. But a senior manager dismissed the idea, saying executives wanted them to take action against only the most flagrant rules violations, adding, “We didn’t want to go too far.”
“What if there’s violence on the ground?” responded another team member in Twitter’s Dublin office. “Would we take action … or do we have to wait for violence — someone getting shot?”
The next day, a mob of Trump supporters stormed the U.S. Capitol, leaving five people dead and more than 100 police officers injured.
Two and a half years after those events, the role of social media companies in fomenting the violence remains a volatile topic. Twitter’s current owner, Elon Musk, commissioned a series of reports intended to reveal how the company had previously sought to squelch conservative speech, and a Republican-led committee in the House of Representatives is working to build the case that the tech giants have been digitally weaponized against conservative ideas.
But the video and other newly obtained internal Twitter records show that, far from working to censor pro-Trump sentiment in the days before the Capitol riot, the company’s leaders were intent on leaving it up — despite internal warnings that trouble was brewing.
Congressional Republicans, Trump supporters and Musk allies have condemned the company for suspending Trump’s account in the riot’s aftermath, saying its employees were too quick to punish the former president because of their liberal prejudice.
But the records reveal a company that fought until the end to give some of Trump’s most belligerent supporters the benefit of the doubt, even as its internal teams faced an overwhelming volume of tweets threatening retribution in line with Trump’s lies that the election had been stolen.
They also show that Twitter’s leaders were reluctant to take action against Trump’s account two days after the insurrection, even as lawyers inside the company argued that his continued praise of the Capitol rioters amounted to “glorification of violence,” an offense punishable then by suspension under Twitter’s rules.
Trump’s 88-million-follower account was ultimately suspended on the night of Jan. 8, hours after he’d tweeted that “great American Patriots … will not be disrespected or treated unfairly in any way, shape or form!!!” The suspension, the records show, came only after employees had assembled for executives a list of examples in which Twitter users responded to Trump’s tweets with calls for further violence across the United States.
The records also undercut claims that Twitter had worked on behalf of the Biden administration in freezing Trump’s account, as Trump claimed in a lawsuit against Twitter that was dismissed last year by a federal judge.
None of the records obtained by The Washington Post — including the 32-minute video, a five-page retrospective memo outlining the suspension discussions and a 114-page agenda document detailing the safety policy team’s meetings and conversations — show any contacts with federal officials pushing the company to take any action involving Trump’s account.
The records were part of a large set of Slack messages, policy documents and other files given to the House Jan. 6 committee in preparation for its landmark hearings, though the committee never made them public. The Post obtained the records from a person connected to the investigation, and their authenticity was confirmed by another person with knowledge of their contents.
The Post is not naming employees cited in the records due to the sensitivity of the matter. The Post was able to view the full video, whose existence, along with a partial description of its contents, was first reported by Rolling Stone.
Navaroli, who declined to comment, ultimately testified before Congress that Twitter’s reluctance to take action earlier had been fueled by anxiety over both the political and financial consequences of pushing out one of the platform’s biggest attractions.
Another former employee, who testified before the committee under the pseudonym J. Johnson, said “Twitter was terrified of the backlash they would get if they followed their own rules and applied them to Donald Trump.”
A former Twitter executive, who spoke on the condition of anonymity due to fear of harassment, said the leaders believed the company’s policies as they stood already applied to “coded” threats.
Investigators for the Jan. 6 committee wrote in a memo that Twitter had played a key role in helping provoke the Capitol riots by hosting and amplifying Trump’s incendiary statements about his 2020 election loss and that Twitter leadership had “hesitated to act until after the attack on the Capitol” and “changed course only after it was too late.”
The memo was circulated among committee members but was not made public because of hesitation about taking on issues that could divert the focus from Trump, three people familiar with the matter told The Post earlier this year.
On the night of Jan. 6, after law enforcement officials had fought to regain control of the Capitol grounds, Twitter briefly suspended Trump’s account but said it would allow him to return after 12 hours if he deleted three tweets that broke Twitter’s “civic integrity” rules against manipulating or interfering in elections. One tweet included a video in which he called for peace from the “very special” rioters who he said had been “hurt” because the “fraudulent election … was stolen from us.”
The former Twitter executive said the company sent Trump’s representatives an email on Jan. 6 saying that his account would face an immediate ban if he broke another rule and that the executives hoped, with a 12-hour time-out, Trump would “get the message.”
Trump deleted the tweets and, on Jan. 7, posted a conciliatory video in which he said “this moment calls for healing and reconciliation.” The next day, however, he tweeted a more fiery message about how the “American Patriots” who voted for him would “not be disrespected” and announced that he would not attend Joe Biden’s inauguration.
The tweets set off new alarms inside Twitter, according to a postmortem document written by Navaroli that detailed the company’s deliberations for the purpose of internal review.
In a Slack channel where the safety policy team discussed “escalations” requiring high-level consideration, members initially agreed that the tweets had not broken Twitter’s rules because they offered no clear “call to violence” or “target of abuse,” the document states.
The members drafted a short advisory memo saying as much, which was then passed to other departments, including to Twitter’s general counsel, Vijaya Gadde, and its chief executive, Jack Dorsey, who was working then from a French Polynesian island.
One of those departments, a team of internal lawyers that advised the safety policy team, wrote back with a different argument: that the “American Patriots” of Trump’s tweet could refer to the rioters who had just ransacked the Capitol, an interpretation that would violate Twitter’s “glorification of violence” policy, according to Navaroli’s document.
“They see it that ‘He is the leader of a violent extremist group who is glorifying the group and its recent actions,’” one employee wrote on Slack, describing the lawyers’ assessment. The message was first reported in the “Twitter Files,” a cache of internal documents Musk made available to a select group of writers.
“They now view him as the leader of a terrorist group responsible for violence/deaths comparable to Christchurch shooter or Hitler and on that basis and on the totality of his Tweets, he should be de-platformed,” the employee added.
The lawyers, according to the postmortem document, argued that the tweets should not be assessed in isolation but as part of “a continuation and culmination of rhetoric that led to deadly violence days before.”
Twitter moderators at the time had recorded many instances of pro-Trump accounts continuing to call for violence, including “additional occupations” of federal and state government buildings, the document said. Others were citing Trump’s commitment not to attend the inauguration as an indication that the event would be ripe for attack.
At the lawyers’ recommendation, members of the safety policy team drafted a second assessment ruling that Trump’s tweets had broken the rules against glorification of violence and recommending that his account be permanently suspended.
Twitter’s online competitors had already taken similar action. On Jan. 6, Facebook and Instagram suspended Trump’s accounts for 24 hours, and the next morning Facebook chief Mark Zuckerberg announced that the suspensions would be extended indefinitely, saying the risks of him using the sites after having incited and condoned a “violent insurrection” were “simply too great.”
And inside Twitter, everyone seemed to be on edge. Thousands of employees, most of whom were not involved in content-moderation decisions, had spoken out on Slack threads and video calls, urging the company to take stronger action against Trump and saying they were worried about their personal safety.
Still, some Twitter executives voiced hesitation about taking down Trump’s account, arguing that “reasonable minds could differ” as to the intentions of Trump’s tweets, according to Navaroli’s document. Twitter had for years declined to hold Trump to the same rules as everyone else on the basis that world leaders’ views were especially important for voters to hear.
At a 2 p.m. video call on Jan. 8, which was described in the document but not viewed by The Post, top officials in Twitter’s trust and safety team questioned the “glorification of violence” argument and debated whether the company should instead wait to act until Trump more blatantly broke the platform’s rules.
Navaroli argued that this course of inaction had “led us to the current crisis situation” and could lead “to the same end result — continued violence and death in a nation in the midst of a sociopolitical crisis,” the document shows.
In another call, around 3:30 p.m., after safety policy team members had compiled examples of tweets in which users detailed plans for future violence, Twitter’s top lawyers and policy officials voiced support for a “permanent suspension” of Trump’s account. One note in the safety policy agenda document read that there was a “team consensus that this is a [violation]” due to Trump’s “pattern of behavior.”
Their assessment was sent to Dorsey and Gadde for final approval and, at 6:21 p.m., Twitter’s policy team was notified over Slack that Trump had been suspended. A company tweet and blog post announced the decision to the world shortly after.
Dorsey later tweeted that he regretted having to approve a move that would “limit the potential for clarification, redemption and learning” but that he ultimately believed “we made a decision with the best information we had based on threats to physical safety.”
The suspension, as it turned out, was not permanent. Trump’s Twitter account was reinstated late last year at the direction of Musk, who has called the suspension tyrannical.
In February, executives at Facebook and Instagram parent company Meta also ended Trump’s two-year account suspension, saying they’d surveyed the “current environment” and determined that “the risk has sufficiently receded.” And this month, YouTube said it would no longer remove videos that falsely claimed the 2020 election had been stolen, arguing that the removals could curtail “political speech without meaningfully reducing the risk of violence.”
Trump has yet to use his restored Twitter account, choosing instead to post messages, known as “truths,” to a website he owns called Truth Social. But the account remains available if he ever wants it, and it still has 86 million followers.