Twitter announced Thursday that it is expanding its policies against election-related misinformation, setting new rules that will likely force the platform to more aggressively fact-check President Donald Trump during the final months of the 2020 campaign.
The social media giant rolled out the new policies in a blog post, which said that Twitter (TWTR) will either add fact-check labels to or hide altogether tweets that contain “false or misleading information that causes confusion” about election rules, or posts with “unverified information about election rigging.”
Twitter’s porous and subjective policies have enabled Trump to spread a steady stream of misinformation about the election to millions of Americans. The company led the way for Big Tech when it rebuked Trump for a misleading tweet in May, but that watershed moment has ended up looking more like an outlier. Twitter only rarely applies fact-check labels to Trump’s tweets containing false information about voting, and it’s unclear how much labeling achieves.
The new rules, which Twitter says will go into effect next week, explicitly prohibit much of the material Trump is prone to posting, putting the company on a collision course with the President as it tries to help steer the country through an unprecedented voting and post-election process.
The new rules include policies geared toward reducing potential post-election chaos, a major concern this year because of Trump’s rhetoric and the influx of mail-in ballots, which will slow down vote-counting. Twitter will now prohibit “misleading claims about the results,” premature claims of victory, and tweets “inciting unlawful conduct” that would prevent a peaceful transition of power.
The company says it will “label or remove” posts that break the rules, but it didn’t spell out what process will be used to determine which posts are egregious enough to be removed rather than labeled.