When it comes to fighting disinformation online, social media and technology companies have demonstrated devotion to one of our oldest clichés: If at first you don’t succeed, try, try again. In fact, they’ve tried more than 300 times, shifting, tweaking, and changing their policies over just a year and a half in critical areas such as civic integrity, violence and extremism, and public health.

But these 300-plus changes have done little to address the disinformation crisis effectively. The online world remains overrun with political conspiracy theories, echo chambers, half-truths, and outright lies.

In April, disinformation on Facebook pages about the COVID-19 pandemic drew an estimated 460 million views. Just this month, fake “vaccination exemption” cards were being promoted on Twitter. And in the week following the general election, videos containing electoral disinformation were viewed 137 million times on YouTube. In a stark and sobering demonstration of the consequences of such rampant disinformation, a violent mob fueled by lies spread on social media attacked the Capitol and the peaceful transfer of power, the cornerstone of our democracy.

While the problems with leaving Big Tech to police itself were clear even before January 6th, they are now impossible to ignore. We have more than enough evidence to say definitively that Big Tech’s efforts at “self-regulation” are simply not working, and Congress must step in.

Large social media platforms announced at least 321 policy changes affecting disinformation and democracy over the last 18 months, according to new research from Decode Democracy. Facebook, Twitter, and Google alone announced at least 160 policy changes in the areas of civic integrity, violence and extremism, and public health: roughly two changes per week.

And instead of proactively addressing disinformation and online political deception, the platforms largely reacted to crises and public pressure. Is this patchwork of ineffective, constantly changing, and reactive policies created and enforced by private companies really the best we can do when it comes to the information that powers our democracy?

The major social media platforms had four full years to prepare policies limiting political disinformation heading into the 2020 election. Yet lies circulated freely across social media throughout the 2020 election cycle. Although the platforms made multiple changes, those shifts were frequently too little, too late. In several cases, platforms also failed to enforce existing policies, as when Facebook announced it would stop recommending partisan political groups to users but did not follow through. Setting the stage for a tumultuous post-election period, Twitter didn’t clarify how it would handle false claims of election victory until one day before the election. Facebook further tightened its policies on such false claims one day after the election.

In a segment about disinformation among the U.S. public, Fox News host Tucker Carlson said that he was unable to find “the famous QAnon” online after an entire day of searching. In this August 2, 2018 photo, David Reinert holds up a large “Q” sign, representing QAnon, while waiting in line to see President Donald J. Trump at his rally at the Mohegan Sun Arena at Casey Plaza in Wilkes-Barre, Pennsylvania. (Photo: Rick Loomis/Getty)

Fortunately, lawmakers are starting to recognize that social media and technology companies won’t effectively police themselves. Earlier this month, the House passed a sweeping reform package, the For the People Act, which now awaits action in the Senate. And while much of the conversation around the bill has centered on how it limits the influence of money in politics, cracks down on gerrymandering, and expands access to voting, the bill has received far less attention for what it also represents: the most significant stand Congress has taken so far against online disinformation.

The For the People Act improves disclosure rules for online political ads and requires platforms to maintain a public database of the political ads shown to their users. The bill also demands that platforms make reasonable efforts to prevent foreign interference in our elections. Combined, these reforms mark an important first step toward combating disinformation, holding digital platforms accountable, and increasing transparency for voters about who is attempting to influence them online.

In addition to helping pass legislation, the Biden-Harris administration has other important roles to play in addressing online disinformation. Before President Biden took office, a coalition of organizations focused on issues including racial justice, faith, women’s rights, environmental protection, government accountability, and gun violence prevention called on the incoming administration to recognize and respond to the role of disinformation in degrading public debate, deepening division, eroding trust in institutions, and hindering our ability to craft public policy on pressing concerns. The groups proposed nearly a dozen concrete steps the new administration could take. These included appointing a disinformation expert to the COVID-19 task force to coordinate a whole-of-society response to the infodemic; launching a website to counter viral disinformation as it spreads; and establishing an interagency task force to study the harms of disinformation across major social media platforms.

It’s not too late to solve our disinformation crisis, but it soon will be if lawmakers don’t recognize the urgent threat and respond. Otherwise, we can watch the problem continue to wreak havoc on our democracy while Big Tech rolls out more announcements, press releases, and policy tweaks without ever delivering a fix.

Daniel G. Newman is the president of Decode Democracy, a nonprofit that fights online political deception to build a better democracy.

The views in this article are the writer’s own.