Two years after the Russian government manipulated social media to interfere in the 2016 U.S. presidential election, online information platforms continue to serve as vectors for such operations, including those targeting the 2018 midterm elections. Under intense public criticism and congressional scrutiny, the three most prominent online information platforms – Facebook, Twitter, and Google – have taken steps to address vulnerabilities and to protect their users against information operations by actors linked to authoritarian regimes. However, given the ongoing nature of online authoritarian interference, the steps taken by these companies continue to fall short.
This report reviews and analyzes the steps taken by online information platforms to better defend against foreign interference since 2016, adopting the framing of the Senate Intelligence Committee by focusing on the largest and most influential online information platforms of Facebook, Twitter, and Google.
The platforms’ efforts to combat foreign interference have focused primarily on three key lines of effort: preventing or suppressing inauthentic behavior, improving political advertising transparency, and investing in forward-looking partnerships. Measures to limit user interaction with inauthentic behavior include content removal, labeling, and algorithmic changes. The platforms have also taken steps to improve advertising transparency through policies that publicize advertiser information and strengthen verification standards for those seeking to publish political advertisements. Investments in forward-looking measures have included internal initiatives to critically assess vulnerabilities and external partnerships with civil society, academia, and fact-checking organizations. They have also led to increased transparency about the behavior and content of accounts linked to the Russian operations against the 2016 and 2018 elections, as well as other nation-state operations targeting Americans.
Though all of these steps are important, ongoing vulnerabilities demand more urgent action by the platforms to secure the online information space against foreign manipulation, while ensuring Americans’ ability to engage freely in robust speech and debate. Six areas where Facebook, Twitter, and Google must take further steps include:
- Focusing on behavior: Online information platforms have unique insight into the computational tools used by bad actors on their respective platforms, allowing them to identify and eradicate coordinated inauthentic behavior, even when attribution is impossible. Although they have made recent progress in targeting behavior rather than content, a more aggressive focus on detecting and tackling networks will be key to counter evolving influence operations.
- Increasing transparency and information sharing: Recent efforts to expose foreign interference operations have demonstrated greater transparency and information sharing by online information platforms. But these efforts remain largely ad hoc, and robust sharing that includes privacy protections requires the development of standing information-sharing mechanisms with industry peers, government agencies, and the broader public.
- Establishing standardization and effective coordination: Despite numerous actions to counter disinformation and inauthentic behavior, platforms still lack a unified understanding of the threats they face. Standardizing terminology and constructing institutionalized communication mechanisms will foster better cross-platform cooperation to tackle interference operations.
- Improving policies and enforcing rules clearly and consistently: Platforms need to ensure that current policies go beyond window dressing to achieve their stated goals. Companies should more clearly articulate their terms of service, and should apply those rules consistently and transparently.
- Thinking critically about future technologies: As the threat of foreign interference continues to evolve and change, tech companies will need to think proactively about how to protect users against manipulation, and about how future technologies may be exploited by hostile foreign actors.
- Making user protection the bottom line: Platforms need to improve efforts to inform users about the threats that target them and to empower them with tools they can use to protect themselves. Further, platforms will need to change the way they design new features to prioritize user protection over ad revenue or convenience.