July 29, 2019

What Congress Got Right and What It Overlooked at Last Week’s Deepfakes Hearing

By Lindsay Gorman

Last week, the House Permanent Select Committee on Intelligence held its first hearing devoted to the national security implications of artificial intelligence in the information space. “For more than five centuries, authors have used variations of the phrase, ‘seeing is believing.’ But in just the past half-decade, we’ve come to realize that that’s no longer always true,” testified Dr. David Doermann, founder of a Defense Department research program on media forensics (MediFor). He’s right: Deepfakes and synthetic media risk blurring the line between fact and fiction, undermining a key foundation of democracy: objective reality. Democratic governance relies on an informed citizenry to choose political leaders and an evidence-based justice system to uphold the rule of law. To be effective and fair, both require a strict separation between what is true and what is not.

The hearing highlighted three important considerations. First, research advances in AI are democratizing access to deepfakes. Manipulated content follows a trend that holds for dual-use technology writ large: what was previously confined to AI labs is increasingly available to the average person. For example, software developed by researchers at Adobe, Princeton, Stanford, and Germany’s Max Planck Institute allows users to put words in a video subject’s mouth simply by typing new text. While this particular software is not yet public, some tools are, and it is only a matter of time before editing software like this is available to anyone. Even cruder tools are already being deployed to influence public discourse and political debate. “Shallow fakes” (also known as “cheapfakes”), such as the Nancy Pelosi video, which was slowed to make her appear incapacitated, or the Jim Acosta video, which was sped up to make him appear more aggressive toward a young White House aide, can go just as viral and be just as damaging as sophisticated deepfakes.
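
The speed manipulations behind those shallow fakes require no machine learning at all. As a rough illustration (not drawn from the hearing testimony), the short Python sketch below shows how a clip can be slowed down or sped up with the freely available ffmpeg tool; the function name and file names are hypothetical, and it assumes ffmpeg is installed.

    import subprocess

    def change_speed(src, dst, factor):
        """Re-encode a clip at `factor` times its original speed via ffmpeg.

        factor < 1.0 slows the footage; factor > 1.0 speeds it up.
        Assumes ffmpeg is installed and 0.5 <= factor <= 2.0 so a single
        atempo filter can keep the audio in step with the video.
        """
        subprocess.run(
            [
                "ffmpeg", "-y", "-i", src,
                # setpts rescales video timestamps; atempo rescales audio tempo.
                "-filter:v", f"setpts={1 / factor}*PTS",
                "-filter:a", f"atempo={factor}",
                dst,
            ],
            check=True,
        )

    # e.g. change_speed("speech.mp4", "slowed.mp4", 0.75)  # play at 75% speed

The entire manipulation is a single timestamp rescaling, which is why clips like these can be produced in seconds by anyone with a laptop.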
