Technology

House Intel Comm to examine deepfakes after ‘Drunk Pelosi’ goes viral

In response to the viral “Drunk Pelosi” video, which did not actually use deepfake technology, the House Intelligence Committee will examine deepfakes at a hearing on Thursday.

In response to the fake video that appeared to show House Speaker Nancy Pelosi drunkenly slurring her words, the House Permanent Select Committee on Intelligence will hold an open hearing on the national security challenges of Artificial Intelligence (AI), manipulated media, and deepfake technology on June 13.

This is despite the fact that the “Drunk Pelosi” video did not use deepfake technology. Deepfakes use AI capable of creating realistic-looking fake videos; the “Drunk Pelosi” video was simply footage of her played in slow motion.
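The mechanics of such a “cheapfake” are trivial arithmetic: slow a clip to 75% speed and its duration stretches by a third, while naive resampling drops the audio pitch by roughly five semitones (reports on the Pelosi clip noted the pitch was adjusted to sound natural). A minimal sketch of that arithmetic, with illustrative numbers, not the actual edit performed:

```python
import math

def slowdown(duration_s, rate):
    """Effect of playing a clip back at `rate` (e.g. 0.75 = 75% speed)."""
    new_duration = duration_s / rate       # the clip stretches in time
    pitch_shift = 12 * math.log2(rate)     # naive resampling lowers pitch (in semitones)
    return new_duration, pitch_shift

# A hypothetical 60-second clip slowed to 75% speed:
d, p = slowdown(60.0, 0.75)
# d: 80.0 seconds; p: about -4.98 semitones, hence the need for pitch correction
```

The point is how little effort this kind of manipulation takes compared to training a deepfake model.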

However it was done, the video made politicians take notice of doctored footage in general — something The Sociable warned about back in 2016 (see link below).

Read More: Is there nothing that can’t be faked with vocal, facial manipulation?

Concerning real deepfakes, House Intelligence Chairman Adam Schiff told CNN that he feared Russia might engage in a “severe escalation” of its disinformation campaign targeting the US ahead of the 2020 presidential elections.

“And the most severe escalation might be the introduction of a deep fake — a video of one of the candidates saying something they never said,” added Schiff, who was asked by Republicans in March to resign from his “committee post for repeatedly pushing claims of collusion between President Trump’s 2016 campaign and Russian operatives.”

Former US President Barack Obama has also expressed concern about AI-powered fake videos he has seen that bear his likeness and mimic his voice and movements.

First Hearing Devoted to Deepfakes

In what will be the first House Intel Committee hearing devoted specifically to deepfakes and other types of AI-generated synthetic media, the Committee will assess the national security threats that AI-enabled fake content poses, ways of detecting and combating it, and the roles that the public sector, the private sector, and society as a whole should play in keeping a “potentially grim”, “post-truth” future at bay.

“Advances in machine learning algorithms have made it cheaper and easier to create deepfakes – convincing videos where people say or do things that never happened,” according to the Committee press release.

“Such advances also support the production of fake audio, imagery, and text at scale, and these capabilities are fast becoming more accessible and widely available.

“Deepfakes raise profound questions about national security and democratic governance, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens,” the press release concludes.

Psychological Impact of Deepfakes

Another topic of interest to the Committee is the enduring psychological impact of deepfakes, as well as looming counterintelligence risks.

The role of Internet platforms in policing fake content will also be discussed, along with the appropriate role for the US government in addressing the difficult legal challenges raised by deepfakes.

The Committee’s concerns may not be unfounded. Going forward, regulation and policing of video production could become the norm, an uncomfortable prospect for vloggers and YouTubers.

The Committee also seeks testimony on future advances in deepfake technology and how they could lead people to deny legitimate media.

We can expect deepfake videos to surface more and more as the 2020 campaign proceeds, with confusion following denials and cross-accusations.

Deepfake Fears

If there is a silver lining to the deepfake fears, it’s that our society may finally start to question everything we see and hear instead of accepting it blindly.

Read More: Who discerns what is fake news and why don’t we decide for ourselves?

It doesn’t take much to convince people. Social media comments on the fake Nancy Pelosi video show that many viewers fell for it.

As deepfake technology becomes more common, it is starting to cause havoc among politicians and the public. Even low-quality deepfakes are enough to do the job of spreading a sentiment, be it panic, hope, or hatred.

Conversely, people are sure to start claiming they never said things they actually did. What happens when people start mistrusting real videos?

Easy to Make Fakes

Scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research recently demonstrated that it is becoming increasingly easy to edit speech in videos and create realistic fakes.

Their method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression, and scene illumination per frame. To edit the video, the user only has to edit the transcript.

Additionally, Adobe’s VoCo has been described as a Photoshop for voice.

Products that allow video and audio manipulation are flooding the market, as vloggers and film editors seek quality in their work.

The creative could use deepfakes to get out of work, speeding tickets, school attendance, or even a crime.

From the very top of power and politics to the everyday person, deepfakes could soon have the guilty screaming conspiracy while innocents suffer for words put in their digital mouths.

Navanwita Sachdev

An English literature graduate, Navanwita is a passionate writer of fiction and non-fiction as well as being a published author. She hopes her desire to be a nosy journalist will be satisfied at The Sociable.
