Dark Secrets: Inside The World Of AI Deepfakes

November 6, 2023

Key Takeaways

  • Deepfake technology is used in political advertising, with examples such as a deepfake video of First Lady Jill Biden criticizing President Biden's policies.
  • Filmmaker Kenneth Lurt used ElevenLabs' AI voice models, trained on recordings of Jill Biden's speech, to generate a synthetic version of her voice.
  • The use of AI and deepfake technology in political advertising raises concerns about misinformation and manipulation.
  • While AI tools can assist in creating deepfakes, human creativity and filmmaking skill are still necessary for convincing results.
  • Lurt's goal with the deepfake video was to draw attention to the suffering in Palestine and provoke discussion about the policies involved.
  • Synthetic media presents challenges regarding truth, trust, and accountability that society needs to address.
  • Mitigating deepfake threats requires a balance between regulation, media literacy training, and responsible use of technology.

Summary of "Confessions of an AI deepfake propagandist: using ElevenLabs to clone Jill Biden’s voice"

The article from VentureBeat discusses a recent deepfake video of the First Lady of the United States, Jill Biden, created by filmmaker and producer Kenneth Lurt. In the video, Jill Biden appears to criticize President Biden's political policies, specifically his stance on the ongoing conflict between Israel and Hamas. The video was created using machine learning techniques and the services of ElevenLabs, a startup focused on voice and audio AI. The AI was trained on samples of Jill Biden's voice from interviews and appearances, enabling it to generate new speech in her voice pattern and cadence. The video was posted on X (formerly Twitter) and Reddit's Singularity subreddit, where it received significant attention.
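The article does not detail Lurt's exact workflow, but voice cloning of this kind typically involves two steps: a custom voice is first created from recorded samples, and arbitrary text is then synthesized in that voice. Below is a minimal sketch of the synthesis step against ElevenLabs' v1 text-to-speech REST endpoint; the API key, voice ID, script text, and voice settings are placeholder assumptions, not details from the article.

```python
# Minimal sketch: synthesizing speech in a previously cloned voice via the
# ElevenLabs v1 text-to-speech REST API. The key, voice ID, and settings are
# placeholders; the cloned voice must already exist in the account.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"   # placeholder, not from the article
VOICE_ID = "YOUR_CLONED_VOICE_ID"     # ID of a voice built from speech samples

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
headers = {"xi-api-key": API_KEY, "Content-Type": "application/json"}
payload = {
    "text": "Any script line to be spoken in the cloned voice.",
    "model_id": "eleven_monolingual_v1",
    "voice_settings": {
        "stability": 0.5,          # lower = more expressive, less consistent
        "similarity_boost": 0.75,  # higher = closer to the reference voice
    },
}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()

# The endpoint returns raw audio bytes (MP3 by default).
with open("cloned_voice_line.mp3", "wb") as f:
    f.write(response.content)
```

The simplicity of this step is what makes the article's concerns concrete: once a voice has been cloned, generating any scripted line is a single API call.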

Lurt's goal was to create a dramatic and engaging narrative to draw attention to the situation in Palestine. He combined the AI-generated speech with curated clips from Biden campaign footage, news reports on Palestine, and social media videos of suffering in Gaza to craft a superficially plausible narrative.

The article also discusses the increasing prevalence of AI and deepfake technology in political advertising. Examples include an RNC ad that used generative imagery to depict a hypothetical future following a Biden victory in 2024, and an ad by the Never Back Down PAC featuring an AI-generated version of Trump criticizing Iowa Gov. Kim Reynolds. These examples illustrate how synthetic media can promote or attack political candidates[1].

Lurt's project demonstrates the potential of synthetic media for novel discourse but also highlights the challenges it presents regarding truth, trust, and accountability. Regulators and advocates have pursued various strategies to curb the threats posed by deepfakes, but challenges remain. The FEC has opened public comment on AI impersonations in political ads, but there are doubts about the FEC's authority and concerns that partisanship may stall proposed comprehensive legislation.

In conclusion, the article underscores the potential of AI and deepfake technology to disrupt traditional media and political discourse while also highlighting the ethical and regulatory challenges that these technologies present.
