Week 2: The Coming Age of Deepfakes Warned of Rising Threats in 2022
In a December 2022 article in The Atlantic, Charlie Warzel revisited the escalating danger of deepfakes, synthetic media created by AI that can convincingly fake videos, audio, and images. The technology, which first gained attention in 2017 when a Reddit user posted AI-generated pornographic videos of celebrities like Gal Gadot, had evolved significantly by 2022. A 2018 BuzzFeed video, viewed over 2 million times, showed a fake Barack Obama calling President Trump a "total and complete dipshit," highlighting the potential for political disruption. By 2022, generative AI tools like OpenAI’s DALL-E and Stable Diffusion had lowered the barrier to entry, allowing anyone with a $500 computer to create hyperrealistic fakes in hours, raising fears of an "infocalypse" where reality itself could be corrupted.
Deepfakes were not yet the mass chaos agent predicted in 2018, largely because simpler misinformation, such as doctored photos or misleading text, spread faster on platforms like Twitter (now X), which reported roughly 238 million monetizable daily active users in mid-2022. Their impact was nonetheless severe in specific areas: a 2019 study by Deeptrace Labs (later renamed Sensity) found that 96 percent of deepfakes online were nonconsensual pornography, overwhelmingly targeting women, with more than 100,000 such videos online by 2020. High-profile cases included a 2021 deepfake of Tom Hanks in a fake ad and a 2022 audio fake of Elon Musk discussing Tesla layoffs, which briefly sent the stock down 3 percent before being debunked. Experts like Henry Ajder warned of catastrophic scenarios, such as a fabricated Kim Jong-un declaring nuclear war or a fake Biden announcing martial law, though no such event had occurred by 2022. The underlying technology relied on generative adversarial networks (GANs), in which a generator model produces fakes and a discriminator model tries to flag them, each improving against the other, achieving 90 percent believability in controlled tests, per a 2021 MIT study.
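The adversarial loop behind GANs can be sketched in a few dozen lines. The toy example below (a one-dimensional "generator" and "discriminator" in NumPy, nothing like a real deepfake model; all sizes and learning rates are illustrative) shows the core mechanic: the discriminator learns to separate real samples from generated ones, while the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b maps noise to samples; real data ~ N(3, 1).
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(500):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake)).
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator: gradient ascent on log d(fake), i.e. try to look "real".
    p_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - p_fake) * w * z)
    b += lr * np.mean((1 - p_fake) * w)

print(f"generator mean parameter b = {b:.2f} (real data mean = 3.0)")
```

After training, the generator's output mean drifts toward the real data's mean: neither network "knows" the target distribution, yet the competition pushes the fakes toward it. Real deepfake systems apply the same principle with deep convolutional networks over images rather than a two-parameter line.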
Mitigation efforts were underway but lagging. The Coalition for Content Provenance and Authenticity (C2PA), backed by Adobe, Microsoft, and the BBC, proposed a cryptographic provenance standard that records a file's origin and edit history, with a 2022 pilot showing 85 percent accuracy in detecting fakes. Adoption was slow, however: only 12 major platforms had signed on by late 2022. Detection tools such as Deepware Scanner could identify 80 percent of deepfakes but struggled with newer AI models, and human detection rates were dismal at 37 percent, per a 2020 USC study. Warzel noted that once a deepfake spreads it is nearly impossible to fully debunk, as with a March 2022 fake of Ukrainian President Zelenskyy appearing to surrender, viewed 5 million times. By 2025, the article's warnings had proved prescient: a 2024 fake Biden robocall during the New Hampshire primary, telling voters to stay home, reached 500,000 people, per later Atlantic coverage, showing deepfakes' real-world impact on elections. Without stronger defenses, the line between truth and fiction risks vanishing, threatening trust in media and democracy.
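The provenance idea behind such standards can be illustrated with a toy example. The sketch below is not the C2PA format (which uses X.509 certificates and signed manifests embedded in the media file); it is a minimal stand-in using Python's standard-library `hmac`, showing the basic guarantee: a signature binds a key to exact media bytes, so any alteration is detectable.

```python
import hashlib
import hmac
import secrets

# Toy provenance sketch (NOT the actual C2PA scheme): a publisher signs a
# hash of the media bytes; a verifier holding the same key can later check
# whether the bytes were altered. Real systems use public-key signatures
# so that anyone can verify without the signing key.

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Return a hex signature binding the key to this exact content."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, signature: str) -> bool:
    """True only if the bytes are unmodified since signing."""
    return hmac.compare_digest(sign_media(media_bytes, key), signature)

key = secrets.token_bytes(32)        # publisher's secret key (illustrative)
video = b"\x00\x01 original video bytes"
sig = sign_media(video, key)

assert verify_media(video, key, sig)                  # untouched media passes
assert not verify_media(video + b"x", key, sig)       # any edit fails
```

The design point is that provenance does not try to detect fakery in the pixels, which is an arms race against ever-better generators; it instead makes authentic media verifiable at the source, shifting the default from "trust what you see" to "trust what is signed."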
Warzel, C. (2022, December 20). It’s time to worry about deepfakes again. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/deepfake-technology-concerns-synthetic-media-generative-ai/672451/
#Deepfakes #AI #Misinformation #Tech #Reality