Much has been made of how generative artificial intelligence (AI) technology could replace human workers; Goldman Sachs warned earlier this year that upwards of 300 million full-time jobs could be affected. In addition, a report from the Congressional Research Service last month noted that recent innovations in AI are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI.
“The widespread use of generative AI programs raises the question of who, if anyone, may hold the copyright to content created using these programs, given that the AI’s user, the AI’s programmer, and the AI program itself all play a role in the creation of these works,” the Congressional Research Service cautioned.
Beyond copyright, another concern is how generative AI could be employed in disinformation campaigns on social media.
“Generative AI poses serious risks to users of social media platforms,” said John Hale, professor in the University of Tulsa’s Master of Science in Cyber Security program.
“It is well-known that Facebook, Twitter, TikTok, and other platforms are now the frontlines of information warfare campaigns. The rise of generative AI in this space both creates new hazards and amplifies existing ones,” Hale continued.
Going Deep With Deepfakes
So-called deepfakes (highly manipulated imagery, video, and audio that once required Hollywood-level talent) can now be created by almost anyone. More ominously, while “movie magic” not long ago still didn’t look quite perfect, deepfakes are becoming increasingly indistinguishable from the real thing even as they become much easier to create.
“Deepfakes allow one to steal or appropriate a person’s voice, embed them in an image or video, or otherwise manipulate or alter digital content in undetectable ways,” suggested Hale. “And we are just now at the beginning of this journey. Deepfakes will only increase in realism from this point forward.”
As the creation of deepfakes is simplified, social media will continue to serve as a platform to readily spread such content to the masses.
“Their unrestricted and ungoverned distribution on social media platforms, where people now get their news, results in a perfect storm of uncurated, unvetted content that is designed to influence, persuade and misinform,” said Hale. “What the future may hold is even more concerning, as actors learn how to more optimally weaponize these techniques for maximum impact and as ‘live’ deepfakes evolve.”
Generative AI, coupled with deepfakes, could present truly dire consequences for the 2024 presidential election.
“AI creates a seemingly paradoxical situation where voters are manipulated into believing what is untrue and potentially doubting what is true. AI deepfakes appear so realistic that determining their authenticity is a challenge even for experts,” explained Dr. Timothy Sellnow, professor and associate director of Graduate Studies, Research and Creative Activity within the Nicholson School of Communication and Media at the University of Central Florida.
“Simultaneously, voters are so inundated with warning messages about AI and fabricated videos that confidence in political reporting on social media has plummeted,” added Sellnow.
Social media platforms risk further diminishing the credibility of their content if they knowingly or mistakenly allow AI-altered or AI-generated messages to linger on their networks.
“That risk may provide motivation for sites to more actively police posts for AI manipulation,” suggested Sellnow.
Though the industry is developing standards for “content authentication” that would let users validate the provenance of digital content, development and adoption will take some time.
“This is a necessary step in the evolution of digital media,” said Hale. “Still, users will have to be more discriminating and cynical when consuming content from social media platforms.”
A further challenge will be the unwillingness of voters to believe legitimate content showing a candidate engaging in questionable behavior or making offensive statements. Dishonest candidates may dismiss such material with deceitful claims that the video was faked through AI.
“Such claims could pressure platforms to deny posts, including accurate content. The best practice for social media platforms seeking to maintain credibility during the upcoming campaigns will be to disallow fakes that are easily identified through context or witnesses,” said Sellnow.
“As a society, we will have to fundamentally change the way we make trust decisions and formulate our beliefs based on the information we get from social media,” added Hale. “There will no longer be a clear-cut answer to the old Marx Brothers rejoinder, ‘Who are you going to believe—me or your own eyes?’”