PITTSBURGH — Channel 11 is continuing its series on the concerns surrounding “deepfakes,” artificially generated videos that appear real.
“It’s scary and at the same point in time, highly compelling and fascinating,” said Ari Lightman, a professor of digital media and marketing at Carnegie Mellon University’s Heinz College.
Production of these fake videos is ramping up each year as the technology rapidly advances. The videos look more and more real, and there is little regulation. With apps like “Sora 2,” essentially anyone can create deepfakes, for free.
Channel 11’s Liz Kilmer downloaded the app, which was developed by OpenAI, the company behind ChatGPT. Within minutes, she was able to generate realistic videos of herself, including fake anchoring clips set in a fake studio. The videos were created simply by reading three numbers aloud into her cellphone camera, moving her head twice, and typing a prompt into the app.
It’s a simple process being used by many people across a variety of programs. Deepfakes have been produced by sources ranging from government officials to scam artists.
“It’s the next wave of social media manipulation,” Lightman said. “It is a huge issue that has societal implications around harm - financial penalties, a variety of different things - just the erosion of trust that we have as a society towards institutions where we’re getting a lot of our information from.”
Lightman pointed to a recent instance in Ireland, where a sophisticated deepfake went viral, falsely showing a presidential candidate dropping out of the race just before the election.
Closer to home, you may remember when Norwin High families panicked over fake, AI-created footage that appeared to show flames coming from the high school.
Lightman told Channel 11 that “deepfakes” can and have also been used as part of elaborate scams.
CNN recently reported that hackers have used the technology to create “deepfakes” of top executives, tricking people into sharing sensitive information or sending money.
So, what can be done?
When it comes to viral online videos, Lightman said it’s “incumbent” on video hosting sites to label AI-generated content as such, though he noted that task can be “challenging” given the sheer quantity of content.
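One technical hook for that kind of labeling already exists: OpenAI says videos made with Sora carry C2PA “Content Credentials,” an industry provenance standard that hosting sites can read on upload. The sketch below is only an illustration of that check, not how any particular platform does it; it assumes the open-source c2pa-python SDK (the Reader.from_file call follows that project’s published examples), and the file name “upload.mp4” is hypothetical.

```python
# Minimal sketch: look for C2PA "Content Credentials" provenance
# metadata in an uploaded video. Assumes the open-source c2pa-python
# SDK (pip install c2pa-python); Reader.from_file and reader.json()
# follow that project's published examples. "upload.mp4" is a
# hypothetical file name.
import json

from c2pa import Reader


def describe_provenance(path: str) -> str:
    """Return a rough provenance label for the file at `path`."""
    try:
        reader = Reader.from_file(path)    # parse the embedded manifest store
        store = json.loads(reader.json())  # manifest store as plain JSON
    except Exception:
        # No readable manifest. Absence proves nothing either way:
        # provenance metadata is easily stripped by re-encoding.
        return "no Content Credentials found"

    # Pull the tool name recorded in the active manifest, if any.
    active = store.get("manifests", {}).get(store.get("active_manifest"), {})
    tool = active.get("claim_generator", "unknown tool")
    return f"Content Credentials present (claim generator: {tool})"


if __name__ == "__main__":
    print(describe_provenance("upload.mp4"))
```

Because this metadata can be stripped when a video is re-encoded or screen-recorded, a missing manifest proves nothing, which is part of why Lightman calls labeling at scale “challenging.”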
Lightman said he’d like to see better regulation.
Congress has introduced the “NO FAKES Act,” aimed at protecting an individual’s voice and likeness, but it hasn’t passed.
Legislators at the federal and state levels have, however, passed laws aimed at combating child sexual abuse material and other non-consensual pornographic “deepfakes,” as Channel 11 reported earlier this month.
As for other “deepfake” content, policing it can prove challenging, depending on the “intent.” Satire, for instance, is protected by the First Amendment. Other video producers could argue that their “deepfakes” are a form of art or creative expression.
We asked Lightman where he’d like things to stand with “deepfakes” a decade from now:
“What I would like to see is that we establish sort of multilateral coalitions between education, technology providers, regulators, citizen-based societal groups, that all come together to create level and sensible regulation that can’t be misconstrued.”