Now more than ever, a certain level of sophistication is needed to navigate the various sources of information on the internet. Compounding the issue is the rise of the “deepfake”: extraordinarily realistic-looking videos that often feature public figures doing and saying things they never actually did. So far, as far as we know, their uses have been relatively benign, but the potential for abuse is alarming, to say the least. Here’s everything you need to know to spot a deepfake video. Read More
Filtering by Tag: March 2019
What if someone had the ability to impersonate your physical traits and voice? With current artificial intelligence technology, a person’s face can be replaced with somebody else’s. Videos created this way are known as deepfakes: hyperrealistic, AI-generated footage that presents something that never happened. The term went viral after a Reddit user with the username “deepfakes” posted hyperrealistic videos grafting celebrities’ faces into pornographic videos. Read More
Scholars argue that video and audio fabrications could threaten modern government but lack satisfactory regulatory solutions. Read More
Morphin is amusing, and the tech that powers it is impressive. According to TechCrunch, the app was in stealth development for three years. The image mapping works so well that users’ faces mimic the expressions in the original GIF, without taking the creation into uncanny valley territory. But the app’s ease of use is a double-edged sword: If Morphin represents the angel on image mapping’s shoulder, then deepfakes are its dark side. Read More
Last week, Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab announced seven winners of their first “AI and the News: An Open Challenge” competition. Each winner received a grant to help with the development of technology platforms that address the problem of misinformation in society. Read More
Want to star in your favorite memes and movie scenes? Upload a selfie to Morphin, choose your favorite GIF and your face is grafted in to create a personalized copy you can share anywhere. Become Tony Stark as he suits up like Iron Man. Drop the mic like Obama, dance like Drake or slap your mug on Fortnite characters. Read More
So-called “deep fakes” rocketed into the public consciousness in December 2017 with the rise of AI-constructed pornography in which celebrity faces were inserted into pornographic videos. A Reddit user named “deepfakes” published a series of videos that both ushered in a new era of thinking about the impact of AI and its move from the lab into the real world, and gave us the name for this new kind of digitally constructed artificial reality. After that brief initial burst, it seems we’ve largely lost interest in the idea of “deep fakes” as the topic simply blends into the never-ending stream of AI advances, from driverless cars to facial recognition algorithms. Overall, the media has focused on the potential electoral impacts of deep fakes, while the public has focused on pornography. Read More
Lexology: Arent Fox Secures Patent for Verifying Authenticity of Digital Video Content Using Blockchain Technology
Arent Fox has successfully procured another patent for Acronis, a leading provider of cloud backup and data management services, covering a new technology for watermarking digital content using a blockchain network.
Digital video recording has become ubiquitous in this modern age of technology. From video cameras incorporated into smartphones, to “smart” home security cameras, and even to body-worn video cameras, an avalanche of digital video data is being generated. However, one key drawback of recorded video is the ease with which it can be altered (e.g., using AI-based “deepfake” video applications) and the difficulty with which it can be reliably authenticated. Video can be authenticated, e.g., by determining the time that the video was recorded or by confirming that the video has not been digitally modified. The dangers of unauthenticated digital video can have a serious impact on many areas of society, including privacy, security, journalism, and law enforcement. These dangers are exacerbated by the speed at which viral news stories can propagate across the Internet. Read More
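The integrity half of that authentication problem can be illustrated with a minimal sketch: record a cryptographic digest of the video file at capture time, and later confirm the file still matches it. This is only an illustration of the general idea of tamper detection, not the patented blockchain-watermarking scheme described above; the function names are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large video files never load fully into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, recorded_digest: str) -> bool:
    """True only if the file's current digest matches the digest recorded earlier."""
    return sha256_digest(path) == recorded_digest
```

A single flipped byte changes the digest entirely, so any post-recording edit is detectable. What a bare hash cannot do is prove *when* the digest was recorded, which is the gap that anchoring the digest in a blockchain (or another trusted timestamping service) is meant to close.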
Image-doctoring is nothing new: Joseph Stalin ordered his enemies airbrushed out of official photos and Cuba altered images of Fidel Castro to remove his hearing aid. But national security experts are worried about a new frontier in manipulated content: deepfakes. Deceptively realistic, deepfakes are AI-generated videos that use techniques like face swaps, lip syncs, and even "digital puppeteers" to show people saying things they never said or doing things they never did. We'll talk about how to spot deepfakes and the potential threats they pose to democratic institutions.
Hany Farid, professor of computer science, Dartmouth College
Bobby Chesney, professor of law, University of Texas at Austin; co-author with Danielle Citron, "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security" Read More
Mother Jones: The Bizarre and Terrifying Case of the “Deepfake” Video that Helped Bring an African Nation to the Brink
A controversial appearance by the president of Gabon portends a future where you can’t believe your eyes. Read More
The proliferation of visual digital media that can be produced and shared instantly has improved connectivity, but it comes with one major drawback. Visual media manipulation technologies have also evolved, giving amateurs and experts alike the tools to realistically manipulate media for potentially antagonistic ends like propaganda and misinformation.
The Media Forensics (MediFor) program out of DARPA’s Information Innovation Office (I2O) is trying to develop automated AI technologies to assess visual media manipulation at scale in an end-to-end platform, I2O Program Manager and MediFor program lead Dr. Matt Turek outlined at Thursday’s DARPA AI Colloquium in Alexandria, Virginia. Read More
Video has long been considered a source of hard evidence in determining the truth. But how will trust in video change at a time when artificial intelligence is making it easier to create fake audiovisuals?
On this episode of The Stream, we speak with:
Sam Gregory @SamGregory
Programme Director, WITNESS
Tarun Wadhwa @twadhwa
Founder & CEO, Day One Insights
Tim Hwang @timhwang
Director, Harvard-MIT’s Ethics and Governance of AI Initiative
Steve Grobman, chief technology officer at cybersecurity firm McAfee, and Celeste Fralick, chief data scientist, warned in a keynote speech at the RSA security conference in San Francisco that the tech has reached the point where you can barely tell with the naked eye whether a video is fake or real. They showed a video where Fralick’s words were coming out of a video of Grobman’s face, even though Grobman never said those words. Read More
Aging is a natural process that none of us humans can escape. With AI, we can get a glimpse ahead of time of what the future holds for our wrinkles, age spots, and sagging skin.
A new machine learning paper shows how AI can take footage of someone and duplicate the video with the subject looking an age the researchers specify. The team behind the paper, from the University of Arkansas, Clemson University, Carnegie Mellon University, and Concordia University in Canada, claim that this is one of the first methods to use AI to tackle aging in videos. Read More
We’ve spent the last year wringing our hands about a crisis that doesn’t exist
If you’ve been following tech news in the past year, you’ve probably heard about deepfakes, the widely available, machine-learning-powered technique for swapping faces and doctoring videos. First reported by Motherboard at the end of 2017, the technology seemed like a scary omen after years of bewildering misinformation campaigns. Deepfake panic spread wider and wider in the months that followed, with alarm-raising articles from Buzzfeed (several times), The Washington Post (several times), and The New York Times (several more times). It’s not an exaggeration to say that many of journalism’s most prominent writers and publications spent 2018 telling us this technology was an imminent threat to public discourse, if not truth itself.
But more than a year after the first fakes started popping up on Reddit, that threat hasn’t materialized. Read More
A perfect storm arising from the world of pornography may threaten the U.S. elections in 2020 with disruptive political scandals having nothing to do with actual affairs. Instead, face-swapping “deepfake” technology that first became popular on porn websites could eventually generate convincing fake videos of politicians saying or doing things that never happened in real life—a scenario that could sow widespread chaos if such videos are not flagged and debunked in time.
The thankless task of debunking fake images and videos online has generally fallen upon news reporters, fact-checking websites and some sharp-eyed good Samaritans. But the more recent rise of AI-driven deepfakes that can turn Hollywood celebrities and politicians into digital puppets may require additional fact-checking help from AI-driven detection technologies. An Amsterdam-based startup called Deeptrace aims to become one of the go-to shops for such deepfake detection technologies. Read More
With recent focus on disinformation and “fake news,” new technologies used to deceive people online have sparked concerns among the public. While in the past only an expert forger could create realistic fake media, deceptive techniques drawing on the latest machine learning research allow anyone with a smartphone to generate high-quality fake videos, or “deep fakes.” Read More