News at the Intersection of Synthetic Media & Moving Image Archives
March 26, 2019
Search Newswire story archive back to April 2018:
News publisher Reuters created its own manipulated video in order to train its journalists in how to spot fake content before it gets shared widely.
Over the course of a few days, Reuters and a specialist production company created a so-called “deepfake” video of a broadcaster reading a script in a studio. Reuters then shared the video with its user-generated content team of around 12 producers, asking if they noticed anything odd about it. Those who knew the video had been manipulated spotted it, noticing a mismatch between the audio and the lip-syncing, as well as inconsistencies where the reader looked as if she were lisping but didn’t sound like it. The speaker also sat unusually still. Those who weren’t expecting an altered video noticed something was off in the audio but struggled to pinpoint it.
The technology industry has a unique opportunity to tackle “deepfakes”—the problem of fake audio and video created using artificial intelligence—before they become a widespread problem, according to human rights campaigner Sam Gregory.
But, he warns, major companies are still a very long way from tackling the pervasive and more damaging issue of cruder “shallowfake” misinformation.
A horrifying magnitude 7.9 earthquake hit Japan on September 1, 1923, killing over 140,000 people. And while news of the devastation reached newspapers around the world by the next day, there was no way to get film footage from Japan to the United States that quickly. But that didn’t stop filmmakers from making fake films to show in theaters around the U.S.—like a fake newsreel of the earthquake in Japan that was rushed to theaters in a matter of days.
Here in the early 21st century, Americans are obsessed with fake videos, as our politics becomes more unhinged and the technology to create so-called deepfakes becomes more common. But the distinction between “real” and “fake” was just as loose in the first couple of decades of American cinema, believe it or not. People were sometimes watching movies of recreated news.
Now more than ever, a certain level of sophistication is needed to navigate the various sources of information on the internet. Compounding the issue is the rise of the “deepfake”: extraordinarily realistic-looking videos that often feature public figures doing and saying things they never actually did. So far as we know, their uses have been relatively benign, but the potential for abuse is alarming, to say the least. Here’s everything you need to know to spot a deepfake video.
What if a person had the ability to impersonate your physical traits and voice? With current artificial intelligence technology, a person’s face can be replaced with somebody else’s. Videos created this way are known as deepfakes: hyper-realistic, AI-generated videos that depict something that never happened. The term went viral after a Reddit user with the username “deepfakes” posted hyper-realistic pornographic videos featuring the faces of celebrities.
Scholars argue that video and audio fabrications could threaten modern government but lack satisfactory regulatory solutions.
Morphin is amusing, and the tech that powers it is impressive. According to TechCrunch, the app was in stealth development for three years. The image mapping works so well that users’ faces mimic the expressions in the original GIF, without taking the creation into uncanny valley territory. But the app’s ease of use is a double-edged sword: If Morphin represents the angel on image mapping’s shoulder, then deepfakes are its dark side.
Last week, Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab announced seven winners of their first “AI and the News: An Open Challenge” competition. Each winner received a grant to help with the development of technology platforms that address the problem of misinformation in society.
Want to star in your favorite memes and movie scenes? Upload a selfie to Morphin, choose your favorite GIF and your face is grafted in to create a personalized copy you can share anywhere. Become Tony Stark as he suits up like Iron Man. Drop the mic like Obama, dance like Drake or slap your mug on Fortnite characters.
So-called “deep fakes” rocketed into the public consciousness in December 2017 with the rise of AI-constructed pornography in which celebrity faces were inserted into pornographic videos. A Reddit user named “deepfakes” published a series of videos that ushered in a new era of thinking about the impact of AI and its move from the lab to the real world, and gave this new kind of digitally constructed artificial reality its name. After that brief initial burst, it seems we’ve largely lost interest in “deep fakes” as the topic simply blends into the never-ending stream of AI advances, from driverless cars to facial recognition algorithms. Overall, the media has focused on the potential electoral impacts of deep fakes, while the public has focused on pornography.
Arent Fox has successfully procured another patent for Acronis, a leading provider of cloud backup and data management services, covering a new technology for watermarking digital content using a blockchain network.
Digital video recording has become ubiquitous in this modern age of technology. From video cameras incorporated into smartphones, to “smart” home security cameras, and even to body-worn video cameras, an avalanche of digital video data is being generated. However, one key drawback of recorded video is the ease with which it can be altered (e.g., using AI-based “deepfakes” video applications) and the difficulty with which it can be reliably authenticated. Video can be authenticated, e.g., by determining the time that the video was recorded or by confirming that the video has not been digitally modified. The dangers of unauthenticated digital video can have a serious impact on many areas of society, including privacy, security, journalism, and law enforcement. These dangers are exacerbated by the speed at which viral news stories can propagate across the Internet.
Image-doctoring is nothing new: Joseph Stalin ordered his enemies airbrushed out of official photos and Cuba altered images of Fidel Castro to remove his hearing aid. But national security experts are worried about a new frontier in manipulated content: deepfakes. Deceptively realistic, deepfakes are AI-generated videos that use techniques like face swaps, lip syncs, and even "digital puppeteers" to show people saying things they never said or doing things they never did. We'll talk about how to spot deepfakes and the potential threats they pose to democratic institutions.
Hany Farid, professor of computer science, Dartmouth College
Bobby Chesney, professor of law, University of Texas at Austin; co-author with Danielle Citron, "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security"
A controversial appearance by the president of Gabon portends a future where you can’t believe your eyes.
The proliferation of visual digital media that can be produced and shared instantly has improved connectivity, but it comes with one major drawback. Visual media manipulation technologies have also evolved, giving amateurs and experts alike the tools to realistically manipulate media for potentially antagonistic ends like propaganda and misinformation.
The Media Forensics (MediFor) program out of DARPA’s Information Innovation Office (I2O) is trying to develop automated AI technologies to assess visual media manipulation at scale in an end-to-end platform, I2O Program Manager and MediFor program lead Dr. Matt Turek outlined at Thursday's DARPA AI Colloquium in Alexandria, Virginia.
Video has long been considered a source of hard evidence in determining the truth. But how will trust in video change at a time when artificial intelligence is making it easier to create fake audiovisuals?
On this episode of The Stream, we speak with:
Sam Gregory @SamGregory
Programme Director, WITNESS
Tarun Wadhwa @twadhwa
Founder & CEO, Day One Insights
Tim Hwang @timhwang
Director, Harvard-MIT’s Ethics and Governance of AI Initiative
Steve Grobman, chief technology officer at cybersecurity firm McAfee, and Celeste Fralick, chief data scientist, warned in a keynote speech at the RSA security conference in San Francisco that the tech has reached the point where you can barely tell with the naked eye whether a video is fake or real. They showed a video where Fralick’s words were coming out of a video of Grobman’s face, even though Grobman never said those words.
Aging is a natural process inflicted on all of us humans. With AI, we can get a glimpse ahead of time of what the future holds for our wrinkles, age spots, and sagging skin.
A new machine learning paper shows how AI can take footage of someone and duplicate the video with the subject looking an age the researchers specify. The team behind the paper, from the University of Arkansas, Clemson University, Carnegie Mellon University, and Concordia University in Canada, claim that this is one of the first methods to use AI to tackle aging in videos.
We’ve spent the last year wringing our hands about a crisis that doesn’t exist
If you’ve been following tech news in the past year, you’ve probably heard about deepfakes, the widely available, machine-learning-powered system for swapping faces and doctoring videos. First reported by Motherboard at the end of 2017, the technology seemed like a scary omen after years of bewildering misinformation campaigns. Deepfake panic spread broader and broader in the months that followed, with alarm-raising articles from Buzzfeed (several times), The Washington Post (several times), and The New York Times (several more times). It’s not an exaggeration to say that many of journalism’s most prominent writers and publications spent 2018 telling us this technology was an imminent threat to public discourse, if not truth itself.
But more than a year after the first fakes started popping up on Reddit, that threat hasn’t materialized.
A perfect storm arising from the world of pornography may threaten the U.S. elections in 2020 with disruptive political scandals having nothing to do with actual affairs. Instead, face-swapping “deepfake” technology that first became popular on porn websites could eventually generate convincing fake videos of politicians saying or doing things that never happened in real life—a scenario that could sow widespread chaos if such videos are not flagged and debunked in time.
The thankless task of debunking fake images and videos online has generally fallen upon news reporters, fact-checking websites and some sharp-eyed good Samaritans. But the more recent rise of AI-driven deepfakes that can turn Hollywood celebrities and politicians into digital puppets may require additional fact-checking help from AI-driven detection technologies. An Amsterdam-based startup called Deeptrace aims to become one of the go-to shops for such deepfake detection technologies.
With recent focus on disinformation and “fake news,” new technologies used to deceive people online have sparked concerns among the public. While in the past, only an expert forger could create realistic fake media, deceptive techniques using the latest research in machine-learning allow anyone with a smartphone to generate high-quality fake videos, or “deep fakes.”
Advances in machine learning will soon make it possible to sound like yourself with a different age or gender—or impersonate someone else.
Advances in artificial intelligence have made it easier to create compelling and sophisticated fake images, videos, and audio recordings. Meanwhile, misinformation proliferates on social media, and a polarized public may have become accustomed to being fed news that conforms to their worldview.
All contribute to a climate in which it is increasingly difficult to believe what you see and hear online.
There are some things you can do to protect yourself from falling for a hoax. As the author of the upcoming book Fake Photos, to be published in August, I’d like to offer a few tips.
Deepfakes, also called “AI synthesized fakes,” are rapidly evolving and proliferating. While many websites banned the use of the technology, new forensic tools are being developed to root out fakes. Meanwhile, lawmakers are pushing for new regulations while many lawyers argue that the law is already able to manage the illegal use of the emerging technology.
More and more security holes are appearing in cryptocurrency and smart contract platforms, and some are fundamental to the way they were built.
Blockchains are particularly attractive to thieves because fraudulent transactions can’t be reversed as they often can be in the traditional financial system. Besides that, we’ve long known that just as blockchains have unique security features, they have unique vulnerabilities. Marketing slogans and headlines that called the technology “unhackable” were dead wrong.
Truepic’s technology is already used by the U.S. State Department and others. The startup now wants to get social media companies on board.
Truepic was founded in 2015 by Craig Stack, a Goldman Sachs alum who saw an opportunity in making it harder for Craigslist scammers and dating-site lurkers to deceive people. “It hit me that there were all these apps that deal with image manipulation or spoofing location and time settings,” says Stack, who now serves as COO. But today the company’s primary mission is to use image-verification tools to identify and battle more formidable forms of disinformation—from the faux social media accounts that the Kremlin used to manipulate the 2016 U.S. presidential election to the doctored photos that travel the back roads of WhatsApp and catalyze violence in places like Myanmar and India.
A new website that utilizes artificial intelligence can endlessly generate the faces of people who don’t actually exist.
The AI works by analyzing countless photos of the human face in order to generate realistic ones of its own. While creating such images initially required advanced computer hardware and specific knowledge, the process is now widely available thanks to the site.
Philip Wang, a software engineer at Uber and creator of the website, told Motherboard that the new service is designed to “dream up a random face every two seconds.”
Video has become an increasingly crucial tool for law enforcement, whether it comes from security cameras, police-worn body cameras, a bystander's smartphone, or another source. But a combination of "deepfake" video manipulation technology and security issues that plague so many connected devices has made it difficult to confirm the integrity of that footage. A new project suggests the answer lies in cryptographic authentication.
Called Amber Authenticate, the tool is meant to run in the background on a device as it captures video. At regular, user-determined intervals, the platform generates "hashes"—cryptographically scrambled representations of the data—that then get indelibly recorded on a public blockchain. If you run that same snippet of video footage through the algorithm again, the hashes will be different if anything has changed in the file's audio or video data—tipping you off to possible manipulation.
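The hash-and-compare idea behind Amber Authenticate can be illustrated in a few lines. The sketch below is a simplified stand-in, not Amber's actual implementation: it hashes fixed-size chunks of a byte stream (in place of real video segments) with SHA-256 and shows that re-hashing unmodified data reproduces the original fingerprints, while flipping a single byte changes the hash of the affected chunk.

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 4096) -> list[str]:
    """Hash fixed-size chunks of a byte stream (a stand-in for video segments)."""
    return [
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

# Simulated capture: hashes computed at record time would be written to the blockchain.
original = b"frame-data-" * 1000
hashes_at_capture = chunk_hashes(original)

# Unmodified footage re-hashes to exactly the same values.
assert chunk_hashes(original) == hashes_at_capture

# Altering a single byte changes the hash of the chunk containing it.
tampered = bytearray(original)
tampered[10] ^= 0xFF
hashes_after = chunk_hashes(bytes(tampered))
mismatches = [
    i for i, (a, b) in enumerate(zip(hashes_at_capture, hashes_after)) if a != b
]
print(mismatches)  # only the first chunk differs
```

In the real system the capture-time hashes are anchored on a public blockchain, so they cannot be quietly rewritten along with the footage; verification is simply recomputing the hashes and comparing.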
Burger King’s Super Bowl spot featuring Andy Warhol eating a Whopper led many viewers to question whether what they were witnessing was real. Warhol, after all, has been dead for nearly 32 years, so how was he available to shoot a commercial?
It transpired that the Warhol in the film was ‘real’ – Burger King had repurposed Danish filmmaker Jørgen Leth’s 1982 study of the Factory don. Yet in the same week another artist really was being brought back to life using the same deepfake technology that placed Steve Buscemi’s face onto Jennifer Lawrence in a rather disturbing viral video.
Dalí Lives, the latest project from the Dalí Museum in Florida, is being marketed as an ‘avant-garde experience’ designed to celebrate the 30th anniversary of the surrealist’s death. Goodby Silverstein & Partners were brought on board to commemorate the occasion; the San Francisco agency decided resurrection was the route to go down.
Paul Pflug, Melissa Zukerman, and Hans-Dieter Kopal of Principal Communications have teamed with leading cyber research and security firm Edgeworth to form Foresight Solutions Group — a “reputation-management” entity that will use advanced technology, former FBI data analysts, and old fashioned crisis-management skills to advise individuals and companies in an age where old tweets can bring down an Oscar host or jeopardize a billion-dollar superhero franchise.
Foresight’s tech team will also work proactively to squash erroneous and damaging social media content in the public sphere and on the dark web — like the unsettling rise of fake videos (or “deepfakes”) that have targeted stars like “Game of Thrones” lead Emilia Clarke with manufactured, but photo-realistic pornographic images.
Advances in machine learning are making it easy to create fake videos—popularly known as “deepfakes”—where people appear to say and do things they never did. For example, a faked video of Barack Obama went viral in April in which he appears to warn viewers about misinformation. Falsehoods already spread farther than the truth, and deepfakes are making it cheap and easy for anyone to create fake videos. When convincing fakes become commonplace, the public will also start to distrust real video evidence, especially when it does not match their biases. Unfortunately, the technology that enables deepfakes is advancing rapidly. Deepfakes will become easier to create, and humans will increasingly struggle to distinguish fake videos from real ones. Luckily, there is some hope that algorithms may be able to automatically detect deepfakes. Computer scientists have generally struggled to automate fact checking. However, early research suggests that fake videos may be an exception.