News at the Intersection of Synthetic Media & Moving Image Archives
May 22, 2019
Over the last few years, the field of artificial intelligence (AI) has grown by leaps and bounds. Researchers are working on driverless cars, voice-controlled smart assistants and image recognition that can spot tumors in photos better than radiologists.
On the retail front, AI is already changing how customers shop online. Algorithms suggest items you might like based on previous searches or purchases. The Echo Look is Amazon’s “style assistant” that takes a photo of your outfit and makes fashion recommendations that are conveniently available for sale on Amazon. And AI will soon transform online commerce for retailers even more profoundly — realistic digital models may eventually replace humans.
A new deepfake combining comedian Bill Hader and action star Arnold Schwarzenegger is going viral online. The video, produced by a deepfake creator known as Ctrl Shift Face, has already been viewed more than 300,000 times on YouTube alone.
The clip shows Hader during a 2014 interview with late-night talk show host Conan O’Brien doing his best impression of the former California governor. But this time, Hader’s impression is accompanied by Schwarzenegger’s actual face.
Surrealist painter Salvador Dalí once said in an interview, “I believe in general in death, but in the death of Dalí, absolutely not.” Now, the Dalí Museum in St. Petersburg, Florida, has worked to fulfill the painter’s prophecy by bringing him back to life — with a deepfake.
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then superimposed onto an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.
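The workflow GS&P describes (thousands of aligned face frames, long training runs) matches the shared-encoder, per-identity-decoder autoencoder design used by common open-source face-swap tools. For readers curious how that works mechanically, here is a minimal sketch in PyTorch; the layer sizes and names are our illustrative assumptions, not GS&P's actual code.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
# behind common face-swap tools. Nothing here is GS&P's actual pipeline;
# names and shapes are illustrative.
import torch
import torch.nn as nn

def down(c_in, c_out):   # conv block that halves spatial size
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, 2, 1), nn.LeakyReLU(0.1))

def up(c_in, c_out):     # deconv block that doubles spatial size
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, 2, 1), nn.ReLU())

class FaceSwapAE(nn.Module):
    """One shared encoder learns pose/expression; one decoder per identity."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(down(3, 64), down(64, 128), down(128, 256))
        def make_decoder():
            return nn.Sequential(up(256, 128), up(128, 64),
                                 nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Sigmoid())
        self.decoders = nn.ModuleDict({"dali": make_decoder(), "actor": make_decoder()})

    def forward(self, x, identity):
        return self.decoders[identity](self.encoder(x))

model = FaceSwapAE()
frame = torch.rand(1, 3, 64, 64)   # stand-in for an aligned face crop
# Training: reconstruct each identity's frames with its own decoder.
recon = model(frame, "actor")
# Swapping at inference: encode the actor's frame, decode with the Dalí decoder.
swapped = model(frame, "dali")
```

Because the encoder is shared, it learns identity-agnostic structure (pose, expression, lighting), while each decoder learns one person's appearance, which is what lets an actor's performance drive Dalí's face.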
I can be an adult about this. I'm a mature, functioning human being. I can handle 59 seconds of baby footage with SpaceX and Tesla founder Elon Musk's face superimposed onto the child's face. Oh god, who am I kidding? I'm broken.
I like babies. I like Elon Musk. But I also like garlic and chocolate; I just don’t ever want to experience them together.
"The most realistic AI voice clone we’ve heard”
Up until now, these voices have been noticeably stilted and robotic, but researchers from AI startup Dessa have created what is by far the most convincing voice clone we’ve ever heard — perfectly mimicking the sound of MMA-commentator-turned-podcaster Joe Rogan.
Listen to clips of Dessa’s AI Rogan, and take a quiz on the company’s site to see if you can spot the difference between real Rogan and faux Rogan.
"I just listened to an AI generated audio recording of me talking about chimp hockey teams and it's terrifyingly accurate," Rogan wrote on Friday. "At this point, I've long ago left enough content out there that they could basically have me saying anything they want, so my position is to shrug my shoulders and shake my head in awe, and just accept it. The future is gonna be really f---ing weird, kids."
Online videos may be today’s most efficient information vector. They are the closest thing to a real-life event happening in front of your eyes, as opposed to pictures, texts, or even a combination of both. Nevertheless, the emergence of maliciously altered videos and deepfakes puts us at risk of being dangerously misinformed. To counter this, sound forensic methods supported by specifically designed signal processing tools and artificial intelligence are being used to detect most falsifications.
Yet often, misinformation spread via video relies not on a technical alteration but on false claims that accompany the footage. This can be achieved through a misleading title, a falsely alleged geographical location, or through anachronism, which consists of attributing filmed events to the wrong period. Anachronism can also include the resurfacing of older videos, which not only adds to the spread of fake news but also hampers open-source research.
The following investigation shows how an unaltered video, depicting a real event and originally circulated with the correct date, confused audiences and online investigators by resurfacing under a different title in a significantly more politically charged context.
Government should be cautious about moving to new law for “deepfake” audio and video, a new Law Foundation-backed study released today says.
Co-author Tom Barraclough predicts that deepfake and other synthetic media will be the next wave of content causing concern to government and tech companies following the Christchurch Call. While it is tempting to respond with new law, the study finds that the long list of current legislation covering the issues may be sufficient.
Companion piece: Deepfake and the law - Expert Reaction
A new report funded by the Law Foundation cautions against rushing to develop new laws to respond to synthetic media. Instead, the authors say there is already a long list of laws that cover the issue, including the Privacy Act, Copyright Act and the Harmful Digital Communications Act.
The Science Media Centre (SMC) asked experts to comment on the report and deepfakes more broadly.
A Hollywood union has thrown its weight behind legislation in California’s Senate that would make pornographic deepfakes a crime.
Bill SB 564, which has passed California’s Judiciary Committee, is being sponsored by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA).
It extends the definition of a consenting individual to include not just an actual act, or a performance, but a “realistic digitized performance in which the individual did not actually perform”.
Members of SAG-AFTRA and Los Angeles Congressman Adam Schiff sounded alarms Monday about the proliferation of “deep fake” technologies — realistic digital forgeries including sex scenes.
“We have a medium in which lies and fear travel faster than anything else and this has happened practically overnight,” said Schiff during a two-hour panel discussion at union headquarters in Los Angeles.
“I am deeply concerned that deep-fakes could be used to spread disinformation or interfere in our elections, and we have already seen these technologies used to harass, exploit and invade the privacy of private citizens, particularly women,” said Schiff. “We have another election coming up and it’s more important than ever for the public to distinguish between what is real and what is fake. Our democracy depends on it.”
What do you do with a technology that could restore the voices of people who have lost theirs — but also sow chaos and incite violence?
What's happening: A growing group of companies are walking this tightrope, betting they can deploy deepfakes — videos, audio and photos that are altered or generated by AI — as a force for good, or at least non-malign purposes, while keeping the technology away from those who would use it to do harm.
Deepfake videos arguably have become a growing concern in politics ahead of the upcoming elections. But now, someone is trying to use them as a tool to sway agency recruiters by creating faux recommendations from some of the industry’s biggest names.
Andrew Tyukavkin, an executive creative director (ECD) at Publicis Latvia and Lithuania, deepfaked video recommendations from CP&B Chief Creative Engineer Alex Bogusky, Droga5 Founder and Creative Chairman David Droga, Publicis Groupe CEO Arthur Sadoun and Hasan & Partners CEO/CCO Eka Ruola to beef up his portfolio.
As I anxiously awaited the last season of “Game of Thrones,” I found myself thinking about my favorite character from the series: the assassin who belongs to the mysterious cult of “Faceless Men.” Specifically, I thought about his ability to change his face and appearance at will, and how this character parallels the emergence of deepfake images and videos and the science of facial recognition.
After some cursory research into the creation of deepfake videos and a few of the forensic tricks used to distinguish real videos from the ones created by generative adversarial networks (GANs) backed by artificial intelligence (AI) algorithms, I considered how these deepfakes would impact the field of cyberthreat intelligence and the intelligence community as a whole. Then I came up with a couple of ways artificially intelligent systems could have a positive impact on the intelligence community.
A new deep learning algorithm can generate high-resolution, photorealistic images of people — faces, hair, outfits, and all — from scratch.
The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media.
Deepfakes and other AI-generated images have become commonplace as the algorithms that churn them out have become widespread.
On one sugar-coated hand, this means cooler movie and video game visual effects. On the other hand, it means that bad actors can produce photorealistic propaganda, fake porn of real people, or other convincing but fake media.
That’s why two University of Washington scientists created a website, “WhichFaceIsReal.com,” which is meant to train people to spot the telltale signs that a supposedly real photo was actually built by an algorithm — by asking them to guess which of two side-by-side photos is a real person and which is an AI-created dupe.
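For context, face generators like the ones described above are trained as one half of a generative adversarial network: a generator learns to produce images that a discriminator cannot tell apart from real photos. Below is a minimal sketch of that training loop, assuming PyTorch and toy-sized fully connected networks; all sizes and names are ours, not those of the systems in the story.

```python
# Minimal sketch of the generator-vs-discriminator game behind GAN face
# generators. Toy-sized networks on flattened 28x28 "images"; real systems
# use deep convolutional models at far higher resolution.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())  # noise -> image
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))       # image -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(64, 784)   # stand-in for a batch of real face images
    fake = G(torch.randn(64, 100))

    # Discriminator: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into calling fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two losses pull against each other, so as the discriminator gets better at spotting fakes, the generator is forced to produce ever more realistic images — which is exactly why the resulting faces are so hard for sites like WhichFaceIsReal to teach people to catch.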
Twelve months later, Deepfakes is proving prescient. A new wave of companies is looking to cash in on similar technology, leveraging machine learning to do unprecedented things in media: faking voices, faking avatars, and faking highly detailed photographs. I spoke with people at three of these companies, each of which is working to develop commercial applications. In addition to figuring out a sustainable business model for their software, each of them must reckon with the power of this still-emerging tech and how to protect society from their own tools rather than subvert it.
Just as news is increasingly delivered digitally, so are banking services. Unified Communications and Omni-Channel strategies mean banks communicate with their customers using browser-based video and audio, for instance. That conversation could be with a human agent, but in the future also with Artificial Intelligence (AI) based agents.
It is not too hard to imagine, therefore, a video/audio conversation between a high net-worth client and their private banker. If the client looks and sounds like him/herself, and of course can provide the answers to any security questions (as they invariably would), why would the banker not acquiesce to any instructions the client gives?
The CEO of a UK startup pioneering deepfake technology thinks we're just three years away from having computer-generated versions of actors that are so good, they're indistinguishable from real humans.
Victor Riparbelli, 27, cofounded Synthesia two years ago. The company made its first big splash in 2018 when it used its technology to make a BBC news anchor appear to be speaking Spanish, Mandarin and Hindi.
More recently the company applied its tech to soccer legend David Beckham. In collaboration with the campaign Malaria Must Die, Synthesia manipulated Beckham's facial features so that nine malaria survivors were able to speak through him — in nine different languages.
China plans to ban deepfakes — a move that shows at least one world power is taking the threat of AI-manipulated video very seriously.
Modern machine-learning technology brings deepfakes within reach of anyone.
When President Donald Trump’s press secretary claimed that the crowd at the 2017 inauguration was record-breaking, the United States had a nervous breakdown.
Fact-checkers launched into action, analyzing charts of past attendance numbers, as well as metro ridership the day of Trump’s inauguration, to show why Sean Spicer’s claim was demonstrably false. Journalists compared photos on live television. And Trump adviser Kellyanne Conway infamously justified the falsehood by stating that Spicer had used “alternative facts.”
It was a bizarre precursor for what would become a historically inaccurate presidency — and it felt like a defining moment for American fact-checkers. But the obsession with crowd size isn’t only a feature of American politics.
A startlingly realistic new breed of AI-driven faked videos is starting to emerge, circulated by propagandists and other shadowy actors via social platforms. These videos appear to show news events, or public figures speaking, and seem to be published by legitimate news outlets. However, they are in fact highly sophisticated AI-driven video forgeries. This session explores what strategies and technologies news outlets and consumers should be adopting to defend themselves against this frightening new development.
If you want to make a video deepfake, you can download free software and create it yourself. Someone with a bit of savvy and a chunk of time can churn out side-splitters like this one. Not so for audio deepfakes — at least not yet. Good synthetic audio is still the domain of startups, Big Tech and academic research.
Thinking about deepfakes tends to lead to philosophical head-scratchers. Here's one: Should you worry about your face being grafted into hard-core pornography if deepfakes are bent on sabotaging global power?
Deepfakes are video forgeries that make people appear to be doing or saying things they never did. Similar to the way Photoshop made doctoring images a breeze, deepfake software has made this kind of manipulated video not only accessible but also harder and harder to detect as fake.
And chances are, unless you've scrupulously kept your image off the internet, a deepfake starring you is possible today.
Apple has prised another AI superstar away from Google.
Ian Goodfellow is famous in the world of AI for his pioneering work on Generative Adversarial Networks, or GANs. GANs have become notorious in recent years, because they can be used to generate convincing fake images of real-world objects, and are integral to a lot of "deepfake" software.
CNBC was the first to spot that Goodfellow's LinkedIn had been updated on Thursday with his new role at Apple, which he appears to have started in March. A Google spokesperson confirmed his departure to CNBC.
Big Tech, top university labs and the U.S. military are pouring effort and money into detecting deepfake videos — AI-edited clips that can make it look like someone is saying something they never uttered. But video's forgotten step-sibling, deepfake audio, has attracted considerably less attention — despite a comparable potential for harm.
The House Intelligence Committee is planning to hold a hearing in the coming months that will examine a series of national security matters, including the threat of videos manipulated by artificial intelligence that look strikingly real, according to a committee aide.
News publisher Reuters created its own manipulated video in order to train its journalists in how to spot fake content before it gets shared widely.
Over the course of a few days, Reuters and a specialist production company created a so-called “deepfake” video of a broadcaster reading a script in a studio. Reuters then shared the video with its user-generated content team of around 12 producers, asking if they noticed anything odd about it. Several people who knew about the manipulation spotted it, noticing a mismatch between the audio and the lip-syncing, as well as inconsistencies where the reader looked as if she were lisping but didn’t sound like it. The speaker also sat unusually still. Those who weren’t expecting an altered video noticed something was off in the audio but struggled to define it.
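Reuters' exercise trained human eyes; automated detectors encode similar cues, most simply as a binary classifier over face crops pulled from video frames. Here is a minimal sketch of that generic approach, assuming PyTorch/torchvision and a hypothetical folder of labeled crops; this is not Reuters' tooling, and production detectors add face tracking and temporal models on top.

```python
# Minimal sketch of a frame-level deepfake classifier: fine-tune a pretrained
# CNN to label face crops real (0) or fake (1). The dataset path is
# hypothetical; you would populate it with crops extracted from videos.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects face crops sorted into real/ and fake/ subfolders (hypothetical path).
train_set = ImageFolder("face_crops/train", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # replace head: real vs. fake

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one pass; real training runs many epochs
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```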
The technology industry has a unique opportunity to tackle “deepfakes”—the problem of fake audio and video created using artificial intelligence—before they become a widespread problem, according to human rights campaigner Sam Gregory.
But, he warns, major companies are still a very long way from tackling the pervasive and more damaging issue of cruder “shallowfake” misinformation.
A horrifying magnitude 7.9 earthquake hit Japan on September 1, 1923, killing over 140,000 people. And while news of the devastation reached newspapers around the world by the next day, there was no way to get film footage from Japan to the United States that quickly. But that didn’t stop filmmakers from making fake films to show in theaters around the U.S.—like a fake newsreel of the earthquake in Japan that was rushed to theaters in a matter of days.
Here in the early 21st century, Americans are obsessed with fake videos, as our politics becomes more unhinged and the technology to create so-called deepfakes becomes more common. But the distinction between “real” and “fake” was just as loose in the first couple of decades of American cinema, believe it or not. People were sometimes watching movies of recreated news.