News at the Intersection of Synthetic Media & Moving Image Archives
February 21, 2019
Truepic’s technology is already used by the U.S. State Department and others. The startup now wants to get social media companies on board.
Truepic was founded in 2015 by Craig Stack, a Goldman Sachs alum who saw an opportunity in making it harder for Craigslist scammers and dating-site lurkers to deceive people. “It hit me that there were all these apps that deal with image manipulation or spoofing location and time settings,” says Stack, who now serves as COO. But today the company’s primary mission is to use image-verification tools to identify and battle more formidable forms of disinformation—from the faux social media accounts that the Kremlin used to manipulate the 2016 U.S. presidential election to the doctored photos that travel the back roads of WhatsApp and catalyze violence in places like Myanmar and India.
A new website uses artificial intelligence to endlessly generate the faces of people who don’t actually exist.
The AI works by analyzing countless photos of the human face in order to generate realistic ones of its own. While creating such images initially required advanced computer hardware and specific knowledge, the process is now widely available thanks to the site.
Philip Wang, a software engineer at Uber and creator of the website, told Motherboard that the new service is designed to “dream up a random face every two seconds.”
Video has become an increasingly crucial tool for law enforcement, whether it comes from security cameras, police-worn body cameras, a bystander's smartphone, or another source. But a combination of "deepfake" video manipulation technology and security issues that plague so many connected devices has made it difficult to confirm the integrity of that footage. A new project suggests the answer lies in cryptographic authentication.
Called Amber Authenticate, the tool is meant to run in the background on a device as it captures video. At regular, user-determined intervals, the platform generates "hashes"—cryptographically scrambled representations of the data—that then get indelibly recorded on a public blockchain. If you run that same snippet of video footage through the algorithm again, the hashes will be different if anything has changed in the file's audio or video data—tipping you off to possible manipulation.
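The hash-and-compare approach described above can be illustrated with a short sketch. This is a minimal, hypothetical illustration of the general technique, not Amber's actual implementation: it hashes fixed-size chunks of captured bytes with SHA-256, stands in for the blockchain with a plain list of recorded digests, and shows how a single flipped byte makes verification fail.

```python
import hashlib

def segment_hashes(data: bytes, chunk_size: int = 1 << 20) -> list[str]:
    """Hash fixed-size chunks of a capture. In a system like the one
    described, each digest would be anchored on a public ledger."""
    return [
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

def verify(data: bytes, recorded: list[str], chunk_size: int = 1 << 20) -> bool:
    """Re-hash the footage and compare against the anchored digests."""
    return segment_hashes(data, chunk_size) == recorded

# Stand-in for raw video bytes (2 MiB of dummy data).
original = bytes(range(256)) * 8192
recorded = segment_hashes(original)

# Flip a single byte to simulate manipulation of the footage.
tampered = bytearray(original)
tampered[100] ^= 0xFF

print(verify(original, recorded))         # True: untouched footage matches
print(verify(bytes(tampered), recorded))  # False: any edit changes a hash
```

Because a cryptographic hash changes unpredictably with any edit to the input, even a one-byte alteration to audio or video data is caught, which is the property the anchored digests rely on.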
Burger King’s Super Bowl spot featuring Andy Warhol eating a Whopper led many viewers to question whether what they were witnessing was real. Warhol, after all, has been dead for nearly 32 years, so how was he available to shoot a commercial?
It transpired that the Warhol in the film was ‘real’ – Burger King had repurposed Danish filmmaker Jørgen Leth’s 1982 study of the Factory don. Yet in the same week another artist really was being brought back to life using the same deepfake technology that placed Steve Buscemi’s face onto Jennifer Lawrence in a rather disturbing viral video.
Dalí Lives, the latest project from the Dalí Museum in Florida, is being marketed as an ‘avant-garde experience’ designed to celebrate the 30th anniversary of the surrealist’s death. Goodby Silverstein & Partners were brought on board to commemorate the occasion; the San Franciscan agency decided resurrection was the route to go down.
Paul Pflug, Melissa Zukerman, and Hans-Dieter Kopal of Principal Communications have teamed with leading cyber research and security firm Edgeworth to form Foresight Solutions Group — a “reputation-management” entity that will use advanced technology, former FBI data analysts, and old-fashioned crisis-management skills to advise individuals and companies in an age where old tweets can bring down an Oscar host or jeopardize a billion-dollar superhero franchise.
Foresight’s tech team will also work proactively to squash erroneous and damaging social media content in the public sphere and on the dark web — like the unsettling rise of fake videos (or “deepfakes”) that have targeted stars like “Game of Thrones” lead Emilia Clarke with manufactured, but photo-realistic pornographic images.
Advances in machine learning are making it easy to create fake videos—popularly known as “deepfakes”—where people appear to say and do things they never did. For example, a faked video of Barack Obama went viral in April in which he appears to warn viewers about misinformation. Falsehoods already spread farther than the truth, and deepfakes are making it cheap and easy for anyone to create fake videos. When convincing fakes become commonplace, the public will also start to distrust real video evidence, especially when it does not match their biases. Unfortunately, the technology that enables deepfakes is advancing rapidly. Deepfakes will become easier to create, and humans will increasingly struggle to distinguish fake videos from real ones. Luckily, there is some hope that algorithms may be able to automatically detect deepfakes. Computer scientists have generally struggled to automate fact checking. However, early research suggests that fake videos may be an exception.
Axios reports that several important legislators have proposed new criminal laws banning the creation or distribution of so-called "deepfakes," computer generated videos that make it seem like someone did something they didn't actually do. The technological ability to create deepfakes has caused a lot of justifiable concern. But I wanted to express some skepticism about the current round of proposed new criminal laws.
When Google announced the Google News Initiative in March 2018, it pledged to release datasets that would help “advance state-of-the-art research” on fake audio detection — that is, clips generated by AI intended to mislead or fool voice authentication systems. Today, it’s making good on that promise.
The Google News team and Google’s AI research division, Google AI, have teamed up to produce a corpus of speech containing “thousands” of phrases spoken by the Mountain View company’s text-to-speech models. Phrases drawn from English newspaper articles are spoken by 68 different synthetic voices, which cover a variety of regional accents.
In just a few short months, "deep fakes" have struck fear into technology experts and lawmakers. Already there are legislative proposals, a law review article, national security commentaries, and dozens of opinion pieces claiming that this new deep fake technology — which uses artificial intelligence to produce realistic-looking simulated videos — will spell the end of truth in media as we know it.
But will that future come to pass?
At this year's Worldwide Threats hearing before the US Senate's Select Committee on Intelligence, the leaders of the country's top intelligence agencies, including the National Security Agency, the CIA and the FBI, again pointed at tech issues as their biggest worry.
The Tuesday hearing covered issues like weapons of mass destruction, terrorism, and organized crime, but technology's problems took center stage. That echoes last year's hearing, when officials flagged cybersecurity as their greatest concern, after major hacks like the NotPetya attack, which cost billions in damages. But concerns over technology aren't limited to cyberattacks: Lawmakers also brought up deepfakes, artificial intelligence, disinformation campaigns on social media, and the vulnerability of internet of things devices.
WHAT KINDS OF DAMAGE COULD DEEPFAKES CAUSE IN GLOBAL MARKETS OR INTERNATIONAL AFFAIRS?
Deepfakes could incite political violence, sabotage elections, and unsettle diplomatic relations. Earlier this year, for instance, a Belgian political party published a deepfake on Facebook that appeared to show U.S. President Donald Trump criticizing Belgium’s stance on climate change. The unsophisticated video was relatively easy to dismiss, but it still provoked hundreds of online comments expressing outrage that the U.S. president would interfere in Belgium’s internal affairs.
Inside the Pentagon’s race against deepfake videos
Advances in artificial intelligence could soon make creating convincing fake audio and video – known as “deepfakes” – relatively easy. Making a person appear to say or do something they did not has the potential to take the war of disinformation to a whole new level.
The big mood these days is waiting on the tech apocalypse. All it takes is a video of a humanoid robot displaying the motor skills of a 6-year-old to have people preparing for Skynet to kill us all. The same goes, perhaps even more so, for fears of “deepfakes”: software getting good enough that anybody with an iPhone can fabricate a video capable of sparking a riot. Seeing computers convincingly putting words in the mouths of presidents is scary, and once a Macedonian teenager can do it in minutes it’s game over, so the thinking goes.
But if the last few years — and yes, the particularly hellish last few days — have taught us anything, it’s that fake video isn’t going to destroy our ability to see the truth. It’s the real video we need to worry about, and our true problem is that we can all see the very same thing and disagree on what it was.
I’m waiting for the day a criminal defendant escapes conviction by claiming that perfectly legitimate visual evidence was faked. When that happens — and it’s only a matter of time — run for cover.
John Tariot interviews Hany Farid, professor at the Berkeley School of Information, where he focuses on digital forensics, image analysis, and human perception. He is one of the subjects in the New Yorker article, “In the Age of A.I., Is Seeing Still Believing?” The two have been having an ongoing conversation about the impact deepfakes will have on archives, and they continue the discussion here on some of the issues raised at the Association of Moving Image Archivists conference session “Everything in Your Archive is Now Fake.”
In the darker corners of the Internet, you can now find celebrities like Emma Watson and Salma Hayek performing in pornographic videos. The clips are fake, of course—but it’s distressingly hard to tell. Recent improvements in artificial intelligence software have made it surprisingly easy to graft the heads of stars, and ordinary women, to the bodies of X-rated actresses to create realistic videos.
These explicit movies are just one strain of so-called “deepfakes,” which are clips that have been doctored so well they look real. Their arrival poses a threat to democracy; mischief makers can, and already have, used them to spread fake news. But another great danger of deepfakes is their use as a tool to harass and humiliate.
Deepfakes would, I believed, usher in an infopocalypse: a new world where commonly held reality fell apart, and chaos reigned. But then something interesting happened – or rather, didn’t.
“Deepfake” creators are making disturbingly realistic, computer-generated videos with photos taken from the Web, and ordinary women are suffering the damage.
How fake-porn opponents are fighting back: The best hope for fighting computer-generated fake-porn videos might come from a surprising source: the artificial intelligence software itself.
Technical experts and online trackers say they are developing tools that could automatically spot these “deepfakes” by using the software’s skills against it, deploying image-recognition algorithms that could help detect the ways their imagery bends belief.
A political organization endorsed by former U.S. Vice President Joe Biden is concerned so-called “deepfakes” could be a threat to democracy.
It developed an online quiz to see whether people found an AI-generated Trump impersonator more convincing than actors and comedians.
The next step for the foundation is building deepfake-detection software, rolling it out to journalists, and educating the public about the technology.
The rise of authoritarianism has coincided with the proliferation of “deepfakes” — realistic videos created with artificial intelligence software. This has frightening implications for journalism.
Note: discussion of techniques currently used by journalists
The work of the fact-checker is perpetually evolving. As tactics of spreading disinformation are exposed and countered, perpetrators continuously innovate new ways of distributing falsehoods and distorted narratives. Fact-checkers must contend with finding efficient ways of verifying information in the present, while actively preparing for the information environment of the future.
In this vein, “deepfakes” — the use of recent breakthroughs in artificial intelligence to create believable fakes in images, audio, and video — have raised concerns throughout the past year.
Artificial intelligence has been used to create hyper-realistic portrait photographs of men, women, and children of different races who never existed, prompting one author to declare the “end of photography as evidence.”
Award-winning British artist Gillian Wearing created a deep fake video of herself as part of her exhibition at the Cincinnati Art Museum.
Experts fear that in the wrong hands, deepfakes could become the next frontier in fake news – and spark very real consequences.
To help detect a DeepFake video, look at the eyes. Siwei Lyu discusses the battle against DeepFakes. Lyu is an associate professor of computer science at the University at Albany, part of the State University of New York system. A transcript of this podcast can be found here.
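Lyu's published work flagged unnatural eye blinking as a tell in early deepfakes. A standard per-frame signal used in blink analysis is the eye aspect ratio (EAR), which drops sharply when an eye closes. The sketch below is only an illustration of that one signal, not Lyu's full detector, and the landmark coordinates are hypothetical; a real pipeline would get them from a facial-landmark model and track the ratio across frames.

```python
import math

def ear(eye) -> float:
    """Eye aspect ratio from six (x, y) landmarks ordered around the eye:
    p1/p4 are the corners, p2/p3 the upper lid, p5/p6 the lower lid.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it falls toward zero
    as the lids close."""
    p1, p2, p3, p4, p5, p6 = eye
    d = math.dist
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

# Hypothetical landmark sets for an open eye and a blink frame.
open_eye   = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]

print(ear(open_eye) > 0.3)     # True: open eyes sit well above a blink threshold
print(ear(closed_eye) < 0.25)  # True: a blink frame dips below it
```

A detector built on this idea would count how often EAR dips below a threshold over a stretch of video; a face that never blinks at a human rate is suspect. Recent deepfake generators have largely learned to blink, so this cue is one signal among many rather than a silver bullet.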
Generative adversarial networks, or GANs, are fueling creativity—and controversy. Here’s how they work. By Karen Hao
This panel identifies guidelines tech companies can follow to limit their negative use and offer views on how governments should react to deep fakes, if at all. With speakers Robert Chesney, James Baker Chair in Law, University of Texas at Austin, Aviv Ovadya, Chief Technologist, Center for Social Media Responsibility, University of Michigan, and Laura M. Rosenberger, Senior Fellow and Director, Alliance for Securing Democracy, German Marshall Fund of the United States
Advances in digital imagery could deepen the fake-news crisis—or help us get out of it. Interview with Hany Farid
After a public outcry over privacy and their inability — or unwillingness — to address misleading content, Facebook, Twitter, and other social media platforms finally appear to be making a real effort to take on fake news. But manipulative posts from perpetrators in Russia or elsewhere may soon be the least of our problems. What looms ahead won’t just impact our elections. It will impact our ability to trust just about anything we see and hear.
“It’s something archives have dealt with for centuries,” Yvonne Ng, a senior archivist at WITNESS, a nonprofit that focuses on collecting video evidence of human rights abuses, told Gizmodo. “The deepfake is a new spin on this process, but archives have always had to deal with forgeries or fakes or plagiarism—and even unintended damage and deterioration—and then having to determine the authenticity of objects with all of those considerations in mind.”
“Archival methods are not primarily about tools and the tech,” Ng noted. “Archival methods have always been more about having controlled and consistent policies and rules.” She said that descriptions of information and its metadata are a major part of archival work: in other words, documenting the context of the content.
“Ultimately, the greatest protection archives offer against the distortion of history may be their careful documentation of previous errors. By supporting archiving projects, we not only ensure that the past is preserved accurately, but create a guide for the future by chronicling the long relationship between media and deception.” - Melanie Ehrenkranz