News at the Intersection of Synthetic Media & Moving Image Archives
January 16, 2019
John Tariot interviews Hany Farid, professor at the Berkeley School of Information, where he focuses on digital forensics, image analysis, and human perception. He is one of the subjects of the New Yorker article “In the Age of A.I., Is Seeing Still Believing?” Farid and Tariot have been having an ongoing conversation about the impact deepfakes will have on archives, and they continue that discussion here, taking up some of the issues raised at the Association of Moving Image Archivists conference session “Everything in Your Archive is Now Fake.”
In the darker corners of the Internet, you can now find celebrities like Emma Watson and Salma Hayek performing in pornographic videos. The clips are fake, of course—but it’s distressingly hard to tell. Recent improvements in artificial intelligence software have made it surprisingly easy to graft the heads of stars, and of ordinary women, onto the bodies of X-rated actresses to create realistic videos.
These explicit movies are just one strain of so-called “deepfakes,” which are clips that have been doctored so well they look real. Their arrival poses a threat to democracy; mischief makers can, and already have, used them to spread fake news. But another great danger of deepfakes is their use as a tool to harass and humiliate.
Deepfakes would, I believed, usher in an infopocalypse: a new world where commonly held reality fell apart, and chaos reigned. But then something interesting happened – or rather, didn’t.
“Deepfake” creators are making disturbingly realistic, computer-generated videos with photos taken from the Web, and ordinary women are suffering the damage.
How fake-porn opponents are fighting back: The best hope for fighting computer-generated fake-porn videos might come from a surprising source: the artificial intelligence software itself.
Technical experts and online trackers say they are developing tools that could automatically spot these “deepfakes” by using the software’s skills against it, deploying image-recognition algorithms that could help detect the ways their imagery bends belief.
A political organization endorsed by former U.S. Vice President Joe Biden is concerned so-called “deepfakes” could be a threat to democracy.
It developed an online quiz to see whether people found an AI-generated Trump impersonator more convincing than actors and comedians.
The next step for the foundation is building deepfake-detection software, rolling it out to journalists, and educating the public about the technology.
The rise of authoritarianism has coincided with the proliferation of “deepfakes” — realistic videos created with artificial intelligence software. This has frightening implications for journalism.
Note: discussion of techniques currently used by journalists
The work of the fact-checker is perpetually evolving. As tactics of spreading disinformation are exposed and countered, perpetrators continuously innovate new ways of distributing falsehoods and distorted narratives. Fact-checkers must contend with finding efficient ways of verifying information in the present, while actively preparing for the information environment of the future.
In this vein, “deepfakes” — the use of recent breakthroughs in artificial intelligence to create believable fakes in images, audio, and video — have raised concerns throughout the past year.
Artificial intelligence has been used to create hyper-realistic portrait photographs of men, women, and children of different races who never existed, prompting one author to declare the “end of photography as evidence.”
Award-winning British artist Gillian Wearing created a deepfake video of herself as part of her exhibition at the Cincinnati Art Museum.
Experts fear that in the wrong hands, deepfakes could become the next frontier in fake news – and spark very real consequences.
To help detect a DeepFake video, look at the eyes. Siwei Lyu discusses the battle against DeepFakes. Lyu is an associate professor of computer science at the University at Albany, part of the State University of New York system. A transcript of this podcast can be found here.
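The eye cue Lyu describes rests on a simple observation: early deepfake generators were trained mostly on still photos, in which eyes are open, so generated faces blink far less often than real ones. As a hedged sketch (assuming a facial-landmark detector such as dlib has already produced a per-frame “eye aspect ratio,” the ratio of eye height to eye width, which collapses toward zero during a blink), a detector could count blinks and flag suspiciously low blink rates:

```python
def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks: runs of >= min_frames consecutive frames with a low eye aspect ratio."""
    blinks = 0
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # a blink may end exactly at the clip's last frame
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_series, fps=30, min_blinks_per_minute=5):
    """Humans blink roughly 15-20 times per minute; far fewer is a red flag."""
    minutes = len(ear_series) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_minute

# Simulated clip: open eyes (EAR ~0.3) punctuated by brief blinks (EAR ~0.1).
real = ([0.3] * 50 + [0.1] * 3) * 20   # 20 blinks in ~35 seconds of video
fake = [0.3] * len(real)               # eyes never close
print(count_blinks(real))              # 20
print(blink_rate_suspicious(real))     # False
print(blink_rate_suspicious(fake))     # True
```

The threshold and frame counts here are illustrative, not Lyu’s published parameters; as Lyu also notes, once a cue like this becomes known, forgers can train on footage with blinking, so any single heuristic is temporary.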
Generative adversarial networks, or GANs, are fueling creativity—and controversy. Here’s how they work. By Karen Hao
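The mechanism behind GANs can be boiled down to two competing objectives: a generator tries to make the discriminator score its fakes as real, while the discriminator tries to tell real from fake. A minimal sketch of those standard loss functions (the probability values below are illustrative, not from a trained model):

```python
import math

def sigmoid(x):
    """Squash a raw score into a probability that an input is real."""
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_loss(d_real, d_fake):
    """D wants D(real) near 1 and D(fake) near 0: the usual cross-entropy objective."""
    return -math.log(d_real) - math.log(1.0 - d_fake)

def generator_loss(d_fake):
    """G wants D(fake) near 1 (the non-saturating generator objective)."""
    return -math.log(d_fake)

# A sharp discriminator (real scored 0.9, fake scored 0.1) has low loss...
print(round(discriminator_loss(0.9, 0.1), 3))   # 0.211
# ...while one that is fully fooled, reduced to guessing 50/50, does worse:
print(round(discriminator_loss(0.5, 0.5), 3))   # 1.386
# The generator's loss falls exactly as the discriminator's job gets harder:
print(round(generator_loss(0.1), 3), round(generator_loss(0.9), 3))  # 2.303 0.105
```

In training, the two networks alternate gradient steps on these losses; the “controversy” in the headline comes from how convincing the generator’s output becomes once the discriminator can no longer beat chance.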
This panel identifies guidelines tech companies can follow to limit their negative use and offer views on how governments should react to deep fakes, if at all. With speakers Robert Chesney, James Baker Chair in Law, University of Texas at Austin, Aviv Ovadya, Chief Technologist, Center for Social Media Responsibility, University of Michigan, and Laura M. Rosenberger, Senior Fellow and Director, Alliance for Securing Democracy, German Marshall Fund of the United States
Advances in digital imagery could deepen the fake-news crisis—or help us get out of it. Interview with Hany Farid
After a public outcry over privacy and their inability — or unwillingness — to address misleading content, Facebook, Twitter, and other social media platforms finally appear to be making a real effort to take on fake news. But manipulative posts from perpetrators in Russia or elsewhere may soon be the least of our problems. What looms ahead won’t just impact our elections. It will impact our ability to trust just about anything we see and hear.
“It’s something archives have dealt with for centuries,” Yvonne Ng, a senior archivist at WITNESS, a nonprofit that focuses on collecting video evidence of human rights abuses, told Gizmodo. “The deepfake is a new spin on this process, but archives have always had to deal with forgeries or fakes or plagiarism—and even unintended damage and deterioration—and then having to determine the authenticity of objects with all of those considerations in mind.”
“Archival methods are not primarily about tools and the tech,” Ng noted. “Archival methods have always been more about having controlled and consistent policies and rules.” She said that descriptions of information and its metadata are a major part of archival work: in other words, documenting the context of the content.
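One concrete way archives make Ng’s “documenting the context of the content” tamper-evident is to pair descriptive metadata with fixity information, typically a checksum recorded at ingest. A minimal sketch (the record fields here are illustrative, not any real archival standard such as PREMIS):

```python
import hashlib
from datetime import datetime, timezone

def ingest_record(name, payload, description):
    """Build a minimal catalog record: description plus a SHA-256 fixity value."""
    return {
        "name": name,
        "description": description,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(payload).hexdigest(),
    }

def verify_fixity(record, payload):
    """Re-hash the object and compare: any later alteration changes the digest."""
    return hashlib.sha256(payload).hexdigest() == record["sha256"]

clip = b"\x00\x01raw-video-bytes"
record = ingest_record("interview.mov", clip, "Oral history interview, 2019")
print(verify_fixity(record, clip))          # True
print(verify_fixity(record, clip + b"!"))   # False
```

A checksum cannot say whether footage was authentic before ingest, but it does anchor what the archive actually received and when, which is the documentation Ng describes.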
“Ultimately, the greatest protection archives offer against the distortion of history may be their careful documentation of previous errors. By supporting archiving projects, we not only ensure that the past is preserved accurately, but create a guide for the future by chronicling the long relationship between media and deception.” – Melanie Ehrenkranz
Artificial intelligence is emerging as the next frontier in fake news — and it just might make you second-guess everything you see.
Once filmmakers have no need of human actors, expect more sequels, more lawsuits—and fewer opportunities for newcomers
Whenever a big debate comes up over an individual's likeness rights, one can expect Hollywood studios and the industry's performers to offer different visions about what's at stake — free expression or unfettered exploitation. Thus, it's no surprise that a bill currently before the New York legislature establishing a right of publicity for living and deceased individuals is drawing praise and fire from the usual suspects.
“What makes a deepfake in the first place?”
“The question raises a number of interesting issues: not only our difficulty in defining deepfakes, but the problems that could arise if the term is applied vaguely in the future. Could ‘deepfake’ become the next ‘fake news,’ for example: a phrase that once described a distinct phenomenon (people publishing fabricated news stories on social media for profit), but that has now been co-opted to discredit legitimate reporting?”
The digital manipulation of video may make the current era of “fake news” seem quaint.
SAG-AFTRA says it’s “fighting back” against the dangers posed by new face-swapping technologies that have been used to digitally superimpose the faces of its members onto the bodies of porn stars. In recent months, the technology – known as “deepfaking” – has hijacked the likenesses of several famous actresses and singers to make it appear that they were performing in pornographic films.
Thanks to AI-assisted software that lets you put one person’s face on another’s body, you could put yourself, or anyone else for that matter, into a porn video. That’s where things get complicated.