It is now possible to take a talking-head-style video and add, delete, or edit the speaker's words as simply as you'd edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to.
The word deepfake has been around for only a couple of years. It is a combination of “deep learning” – a subset of AI that uses neural networks – and “fake.” The technology makes it possible to manipulate videos so that they still look authentic.
Over the past couple of weeks, we have seen high-profile examples of this. There was a deepfake of Facebook’s Mark Zuckerberg in which he seemed to be talking about world domination. Then there was another of House Speaker Nancy Pelosi, in which it appeared she was slurring her speech (this one actually used a less sophisticated technique known as a “cheapfake”).
Researchers at Adobe and UC Berkeley are collaborating on a method to detect facial manipulations made to digital photos in Photoshop. While development is in the early stages, it is part of a broader effort across Adobe to better detect image, video, audio, and document manipulations in today's landscape of fake news.
On Tuesday, Motherboard reported that a group of artists and machine learning engineers posted a deepfake of Mark Zuckerberg to Instagram, making it look like he gave an ominous speech about the power the social network gets from collecting user data.
According to Facebook, the video was flagged by two of its fact checking partners, which prompted Facebook to limit its distribution on its platforms. This process suggests that Facebook has the ability to mitigate the virality of a doctored video that aims to spread misinformation, at least once it's highlighted by a news publication.
But the Zuckerberg deepfake is not part of a malicious misinformation campaign. It's art criticizing the CEO of one of the most influential companies in the world, and now that company appears to be suppressing distribution of that work. It raises a complicated question: How is Facebook supposed to fact-check art?
Elevation Partners co-founder Roger McNamee discusses Facebook Inc.'s efforts to fight hate speech and misinformation. He speaks with Bloomberg's Emily Chang on "Bloomberg Technology."
A spy may have used an AI-generated face to deceive and connect with targets on social media.
The news: A LinkedIn profile under the name Katie Jones has been identified by the AP as a likely front for AI-enabled espionage. The persona is networked with several high-profile figures in Washington, including a deputy assistant secretary of state, a senior aide to a senator, and an economist being considered for a seat on the Federal Reserve. But what’s most fascinating is the profile image: it demonstrates all the hallmarks of a deepfake, according to several experts who reviewed it.
Deepfakes are becoming more and more realistic. With fear of ‘fake news’ growing, we need a means of verifying authenticity. Blockchain technology provides us with the single best solution.
Pew Research Center: About three-quarters of Americans favor steps to restrict altered videos and images
The proliferation of altered videos – some of which are known as “deepfakes” – has sparked concern in recent years, particularly as a growing share of Americans now get news from online video-sharing sites such as YouTube. Overall, the U.S. public sees altered videos and images as a major problem and believes action should be taken to stop them, a recent Pew Research Center survey found.
Nearly two-thirds of Americans (63%) say made-up or altered videos and images create a great deal of confusion about the facts of current issues and events, with another 27% saying they create some confusion, according to the survey, conducted Feb. 19-March 4, 2019. Just one-in-ten say these videos and images create not much confusion or none at all.
Fighting the spread of malicious deepfakes will require a two-pronged attack by both the government and tech industry, and could potentially involve the use of offensive cyberweapons, tech and national security experts told Congress.
"Deepfakes" have changed the idea that seeing is believing - and could have a huge impact on how future political campaigns unfold.
CBS News: From deepfake to "cheap fake," it's getting harder than ever to tell what's true on your favorite apps and websites
In early 2018 a video that appeared to feature former President Obama discussing the dangers of fake news went viral. The clip, created by comedian Jordan Peele, foreshadowed challenges that have now become all too real. These days, tech firms, media companies and consumers are all routinely forced to make determinations about whether content is authentic or fake, and it's increasingly hard to tell the difference.
Deepfakes, a technique of using AI to superimpose misleading or false information onto images and video, are showing up more often and have lawmakers worried. Rep. Adam Schiff says "now is the time for social-media policies to protect users from misinformation." Bloomberg's Ben Brody joins Emily Chang on "Bloomberg Technology" to discuss.
Intelligencer staffers Brian Feldman, Benjamin Hart, and Max Read discuss how dangerous manipulated videos really are.
Long-frustrated efforts to crack down on harmful content on social media sites may finally get some momentum thanks to an unlikely source: counterfeit videos known as “deepfakes” that experts worry could undermine democracies around the world.
The House Intelligence Committee held its first-ever hearing on deepfakes Thursday, probing the implications of AI technology that can create realistic, counterfeit videos. At the hearing, members of Congress, including committee chairman Adam Schiff (D-Calif.), argued that deepfakes could be justification for altering laws that exempt technology companies from legal liability for harmful content on their platforms.
Russian, Chinese and other actors both foreign and domestic could flood the 2020 election and the broader political landscape with sophisticated "deepfake" digital forgeries, lawmakers and researchers cautioned Thursday, warnings that arrive as questions mount about whether campaigns and Silicon Valley firms are prepared to ward off a swarm of phony footage.
Top artificial-intelligence researchers across the country are racing to defuse an extraordinary political weapon: computer-generated fake videos that could undermine candidates and mislead voters during the 2020 presidential campaign.
And they have a message: We’re not ready.
Deepfakes of world leaders may be easier to debunk ahead of the 2020 US presidential election with a new detection method. The rest of us are still out of luck, though. And the technique works only for a specific style of talking-head deepfake. Oh, and it's only a matter of time before manipulators figure out how to dodge this kind of detection, too.
As the researchers behind the breakthrough put it: Democracy, national security and society are at stake.
The researchers, who outlined the new technique in an academic paper Wednesday, created profiles of the unique expressions and head movements made by powerful people talking, such as Donald Trump, Hillary Clinton, Barack Obama and US presidential hopeful Elizabeth Warren. This "soft biometric model" helped detect a range of deepfakes, the kind of manipulated videos powered by artificial intelligence that have sprung up lately featuring Mark Zuckerberg, Kim Kardashian and others.
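The general pattern behind such a soft biometric check (learn how a speaker's facial features co-vary while they talk, then flag clips that deviate from that profile) can be sketched with synthetic data. The feature names, the tolerance, and the data below are illustrative assumptions, not details from the researchers' paper:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length feature time series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def is_consistent(brow, nod, reference_corr, tol=0.4):
    """Accept a clip if the correlation between its two feature tracks
    falls within `tol` of the speaker's reference profile."""
    return abs(pearson(brow, nod) - reference_corr) <= tol

rng = random.Random(42)
# Synthetic "genuine" clip: the two features move together.
brow_real = [rng.gauss(0, 1) for _ in range(200)]
nod_real = [b + rng.gauss(0, 0.3) for b in brow_real]
# Synthetic "fake" clip: the features vary independently.
brow_fake = [rng.gauss(0, 1) for _ in range(200)]
nod_fake = [rng.gauss(0, 1) for _ in range(200)]

reference = pearson(brow_real, nod_real)  # the speaker's learned profile
print(is_consistent(brow_real, nod_real, reference))  # genuine clip
print(is_consistent(brow_fake, nod_fake, reference))  # decorrelated fake
```

The real system tracks many facial action units and head-pose signals at once; a single correlation is the smallest version of the idea that still flags a clip whose motion statistics don't match the person being impersonated.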
MIT Technology Review: Training a single AI model can emit as much carbon as five cars in their lifetimes
The artificial-intelligence industry is often compared to the oil industry: once mined and refined, data, like oil, can be a highly lucrative commodity. Now it seems the metaphor may extend even further. Like its fossil-fuel counterpart, the process of deep learning has an outsize environmental impact.
In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).
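The "nearly five cars" comparison can be sanity-checked from the headline figures. The per-car lifetime baseline of roughly 126,000 lbs CO2e, including manufacturing, is an assumption drawn from coverage of the same study:

```python
# Rough sanity check on the "nearly five cars" comparison.
# Both figures are assumptions taken from reporting on the UMass Amherst study.
TRAINING_EMISSIONS_LBS = 626_000  # CO2e for training one large model
CAR_LIFETIME_LBS = 126_000        # avg. American car, incl. manufacturing

ratio = TRAINING_EMISSIONS_LBS / CAR_LIFETIME_LBS
print(f"Training emits ~{ratio:.1f}x a car's lifetime emissions")
```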
Deep-learning techniques have made it easier and easier for anyone to forge convincing misinformation. But just how easy? Two researchers at the United Nations decided to find out.
In a new paper, they used only open-source tools and data to show how quickly they could get a fake UN speech generator up and running. They used a readily available language model that had been trained on text from Wikipedia and fine-tuned it on all the speeches given by political leaders at the UN General Assembly from 1970 to 2015. Thirteen hours and $7.80 later (spent on cloud computing resources), their model was spitting out realistic speeches on a wide variety of sensitive and high-stakes topics from nuclear disarmament to refugees.
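Reproducing the researchers' pipeline takes GPUs and the UNGA speech corpus, but the underlying idea (learn a distribution over text, then sample new text from it) can be illustrated with a deliberately simplified stand-in: a word-level Markov chain over an invented placeholder corpus. This is not the paper's neural language model, just the smallest generator that makes the train-then-sample loop concrete:

```python
import random
from collections import defaultdict

def train_markov(text, order=2):
    """Build a word-level Markov chain mapping each `order`-gram
    to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Sample a sequence by walking the chain from a random starting n-gram."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        nxt = chain.get(tuple(out[-len(key):]))
        if not nxt:  # dead end: no observed continuation
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# Invented placeholder text; the paper fine-tuned on real UNGA speeches (1970-2015).
corpus = (
    "the general assembly reaffirms its commitment to nuclear disarmament "
    "the general assembly calls upon all states to protect refugees "
    "the general assembly reaffirms its commitment to international peace"
)
chain = train_markov(corpus)
print(generate(chain, length=20, seed=0))
```

A neural model generalizes far beyond its training phrases where this toy can only recombine them, which is exactly why thirteen hours of fine-tuning was enough to produce speeches that read as new.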