News at the Intersection of Synthetic Media & Moving Image Archives
June 17, 2019
It is now possible to take a talking-head-style video and add, delete, or edit the speaker's words as simply as you'd edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to.
The word deepfake has been around for only a couple of years. It is a combination of “deep learning” – a subset of AI that uses neural networks – and “fake.” The term refers to manipulated videos that still look authentic.
During the past couple of weeks, we have seen high-profile examples of this. There was a deepfake of Facebook’s Mark Zuckerberg in which he seemed to be talking about world domination. Then there was another of House Speaker Nancy Pelosi, in which it appeared she was slurring her speech (this one actually used less sophisticated technology and is better described as a “cheapfake”).
Researchers at Adobe and UC Berkeley are collaborating on a method to detect facial manipulations made to digital photos in Photoshop. While development is in the early stages, it is part of a broader effort across Adobe to better detect image, video, audio, and document manipulations in today's landscape of fake news.
On Tuesday, Motherboard reported that a group of artists and machine learning engineers posted a deepfake of Mark Zuckerberg to Instagram, making it look like he gave an ominous speech about the power the social network gets from collecting user data.
According to Facebook, the video was flagged by two of its fact checking partners, which prompted Facebook to limit its distribution on its platforms. This process suggests that Facebook has the ability to mitigate the virality of a doctored video that aims to spread misinformation, at least once it's highlighted by a news publication.
But the Zuckerberg deepfake is not part of a malicious misinformation campaign. It's art criticizing the CEO of one of the most influential companies in the world, and now that company appears to be suppressing distribution of that work. It raises a complicated question: How is Facebook supposed to fact-check art?
Elevation Partners co-founder Roger McNamee discusses Facebook Inc.'s efforts to fight hate speech and misinformation. He speaks with Bloomberg's Emily Chang on "Bloomberg Technology."
A spy may have used an AI-generated face to deceive and connect with targets on social media.
The news: A LinkedIn profile under the name Katie Jones has been identified by the AP as a likely front for AI-enabled espionage. The persona is networked with several high-profile figures in Washington, including a deputy assistant secretary of state, a senior aide to a senator, and an economist being considered for a seat on the Federal Reserve. But what’s most fascinating is the profile image: it demonstrates all the hallmarks of a deepfake, according to several experts who reviewed it.
Deepfakes are becoming more and more realistic. With the fear of ‘fake news’ growing, we need a means of verifying what we see. Blockchain technology provides us with the single best solution.
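The excerpt gestures at how such a scheme would work: compute a cryptographic fingerprint of a video at publication time, record it on an append-only ledger, and later check any circulating copy against it. Below is a minimal sketch of that idea in Python; the dict standing in for the ledger, the function names, and the sample bytes are all illustrative, not any particular product's API.

```python
import hashlib
from typing import Optional

ledger = {}  # stand-in for an append-only blockchain ledger

def fingerprint(video_bytes: bytes) -> str:
    """SHA-256 digest of the raw video file."""
    return hashlib.sha256(video_bytes).hexdigest()

def register(video_bytes: bytes, source: str) -> str:
    """Record the digest at publication time, keyed to its source."""
    digest = fingerprint(video_bytes)
    ledger[digest] = source
    return digest

def verify(video_bytes: bytes) -> Optional[str]:
    """Look up a copy's digest: any edit changes the digest, so a
    doctored clip will not match the registered fingerprint."""
    return ledger.get(fingerprint(video_bytes))

original = b"...raw bytes of the original broadcast..."  # placeholder
register(original, "newsroom upload, 2019-06-17")
print(verify(original))                # "newsroom upload, 2019-06-17"
print(verify(original + b"tampered"))  # None: the copy has been altered
```

The limitation is worth noting: a matching fingerprint only proves a file is unchanged since registration, not that the registered footage was truthful in the first place.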
The proliferation of altered videos – some of which are known as “deepfakes” – has sparked concern in recent years, particularly as a growing share of Americans now get news from online video-sharing sites such as YouTube. Overall, the U.S. public sees altered videos and images as a major problem and believes action should be taken to stop them, a recent Pew Research Center survey found.
Nearly two-thirds of Americans (63%) say made-up or altered videos and images create a great deal of confusion about the facts of current issues and events, with another 27% saying they create some confusion, according to the survey, conducted Feb. 19-March 4, 2019. Just one-in-ten say these videos and images create not much confusion or none at all.
A hilarious new video takes a funeral scene from the final season of "Game of Thrones" and turns it into a fan apology. "I'm sorry we wasted your time," Jon Snow says in the video, spotted first by The Daily Dot. "I'm sorry we didn't learn anything from the ending of 'Lost.'"
Fighting the spread of malicious deepfakes will require a two-pronged attack by both the government and tech industry, and could potentially involve the use of offensive cyberweapons, tech and national security experts told Congress.
"Deepfakes" have changed the idea that seeing is believing - and could have a huge impact on how future political campaigns unfold.
In early 2018 a video that appeared to feature former President Obama discussing the dangers of fake news went viral. The clip, created by comedian Jordan Peele, foreshadowed challenges that have now become all too real. These days, tech firms, media companies and consumers are all routinely forced to make determinations about whether content is authentic or fake – and it's increasingly hard to tell the difference.
Deepfakes – a technique that uses AI to superimpose misleading or false imagery onto video – are showing up more often and have lawmakers worried. Rep. Adam Schiff says "now is the time for social-media policies to protect users from misinformation." Bloomberg's Ben Brody joins Emily Chang on "Bloomberg Technology" to discuss.
Intelligencer staffers Brian Feldman, Benjamin Hart, and Max Read discuss how dangerous manipulated videos really are.
Long-frustrated efforts to crack down on harmful content on social media sites may finally get some momentum thanks to an unlikely source: counterfeit videos known as “deepfakes” that experts worry could undermine democracies around the world.
The House Intelligence Committee held its first-ever hearing on deepfakes Thursday, probing the implications of AI technology that can create realistic, counterfeit videos. At the hearing, members of Congress, including committee chairman Adam Schiff (D-Calif.), argued that deepfakes could be justification for altering laws that exempt technology companies from legal liability for harmful content on their platforms.
Russian, Chinese and other actors both foreign and domestic could flood the 2020 election and the broader political landscape with sophisticated "deepfake" digital forgeries, lawmakers and researchers cautioned Thursday, warnings that arrive as questions mount about whether campaigns and Silicon Valley firms are prepared to ward off a swarm of phony footage.
Top artificial-intelligence researchers across the country are racing to defuse an extraordinary political weapon: computer-generated fake videos that could undermine candidates and mislead voters during the 2020 presidential campaign.
And they have a message: We’re not ready.
Deepfakes of world leaders may be easier to debunk ahead of the 2020 US presidential election with a new detection method. The rest of us are still out of luck, though. And the technique works only for a specific style of talking-head deepfake. Oh, and it's only a matter of time before manipulators figure out how to dodge this kind of detection, too.
As the researchers behind the breakthrough put it: Democracy, national security and society are at stake.
The researchers, who outlined the new technique in an academic paper Wednesday, created profiles of the unique expressions and head movements made by powerful people talking, such as Donald Trump, Hillary Clinton, Barack Obama and US presidential hopeful Elizabeth Warren. This "soft biometric model" helped detect a range of deepfakes, the kind of manipulated videos powered by artificial intelligence that have sprung up lately featuring Mark Zuckerberg, Kim Kardashian and others.
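As a rough illustration of how such a "soft biometric model" could be used once features are in hand, here is a hedged sketch: fit a one-class classifier on mannerism features from authentic clips of a single speaker, then flag clips that fall outside that profile. The feature extraction itself (facial action units, head pose over time) is assumed to come from an external tool, and the arrays below are random placeholders, not the researchers' data or exact method.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Placeholder features: one row per video clip, each column a statistic
# such as the correlation between two facial-movement time series.
authentic_clips = rng.normal(loc=0.5, scale=0.1, size=(200, 20))
suspect_clip = rng.normal(loc=0.0, scale=0.3, size=(1, 20))

# Fit a one-class model on authentic footage of a single speaker only;
# anything far from that distribution is flagged as a possible fake.
profile = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
profile.fit(authentic_clips)

print(profile.predict(suspect_clip))  # -1 = outside the profile, +1 = consistent
```

The appeal of this design is that it needs no examples of fakes at training time: the model learns only what one person's genuine mannerisms look like.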
The artificial-intelligence industry is often compared to the oil industry: once mined and refined, data, like oil, can be a highly lucrative commodity. Now it seems the metaphor may extend even further. Like its fossil-fuel counterpart, the process of deep learning has an outsize environmental impact.
In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).
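A quick back-of-the-envelope check of that comparison (626,000 lb is the training figure reported above; 126,000 lb is the paper's reference value for an average car's lifetime emissions, fuel and manufacture included):

```python
# Verify the "nearly five times" claim with the paper's two figures.
model_training_lbs = 626_000   # CO2-equivalent for training one large model
car_lifetime_lbs = 126_000     # average car, lifetime incl. manufacture
print(model_training_lbs / car_lifetime_lbs)  # ~4.97, i.e. nearly five cars
```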
Deep-learning techniques have made it easier and easier for anyone to forge convincing misinformation. But just how easy? Two researchers at the United Nations decided to find out.
In a new paper, they used only open-source tools and data to show how quickly they could get a fake UN speech generator up and running. They used a readily available language model that had been trained on text from Wikipedia and fine-tuned it on all the speeches given by political leaders at the UN General Assembly from 1970 to 2015. Thirteen hours and $7.80 later (spent on cloud computing resources), their model was spitting out realistic speeches on a wide variety of sensitive and high-stakes topics from nuclear disarmament to refugees.
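For a sense of how little machinery that recipe requires, here is a minimal sketch of the same approach using the Hugging Face transformers library, with GPT-2 standing in for a readily available Wikipedia-pretrained model. The corpus path and hyperparameters are placeholders, and this is not the researchers' exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "un_speeches.txt" is a hypothetical file of General Assembly speeches.
text = open("un_speeches.txt", encoding="utf-8").read()
ids = tokenizer(text, return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
block = 512  # fine-tune on fixed-length chunks of the speech corpus
model.train()
for start in range(0, ids.size(1) - block, block):
    chunk = ids[:, start:start + block]
    loss = model(chunk, labels=chunk).loss  # causal LM loss on the chunk
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Sample a "speech" from a prompt after fine-tuning.
prompt = tokenizer("The General Assembly must act on nuclear disarmament",
                   return_tensors="pt").input_ids
out = model.generate(prompt, max_length=120, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The point of the researchers' experiment survives intact in the sketch: the expensive pretraining is already done and freely downloadable, so the fine-tuning step is where the thirteen hours and $7.80 go.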
OTTAWA — Barack Obama says he’s concerned “deepfake” videos will have real-life consequences, messing with people’s abilities to sort fact from fiction.
The former U.S. president told an Ottawa audience Friday evening he’s seen fake videos bearing his likeness, powered by artificial intelligence, modelling his voice and movements.
He explained that part of the problem is that the human brain hasn’t adapted quickly enough to process the onslaught of information readily available on multiple platforms, and A.I. is only going to make things worse – especially for democracies, he said.
“The marketplace of ideas that is the basis of our democratic practice has difficulty working if we don’t have some common baseline of what’s true and what’s not.”
As we await the inevitable post-truth, technocratic dystopia that we’re headed towards, though, we should all take a moment to admire the novel uses a group of Russian researchers have found for deepfakes (they call them “talking head models”).
Most notably, researchers from Moscow’s Samsung AI Center and Skolkovo Institute of Science and Technology have taken portraits such as da Vinci’s Mona Lisa and managed to make them move as if they’re (pretty much) real people talking in the present day.
Researchers from the Tandon School of Engineering at New York University are developing methods that use artificial intelligence and “digital watermarks” to identify when an image has been altered, helping to detect deepfakes.
Concerns about malicious use of those advances have given rise to a debate about whether deepfakes could be used to undermine democracy. The concern is that a cleverly crafted deepfake of a public figure, perhaps imitating a grainy cell phone video so that its imperfections are overlooked, and timed for the right moment, could shape a lot of opinions. That’s sparked an arms race to automate ways of detecting them ahead of the 2020 elections. The Pentagon’s Darpa has spent tens of millions on a media forensics research program, and several startups are angling to become arbiters of truth as the campaign gets underway. In Congress, politicians have called for legislation banning their “malicious use.”
Last week, Samsung researchers announced a system that can create realistic deepfake video avatars from just one image. Around the same time, a doctored video surfaced of House Speaker Nancy Pelosi that had been slowed down to make her appear drunk. These two unsettling events, representing an impressive achievement by the Samsung team and a less sophisticated case of "malinformation" around Pelosi, bring the issue of AI-augmented deepfake videos starkly back into the limelight. Last year, deepfake videos dominated the headlines, culminating in deepfake celebrity pornography and a blanket ban by Reddit in August 2018.
Societies around the world are grappling with how best to respond to the rise of doctored videos. From the slowed playback of this week's Pelosi video to AI-produced “deep fakes,” the world of video is shedding the old motto that “seeing is believing.” This raises the question of whether these digital falsehoods, especially newer AI-created videos, represent something fundamentally new or merely a technological update of the age-old plague of false information. In particular, reflecting on Orson Welles’ infamous October 30, 1938 radio adaptation of The War of the Worlds reminds us that believable falsehoods have been with us since the early days of modern broadcast mass communication.
Imagine if someone could create a video of you simply by stealing your profile picture from Facebook or Instagram. Bad actors online do not have their hands on such a tool yet, but Samsung has now figured out a way to make it possible.
Deepfakes are fabricated clips that use large sets of images to make people appear to do or say things they never did. Only a few years ago, producing such a realistic-looking forgery required extensive work. Now Samsung has developed a new AI system that can generate a fake clip from just one picture.
One of the most difficult things about detecting manipulated photos, or "deepfakes," is that digital photo files aren't coded to be tamper-evident. But researchers from New York University's Tandon School of Engineering are starting to develop strategies that make it easier to tell if a photo has been altered, opening up a potential new front in the war on fakery.
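One family of strategies the article alludes to is making files tamper-evident at the moment of capture. The sketch below shows a classic fragile watermark, which is not necessarily NYU's method: hash the image's content bits and hide the hash in the pixels' least significant bits, so that any subsequent edit breaks the match. All names and the random test image are illustrative.

```python
import hashlib
import numpy as np

def hash_bits(img: np.ndarray) -> np.ndarray:
    """SHA-256 of the image with LSBs cleared, as a 256-bit 0/1 array."""
    digest = hashlib.sha256((img & 0xFE).tobytes()).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

def embed(img: np.ndarray) -> np.ndarray:
    """Write the content hash into the first 256 pixels' LSBs."""
    marked = img.copy().ravel()
    bits = hash_bits(img)
    marked[:256] = (marked[:256] & 0xFE) | bits
    return marked.reshape(img.shape)

def is_authentic(img: np.ndarray) -> bool:
    """Compare the embedded hash with one recomputed from the pixels."""
    return bool(np.array_equal(img.ravel()[:256] & 1, hash_bits(img)))

photo = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(photo)
print(is_authentic(marked))   # True: untouched since capture
marked[10, 10] ^= 0x80        # simulate an edit to a single pixel
print(is_authentic(marked))   # False: the content hash no longer matches
```

The watermark is deliberately fragile: unlike a robust copyright watermark, it is supposed to break under even a one-pixel change, which is exactly what makes it useful as tamper evidence.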
The 2020 presidential campaigns appear to have done little to prepare for what experts predict could be a flood of fake videos depicting candidates doing or saying something incriminating or embarrassing.
The House Intelligence Committee has slated a hearing in June that will examine a series of national security matters, including the threat of videos manipulated by artificial intelligence (AI) that look strikingly real, a panel aide said.
The congressional hearing on June 13 will be one of the first to primarily focus on so-called deepfakes, which experts and lawmakers say pose a major disinformation threat heading into the 2020 election.