Film Video Digital

603-643-2627

Forbes: Deepfake: What You Need To Know

The word deepfake has been around for only a couple of years. It is a combination of “deep learning” – a subset of AI that uses neural networks – and “fake.” The upshot is that it is now possible to manipulate video in ways that still look authentic.

During the past couple of weeks, we have seen high-profile examples of this. There was a deepfake of Facebook’s Mark Zuckerberg in which he seemed to be talking about world domination. Then there was another of House Speaker Nancy Pelosi, in which she appeared to be slurring her speech (this one actually used less sophisticated technology, known as a “cheapfake”).

Read More

Vice: The Mark Zuckerberg Deepfakes Are Forcing Facebook to Fact Check Art

On Tuesday, Motherboard reported that a group of artists and machine learning engineers posted a deepfake of Mark Zuckerberg to Instagram, making it look like he gave an ominous speech about the power the social network gets from collecting user data.

According to Facebook, the video was flagged by two of its fact checking partners, which prompted Facebook to limit its distribution on its platforms. This process suggests that Facebook has the ability to mitigate the virality of a doctored video that aims to spread misinformation, at least once it's highlighted by a news publication.

But the Zuckerberg deepfake is not part of a malicious misinformation campaign. It's art criticizing the CEO of one of the most influential companies in the world, and now that company appears to be suppressing distribution of that work. It raises a complicated question: How is Facebook supposed to fact-check art?

Read More

MIT Technology Review: Deepfakes may be a useful tool for spies

A spy may have used an AI-generated face to deceive and connect with targets on social media.

The news: A LinkedIn profile under the name Katie Jones has been identified by the AP as a likely front for AI-enabled espionage. The persona is networked with several high-profile figures in Washington, including a deputy assistant secretary of state, a senior aide to a senator, and an economist being considered for a seat on the Federal Reserve. But what’s most fascinating is the profile image: it demonstrates all the hallmarks of a deepfake, according to several experts who reviewed it.

Read More

Pew Research Center: About three-quarters of Americans favor steps to restrict altered videos and images

The proliferation of altered videos – some of which are known as “deepfakes” – has sparked concern in recent years, particularly as a growing share of Americans now get news from online video-sharing sites such as YouTube. Overall, the U.S. public sees altered videos and images as a major problem and believes action should be taken to stop them, a recent Pew Research Center survey found.

Nearly two-thirds of Americans (63%) say made-up or altered videos and images create a great deal of confusion about the facts of current issues and events, with another 27% saying they create some confusion, according to the survey, conducted Feb. 19-March 4, 2019. Just one-in-ten say these videos and images create not much confusion or none at all.

Read More

CBS News: From deepfake to "cheap fake," it's getting harder than ever to tell what's true on your favorite apps and websites

In early 2018 a video that appeared to feature former President Obama discussing the dangers of fake news went viral. The clip, created by comedian Jordan Peele, foreshadowed challenges that have now become all too real. These days, tech firms, media companies and consumers are all routinely forced to make determinations about whether content is authentic or fake, and it's increasingly hard to tell the difference.

Read More

Bloomberg: The Dangers of Deepfakes

Deepfakes, a technique that uses AI to superimpose misleading or false imagery onto video, are showing up more often and have lawmakers worried. Rep. Adam Schiff says "now is the time for social-media policies to protect users from misinformation." Bloomberg's Ben Brody joins Emily Chang on "Bloomberg Technology" to discuss.

Read More

Mother Jones: Deepfakes Could Finally Bring Accountability to Big Tech Companies

Long-frustrated efforts to crack down on harmful content on social media sites may finally get some momentum thanks to an unlikely source: counterfeit videos known as “deepfakes” that experts worry could undermine democracies around the world.

The House Intelligence Committee held its first-ever hearing on deepfakes Thursday, probing the implications of AI technology that can create realistic, counterfeit videos. At the hearing, members of Congress, including committee chairman Adam Schiff (D-Calif.), argued that deepfakes could be justification for altering laws that exempt technology companies from legal liability for harmful content on their platforms.

Read More

CNET: Deepfake debunking tool may protect presidential candidates. For now. Sometimes

Deepfakes of world leaders may be easier to debunk ahead of the 2020 US presidential election with a new detection method. The rest of us are still out of luck, though. And the technique works only for a specific style of talking-head-style deepfake. Oh, and it's only a matter of time before manipulators figure out how to dodge this kind of detection, too.

As the researchers behind the breakthrough put it: Democracy, national security and society are at stake.

The researchers, who outlined the new technique in an academic paper Wednesday, created profiles of the unique expressions and head movements made by powerful people talking, such as Donald Trump, Hillary Clinton, Barack Obama and US presidential hopeful Elizabeth Warren. This "soft biometric model" helped detect a range of deepfakes, the kind of manipulated videos powered by artificial intelligence that have sprung up lately featuring Mark Zuckerberg, Kim Kardashian and others.
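The core idea of a "soft biometric model" is that each speaker has a characteristic profile of correlated facial expressions and head movements, and a clip whose profile drifts too far from the authentic one is suspect. A minimal sketch of that comparison step, with entirely made-up feature vectors and a hypothetical `looks_like` helper (the real work extracts many correlation features from hours of footage, which this does not attempt):

```python
import math

# Hypothetical feature vectors: each entry stands for a measured
# correlation between a pair of facial expressions / head movements
# over a video clip. All numbers here are invented for illustration.

def cosine_distance(a, b):
    """1 - cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def looks_like(reference, candidate, threshold=0.1):
    """Flag a clip as consistent with the reference profile only if its
    mannerism vector stays within `threshold` of the authentic profile."""
    return cosine_distance(reference, candidate) <= threshold

# Profile learned from authentic footage of one speaker.
reference = [0.8, 0.1, 0.5, 0.9]
authentic_clip = [0.78, 0.12, 0.49, 0.88]  # close to the profile
deepfake_clip = [0.1, 0.9, 0.2, 0.3]       # mannerisms don't match

print(looks_like(reference, authentic_clip))  # True
print(looks_like(reference, deepfake_clip))   # False
```

The appeal of this approach, as the article notes, is also its limit: it only protects people for whom such a reference profile has been built, and only against deepfakes that fail to mimic those mannerisms.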

Read More

MIT Technology Review: Training a single AI model can emit as much carbon as five cars in their lifetimes

The artificial-intelligence industry is often compared to the oil industry: once mined and refined, data, like oil, can be a highly lucrative commodity. Now it seems the metaphor may extend even further. Like its fossil-fuel counterpart, the process of deep learning has an outsize environmental impact.

In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).

Read More

MIT Technology Review: You can train an AI to fake UN speeches in just 13 hours

Deep-learning techniques have made it easier and easier for anyone to forge convincing misinformation. But just how easy? Two researchers at the United Nations decided to find out.

In a new paper, they used only open-source tools and data to show how quickly they could get a fake UN speech generator up and running. They used a readily available language model that had been trained on text from Wikipedia and fine-tuned it on all the speeches given by political leaders at the UN General Assembly from 1970 to 2015. Thirteen hours and $7.80 later (spent on cloud computing resources), their model was spitting out realistic speeches on a wide variety of sensitive and high-stakes topics from nuclear disarmament to refugees.
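The researchers fine-tuned a pretrained neural language model, which is not reproduced here. As a toy illustration of the underlying idea – learning a text generator from a corpus of speeches – a word-level Markov chain can be sketched (the corpus, function names, and seed word below are all invented for illustration and do not reflect the paper's method or data):

```python
import random
from collections import defaultdict

# Tiny stand-in corpus for speech transcripts.
corpus = (
    "the general assembly calls for nuclear disarmament . "
    "the general assembly calls for protection of refugees . "
    "member states support nuclear disarmament ."
)

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, seed, length=10, rng=None):
    """Walk the chain from `seed`, sampling a successor at each step."""
    rng = rng or random.Random(0)  # fixed seed for repeatability
    out = [seed]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

model = build_model(corpus)
print(generate(model, "the"))  # a short pseudo-speech fragment
```

A model this crude only parrots short phrases from its corpus; the point of the UN experiment is that swapping it for an off-the-shelf pretrained language model closes most of the quality gap for very little time and money.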

Read More