Wednesday, June 26, 2019

Watching a Deepfake Being Made Is Boring, and You Must See It

AI-generated fake videos known as "deepfakes" are the quintessential modern tech nightmare. The technology is hard to understand, the videos can be highly convincing, and they can have horrifying repercussions, especially for women. And then there are the political implications of being able to create fake videos of lawmakers.

Despite the sizzle around the tech, watching a deepfake get made in real-time is incredibly, devastatingly boring. It is so deeply, mind-numbingly tedious, in fact, that you owe it to yourself to see exactly how one is made. That way, you'll have a deeper understanding of what's really behind the next deepfake you see in the wild.

YouTuber "Dan it all" is currently live streaming the creation of a deepfake of Linus Sebastian from the popular YouTube channel "Linus Tech Tips" and Louis Rossman, a popular repair YouTuber. To the viewer, that amounts to intently watching a computer desktop displaying some changing text in the Command Prompt box; some graphics card diagnostics; blurry faces as the program's inputs and outputs; and that's about it.

The stream also includes a camera view inside the PC itself, but nothing much happens there either, at least from a human viewer's perspective. The machine will quietly work like this for a week, according to the video description. It is absolutely worse than watching paint dry.

So, what's going on here? Deepfakes are made using a type of machine learning architecture known as a Generative Adversarial Network, or GAN. In very broad terms, GANs take a huge amount of data as input (say, videos or photos of a person) and "learn" to generate new examples that look like the real thing in a process referred to as "training."
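
For readers who want to see what "training" actually looks like under the hood, here is a minimal, hypothetical sketch of a GAN training loop written in PyTorch. This is not the software running on Dan it all's stream; the tiny fully-connected networks, the random placeholder "frames," and the hyperparameters are all illustrative assumptions. But the shape of the loop (a generator trying to fool a discriminator, over and over) is the core idea.

# Minimal, illustrative GAN training loop (PyTorch).
# Assumptions: toy fully-connected networks and random vectors stand in for the
# real convolutional models and video frames a deepfake tool would actually use.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100  # flattened 64x64 "frames" (assumed sizes)

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def real_batch(batch_size=32):
    # Placeholder for loading real face crops from the source videos.
    return torch.rand(batch_size, IMG_DIM) * 2 - 1

for step in range(10_000):  # the stream runs the equivalent of this for a week
    real = real_batch()
    noise = torch.randn(real.size(0), NOISE_DIM)
    fake = generator(noise)

    # Discriminator step: learn to tell real frames from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()

    if step % 1000 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")

A real deepfake tool runs the equivalent of that loop over enormous collections of face crops for days on end, which is exactly why the stream shows nothing but scrolling numbers and slowly sharpening blurry faces.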

Once trained, these types of programs can put somebody's face convincingly on another person's body, or make it seem like they're saying something they never said. One example: the deepfake of Facebook founder Mark Zuckerberg created to make it sound like he's bragging about controlling the world's data.

A big part of what makes deepfakes such an animating point of discussion is that the technology is highly accessible to laypeople. Open source programs are available for free online, there's plenty of video out there to use as input, and you don't even need a super high-end computer. Dan it all notes that he has only a mid-grade graphics card with 8 GB of VRAM, the kind of hardware in an average consumer-level PC, but he's still able to make a deepfake.

It'll just take a week to train the AI model. Compare that to recent examples out of tech giants' labs like Nvidia and Samsung, or research institutions like Stanford and Princeton, which need only hours to create realistic fakes.

Sometimes deepfakes are light and entertaining (what if Sylvester Stallone starred in Terminator 2?), and sometimes they're dark, bad, invasive, and generally shitty, as in the case of non-consensual deepfake pornography. And sometimes, increasingly even, they're just YouTube content. Whatever the context, you now have some incredibly boring (but very important) insight into how these things are made.

Listen to CYBER, Motherboard’s new weekly podcast about hacking and cybersecurity.



from VICE https://ift.tt/2Xbg9ZZ
