AI-generated faces are taking over the internet

The Times profiled an 18-year-old Ukrainian woman named “Luba Dovzhenko” in March to illustrate life under martial law. According to the article, she studied journalism, spoke “poor English” and started carrying a weapon after the Russian invasion.

The problem, however, was that Dovzhenko doesn’t exist in real life, and the story was removed shortly after it was published.

Luba Dovzhenko was a fake online persona designed to capitalize on the growing interest in Ukraine-Russia war stories on Twitter and to gain a large following. Not only had the account never tweeted before March, but it had also previously operated under a different username, and the updates it tweeted, which may have caught The Times’ attention, had been ripped off from other real profiles. The most damning evidence of the fraud, however, was her face itself.

In Dovzhenko’s profile photo, some locks of her hair were detached from the rest of her head, a few eyelashes were missing, and, most importantly, her eyes sat dead center in the frame. These were all telltale signs of an artificial face churned out by an AI algorithm.

The positioning of facial features is not the only anomaly in @lubadovzhenko1‘s profile picture; note the loose hair in the lower right part of the image and the partially missing eyelashes (among others). pic.twitter.com/UPuvAQh4LZ

— Conspirador Norteño (@conspirator0) March 31, 2022

Dovzhenko’s face was fabricated by the technology behind deepfakes, an increasingly mainstream technique that lets anyone superimpose one person’s face onto another in a video. It has been used for everything from revenge porn to manipulating speeches by world leaders. And by feeding such algorithms millions of photos of real people, they can be repurposed to create lifelike faces like Dovzhenko’s from scratch. It’s a growing problem that makes the fight against misinformation even harder.

An army of AI-generated fake faces

As social networks have cracked down on faceless, anonymous trolls in recent years, AI has armed malicious actors and bots with an invaluable weapon: the ability to look alarmingly authentic. Unlike in the old days, when trolls simply ripped real faces off the internet and anyone could expose them by running their profile picture through a reverse image search, it’s practically impossible to do the same for AI-generated photos because they’re fresh and unique. And even on closer inspection, most people can’t tell the difference.

Dr. Sophie Nightingale, a psychology professor at Lancaster University in the United Kingdom, found that humans have only a 50% chance of spotting an AI-synthesized face, and many even considered them more trustworthy than real ones. Giving anyone the means to create “synthetic content without specialized knowledge of Photoshop or CGI,” she told Digital Trends, “creates a significantly greater threat for nefarious uses than previous technologies.”

Real faces from the FFHQ dataset alongside StyleGAN2-generated images that are barely distinguishable from them.

What makes these faces so elusive and so realistic, says Yassine Mekdad, a cybersecurity researcher at the University of Florida whose model for recognizing AI-generated photos has a 95.2% accuracy, is that the architecture behind them, known as a generative adversarial network (GAN), pits two opposing neural networks against each other to refine an image. One (G, the generator) produces the fake images and tries to mislead the other, while the second (D, the discriminator) learns to tell the generator’s output apart from real faces. This “zero-sum game” between the two allows the generator to produce “indistinguishable images.”
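The adversarial loop described above can be sketched in a few lines. The toy example below is my own illustration (not Mekdad’s model, and far simpler than a face-generating GAN): a one-parameter “generator” learns to shift random noise until it mimics the “real” data, while a logistic “discriminator” tries to tell the two apart. Scaled up to deep convolutional networks trained on millions of photos, the same zero-sum dynamic is what yields photorealistic faces.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: "real" data is drawn from N(4, 1). The generator turns
# noise z into fakes by adding a single learnable offset theta; the
# discriminator is a logistic classifier D(x) = sigmoid(w*x + b).
theta = 0.0              # generator parameter
w, b = 0.1, 0.0          # discriminator parameters
lr_d, lr_g = 0.1, 0.01   # discriminator adapts faster than the generator

for step in range(5000):
    real = rng.normal(4.0, 1.0, size=32)
    fake = rng.normal(0.0, 1.0, size=32) + theta

    # Discriminator step (gradient ascent on its log-likelihood):
    # push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr_d * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator step: nudge theta so that D(fake) rises toward 1,
    # i.e., so the fakes fool the discriminator.
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * np.mean((1 - d_fake) * w)

# theta should settle near 4.0: the fakes now mimic the real data,
# and the discriminator can no longer separate them.
print(f"learned offset: {theta:.2f}")
```

At equilibrium the discriminator is reduced to guessing, which is exactly the “indistinguishable images” outcome Mekdad describes.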

And AI-generated faces have indeed taken over the internet at breakneck speed. Aside from accounts like Dovzhenko’s that use synthesized personas to build a following, the technology has lately enabled far more alarming campaigns.

When Google fired AI ethics researcher Timnit Gebru in 2020 after she published a paper highlighting biases in the company’s algorithms, a network of bots with AI-generated faces, claiming to work in Google’s AI research division, popped up on social networks and attacked anyone who spoke out in Gebru’s favor. Similar networks tied to countries such as China have been caught promoting government narratives.

In a cursory Twitter review, it didn’t take long to find several anti-vaxxers, pro-Russia accounts, and more, all hiding behind a computer-generated face to push their agendas and attack anyone who got in their way. And though Twitter and Facebook regularly take down such botnets, they have no framework for dealing with individual synthetic-faced trolls, even though Twitter’s policy on misleading and deceptive identities prohibits “impersonat[ing] individuals, groups, or organizations to mislead, confuse, or deceive others” and using “a fake identity in a manner that disrupts the experience of others.” That’s why, when I reported the profiles I came across, I was informed they didn’t violate any policies.

Sensity, an AI-based fraud-detection company, estimates that about 0.2% to 0.7% of people on popular social networks use computer-generated photos. That doesn’t sound like much on its own, but for Facebook (2.9 billion users), Instagram (1.4 billion users), and Twitter (300 million users), it translates into millions of bots and bad actors who could potentially be part of disinformation campaigns.
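Applying Sensity’s range to the user counts quoted above makes the scale concrete. This is a back-of-the-envelope sketch using only the article’s own figures:

```python
# Sensity's estimate: 0.2% to 0.7% of accounts use computer-generated photos.
low, high = 0.002, 0.007

# User counts as cited in the article.
platforms = {
    "Facebook": 2_900_000_000,
    "Instagram": 1_400_000_000,
    "Twitter": 300_000_000,
}

for name, users in platforms.items():
    lo, hi = users * low, users * high
    print(f"{name}: {lo / 1e6:.1f}M to {hi / 1e6:.1f}M accounts")
# Even the low end of the range puts Facebook alone near 5.8 million accounts.
```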

Detection rates from a Chrome extension by V7 Labs that flags AI-generated faces corroborate Sensity’s numbers. Its CEO, Alberto Rizzoli, says that on average, 1% of the photos people run through it are flagged as fake.

The marketplace for fake faces

A collection of AI-generated faces on Generated Photos.
Generated Photos

Part of the reason AI-generated photos have spread so quickly is how easy they are to get. On platforms like Generated Photos, anyone can acquire hundreds of thousands of high-resolution fake faces for a few bucks, and people who only need a few for one-off purposes, such as personal smear campaigns, can download them from websites like thispersondoesnotexist.com, which automatically generates a new synthetic face every time you reload it.

These websites have made life particularly challenging for people like Benjamin Strick, the research director at the UK’s Centre for Information Resilience, whose team spends hours every day tracking and analyzing misleading content online.

“If you roll [auto-generative technologies] into a package of fake profiles, working at a fake startup (via thisstartupdoesnotexist.com),” Strick told Digital Trends, “there’s a recipe for social engineering and a basis for highly deceptive practices that can be set up within minutes.”

Ivan Braun, the founder of Generated Photos, argues that it’s not all bad, though. He says GAN photos have plenty of positive use cases, such as anonymizing faces in Google Maps’ Street View and simulating virtual worlds in gaming, and that those are what the platform promotes. If someone is out to deceive people, Braun says, he hopes his platform’s anti-fraud defenses will detect the malicious activity, and that eventually social networks will be able to filter out generated photos from authentic ones.

But regulating AI-based generative tech is tricky, too, since it also powers countless valuable services, including the latest filters on Snapchat and Zoom’s smart-lighting features. Sensity CEO Giorgio Patrini agrees that banning services like Generated Photos is impractical as a way to stop the rise of AI-generated faces. Instead, there’s an urgent need for more proactive approaches from the platforms themselves.

Until that happens, the adoption of synthetic media will continue to erode trust in public institutions like governments and journalism, says Tyler Williams, the research director at Graphika, a social-network analysis firm that has uncovered some of the most extensive campaigns involving fake personas. And a crucial element in fighting the misuse of such technologies, Williams adds, is “a media literacy curriculum starting at an early age, and source-verification training.”

How do you recognize an AI-generated face?

Luckily, there are a few surefire ways to tell if a face has been artificially created. The thing to remember is that these faces are conjured up by blending together tons of photos. So while the actual face may look real, you’ll find plenty of clues around the edges: the ears or earrings may not match each other, locks of hair may fly off in odd directions, the rims of glasses may be warped; the list goes on. The most common giveaway is that when you cycle through a series of fake faces, their eyes all sit in the exact same position: the center of the screen. You can also test with the “folded train ticket” hack, as demonstrated here by Strick.

Nightingale believes the most significant threat AI-generated photos pose is fueling the “liar’s dividend”: their very existence makes it possible to dismiss any piece of media as fake. “If we can’t reason about basic facts of the world around us,” she says, “then our societies and democracies are at substantial risk.”
