Two Stanford researchers have found widespread use of fake LinkedIn accounts created with artificial intelligence (AI)-generated profile photos. These profiles target real users in an attempt to drum up interest in certain companies before passing successful leads to a real salesperson.
Misinformation online takes different forms — from false or skewed facts presented as the truth to machine-generated photos and videos that can be used for a variety of unethical and damaging purposes.
AI Photos Used as Fake Profile Photos
Two researchers, Renée DiResta and Josh Goldstein from the Stanford Internet Observatory, discovered that LinkedIn, like Facebook and Twitter, is not immune to this digital-age problem. In LinkedIn's case, they found that bots using AI-generated faces — as many as 1,000 fake profiles — are being leveraged to create false buzz around some companies, reports The Register.
The process is simple: a bot with an AI-generated profile photo contacts an unsuspecting LinkedIn user and, if the target shows interest, passes them on to a real salesperson to continue the conversation.
Meet Keenan Ramsey. Her LinkedIn profile says she sells software for RingCentral & has a business degree from NYU. She likes CNN, Amazon, & Melinda French Gates. Her pitches come punctuated with emojis.https://t.co/TyoBp2qxIP pic.twitter.com/LLfIvph17N
— Shannon Bond (@shannonpareil) March 27, 2022
The two researchers made the discovery after DiResta received a message from a profile belonging to a “Keenan Ramsey.” At first, it looked like a normal sales pitch from a software company, but it soon became clear that Ramsey was a fictitious person — the fake profile headshot contained multiple red flags, such as the unusually precise central alignment of the eyes, a single earring, and strands of hair blurring into the background.
But…RingCentral doesn’t have any record of an employee named Keenan Ramsey. NYU says no one named Keenan Ramsey has received any undergraduate degree.
And the biggest red flag? Her face appears to have been created by artificial intelligence.https://t.co/TyoBp2qxIP pic.twitter.com/o9ew9IM3ml
— Shannon Bond (@shannonpareil) March 27, 2022
After the AI-generated profile photo jumped out as a fake, DiResta, who has also studied Russian disinformation campaigns and anti-vaccine conspiracies, began looking into the matter with her colleague Josh Goldstein and found over 1,000 profiles using AI-generated photos.
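The eye-alignment red flag mentioned above stems from how StyleGAN-family generators (the technology behind sites like This Person Does Not Exist) work: training images are aligned so that the eyes land at nearly fixed pixel positions, and generated faces inherit that alignment. A minimal sketch of that heuristic is below. It assumes you already have eye-center coordinates from some face-landmark detector, and the reference coordinates are illustrative placeholders, not published constants — you would calibrate them against known generated samples.

```python
from math import dist

# Assumed canonical eye centers for a 1024x1024 StyleGAN-style portrait.
# These values are illustrative assumptions for this sketch; calibrate
# them against a set of known AI-generated faces before real use.
REF_LEFT_EYE = (385, 480)
REF_RIGHT_EYE = (640, 480)


def looks_stylegan_aligned(left_eye, right_eye, image_size=1024, tolerance=20):
    """Return True if both detected eye centers sit within `tolerance`
    pixels of the canonical positions, after rescaling the coordinates
    to the 1024px reference frame."""
    scale = 1024 / image_size
    left = (left_eye[0] * scale, left_eye[1] * scale)
    right = (right_eye[0] * scale, right_eye[1] * scale)
    return (dist(left, REF_LEFT_EYE) <= tolerance
            and dist(right, REF_RIGHT_EYE) <= tolerance)


# Generated faces hit the canonical spots almost exactly; real photos
# rarely do, since real headshots are framed and cropped arbitrarily.
print(looks_stylegan_aligned((386, 478), (641, 481)))  # True
print(looks_stylegan_aligned((300, 400), (700, 430)))  # False
```

This is only one weak signal, of course — a real detector like the one described later in this article would combine many such cues.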
Using AI to Cut Down on Hiring Costs
Companies use profiles like these to cast a wide net for potential leads without having to employ real sales staff, and to avoid hitting LinkedIn message limits. The researchers found more than 70 businesses listed as employers of fake profiles. Some of those companies told NPR they had hired outside marketers to help with sales but had not authorized the use of AI-generated photos, and were surprised by the findings.
We came across a bunch of companies that sell LinkedIn marketing services. Some even explicitly offer bot or “avatar” accounts – which go against LinkedIn’s rules that accounts are supposed to represent real people, with real photos.
— Shannon Bond (@shannonpareil) March 27, 2022
LinkedIn does not permit fake profiles. Company spokesperson Leonna Spilman told TechRadar that LinkedIn's policies make it clear that every profile must represent a real person.
“We are constantly updating our technical defenses to better identify fake profiles and remove them from our community, as we have in this case,” Spilman says. “At the end of the day, it’s all about making sure our members can connect with real people, and we’re focused on ensuring they have a safe environment to do just that.”
Fakes Are Difficult to Detect With the Naked Eye
Although some businesses may employ AI-assisted marketing tactics because they are cheaper than employing real people, it is difficult for users on the other side of the screen to distinguish a fake profile photo from a real one — a recent study published in PNAS found that people have roughly a 50% chance of guessing correctly. Hany Farid, a co-author of the study, suspects that some people even find machine-generated faces more trustworthy, because the AI tends to produce average facial features.
To make it easier for people to tell real and fake profiles apart, V7 Labs created a new AI tool, delivered as a Google Chrome extension, that is capable of detecting profiles belonging to a bot, with a claimed 99.28% accuracy.
V7 Labs’ “Fake Profile Detector” extension aims to help authorities and regular internet users spot and report profiles that spread fake news or otherwise create misleading content.
Header image: All photos are AI-generated via This Person Does Not Exist.