LinkedIn Launches New Approach To Detect Fake Profiles

In a bid to tackle its fake profile problem, the platform has created an AI image detector to root out fake headshots.

LinkedIn has this week announced a new AI image detector that catches fake profiles with a 99% success rate.

The platform’s Trust Data Team claims the new approach can catch falsified profile images and remove fake accounts before they reach LinkedIn members.

This latest security innovation comes a few months after it was revealed LinkedIn was featured in over half of Q1 2022 phishing attacks.

Why are People Creating Fake LinkedIn Profiles?

Like Twitter, LinkedIn has recently had trouble with the number of fake profiles on its site. In the first half of 2022 alone, the platform detected and removed 21 million fake accounts.

But why exactly are all these fake profiles popping up? For some, it's to build trust among visitors to their websites; for others, it's rooted in SEO, under the false belief that Google ranks articles with named authors higher than those without.

Whatever the motivation, there's no doubt that advances in AI have made it even easier to create a fake profile.

“We are constantly working to improve and increase the effectiveness of our anti-abuse defenses to protect the experiences of our members and customers. And as part of our ongoing work, we’ve been partnering with academia to stay one step ahead of new types of abuse tied to fake accounts that are leveraging rapidly evolving technologies like generative AI.” – LinkedIn’s announcement on its new approach 

Fake Accounts Have Become Harder to Catch

This new approach comes following lengthy research into recognizing the structural differences between AI-generated faces and real faces – something most people don’t know how to spot.

LinkedIn keeps a close eye on unwanted activity that could pose a security risk, such as fake profiles and content policy violations. Until recently, however, AI-generated images were sophisticated enough to be all but impossible to detect.

The key to solving this has been knowing exactly what to look for. According to LinkedIn, AI-generated images all share similar patterns that it calls 'structural differences' – patterns that real photos lack.

An example in the blog post references a test that averaged 400 AI-generated images against 400 real ones. The composite of real photos dissolved into an indistinct blur, while the composite of AI-generated faces stayed sharp around the eyes and nose – showing that those areas tend to be very similar from one fake photo to the next.
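The averaging test can be sketched in a few lines of NumPy. This is a hypothetical illustration, not LinkedIn's code: it stands in for face photos with synthetic arrays, where the "AI" stack has its feature in the same place every time (as StyleGAN-style faces do with eyes and noses) and the "real" stack has it shifted around. Averaging each stack and comparing a crude gradient-based sharpness score shows the aligned stack producing the crisper composite.

```python
import numpy as np

def composite(images: np.ndarray) -> np.ndarray:
    """Average a stack of same-size images (N, H, W) into one composite."""
    return images.mean(axis=0)

def sharpness(img: np.ndarray) -> float:
    """Mean absolute gradient, used here as a crude sharpness proxy."""
    gy, gx = np.gradient(img)
    return float(np.abs(gy).mean() + np.abs(gx).mean())

rng = np.random.default_rng(0)
base = np.zeros((64, 64))
base[28:36, 20:28] = 1.0  # a bright patch standing in for an eye region

# "AI" stack: the feature sits in the same spot in every image, plus light noise
ai = np.stack([base + rng.normal(0, 0.05, base.shape) for _ in range(400)])

# "Real" stack: the feature is shifted to a different spot in each image
real = np.stack(
    [np.roll(base, rng.integers(-6, 7, 2), axis=(0, 1)) for _ in range(400)]
)

# The aligned ("AI") composite keeps sharp edges; the shifted one blurs out
print(sharpness(composite(ai)) > sharpness(composite(real)))
```

The design choice mirrors the article's claim: when facial landmarks line up across hundreds of fakes, averaging reinforces them, while real photos average into mush.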

While AI shows no sign of slowing as a source of new security risks, LinkedIn's latest development can be counted as a win in the fight against fake accounts.

Written by:

Ellis Di Cataldo (MA) has over 9 years' experience writing about, and for, some of the world's biggest tech companies. She's been the lead writer across digital campaigns, always-on content and worldwide product launches for global brands including Sony, Electrolux, Byrd, The Open University and Barclaycard. Her particular areas of interest are business trends, startup stories and product news.
