Meta has announced an expansion of its facial recognition trials as part of efforts to tackle scams involving celebrity images on its platforms. The company aims to strengthen its anti-scam measures, including the use of machine learning tools in ad reviews, to reduce fraudulent ads on Facebook and Instagram.
In a blog post, Monika Bickert, Meta's VP of Content Policy, highlighted the prevalence of "celeb-bait" scams. These scams use images of public figures, such as celebrities or influencers, to lure users into engaging with ads that direct them to fraudulent websites. On these sites, victims are often tricked into sharing personal data or making payments. “These schemes violate our policies and harm our users,” Bickert noted.
To counter this, Meta is testing facial recognition as a safeguard for ads flagged as suspicious. If an ad contains the image of a public figure, the system compares it to that person’s profile pictures on Facebook and Instagram. If the system confirms a match and determines the ad is a scam, the ad is blocked. Meta emphasized that facial data generated in this process is deleted immediately and used solely for scam detection.
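The comparison step Meta describes can be sketched as an embedding match followed by immediate deletion of the biometric data. This is a purely illustrative toy, not Meta's implementation: the function names, the three-dimensional "embeddings", and the 0.9 threshold are all invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_likely_match(ad_embedding, profile_embeddings, threshold=0.9):
    """Return True if the flagged ad's face embedding closely matches any
    of the public figure's profile-picture embeddings."""
    try:
        return any(cosine_similarity(ad_embedding, p) >= threshold
                   for p in profile_embeddings)
    finally:
        # Mirror the stated policy: the facial data generated for the
        # check is discarded as soon as the comparison is done.
        ad_embedding.clear()

# Toy example with made-up low-dimensional "embeddings"
ad = [0.12, 0.98, 0.05]
profiles = [[0.10, 0.99, 0.04], [0.50, 0.20, 0.84]]
print(is_likely_match(ad, profiles))  # near-identical vectors -> True
```

In a real pipeline the embeddings would come from a face-recognition model and the threshold would be tuned empirically; the point here is only the match-then-delete shape of the check.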
Early trials, involving a small group of public figures, have shown promise in improving detection rates and enforcement. Meta is also exploring the technology's potential for identifying deepfake scam ads, where AI-generated images of celebrities are used for fraudulent purposes.
The company plans to expand these efforts by notifying more public figures about their enrollment in the system, with an option to opt out via their Account Center. Additionally, Meta is testing facial recognition to combat impersonation accounts by matching profile pictures on suspicious accounts with those of legitimate public figures.
New Methods for Account Recovery
Meta is also piloting facial recognition for account recovery. Users locked out of their accounts due to scams can upload a video selfie for verification. The technology compares the video to the account’s profile pictures, offering a faster alternative to submitting government-issued ID. Meta claims the method is secure, with data encrypted and deleted immediately after verification.
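A video selfie gives the system many frames rather than one image, so a natural (hypothetical) way to frame the check is to require that most sampled frames match the account's profile picture before unlocking, then delete the biometric data. Everything below, including the frame ratio and threshold, is an invented illustration, not Meta's actual method.

```python
import math

def similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def verify_selfie(frame_embeddings, profile_embedding,
                  threshold=0.9, min_ratio=0.8):
    """Unlock only if at least min_ratio of the selfie's sampled frames
    match the account's profile-picture embedding."""
    matches = sum(similarity(f, profile_embedding) >= threshold
                  for f in frame_embeddings)
    verified = matches / len(frame_embeddings) >= min_ratio
    # Per the stated policy, biometric data is deleted after verification.
    frame_embeddings.clear()
    return verified

# Toy run: four frames resembling the profile picture, one outlier.
profile = [0.10, 0.99, 0.04]
frames = [[0.12, 0.98, 0.05]] * 4 + [[1.0, 0.0, 0.0]]
print(verify_selfie(frames, profile))  # 4 of 5 frames match -> True
```

Requiring agreement across frames, rather than a single match, is one plausible reason a video selfie can be stronger evidence than a static photo.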
This approach mirrors technologies like Apple’s Face ID and could pave the way for Meta to expand its digital identity services. However, critics warn it might condition users to share biometric data, raising privacy concerns.
Restrictions in the UK and EU
Currently, these tests are not being conducted in the UK or EU due to stringent data protection laws requiring explicit consent for biometric use. Meta has indicated ongoing discussions with regulators in these regions but faces significant hurdles given local privacy rules.
While Meta’s use of facial recognition for anti-scam purposes may appear narrowly focused, its broader implications, particularly for how user data is collected and used for AI training, remain contentious.