TechCrunch News, October 22, 2024
Meta tests facial recognition for spotting ‘celeb-bait’ ad scams and easier account recovery

 

Meta announced it is expanding tests of facial recognition as an anti-scam measure to combat celebrity scam ads and more. The tests aim to bolster existing anti-scam measures, such as automated scanning with machine-learning classifiers. They include checking suspicious ads that contain images of public figures, spotting celebrity imposter accounts, and offering faster unlocking for people locked out of their accounts. The company says early results are promising, but the tests are not running in the U.K. or the EU, where data protection rules are strict.

🎯Some of Meta's tests aim to bolster existing anti-scam measures, such as automated scanning with machine-learning classifiers, to detect scam ads, making it harder for fraudsters to slip past detection and dupe users with bogus ads.

👀The tests use facial recognition to compare the faces of public figures in ads against their Facebook and Instagram profile pictures; if an ad is confirmed to be a scam it is blocked, and the facial data generated is deleted immediately.

🚫Meta is also testing facial recognition to spot celebrity imposter accounts, using AI to compare a suspicious account's profile pictures against a public figure's, to counter scammers looking to expand their opportunities for fraud.

🔓In addition, Meta is trialling facial recognition on video selfies so that users whose accounts have been taken over by scammers can unlock them faster: after a user uploads a video selfie, facial recognition compares it against profile pictures, and the facial data is deleted immediately whether or not there is a match.

Meta is expanding tests of facial recognition as an anti-scam measure to combat celebrity scam ads and more broadly, the Facebook owner announced Monday.

Monika Bickert, Meta’s VP of content policy, wrote in a blog post that some of the tests aim to bolster its existing anti-scam measures, such as the automated scans (using machine learning classifiers) run as part of its ad review system, to make it harder for fraudsters to fly under its radar and dupe Facebook and Instagram users to click on bogus ads.

“Scammers often try to use images of public figures, such as content creators or celebrities, to bait people into engaging with ads that lead to scam websites where they are asked to share personal information or send money. This scheme, commonly called ‘celeb-bait,’ violates our policies and is bad for people that use our products,” she wrote.

“Of course, celebrities are featured in many legitimate ads. But because celeb-bait ads are often designed to look real, it’s not always easy to detect them.”  

The tests appear to be using facial recognition as a back-stop for checking ads flagged as suspect by existing Meta systems when they contain the image of a public figure at risk of so-called “celeb-bait.”

“We will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures,” Bickert wrote. “If we confirm a match and that the ad is a scam, we’ll block it.”

Meta claims the feature is not being used for any other purpose than for fighting scam ads. “We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don’t use it for any other purpose,” she said.
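To make the described flow concrete, here is a minimal sketch of that check in Python. Everything in it — the embedding representation, the cosine-similarity threshold, the helper names — is a hypothetical illustration of the general approach Bickert describes, not Meta's actual implementation.

```python
# Hypothetical sketch of the celeb-bait check Bickert describes: compare the
# face in a flagged ad against a public figure's profile pictures, block only
# if the face matches AND the ad is a scam, then delete the facial data.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def review_flagged_ad(ad_face_embedding: np.ndarray,
                      profile_embeddings: list[np.ndarray],
                      ad_is_scam: bool,
                      threshold: float = 0.8) -> str:
    """Return 'block' or 'allow' for an ad already flagged as suspect.

    The embeddings and the 0.8 threshold are placeholders; Meta has not
    published how its comparison actually works.
    """
    try:
        face_matches = any(
            cosine_similarity(ad_face_embedding, p) >= threshold
            for p in profile_embeddings
        )
        # Block only when the face matches the public figure AND the ad
        # itself is confirmed to be a scam.
        return "block" if (face_matches and ad_is_scam) else "allow"
    finally:
        # Per Meta, facial data generated for this one-time comparison is
        # deleted immediately, whether or not there was a match.
        del ad_face_embedding
```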

The company said early tests of the approach — with “a small group of celebrities and public figures” (it did not specify whom) — have shown “promising” results in improving the speed and efficacy of detecting and enforcing against this type of scam.

Meta also told TechCrunch it thinks the use of facial recognition would be effective for detecting deepfake scam ads, where generative AI has been used to produce imagery of famous people.

The social media giant has been accused for many years of failing to stop scammers misappropriating famous people’s faces in a bid to use its ad platform to shill scams like dubious crypto investments to unsuspecting users. So it’s interesting timing for Meta to be pushing facial recognition-based anti-fraud measures for this problem now, at a time when the company is simultaneously trying to grab as much user data as it can to train its commercial AI models (as part of the wider industry-wide scramble to build out generative AI tools).

In the coming weeks Meta said it will start displaying in-app notifications to a larger group of public figures who’ve been hit by celeb-bait — letting them know they’re being enrolled in the system.

“Public figures enrolled in this protection can opt-out in their Accounts Center anytime,” Bickert noted.

Meta is also testing use of facial recognition for spotting celebrity imposter accounts — for example, where scammers seek to impersonate public figures on the platform in order to expand their opportunities for fraud — again by using AI to compare profile pictures on a suspicious account against a public figure’s Facebook and Instagram profile pictures.
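By Bickert's description, the imposter-account check is the same comparison applied to a suspicious account's profile pictures rather than to an ad. A hedged sketch, reusing the same hypothetical embedding comparison:

```python
# Hypothetical sketch of the imposter-account check: compare a suspicious
# account's profile picture embedding against a public figure's Facebook and
# Instagram profile picture embeddings.
import numpy as np


def looks_like_imposter(suspect_embedding: np.ndarray,
                        public_figure_embeddings: list[np.ndarray],
                        threshold: float = 0.8) -> bool:
    """True if the suspicious account's profile picture matches the public
    figure closely enough to flag for review (threshold is a placeholder)."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return any(cos(suspect_embedding, ref) >= threshold
               for ref in public_figure_embeddings)
```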

“We hope to test this and other new approaches soon,” Bickert added.

Additionally, Meta has announced that it’s trialling the use of facial recognition applied to video selfies to enable faster account unlocking for people who have been locked out of their Facebook/Instagram accounts after they’ve been taken over by scammers (such as if a person were tricked into handing over their passwords).

This looks intended to appeal to users by promoting the apparent utility of facial recognition tech for identity verification — with Meta implying it will be a quicker and easier way to regain account access than uploading an image of a government-issued ID (which is the usual route for unlocking access now).

“Video selfie verification expands on the options for people to regain account access, only takes a minute to complete and is the easiest way for people to verify their identity,” Bickert said. “While we know hackers will keep trying to exploit account recovery tools, this verification method will ultimately be harder for hackers to abuse than traditional document-based identity verification.” 

The facial recognition-based video selfie identification method Meta is testing will require the user to upload a video selfie that will then be processed using facial recognition technology to compare the video against profile pictures on the account they’re trying to access.

Meta claims the method is similar to identity verification used to unlock a phone or access other apps, such as Apple’s FaceID on the iPhone. “As soon as someone uploads a video selfie, it will be encrypted and stored securely,” Bickert added. “It will never be visible on their profile, to friends, or to other people on Facebook or Instagram. We immediately delete any facial data generated after this comparison regardless of whether there’s a match or not.”
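As with the ad check, the selfie flow can be sketched as a comparison of sampled video frames against the account's profile pictures, with the facial data discarded either way. The frame sampling, the majority-of-frames rule, and the threshold below are assumptions for illustration, not Meta's published method.

```python
# Hypothetical sketch of video-selfie verification: compare embeddings of
# sampled selfie frames against the locked account's profile pictures and
# delete the facial data immediately, match or not.
import numpy as np


def verify_video_selfie(frame_embeddings: list[np.ndarray],
                        profile_embeddings: list[np.ndarray],
                        threshold: float = 0.8) -> bool:
    """Return True if a majority of sampled frames match the profile pictures.

    The majority rule and the threshold stand in for whatever criteria Meta
    actually uses; the caller would unlock the account on success.
    """
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    try:
        matching_frames = sum(
            1 for frame in frame_embeddings
            if any(cos(frame, p) >= threshold for p in profile_embeddings)
        )
        return matching_frames > len(frame_embeddings) // 2
    finally:
        # Per Bickert, facial data generated for the comparison is deleted
        # immediately regardless of the outcome.
        frame_embeddings.clear()
```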

Conditioning users to upload and store a video selfie for ID verification could be one way for Meta to expand its offerings in the digital identity space — if enough users opt in to uploading their biometrics.

All these tests of facial recognition are being run globally, per Meta. However the company noted, rather conspicuously, that tests are not currently taking place in the U.K. or the European Union — where comprehensive data protection regulations apply. (In the specific case of biometrics for ID verification, the bloc’s data protection framework demands explicit consent from the individuals concerned for such a use case.)

Given this, Meta’s tests appear to fit within a wider PR strategy it has mounted in Europe in recent months to try to pressurize local lawmakers to dilute citizens’ privacy protections. This time, the cause it’s invoking to press for unfettered data-processing-for-AI is not a (self-serving) notion of data diversity or claims of lost economic growth but the more straightforward goal of combating scammers.

“We are engaging with the U.K. regulator, policymakers and other experts while testing moves forward,” Meta spokesman Andrew Devoy told TechCrunch. “We’ll continue to seek feedback from experts and make adjustments as the features evolve.”

However while use of facial recognition for a narrow security purpose might be acceptable to some — and, indeed, might be possible for Meta to undertake under existing data protection rules — using people’s data to train commercial AI models is a whole other kettle of fish.
