🧠 The Fraud That Fooled Facial Recognition: The Case That Reveals the Limits of Biometrics
Deepfakes, stolen selfies, and the failure of digital liveness detection: how a billion-dollar system was tricked and what it says about the future of trust in biometric technology
In early 2025, a major fraud scheme exposed vulnerabilities in the use of facial recognition by Brazil’s National Institute of Social Security (INSS, responsible for retirement and social benefits, similar to the Social Security Administration in the US). Criminals discovered they could bypass biometric authentication simply by using photos of real beneficiaries. Armed with selfies taken from social media and images leaked from the INSS’s own databases, they forged loan authorizations in the names of retirees, without the victims’ knowledge.
In many cases, scammers tricked beneficiaries into sending photos under false pretenses, for example, by promising to speed up benefit reviews, and then used those images to impersonate them. With access to both the data and selfies, they were able to remotely apply for loans. Victims only discovered the fraud when they noticed mysterious deductions in their monthly payments.
Technically, the scam exploited flaws in the system’s liveness detection. Ideally, when using facial recognition, the INSS (or Gov.br) app should verify that a real, live person is in front of the camera, not just a static photo or a pre-recorded video. In practice, however, the authentication process appeared to rely on a static selfie compared to the document photo, with no guarantee of actual liveness. That gap was the key vulnerability: a high-quality photo of the victim, obtained illegally, could pass facial comparison if the system didn’t require real-time proof of life. Some courts had already warned that a selfie alone doesn’t reliably prove consent, especially in the case of vulnerable elderly users. In other words, the system lacked the extra layers needed to distinguish a live person from a mere reproduction.
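To make the missing layer concrete, here is a minimal sketch of an active challenge-response liveness flow, the kind of check a static selfie cannot satisfy by construction. Everything here is an illustrative assumption on my part (the actual INSS/Gov.br protocol has not been published), and the video-analysis step that recognizes the user’s action is stubbed out as an input.

```python
import secrets
import time
from dataclasses import dataclass

# Hypothetical active-liveness flow: the server picks a random action and a
# one-time nonce, and only accepts a capture that performs that action within
# a short window. A photo taken before the challenge existed cannot comply.
CHALLENGES = ("blink twice", "turn your head left", "turn your head right", "smile")

@dataclass
class LivenessSession:
    challenge: str    # action the user must perform on camera
    nonce: str        # one-time token bound to this capture session
    issued_at: float  # monotonic timestamp when the challenge was issued

def start_session() -> LivenessSession:
    """Issue a randomized challenge so a pre-recorded clip cannot be prepared."""
    return LivenessSession(
        challenge=secrets.choice(CHALLENGES),
        nonce=secrets.token_hex(16),
        issued_at=time.monotonic(),
    )

def verify(session: LivenessSession, performed_action: str,
           elapsed_limit_s: float = 10.0) -> bool:
    """Accept only if the challenged action was observed within the window.
    `performed_action` would come from a video-analysis model in a real
    system; here it is simply passed in."""
    fresh = (time.monotonic() - session.issued_at) <= elapsed_limit_s
    return fresh and performed_action == session.challenge
```

A static selfie fails this check trivially; deepfake video, discussed below, is what makes even this layer insufficient.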
Implementing that distinction is particularly challenging in countries with wide diversity in mobile devices. Many citizens use simpler smartphones that lack the advanced depth or infrared sensors needed for more secure facial verification. On these devices, liveness checks rely solely on the front-facing camera, making them more susceptible to deception. Independent tests illustrate this well: in 2023, a review of 48 smartphone models found that 19 of them could be unlocked simply by showing a printed photo of the owner, on ordinary paper and at modest resolution. Most of the deceived phones were entry-level or mid-range models, precisely the ones most accessible to the general population. This highlights a hardware limitation: without 3D technology or dedicated sensors, many mobile facial recognition systems can be fooled by two-dimensional images.
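A toy example makes the hardware point tangible. The sketch below, using synthetic data, shows the kind of depth-based check a phone with a 3D sensor can run and a flat printout cannot pass; the threshold and numbers are illustrative assumptions, not any vendor’s actual criteria.

```python
import numpy as np

def has_3d_relief(depth_map_mm: np.ndarray, min_relief_mm: float = 15.0) -> bool:
    """depth_map_mm: per-pixel camera-to-surface distances (mm) over the
    detected face region. A live face shows centimeters of relief between
    the nose and the cheeks; a sheet of paper shows almost none.
    (Illustrative threshold; real sensors use calibrated face models.)"""
    relief = np.percentile(depth_map_mm, 95) - np.percentile(depth_map_mm, 5)
    return relief >= min_relief_mm

# Synthetic demo: a flat printout vs. a face-like depth bump at ~40 cm.
flat = np.full((64, 64), 400.0)                      # paper: uniform depth
xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
face = 400.0 - 30.0 * np.exp(-(xx**2 + yy**2) * 4)   # nose ~3 cm closer
print(has_3d_relief(flat))   # False: no relief, rejected
print(has_3d_relief(face))   # True: plausible 3D face
```

Entry-level phones simply have no depth map to feed into a check like this, which is why their camera-only verification is the weakest link.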
Beyond static photos, deepfakes present a growing threat. Thanks to advances in artificial intelligence, it’s now possible to generate extremely realistic videos from a single photo, animating the face to blink, smile, or speak. Even systems that ask users to move their heads or speak can be tricked. In late 2024, the security firm Group-IB uncovered over 1,100 fraud attempts in Indonesia where synthetic facial images were used to bypass “Know Your Customer” (KYC) checks at banks. In those cases, fraudsters used virtual camera software to inject deepfake videos directly into the application, simulating a live person in front of the system. In short, deepfake tools are already capable of bypassing basic liveness detection, generating digital faces that respond to challenges almost in real time.
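Defenses against this injection route exist, but they are heuristic. As one hedged illustration (my assumption, not a documented Group-IB finding or any vendor’s method), a verifier can inspect frame-timing statistics: streams injected through virtual-camera software often tick at perfectly regular intervals, while a physical camera shows natural jitter.

```python
import statistics

def looks_injected(frame_timestamps: list[float],
                   min_jitter_ms: float = 0.5) -> bool:
    """Flag capture streams whose inter-frame intervals are 'too perfect'.
    frame_timestamps: seconds since capture start, one per frame.
    Illustrative heuristic only; determined attackers can add fake jitter."""
    intervals = [b - a for a, b in zip(frame_timestamps, frame_timestamps[1:])]
    jitter_ms = statistics.stdev(intervals) * 1000
    return jitter_ms < min_jitter_ms

# A virtual camera replaying at exactly 30 fps vs. a real, slightly uneven capture.
print(looks_injected([i / 30 for i in range(60)]))                    # True
print(looks_injected([i / 30 + 0.002 * (i % 3) for i in range(60)]))  # False
```

As the closing comment notes, this is an arms race: any single signal can be spoofed once attackers know it is being checked.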
Faced with the scandal, the authorities doubled down on facial recognition. Starting in May 2025, no new INSS-backed loans will be approved without biometric authentication via the official app. This facial recognition is legally treated as an “advanced electronic signature,” assuming that only the legitimate beneficiary can validate their identity this way (but can they really? And for how long?).
The new requirement aims to stop third parties from taking out fraudulent loans, adding a layer of digital security where previously a selfie or signature was enough. However, it also exposes a dilemma: we are placing our trust in digital images and algorithms, believing they are foolproof.
Experts have long warned that such trust requires caution. The U.S. National Institute of Standards and Technology (NIST) conducts extensive testing of facial recognition systems and fraud detection through its Face Recognition Technology Evaluation (FRTE/FRVT) program. In recent studies, NIST challenged dozens of liveness detection algorithms using various spoofing techniques, from silicone masks to printed photos, and found that no single algorithm could catch them all. Some detect specific types of spoofing, but miss others. Even the best had gaps, though combining multiple algorithms improved effectiveness in some cases. In short, software-only liveness detection remains an evolving field, and creative fraudsters are always looking for ways to stay one step ahead.
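NIST’s “combine multiple algorithms” finding maps naturally onto score-level fusion. The sketch below illustrates the general idea with hypothetical detector names and weights; it is not NIST’s evaluation code or any specific product.

```python
def fuse_pad_scores(scores: dict[str, float],
                    weights: dict[str, float],
                    threshold: float = 0.5) -> bool:
    """scores: per-detector spoof probabilities in [0, 1], e.g. a texture
    model (good at print attacks), a depth cue (good at masks), and an
    rPPG pulse detector (good at screen replays). Weighted fusion can flag
    an attack even when individual detectors disagree."""
    total = sum(weights.values())
    fused = sum(scores[name] * w for name, w in weights.items()) / total
    return fused >= threshold

# Example: the texture model is unsure, but the depth cue is confident.
print(fuse_pad_scores(
    scores={"texture": 0.35, "depth": 0.92, "rppg": 0.40},
    weights={"texture": 1.0, "depth": 1.5, "rppg": 0.8},
))  # True: fused score ~0.62 crosses the threshold
```

The trade-off is operational: each added detector raises cost and false rejections, which helps explain why deployed systems still have gaps.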
In my view, if the public loses trust in remote solutions, fearing that selfies can be easily forged, we risk a serious step backward toward in-person validation. Institutions may return to requiring retirees to be physically present at agencies just to prove they are who they say they are, undoing years of progress in digital convenience. For a country with continental dimensions, this would mean long lines, travel burdens, and significant difficulty, especially for the elderly, those with limited mobility, or people in remote regions.
On the other hand, insisting on digital paths without reinforcing security would be irresponsible. The way forward, in Brazil and globally, lies in improving biometric technology and adopting multi-factor approaches without sacrificing inclusion. This means better use of available device sensors, innovation in deepfake detection, and, most importantly, strong protection of citizens’ biometric data. After all, once your facial image leaks, you can’t change your “password”: your face is a permanent identifier (and not even facial harmonization can save you).
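As one concrete shape for that multi-factor direction, the sketch below pairs the face-match score and liveness result with a possession factor: a standard time-based one-time password (TOTP, RFC 6238) bound to the beneficiary’s registered device. The gating logic and thresholds are illustrative assumptions, not any agency’s actual policy.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Standard RFC 6238 time-based one-time password (HMAC-SHA1, 30 s step)."""
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authorize_loan(face_match: float, liveness_passed: bool,
                   user_code: str, device_secret: bytes,
                   match_threshold: float = 0.90) -> bool:
    """Approve only when the biometric AND the possession factor agree,
    so a leaked photo alone can no longer authorize a loan.
    (Illustrative threshold; real systems tune it to measured error rates.)"""
    return (face_match >= match_threshold
            and liveness_passed
            and hmac.compare_digest(user_code, totp(device_secret)))
```

None of this removes the need to protect the biometric data itself, but it changes the failure mode: a stolen face stops being a master key.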
Balancing convenience and security is complex: the Brazilian social security scandal is just the beginning. Privacy and biometrics must go hand in hand if we want to keep trusting our faces to digital systems.
References
VERENICZ, Marina. Criminals used leaked photos and selfies to commit loan fraud at Brazil’s Social Security, says CGU. InfoMoney, May 9, 2025. Available at: https://www.infomoney.com.br/politica/criminosos-usaram-fotos-vazadas-e-selfies-para-fraudar-emprestimos-no-inss-diz-cgu/. Accessed: May 19, 2025.
KUNERT, Paul. Phones’ facial recog tech ‘fooled’ by low-res 2D photo. The Register, May 19, 2023. Available at: https://www.theregister.com/2023/05/19/2d_photograph_facial_recog/. Accessed: May 19, 2025.
NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY (NIST). What’s Wrong With This Picture? NIST Face Analysis Program Helps to Find Answers. News release, September 20, 2023. Available at: https://www.nist.gov/news-events/news/2023/09/whats-wrong-picture-nist-face-analysis-program-helps-find-answers. Accessed: May 20, 2025.
GROUP-IB. Deepfake Fraud: How AI is Bypassing Biometric Security in Financial Institutions. Global Security Magazine, December 2024. Available at: https://www.globalsecuritymag.com/deepfake-fraud-how-ai-is-bypassing-biometric-security-in-financial-institutions.html. Accessed: May 20, 2025.