🔍 Facial recognition in residential buildings: when security turns into exposure
The use of facial recognition in residential buildings is growing worldwide, but few residents are aware that this practice has already sparked controversy, led to fines, and even been banned in some places.
I live in a residential building that uses facial biometrics. When I enrolled, I asked where the images were stored, and no one could answer me. Registration consisted of simply sending a selfie via WhatsApp to the person in charge. Yes, that’s right. This week, I read a news story about a resident in Brazil who refused to provide her biometric data and went through a similar situation.
In the United States, residents in New York blocked the use of this technology in their building after reporting risks to privacy and freedom (https://www.theguardian.com/cities/2019/may/29/new-york-facial-recognition-cameras-apartment-complex). In Europe, facial recognition faces strong resistance.
Even in China, where state control is the norm, recent regulations require residential buildings to offer non-biometric alternatives. No one should be forced to hand over their face just to enter their home.
But looking at it in practice: is facial recognition in residential buildings security, or overkill?
In many buildings, facial biometrics have been treated as a synonym for security. The promise is simple: more control, more convenience. But in reality, it often means handing over extremely sensitive information without knowing exactly who will have access to it, how long it will be stored, or whether it will be shared. Transparency is still rare, and most people don’t even realize it.
Few residents take a moment to question the basics: who is responsible for this data? What is the real purpose behind collecting it? When will it be deleted? The right to access the privacy policy, presented in a clear and accessible way, is practically non-existent in most buildings that use this type of technology.
I’ve seen cases where biometric data isn’t even deleted after a resident moves out. The data just sits there, forgotten or poorly managed, increasing the risk of leaks and misuse. In my view, the problem isn’t the technology itself, but the irresponsible way it’s being implemented, with no regard for the most basic right: knowing who’s storing my face and why.
The biggest issue? Your face is a password that can’t be changed. When a data breach happens, and it already has, as in the case of the Australian company in 2024 (https://www.wired.com/story/outabox-facial-recognition-breach/), there’s no fixing it. Moreover, few buildings can explain who accesses the images, how long they’re stored, or whether they’re shared with third parties.
The problem isn’t biometrics itself; the technology was never the problem. The challenge is ensuring that companies collecting this data use it securely and strictly for its intended purpose, rather than just storing gigabytes of residents’ photos on a desktop computer sitting under the security guard’s desk, accessible to anyone with a USB stick.
We still have a long way to go, but if each residential or commercial building did the bare minimum, it would already be a huge improvement compared to where we are today.
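To make the “bare minimum” concrete, here is a minimal sketch of one such rule: automatically purging biometric records after a resident moves out, the cleanup step that, as described above, many buildings skip. All names and the 30-day grace period are hypothetical assumptions, not a reference to any real system.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical record of one resident's biometric enrollment.
@dataclass
class BiometricRecord:
    resident_id: str
    enrolled_on: date
    moved_out_on: Optional[date] = None  # None while the resident still lives there

# Assumed policy value: how long data may be kept after move-out.
RETENTION_AFTER_MOVE_OUT = timedelta(days=30)

def purge_expired(records: list[BiometricRecord], today: date) -> list[BiometricRecord]:
    """Return only the records the policy still allows us to keep:
    current residents, or former residents within the grace period."""
    return [
        r for r in records
        if r.moved_out_on is None
        or today - r.moved_out_on <= RETENTION_AFTER_MOVE_OUT
    ]

# Example: a current resident is kept, a long-gone one is dropped.
records = [
    BiometricRecord("alice", enrolled_on=date(2023, 1, 1)),
    BiometricRecord("bob", enrolled_on=date(2022, 1, 1), moved_out_on=date(2024, 1, 1)),
]
kept = purge_expired(records, today=date(2024, 6, 1))
print([r.resident_id for r in kept])  # → ['alice']
```

The point of the sketch is not the ten lines of code but the fact that such a rule exists at all, is written down, and runs on a schedule, rather than leaving former residents’ faces in the system indefinitely.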
I’m assuming visitors and co-leasers have biometrics stored as well. How often do these systems actually fail to delete the data? If it isn’t deleted, can former residents come back years later and still get in? And if someone obtains a restraining order against a co-leaser, do stored biometrics delay what would otherwise be a simple lock change, creating an even greater security risk?