As a result, many sectors have been embracing biometric technology and incorporating it into their security systems.
This has been especially true for institutions whose sensitive business operations demand top-notch security.
Some renowned banks that have been using biometric security solutions include Barclays, Citi and HSBC, to mention a few. In fact, a recent Visa study that surveyed randomly chosen consumers in seven European countries (the UK included) found that over 68% of customers would actually prefer to use biometric technology for making payments, reflecting their confidence in the technology's reliability.
Unfortunately, new research suggests that biometric technology may not be as robust as we've all previously believed. With something as simple as a photo you posted online years ago, cyber stalkers can use 3D printing to create a virtual reality (VR) face mask that can beat facial security systems with roughly 80% accuracy.
I know – it’s a little creepy, right?
Researchers from the University of North Carolina ran a series of experiments and concluded that 3D printed models or masks, based on photos you may have posted on Facebook or Instagram, can be used by malicious actors to crack at least four out of five of the facial security systems we depend on today.
For people who are not usually ‘obsessed’ with sharing photos on social media platforms, this news might bring a small sigh of relief. Unfortunately, there is still reason to worry. In their paper “Virtual U: Defeating Face Liveness Detection by Building Virtual Models from your Public Photos”, the UNC researchers explain that even if you are cautious about posting photos of yourself online, you don’t have as much control over the matter as you may think.
Your friends and family might just as easily post photos you’d taken with them. The team says that as few as three photos – even ones taken years ago at low quality – are enough to produce a 3D model good enough for a third party to crack facial recognition systems.
Test Done on 20 Volunteers
This rather chilling discovery about the loopholes in facial authentication systems was demonstrated earlier this month by the North Carolina researchers at the USENIX Security conference. They presented a system that uses photos taken from Facebook and other social networks to create digital 3D facial models that easily beat today’s facial recognition software.
The demonstration was done on a group of 20 volunteers who, being security researchers themselves, were careful not to share too much online. One participant in particular had uploaded only two photos in the last three years.
Image searches were run to collect as many photos as possible of the twenty volunteers, mainly through Facebook, Instagram, LinkedIn and Google+. When the image data mining was over, the researchers had found between 3 and 27 online photos of each participant.
Study author True Price said:
“We could leverage online pictures of the [participants], which I think is kind of terrifying. You can’t always control your online presence or your online image.”
Using the photos they had gathered, the UNC researchers created 3D models of the volunteers’ faces and adjusted the models’ eyes to look directly into a smartphone’s camera. They even added animations to the facial models, such as frowning or smiling, to make them as detailed as their human counterparts.
For volunteers whose online photos did not show their whole faces, the researchers improvised: they added textures and shadows and recreated the missing parts of the models. When all this was done, the face models were so detailed that they moved just as a real human face would when the devices were rotated. The researchers said:
“To an observing face authentication system, the depth and motion cues of the display exactly match what would be expected for a human face.”
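The hole-filling step described above can be illustrated with a toy sketch. Assuming rough facial symmetry, missing texture pixels can be borrowed from their horizontal mirror image. This is only an illustrative stand-in – the function name and the symmetry shortcut are my assumptions, not the researchers’ actual pipeline, which is far more sophisticated:

```python
import numpy as np

def fill_missing_by_symmetry(texture: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill unknown texture pixels from their horizontal mirror, where known.

    texture: HxW grayscale face texture.
    mask:    boolean HxW array, True where pixel values are known.

    Toy illustration of hole-filling under a facial-symmetry assumption;
    NOT the method used in the Virtual U paper.
    """
    filled = texture.copy()
    mirrored = texture[:, ::-1]          # texture flipped left-to-right
    mirrored_mask = mask[:, ::-1]        # which mirrored pixels are known
    # Fill only pixels that are unknown here but known on the mirror side.
    fill_here = (~mask) & mirrored_mask
    filled[fill_here] = mirrored[fill_here]
    return filled
```

A face crop with, say, only its left half visible would have its right half populated with the mirrored left-half pixels; pixels missing on both sides are simply left untouched.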
Five consumer facial authentication systems, available through the iTunes Store and the Google Play Store, were then tested: KeyLemon, TrueKey, Mobius, BioID and 1U App. The results were astounding. Four of the five systems were defeated, with success rates generally ranging from 55% to 85%.
That’s not all: when the researchers took clear head shots of the participants in an indoor setting, all five facial security systems were cracked. They affirmed:
“Our exploitation of social media photos to perform facial reconstruction underscores the notion that online privacy of one’s appearance is tantamount to online privacy of other personal information, such as age and location.”
How to Improve Facial Recognition Systems
This research and demonstration raise many questions. Does it mean that people should no longer share photos online? Should biometric technology be abandoned altogether? The researchers concluded that the threat cannot be eliminated entirely in the age of social media. The only viable solution is to improve facial security systems so that they remain difficult to fool even when presented with detailed 3D facial models.
According to them, facial recognition systems can be designed to detect fraud by rejecting synthetic faces with low-resolution textures. They recommended equipping facial security systems with features such as illuminated infrared sensors, light-projection patterns and the ability to detect minute skin-tone fluctuations.
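The low-resolution-texture check mentioned above can be sketched in a few lines: a Laplacian-variance score measures how much fine detail a face crop contains, and crops below a threshold get flagged as possibly synthetic. This is a minimal illustration of the idea, not the researchers’ implementation – the function names and the threshold value are assumptions:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour discrete Laplacian: a simple texture-detail score."""
    # Compute the Laplacian via array shifts (no external image library needed).
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def looks_synthetic(gray: np.ndarray, threshold: float = 0.005) -> bool:
    """Flag a grayscale face crop whose texture detail is too low to be live skin.

    `threshold` is an illustrative value; a real system would calibrate it
    against genuine and spoofed samples.
    """
    return laplacian_variance(gray) < threshold
```

A rendered or printed face tends to lose high-frequency skin detail, so its Laplacian variance drops; real deployed liveness checks combine many such cues (infrared, depth, motion) rather than relying on any single score.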
Even with all these proposals for making facial security systems more robust, the UNC researchers believed much remained to be done before such authentication systems could reliably protect unsuspecting individuals against malicious attacks. They had this to say:
“Even if a system is able to robustly detect a certain type of attack – be it using a paper printout, a 3D-printed mask, or our proposed method – generalizing to all possible attacks will increase the possibility of false rejections and therefore limit the overall usability of the system. The strongest facial authentication systems will need to incorporate non-public imagery of the user that cannot be easily printed or reconstructed, [such as a skin heat map].”
From all this, it is clear that facial security systems have a lot of catching up to do if they are to prevent technology-fuelled fraud. Check out these other articles about 3D Printing and Crime and 3D Printing and Security Breaches.
If you’d like to learn more about 3D printing then feel free to try my new Introduction to 3D Printing at Home online course.
Thanks for reading.