We talked about how biometrics can be used for multifactor authentication in order to verify someone’s identity. Biometric authentication refers to something you are (as opposed to something you know or something you have), which means we can use a number of techniques to verify someone based on their unique biological features. These include:

- Fingerprints
- Retina & iris scans
- Facial recognition
- Voice recognition
- Vein recognition
We can also look at using something called gait analysis.
In fact, anything that involves the idea of human biology being used in combination with technology counts.
We’ve seen this in movies for quite a long time: if you’ve watched X-Men, you know the door to Cerebro requires an eye scan for entry… The replicator in Star Trek responds to a voice print. Pretty much every James Bond or high-tech movie uses some other sort of biometrics for a cool effect.
However, using biometrics is not an exact science and can sometimes lead to errors. That’s why we’ll also talk about concepts like:
- Efficacy rates
- False acceptance
- False rejection
- Crossover error rate
These are all concepts you should understand if you are to ever deploy any sort of biometric authentication, and/or if you ever plan on taking the CompTIA Security+ exam.
With that said, let’s get started with biometric techniques!
Fingerprints

Using fingerprints to identify someone probably doesn’t need much explaining. Fingerprint readers have been embedded in smartphones, laptops, and even access doors for many years now.
Fingerprint recognition works well as a form of authentication because the chance that anyone else in the world shares your exact fingerprint is so tiny that fingerprints are generally recognized as a unique biological feature. That’s why many systems use fingerprint readers or scanners to verify that you are who you claim to be.
Fingerprint recognition was common on both Android and iOS devices until Apple started leaning more toward facial recognition with Face ID, which we’ll talk about further down. It is still one of the primary forms of authentication on Android devices.
To set this up, you’re asked to roll your finger or hold it at certain angles. That’s because the device needs several samples of the fingerprint to be accurate in real-life scenarios, where your finger may not land perfectly straight on the sensor.
Retina & Iris
The retina is something else about you that’s unique. It’s a layer of tissue in the back of your eye that senses light and sends images to your brain. It’s such a complex biological structure that everyone’s retina is unique, making retinal scans another authentication option.
The iris is also part of the eye, but it sits toward the front, and it’s what gives your eyes their color. Iris scanners can pick up the patterns of your iris, which are also unique to you.
Facial recognition

Facial scanning or recognition is a way of identifying a human being based on their facial features. This can be used in cameras for real-time identification, or it could also be used for later analysis.
Facial recognition is now the primary way that iPhones authenticate users, via what Apple calls Face ID.
One of the issues with Face ID is that you typically have to be facing the phone a certain way for it to work. If you’re picking up the device from an angle, it may not recognize your face. Another issue surfaced in 2020 and 2021, when most people wore masks in public and the phone wouldn’t recognize them. Apple later pushed an update that allows two Face ID profiles, so you can have your regular facial recognition plus a mask facial recognition.
One of the benefits of Face ID over fingerprint scanning is that you don’t have to place your finger a certain way on the device to unlock it, and wearing gloves isn’t as much of a hassle.
We’d be curious to hear your opinion in the comments, though — do you prefer fingerprint or facial recognition for mobile phones? Let us know!
Voice recognition

Our voices also have unique characteristics, and we can use systems to scan the sound of our voice to identify humans.
As artificial intelligence becomes more and more advanced, it can now mimic other people’s voices in a highly realistic way using nothing but open-source software, which means voice recognition carries risks that some of the other biometrics do not.
In fact, many of these voice recognition services now explicitly ask whether you have permission to train their machine learning models with that voice (i.e., that it’s your own voice and not someone else’s). Of course, I doubt they thoroughly verify this, and anyone can grab open-source software to run their own training.
Vein recognition

Veins in your body can also be used to identify you. As an example, we can use something called palm vein identification.
This technology detects blood flowing through the user’s palm using infrared light in order to map their vein structure.
I’ve actually had this technique used before in order to take a certification exam from ISC2, so this is definitely not just a theoretical identification technique.
Gait analysis

Gait analysis is an interesting one. It’s the analysis of how someone moves: body movements, muscle activity, and overall body mechanics.
Everyone walks, moves, and gestures a bit differently. We all develop our own “style” which can be identified by the right equipment and analysis software.
This is not commonly used for identification because it’s definitely not the cheapest option, and it’s arguably more error-prone than other, much cheaper and more easily implemented techniques.
Biometrics can be tricky to rely on because they may not always be reliable. For one, biometrics require that we have a database with a baseline. If we want to compare fingerprints, we need to have an existing image of the person’s fingerprint to properly identify them. What if the baseline image was poorly taken?
Or what if the user damaged their fingertip, or has lotion that prevents the machine from properly reading their fingerprint?
That also means we need to invest in good quality equipment to properly implement biometric authentication.
Efficacy rates

All of that feeds into what’s called an efficacy rate, which measures how well a particular technique or device works.
For example, NIST ran a study and published results in 2004 that showed the best fingerprint scanning systems were accurate 98.6 percent of the time when testing a single finger. That gives us an indication of the efficacy rate.
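As a quick back-of-the-envelope check, here is what that accuracy figure implies in practice. The 10,000-attempt volume below is a made-up number for illustration, not from the NIST study:

```python
# Rough arithmetic on the 2004 NIST figure of 98.6% single-finger accuracy.
# The 10,000-attempt volume is a hypothetical number for illustration only.
accuracy = 0.986
attempts = 10_000

errors = attempts * (1 - accuracy)
print(f"Expected errors out of {attempts:,} attempts: {errors:.0f}")  # → 140
```

At scale, even a seemingly high accuracy rate produces a steady stream of errors that have to go somewhere.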
98.6 percent, however, is not 100 percent. What about the remaining 1.4 percent? Of those attempts, how many people were accepted by the system even though they shouldn’t have been? How many times did the system improperly grant access to a user who should have been denied?
That is called the false acceptance rate, also known as FAR.
Or, what if the user scanned their finger and was rejected when they should have been accepted? That’s called the false rejection rate, or FRR.
This could start to frustrate users or cause them to be unable to access systems or locations that they need to be able to access.
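These two rates are easy to demonstrate with a few lines of code. The match scores and the 0.60 threshold below are made-up illustrative values; a real matcher’s scoring scale would come from the vendor:

```python
# Hypothetical similarity scores (0.0-1.0) from a biometric matcher.
# "Genuine" attempts come from the legitimate user; "impostor" attempts
# come from everyone else. All values are made up for illustration.
genuine_scores  = [0.91, 0.88, 0.62, 0.95, 0.79]   # should be accepted
impostor_scores = [0.12, 0.55, 0.78, 0.30, 0.41]   # should be rejected

def far(impostor, threshold):
    """False acceptance rate: fraction of impostor attempts accepted."""
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(genuine, threshold):
    """False rejection rate: fraction of genuine attempts rejected."""
    return sum(s < threshold for s in genuine) / len(genuine)

print(f"FAR: {far(impostor_scores, 0.60):.0%}")  # one impostor slips through -> 20%
print(f"FRR: {frr(genuine_scores, 0.60):.0%}")   # no genuine user rejected -> 0%
```

Raising the threshold drives the FAR down but pushes the FRR up, which is exactly the tension the crossover error rate captures.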
Crossover error rate
In the middle of false acceptance and false rejection, we have the crossover error rate, also known as CER.
The CER is achieved at the point where the FRR and FAR are equal. This is the sweet spot organizations may strive for as they tune the sensitivity of their biometric authentication devices.
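To see how the CER falls out of that tuning process, here is a small sketch that sweeps the acceptance threshold over made-up match scores and reports where FAR and FRR meet:

```python
# Sketch: find the crossover error rate (CER) by sweeping the matcher's
# acceptance threshold. All match scores are made-up illustrative values.
genuine_scores  = [0.91, 0.88, 0.62, 0.95, 0.79, 0.84, 0.58, 0.90]  # legitimate user
impostor_scores = [0.12, 0.55, 0.78, 0.30, 0.41, 0.66, 0.22, 0.49]  # everyone else

def far(threshold):
    """Fraction of impostor attempts accepted at this threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def frr(threshold):
    """Fraction of genuine attempts rejected at this threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

# Sweep thresholds in steps of 0.01 and keep the one where FAR and FRR
# are closest; their (roughly equal) value at that point is the CER.
best = min((t / 100 for t in range(101)), key=lambda t: abs(far(t) - frr(t)))
print(f"threshold = {best:.2f}: FAR = {far(best):.0%}, FRR = {frr(best):.0%}")
```

With these particular made-up scores the sweep lands on a threshold of 0.63, where both rates sit at 25 percent; that shared value is the CER.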
Whether you are studying for your CompTIA Security+ exam, or whether you’re looking to deploy a biometric solution for your organization, there’s a lot more to it than just picking the technique. We also have to consider efficacy rates, false acceptance, and false rejections.
Otherwise, we may end up implementing the wrong biometric solution for our use case, or we may not configure that solution properly.
Hopefully, this article helped you gain a better understanding of what your options are and what you need to consider.