In the last two posts I ranted about passwords and then asked if there is something better. If you are immediately thinking “use passphrases instead of passwords”, please consider that humans are still too lazy… However, as lazy as they are, security experts have found a way to “enhance” password security, by making the entire authentication process even more complex.
One of the first alternatives, or rather enhancements, to passwords is the use of Multi-Factor Authentication.
The best-known additional factor of authentication involves some external (out-of-band) device during authentication. This is the infamous “something you have” factor, which normally involves a one-time password sent via SMS or generated directly on a mobile device (a phone, or a dedicated code-generating token). Users who wish to authenticate to the system now have to use this external device as well. Even if a user’s password is compromised, a malicious actor cannot do much with it on that system without the authenticator.
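To make the “generated directly on a mobile device” part concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) generator, the scheme behind most authenticator apps. It derives a short code from a shared secret and the current 30-second time window, so the server can compute the same code independently. The secret and timestamp below come from the published RFC test vectors; a real deployment would use a randomly generated, base32-encoded secret.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                       # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at Unix time 59 (SHA-1, 30s step) the 6-digit code is 287082
print(totp(b"12345678901234567890", timestamp=59))  # → 287082
```

Because the code changes every 30 seconds and never travels over the network ahead of time, a stolen password alone is not enough to log in.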
Remember, password reuse remains rampant, and a compromised password might still be usable on other systems.
To a systems administrator this enhanced security can sound wonderful. However, there are additional costs associated with it.
In a blog post by Duo, an access-control company, the costs associated with multi-factor authentication are split into three categories:
- Upfront Costs
- Deployment Costs
- Ongoing Costs
One type of cost that falls under all three of these categories involves humans. Essentially, the workforce needs to be trained, encouraged, and supported. The sudden use of a new device can lead to frustration (especially when users keep forgetting the thing at home), and a lost device (replacement and administration) carries a productivity cost for the company.
Once again, the human factor is holding us back in our endeavour for improved security! But what if we play to the human’s strengths, using more unconventional authentication methods that do not directly burden the user?
Graphical passwords rely on the human brain’s ability to remember and recognise images (as opposed to words) for authentication. Systems such as Passfaces™ were explicitly developed for this purpose, albeit as a second factor of authentication. The user must first memorise a set of faces (usually between 3 and 7) and then, during authentication, pick them out. However, an investigation showed that using Passfaces™ led to slower authentication and provoked user resistance.
If the human is holding us back, let us involve the human wholly in our search for ultimate security.
Humans and security – if you cannot beat them, use them. Thanks to their proliferation in television and movies, people are no longer strangers to placing a finger on a pad for a fingerprint scan or staring blankly at a camera to have their face recognised. These elements, these biometric identifiers, are used as a means of identifying the person actively providing the biometric. Commonplace biometric identifiers include fingerprints, retina scans and voice recognition.
Conspiracy theorists aside, there is a widespread belief that the use of biometrics is completely secure. However, that may ultimately be a false sense of security. Not only is it possible that fingerprints are not completely unique, but the very way in which biometric systems are designed is flawed.
As biometric authentication needs to make a digital decision from some organic input (a fingerprint, etc.), some decisions need to be made along the way. This is the biggest failing of a biometric system – its decision policy.
The first major component of the decision policy is the “false rejection rate” (FRR): the rate at which correct biometrics are judged to be incorrect. This could cause a valid user to be denied access just because their finger is a bit greasier than normal. Annoying, but not a big problem; the next rate far outweighs it.
When any policy contains something called a “false acceptance rate” (FAR), that should be cause for concern. The FAR is the rate at which fake or incorrect biometrics are judged to be genuine. Yes, the rate at which a malicious actor could potentially breach security is quantifiable.
These two rates need to be taken into account when the decision policy for any biometric authentication system is chosen. A balance must be struck between the FAR and the FRR: depending on the security the system actually requires, the policy can favour false rejections (for better security) or false acceptances (for happier users).
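The FAR/FRR trade-off boils down to where you set the matcher’s acceptance threshold. The sketch below is purely illustrative – the scores are made up, and real biometric matchers produce similarity scores in their own ranges – but it shows how the same two rates are measured and how raising the threshold trades false acceptances for false rejections.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Compute false acceptance and false rejection rates at a given
    match-score threshold (higher score = better match).

    FAR: fraction of impostor attempts wrongly accepted.
    FRR: fraction of genuine attempts wrongly rejected.
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Toy matcher scores (hypothetical): genuine users vs impostors.
genuine = [0.91, 0.85, 0.78, 0.95, 0.66]
impostor = [0.30, 0.45, 0.72, 0.20, 0.55]

for t in (0.5, 0.7, 0.9):
    far, frr = far_frr(genuine, impostor, t)
    print(f"threshold={t}: FAR={far:.0%}, FRR={frr:.0%}")
```

A strict threshold (0.9) accepts no impostors but locks out most genuine users; a lax one (0.5) does the opposite. Picking the operating point between the two is exactly the decision policy described above.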
While great for a false peace of mind, biometrics are inherently fallible. But because they involve the physical human in authentication, they feel more secure.
Maybe we can create some kind of utopia? I wonder…