

Babbage

Biometrics

The Difference Engine: Dubious security

Oct 1st 2010, 8:22 by N.V. | LOS ANGELES

THANKS to gangster movies, cop shows and spy thrillers, people have come to think of fingerprints and other biometric means of identifying evildoers as being completely foolproof. In reality, they are not and never have been, and few engineers who design such screening tools have ever claimed them to be so. Yet the myth has persisted among the public at large and officialdom in particular. In the process, it has led—especially since the terrorist attacks of September 11th 2001—to a great deal of public money being squandered and, worse, to the fostering of a sense of security that is largely misplaced. 

Authentication of a person is usually based on one of three things: something the person knows, such as a password; something physical the person possesses, like an actual key or token; or something about the person’s appearance or behaviour. Biometric authentication relies on the third approach. Its advantage is that, unlike a password or a token, it can work without active input from the user. That makes it both convenient and efficient: there is nothing to carry, forget or lose. 

The downside is that biometric screening can also work without the user’s co-operation or even knowledge. Covert identification may be a boon when screening for terrorists or criminals, but it raises serious concerns for innocent individuals. Biometric identification can even invite violence. A motorist in Germany had a finger chopped off by thieves seeking to steal his exotic car, which used a fingerprint reader instead of a conventional door lock. 

Another problem with biometrics is that the traits used for identification are not secret, but exposed for all and sundry to see. People leave fingerprints all over the place. Voices are recorded and faces photographed endlessly. Appearance and body language are captured on security cameras at every turn. Replacing misappropriated biometric traits is nowhere near as easy as issuing a replacement for a forgotten password or lost key. In addition, it is not all that difficult for impostors to subvert fingerprint readers and other biometric devices. 

Biometrics are almost as old as mankind. Hand-prints that accompanied cave paintings from over 30,000 years ago are thought to have been signatures. The early Egyptians used body measurements to ensure people were who they said they were. Fingerprinting as a formal means of identification dates back to the late 1800s. More recently, computers have been harnessed to automate the whole process of identifying people by biometric means. 

Any biometric system has to solve two problems: identification (“who is this person?”) and verification (“is this person who he or she claims to be?”). It identifies the subject using a “one-to-many” comparison to see whether the person in question has been enrolled in the database of stored records. It then verifies that the person is who he or she claims to be by using a “one-to-one” comparison of some measured biometric against one known to come from that particular individual.
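
For concreteness, here is a minimal Python sketch of the two operations; the template format, the similarity score and the threshold are invented stand-ins for illustration, not any real vendor's API:

    import numpy as np

    THRESHOLD = 0.9  # operating point; chosen by whoever runs the system

    def match_score(a, b):
        # Cosine similarity between two biometric templates (feature vectors);
        # lies in [0, 1] for non-negative features. Real matchers are far fancier.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(sample, database):
        # One-to-many: search every enrolled record for the best match.
        best_id, best_score = None, 0.0
        for person_id, template in database.items():
            score = match_score(sample, template)
            if score > best_score:
                best_id, best_score = person_id, score
        return best_id if best_score >= THRESHOLD else None

    def verify(sample, claimed_id, database):
        # One-to-one: compare against the single record for the claimed identity.
        return match_score(sample, database[claimed_id]) >= THRESHOLD

Note that both operations ultimately reduce to a score compared against a threshold, which is where the trouble discussed below begins.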

Scanning the fibres, furrows and freckles of the iris in the eye is currently the most accurate form of biometric recognition. Unfortunately, it is also one of the most expensive. Palm-prints are cheaper and becoming increasingly popular, especially in America and Japan, where fingerprinting has been stigmatised by its association with crime. Even so, being cheap and simple, fingerprints remain one of the most popular forms of biometric recognition. But they are not necessarily the most reliable. That has left plenty of scope for abuse, as well as miscarriages of justice. 

The eye-opener was the arrest of Brandon Mayfield, an American attorney practising family law in Oregon, for the terrorist bombing of commuter trains in Madrid in 2004 that killed 191 people. In the paranoia of the time, Mr Mayfield had become a suspect because he had married a woman of Egyptian descent and had converted to Islam. A court found the fingerprint retrieved from a bag of explosives left at the scene, which the Federal Bureau of Investigation (FBI) had “100% verified” as belonging to Mr Mayfield, to be only a partial match, and then not for the finger in question.

As it turned out, the fingerprint belonged to an Algerian national, as the Spanish authorities had insisted all along. The FBI subsequently issued an apology and paid Mr Mayfield $2m as a settlement for wrongful arrest. But in its rush to judgment, the FBI did more than anything, before or since, to discredit the use of fingerprints as a reliable means of identification. 

What the Mayfield case teaches about biometrics in general is that, no matter how accurate the technology used for screening, it is only as good as the system of administrative procedures in which it is embedded. That is also one of the findings of a five-year study (“Biometric Recognition: Challenges and Opportunities”) published on September 24th by the National Research Council in Washington, DC. 

The panel of scientists, engineers and legal experts who carried out the study concludes that biometric recognition is not only “inherently fallible”, but also in dire need of some fundamental research on the biological underpinnings of human distinctiveness. The FBI and the Department of Homeland Security are paying for studies of better screening methods, but no one seems to be doing fundamental research on whether the physical or behavioural characteristics such technologies seek to measure are truly reliable, and how they change with age, disease, stress and other factors. None looks stable across all situations, says the report. The fear is that, without a proper understanding of the biology of the population being screened, installing biometric devices at borders, airports, banks and public buildings is more likely to lead to long queues, lots of false positives, and missed opportunities to catch terrorists or criminals. 

What is often overlooked is that biometric systems used to regulate access of one form or another do not provide binary yes/no answers like conventional data systems. Instead, by their very nature, they generate results that are “probabilistic”. That is what makes them inherently fallible. The chance of producing an error can be made small but never eliminated. Therefore, confidence in the results has to be tempered by a proper appreciation of the uncertainties in the system. 
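
A toy Python illustration of the point (the score distributions below are invented): wherever the decision threshold is set, it only trades one kind of error for the other.

    import random

    random.seed(1)
    # Pretend match scores: genuine comparisons score high on average,
    # impostor comparisons lower, but the two distributions overlap.
    genuine  = [random.gauss(0.80, 0.08) for _ in range(100_000)]
    impostor = [random.gauss(0.55, 0.08) for _ in range(100_000)]

    for t in (0.60, 0.70, 0.80):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accept rate
        frr = sum(s <  t for s in genuine)  / len(genuine)   # false reject rate
        print(f"threshold {t:.2f}: FAR {far:.3%}, FRR {frr:.3%}")

Raising the threshold shrinks the false accepts but swells the false rejects, and vice versa; no setting makes both zero.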

On the technical side, such uncertainties may stem from the way the sensors were calibrated during installation, or how their components degrade with age. Maybe the data get corrupted by inappropriate compression, or by bugs in the software that surface only under sporadic conditions. The sensors may be affected by humidity, temperature and lighting conditions. Effects may be aggravated by the need to achieve interoperability between different proprietary parts of the system. There are endless ways for performance to drift out of true. 

On the behavioural side, uncertainties may arise from an incomplete understanding of the distinctiveness and stability of the human traits being measured. The attitude of people using the system may affect the results. So will their experience with, or training for, such scanning equipment. 

Either way, if impostors or wanted criminals show up only rarely, even recognition systems with very accurate sensors will produce a lot of false alarms. And when a system generates a fair number of false positives relative to the remote possibility of a true positive, operators inevitably become lax. That is a fact of life. And when that happens, it defeats the whole objective of having a screening process in the first place.
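
Some illustrative arithmetic (the figures are invented, but the shape of the result is robust):

    # 10m travellers screened; 1 in a million is a genuine target.
    # Suppose the scanner catches 99% of targets and falsely flags
    # only 0.1% of everyone else.
    travellers = 10_000_000
    targets    = travellers * 1e-6                 # 10 real targets
    true_hits  = targets * 0.99                    # ~10 genuine alarms
    false_hits = (travellers - targets) * 0.001    # ~10,000 false alarms
    print(false_hits / (true_hits + false_hits))   # ~0.999: virtually every alarm is false

A thousand false alarms for every real one is exactly the regime in which operators learn to wave people through.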

The body of case law on the use of biometric technology is growing, with some recent cases asking serious questions about the admissibility of biometric evidence in court. Apart from privacy and reliability, biometric recognition raises important issues about remediation. Increasingly, we can expect the courts to use remediation as a way of addressing both lax and fraudulent use of biometrics, especially for individuals (like Mr Mayfield) who have been denied their due rights because of an incorrect match or non-match in some screening process. 

The biometrics industry has a vital role to play in these threatening times. But it would win broader acceptance if it paid greater attention to the concerns and cultural values of the people being scanned. And everyone would be better served if a good deal more were known about what it is, biologically, that makes each and every one of us a unique human being.

Readers' comments

Safran may care to consider the experience of GMAC, the organisation that represents 1,800 business schools worldwide. Faced with impostors taking its entrance exam, GMAC spent two years using flat-print fingerprinting technology to verify candidates' identities. It didn't work; GMAC has dropped it and is now trying palm-vein biometrics.

Meg Hillier MP, here in the UK, delights in upbraiding the government for cancelling ID cards and scrapping plans to put fingerprints in British passports. Is she right? Or are GMAC right?

Meanwhile the UK Border Agency has deployed smart gates at 10 airports. These gates compare your face with the picture in your passport. When the gates were tested in Manchester, there were so many false negatives -- the machines said that you are not you -- that the matching tolerance had to be dropped to 30%. At that point, according to one biometrics expert, the machines couldn't distinguish between Osama bin Laden and Winona Ryder.

Once we've convinced the UKBA to stop wasting our money, there is a second matter to consider. Note that the tolerance level can be varied by the operator, whether Manchester airport or the FBI or ... The identity they ascribe to you is discretionary: they can assert that you are you, or that you are not, depending only on how they set the machines. That is not how we usually think of personal identity.

At the root of it all lies a mathematical conundrum (Bayes' theorem, which bedevils many other fields as well): you can't tell how effective any biometric device is unless you know the rate at which people are trying to subvert it.

For air travel, that rate is definitely not known to the precision needed for America's TSA (or any other country's agency, for that matter) to evaluate its equipment properly. Absent this rather key number, they can claim whatever "effectiveness" they wish to justify buying these machines, and no one can prove otherwise.

However, when combined with TSA's terrorist "Watch List" numbers, Bayes' theorem does lead to a rather interesting result:

a) If the nearly million-strong "watch list" is full of real terrorists who travel regularly, then these machines are nigh on useless.

b) If the machines are working well, then there can't be that many terrorists running about, and the watch list should be pared down by a factor of 20-50 or so to be effective.

Fundamental math then proves TSA can't have it both ways, but I doubt that will stop them. They'll continue to buy the equipment and expand the watch list. Such is the nature of bureaucracy.
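
A rough Python sketch of the underlying Bayes arithmetic (every figure below is invented for illustration): the same machine's alarms are either mostly credible or almost entirely noise, depending solely on the assumed prior, which is precisely the number nobody knows.

    # Bayes: P(threat | alarm) = P(alarm | threat) * P(threat) / P(alarm)
    def p_threat_given_alarm(prior, hit_rate=0.99, false_alarm_rate=0.001):
        p_alarm = hit_rate * prior + false_alarm_rate * (1 - prior)
        return hit_rate * prior / p_alarm

    # If 1 traveller in 300 really were an active threat:
    print(p_threat_given_alarm(1 / 300))   # ~0.77: alarms mostly real
    # If 1 traveller in 10 million were:
    print(p_threat_given_alarm(1e-7))      # ~0.0001: alarms almost all false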

My money's on the machines. If there's really a million active terrorists hitting the airways regularly, then we're doomed anyway.

The primary form of biometrics, historically and today, is face recognition, generally performed by a human. Every driver's license and passport carries a picture of a person's face. We also do voice recognition (does he sound like an Italian?) routinely. So we crossed this threshold long ago. The question is making it better.

The trick with all these things is, as the author points out, understanding the statistics. No single biometric scheme is foolproof, but if you use several independent means, the likelihood of getting it wrong plummets.
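
A quick sketch of that multiplication (the rates are invented; the catch is that real modalities are rarely fully independent):

    # If each modality wrongly accepts an impostor 1% of the time, and the
    # errors really are independent, demanding all three cuts the rate to:
    fingerprint, iris, face = 0.01, 0.01, 0.01
    print(fingerprint * iris * face)   # 1e-06: one impostor in a million passes all three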

That said, the caution against a rush to judgment is very well taken. Even if the chance of two people having the same iris scan were one in 100 million, you could expect a coincidentally matching pair in roughly every other town of 10,000 people (10,000 x 9,999 / 2 ~= 50 million possible pairs of people).
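
The arithmetic behind that, as a quick Python check (assuming each pair of people matches independently with the quoted one-in-100m probability):

    def expected_matching_pairs(n, p=1e-8):
        # n people form n*(n-1)/2 distinct pairs, each matching with probability p.
        return n * (n - 1) / 2 * p

    print(expected_matching_pairs(10_000))   # ~0.5: about one matching pair per two such towns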

Yes, fingerprints are easily forged ... or lost, if you have a particular medical condition like I do: http://wp.me/ppqxP-9j

The biometrics industry has been dishonest from the start. "Biometrics" was a term introduced, I think, in 1947 by Ronald Fisher and Sewall Wright to describe the use of mathematics and statistics in biology. But, since no one thought it necessary to register the name, it was easy for Gates et al to steal it. One should never trust intellectual thieves.

An unfortunate side effect of the probabilistic nature of results from a biometric system is the inability to securely hash the biometric data prior to storage.
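
A small Python demonstration of why (the template bytes are made up): cryptographic hashes require bit-for-bit identical input, and noisy biometric readings never oblige.

    import hashlib

    # Two readings of the same finger, differing by a single bit of sensor noise.
    reading1 = bytes([0x5A, 0x13, 0x9C, 0x22])
    reading2 = bytes([0x5A, 0x13, 0x9C, 0x23])

    print(hashlib.sha256(reading1).hexdigest())  # the two digests share nothing,
    print(hashlib.sha256(reading2).hexdigest())  # so hashed templates cannot be compared

Passwords can be stored hashed because they either match exactly or not at all; a probabilistic "close enough" match defeats that, forcing systems to store raw templates or resort to more exotic schemes such as fuzzy extractors.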

Biometric identification based on chips and data, and RFID technology in particular, is a lot less safe than "old" ways of identification.

True, it is possible to copy a passport, but it is difficult, and such forgery skills are increasingly rare. It is easier to fake an RFID identity, because data are much more easily faked than an actual paper ID, especially when all the data are stored centrally and online privacy practices are in general so poor that forging pictures and the like is easy.

There are many cases of wireless data being intercepted, of central databases being lost or stolen, and of data being forged in security breaches. Considering all this, it will be exceptionally easy for people in the future to copy fingerprints, photos and other information to make a fake electronic ID, rather than having to show a real passport, in person, to someone who can verify it.

Nothing beats humans, certainly not machines. Yet. I will not be trusting biometrics and data security.

"The FBI subsequently issued an apology and paid Mr Mayfield $2m as a settlement for wrongful arrest."

If the FBI paid $2m to everyone who was wrongfully identified, I actually think the system would be quite popular.

Mr Mayfield and many others will never forget how it felt to be humiliated and arrested by people who put their faith in match probabilities of 30-50%. I encourage you to buy the most expensive fingerprint reader you can find and have a go at it with a few dozen people...

I'll tell you what: since we work in a fascinating and very revealing industry, we'll use your login credentials, your IP address and a simple phone-registry lookup, along with a few government databases, to bring the numbers within a reasonable margin of error, and then red-flag you and a few of your family members on our systems during one of the next ten international flights you take.

Perhaps you'll be visiting Ireland or the UK again... Yes, we already performed a cursory check and can confirm that you even lived there. We do this regularly to test our technology and improve the probabilities, because 10%, 20% or 30% is still a lot better than zero per cent.

Likely scenario:

During one of your travels, you will be taken aside and asked some very personal and revealing questions, such as why you owned a Porsche Boxster but drive a VW Jetta instead, what you carried in such a car and where you drove it on certain dates; then you will be asked these questions again, and then again in a different, non-disclosed location. Let's see how you feel about the implementation of mathematical probabilities and unreliable technology after such an experience... are you really ready for that?

 

 
