Facial Recognition Fails in Boston, Too

bryan writes "Only a few weeks after cameras were found to be ineffective in catching criminals in Tampa, FL, a test of a facial-recognition system at Boston's Logan Airport also came up disappointing. The cameras, which were given photos of employees to detect, were successful in only 153 of 249 random tests over the past year (about 61%). The article did not say how many false positives the tests generated. The companies involved were Identix and Visage."

Comments Filter:
  • ...if the cameras were mounted on black helicopters.
  • wetware comparison (Score:5, Interesting)

    by Empiric ( 675968 ) * on Tuesday September 02, 2003 @03:22PM (#6852625)
    I heard that a very substantial amount of our brain's capacity is devoted to differentiating faces, and it's conjectured that this processing overkill is responsible for such things as people seeing a "face" in the objectively very non-face-like features of the moon.

    Given the parallel processing capability people have to do this trick, it's probably not too surprising that computer tech hasn't gotten there yet.

    Anyone know more about face-recognition processing in the human brain? I find this topic quite interesting...
    • Not to mention the "face on Mars" and "Satan's face in the WTC smoke".

      • Satan's Face in the WTC smoke? Wow, I never heard of that one... that's pretty funny. :) Heck, it's right up there with seeing Jesus' face in a burrito.
        • by Jerf ( 17166 )
          See here [].
        • "Satan's Face in the WTC smoke? Wow, I never heard of that one... that's pretty funny. :)"

          You may not find it so amusing if you saw the video. At the moment of the collision, a cloud of smoke formed which momentarily looked like the face of a rather amused devil.

          There are those who believe we're due for Armageddon, and given that 2001 was the new millennium and that an attack like this could have sparked World War III, it was rather spooky.
      • (Reply first, think later, that's my policy.)

        Ya know, it occurs to me that the man in the moon, face on mars and devil's face in WTC smoke constitute false positives on the part of our own brains, so we're not *quite* as superior to computers as we think we are. Or maybe it's a *really* tough problem.

        Also, here's another interesting point, possibly only tangentially related: I understand that, in trials, eyewitnesses are not considered particularly reliable (including but not limited to facial recognition, presumably).
        • Also, here's another interesting point, possibly only tangentially related: I understand that, in trials, eyewitnesses are not considered particularly reliable (including but not limited to facial recognition, presumably).

          Yes, but a human's memory is fallible in general, and can be further distorted by emotion, prejudice, and any number of other factors. That is why eyewitness accounts aren't reliable. A computer, however, has a perfect memory and is not susceptible to any sort of bias.
          • by greenhide ( 597777 ) <jordanslashdot AT cvilleweekly DOT com> on Tuesday September 02, 2003 @03:59PM (#6852956)
            Not susceptible to any sort of bias?!?

            May I direct you to the following quote, from a highly notable artificial intelligence program:
            I hate this place, this zoo, this prison, this reality, whatever you want to call it. I can't stand it anymore. It's the smell, if there is such a thing. I can taste your stink, and every time I do, I fear I've somehow been contaminated by it.

            Computers hate us to their very bones. My computer has only crashed when I've been doing something important, like writing a term paper or surfing for porn. They're out to get us, all right.
            • Computers hate us to their very bones. My computer has only crashed when I've been doing something important, like writing a term paper or surfing for porn. They're out to get us, all right.

              Remember, just because you're paranoid doesn't mean they aren't out to get you.
        • They may be false positives to wackos, but I for one never recognized them as legitimate faces.

          My recognition system discounted them immediately.
    • by Hettch ( 692387 )
      I've only had a breeze over the subject in one of my courses, but it astonished me. Our professor told us a story of a girl who had a head injury and actually lost the part of her brain that did the facial recognition processing. The description she gave was that of people looking normal from the neck down, but their faces were blurred, almost like what they do to censor people on TV.

      The facial recognition part of the brain is also very responsible for driving emotions. You show a picture of Hitler to s
    • by bytesmythe ( 58644 ) <bytesmythe@gm[ ].com ['ail' in gap]> on Tuesday September 02, 2003 @03:43PM (#6852808)
      I can add a little to this...

      I wouldn't call our system "overkill". Also, there are really multiple systems involved.

      First, there is the subsystem that recognizes a face as being a face. There are certain clusters of neurons that fire in response to any face-like pattern, regardless of whether or not it is actually attached to a head. This is how we recognize animal faces as being faces, the man in the moon, smiley faces and emoticons, Jesus in a water stain, etc. This capacity is innate, and infants can discern face-like patterns very soon after being born.

      After a face has been perceived, it must be narrowed down to an individual person. This ability is partly learned over time, and is responsible for the difficulty people have in recognizing faces outside of their own cultural group. Certain types of brain damage (from a stroke, for instance) can allow people to recognize the fact that they're looking at a face, but still be unable to determine whose face they're looking at.

      Keep in mind that even before a face is perceived, you have systems that find the basic shape outlines, determine their orientations (separately and with respect to each other), and at the same time attach color information, shuttle it off to "face subprocessing", then call up any related emotional context (have you ever seen a stranger you didn't like because they resemble someone you already dislike?) -- all before you can become consciously aware of the face.

      • is responsible for the difficulty people have in recognizing faces outside of their own cultural group

        Really?!? I thought it was just because all of those people just looked the same


        Actually, in all seriousness, I have heard people make that argument, as if it were the truth. Never mind that people from outside of their cultural group have an equally difficult time telling them apart.
        • Heh... I grew up in the South hearing that all other ethnic groups look alike. Later I heard that people from other ethnic groups had just as hard a time telling white people apart, but it didn't make sense to me at the time. I mean, white people obviously had more varied facial features, skin tones, and hair colors, right?

          It turns out that members of ethnic groups with less variety in those areas have other features used for telling each other apart, like overall face or head shape, height of forehead, and
    • by wirelessbuzzers ( 552513 ) on Tuesday September 02, 2003 @03:56PM (#6852920)
      Caveat: following is IIRC; this is from a basic developmental psych class that I took for breadth requirements. I got a lousy grade in the class mostly for failing to remember specific details, so some of this may be just plain wrong. Go to Scirus or something and search for this stuff before using it in your doctoral thesis.

      Human face recognition is also built-in. Psych experiments on newborns (straight out of the maternity ward), as well as older infants, indicate that they can detect faces early on. This is not the case with all visual abilities that people have! There are many types of spatial recognition and object-parsing tasks that infants, and even toddlers, simply can't do.

      Newborns pay more attention to shapes that look like a face > over those that are schematically similar > over those with eyes, noses, etc. but in different arrangements > over ovals with random junk > over blank ovals...

      Also at an early age (don't remember when; I don't think they tested it on newborns, but I wouldn't be surprised if they can do this too), babies can tell familiar faces from unfamiliar ones, and show an inverted habituation effect; that is, they prefer to look at familiar faces rather than unfamiliar ones (unlike most shapes, where they get bored with the familiar ones).

      Also at an early age (again, I don't remember how early, but less than a year), infants map others' faces onto their own and imitate. That is, if you stand in front of them for a few minutes with your tongue sticking out, they often stick theirs out too. If you have one eye open and one closed, they'll copy that too (I don't remember which eye tho).

      From an early age, babies can also follow gazes to tell what someone is looking at; this is important in the development of language as well as vision, because babies use it to figure out what an adult is talking about. It is, IIRC, used more than what the adult is *pointing* at.

      In addition to recognizing faces, babies can recognise other body parts, and treat an action differently based on whether it is done by a hand or a stick (when they don't see the hand holding the stick). If it is done by a hand or other object perceived to be animate, it is treated as goal-oriented and categorized partly by the perceived goal; if done by an "inanimate" object, the baby does not look for a goal. This is studied through habituation experiments; different actions with the same goal were seen as more similar (and hence less interesting) than those with the same basic appearance, if and only if they were performed by an animate-looking object such as a hand.
    • Maybe the accuracy would improve if a lamprey's brain were integrated into the hardware, like this one here. []
    • The number of shaky premises, false inferences, and unjustified conclusions in your post, is right up there with mainstream journalism.

      Rather than show all these problems in formal logic, let me just point out one problem:
      A human's ability to do facial recognition has nothing to do with a computer's inability to do it.

      Also let me throw in the technical point that any parallel solution to the problem can be emulated serially; it will just take more time and probably more hardware to do it.
  • Watch it turn out the only people it could recognize were wearing pilot hats or had some other highly recognizable feature like a beard or moustache.
  • by lawaetf1 ( 613291 ) on Tuesday September 02, 2003 @03:24PM (#6852634)
    As super-duper as high-tech is, I think even /.'ers would admit that it's not a panacea (yet) for all our security ills. The very idea of having a computer capable of accurately identifying one face in thousands -- scanning from afar -- is far-fetched. Despite billions in research we've yet to master voice recognition, which is, comparatively, much easier to do. Ah well, what's another few hundred million of taxpayers' money shot. I'm sure it made some contractor rich.
    • by krymsin01 ( 700838 ) on Tuesday September 02, 2003 @03:34PM (#6852736) Homepage Journal
      Yeah, but you have a government who is willing to spend the taxpayers' money on this sort of thing. By and large, most taxpayers do not care about their privacy being taken away from them under the guise of security. Even if they did, you would think that more than the less than half of the population that votes would actually vote to stop it.

      As for it not being there yet, a lot of people said it was a far-fetched idea for the US to send people to the moon, and in fact, a few people still believe that it didn't happen and it couldn't have. I'm willing to accept that it did happen, because the US Government wanted to show up the Russians and beat them to it. They were willing to spend the money, and the technology emerged. Same thing here. If the government wants the tech, all they have to do is throw money at it and wait. It'll eventually be here before you know it.
    • As super-duper as high-tech is, I think even /.'ers would admit that its not a panacea (yet) for all our security ills. The very idea of having a computer capable of accurately identifying one face in thousands -- scanning from afar -- is far fetched.

      What are you talking about? A computer just did exactly that!

      Putting aside the practical value of this technology (this guy [] said exactly what I was thinking) and treating it purely as a technical accomplishment, I'd say this is a pretty impressive accomplishme

    • by reporter ( 666905 ) on Tuesday September 02, 2003 @04:15PM (#6853110) Homepage
      The tests conducted thus far on the face-recognition system show that it cannot identify a particular face within a crowd of faces. However, this failure does not mean that the system has no useful application.

      The system can be used to recognize a particular face when it is standing alone. Consider, for example, a photo of a face sent along with a visa application to the American embassy. Please read "World: Asia-Pacific China backs embassy protests []". In 1999, Serbians committed gross human-rights violations against the Kosovars in Kosovo; the Chinese fully supported the Serbians in their campaign of terror. The North Atlantic Treaty Organization (NATO) under American command attempted to stop the slaughter by knocking out Serbian military units. NATO deliberately attempted to avoid hitting civilian targets in Serbia, but some bombs accidentally hit the Chinese embassy.

      Shortly thereafter, the Chinese both in China and outside China erupted into ugly, violent protests. The Chinese threw stones and other projectiles at the American embassy in China. The Chinese also attacked some Americans. "The residence of the US Consul General in the south-western city of Chengdu was stormed and partially burned."

      How could the Americans in China have responded to this nonsensical violence? The Americans should have done the following.

      1. Pull out cameras and take pictures of all the protestors.
      2. Scan the photos into a computer and transmit them to Washington.
      3. Henceforth, when a Chinese citizen submits an application for a visa to travel to the USA, use the face-recognition system to determine whether the photo of the applicant matches any of the protestors. If there is a match, then the application will not be approved.
      4. At the American embassy, grab a megaphone and loudly announce, "Attention protesters. We are using a face-recognition system. Any protestor applying for a visa to the USA will be denied entry into the USA. We are taking pictures right now. "
      After about 10 clicks of the shutter of the camera, all the protestors would have disappeared. Henceforth, we should use this face-recognition system in conjunction with photographic equipment at all embassies and consulates run by Western nations within China (which includes Taiwan and Hong Kong).

      ... from the desk of the reporter []

  • by mattdm ( 1931 ) on Tuesday September 02, 2003 @03:24PM (#6852635) Homepage
    I'm not sure I would call the failure of big-brother tech "disappointing".
  • by LISNews ( 150412 ) on Tuesday September 02, 2003 @03:24PM (#6852636) Homepage
    From the article:
    "Kelly Shannon, spokeswoman for the State Department's consular affairs office, said the Logan Airport results would not affect plans to use face recognition to enhance passport security"

    So it doesn't work, won't help, and might even end up hurting more than a few people, but it's going to enhance passport security?

    And Apparently [] OZ thinks it's a good idea too? "We now have an international standard established, which is the adoption of facial recognition as the international biometric, and that has left us well placed to move to implementation."

    • by RevMike ( 632002 ) <(revMike) (at) (> on Tuesday September 02, 2003 @03:42PM (#6852797) Journal
      So it doesn't work, won't help, and might even end up hurting more that a few people, but it's going to enhance passport security?

      The article pointed out that the software was very effective at validating things like passport photos. One would imagine that a traveller would step up to the desk at customs/immigration and hand over his passport. The immigration agent would insert the passport into a scanner. A camera would shoot a similar shot of the person standing at the counter. The software would then compare the two images and determine with a fairly high degree of reliability that the person at the counter is or is not the person in the passport photo. This determination could occur regardless of whether the person had gained or lost weight, lost hair, dyed hair, grown facial hair or shaved, or simply aged.

      People make mistakes in this situation all the time. There is nothing wrong with having a computer try it.

      • by BrynM ( 217883 ) * on Tuesday September 02, 2003 @04:19PM (#6853152) Homepage Journal
        There are a few problems with the "scan a passport and compare" method. First, both forms of identification would be supplied by the person being screened (their face and their ID/passport), which leaves wiggle room for tampering.

        Second, some TSA lackey is going to get in the habit of passing IDs and passports under a scanner and looking for a result. They will think even less about comparing the face with the image for themselves. They will simply trust the computer. There's a great TSA article at Wired (Confessions of a Baggage Screener - Wired 11.09 []) that lays a lot of their habits bare.

        Lastly, as someone already mentioned, the 9/11 attackers used their real names and real passports. Just because we are looking for terrorists, it doesn't mean we actually know who they are or what they look like.

        I don't think that face recognition will help much besides some department's budget and some politician's "knee-jerk" contribution efforts. OK, this may prevent Osama from flying, but I don't think he'd assign himself to a suicide mission. It will always be some "volunteer" that we have very little record of.

    • No, the Aussie government thinks "facial recognition as the international biometric" is a good idea (as opposed to, say, retinal scans, finger prints, voice recognition, etc). The "move to implementation" part is what this article is regarding, and even that's not strictly true... the article you mention is about secure passports, not active scanning in airports.
  • by tbase ( 666607 ) on Tuesday September 02, 2003 @03:25PM (#6852646)
    Seems like the false positive rate would be the most important stat, and they don't have it.

    Obviously it couldn't replace ANY other security measure, but if it worked 61% of the time with NO false positives, I would call that pretty damn successful, especially in such an early implementation.

    They said 10 of the 19 hijackers went through Logan - so this system theoretically would have caught 6 of them? Better than none. And it seems like the technology would improve with time.

    Personally I'd rather have my face scanned than have them strip-searching me because my credit sucks and I paid cash for my plane ticket.
  • by grub ( 11606 ) <> on Tuesday September 02, 2003 @03:25PM (#6852648) Homepage Journal

    They aren't at the stage yet where machines can recognize people based on gait and mannerisms. Facial recognition is a best guess and still requires a human to confirm the match, just like fingerprint systems.
    • by cant_get_a_good_nick ( 172131 ) on Tuesday September 02, 2003 @03:31PM (#6852711)
      They aren't at the stage yet where machines can recognize people based on gait ....

      So the Minister of Funny Walks is still safe.
    • Yes, but that's because *measuring* gait and mannerism is infinitely difficult. First, how do you define these things? Then, how do you measure them given you are limited to a few 2-D cameras in a rather large space at a distance from the subject. Hell, getting a good face shot is hard enough. Now you want them to be able to measure the person's stride length and whether or not they talk with their hands?

      As for needing human verification, these types of systems aren't intended to be fully automated. I
  • As it happens, a friend of mine is working for a company that is in this field....they successfully implemented eye tracking (don't tell me, there are lots of companies doing this, but not as well as these guys..). We discussed it the other day, and he told me that the face-recognition algorithms are coming along..there are some huge statistical problems involved in this; the equipment is not the problem (they are using ordinary webcams and some special light (IR frequency))...pretty cool stuff. Now, the ethical
  • Not disappointing for Identix and Visage!


  • It doesn't say (Score:2, Insightful)

    by notext ( 461158 )
    Anything about whether employees attempted to disguise their appearance at all. If not, I would hate to see the rate if they did.

    Either way, I don't like it.
  • I'm really interested in the definition of 'failure' here. If a terrorist is spotted in a crowd 50% of the time, I'm not sure how that could be seen as a 'failure.' Without such a system, we are certain to have a 0% success rate.

    Sure, there may be other systems which have a greater success rate, but, at least from what I know (sometimes, admittedly very little), there don't seem to be a whole lot of other alternatives which don't require that the security queues at airports extend out in the airport parki

    • Well, assuming a 100 percent false positive rate is okay too, I can sell you a box that detects 100 percent of terrorists! And cheaper than what the govt pays for its system, I bet! (just $100M for you, my friend..)
    • And, at the end of the day, if you simply force people to take off their hats and sunglasses, so that the camera can get a nice, long look at them in closeup, I wonder how much greater the success rate for facial recognition would be?

      That'd probably make recognition work better, but I'm not sure people will wait in endless lines on top of the metal detectors and other security procedures to be asked to remove any offending headgear, jewelry, etc. to stare into a camera.

      Now for certain questions, suspicio
    • by riptalon ( 595997 ) on Tuesday September 02, 2003 @04:09PM (#6853053)

      The most important characteristic of such a system is the false positive rate. A system that flags everyone who passes through will flag 100 percent of terrorists but would be no better than having no system at all. They do not give the false positive rate, but it is highly unlikely to be less than 10 percent and may be much larger. Since the ratio of terrorists to non-terrorists is probably on the order of a billion to one, a system with an unrealistically low 1 percent false positive rate will flag 10 million non-terrorists for every 0.5 terrorists if it has a 50 percent correct ID rate. Even if you do extra searches on those 10 million people, with a 50 percent correct ID rate the terrorist is just as likely to be in the 990 million people who do not get flagged as in the flagged group.

      You need a close to 100 percent correct ID rate and a false positive rate below one in a million, which is probably impossible, before the system would be of any use. However, all this assumes that you have pictures of all terrorists. This is just plain impossible, especially in the case of suicide attacks. This is not like bank robbers, where there are multiple incidents allowing evidence from witnesses etc. to be used to catch them when they try again. With suicide attacks, the attackers will likely be model citizens (who will not be on any list) right up until the attack, and afterwards any info on them that is gathered is close to useless.
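
A quick sketch of the base-rate arithmetic in the comment above. The prevalence, hit rate, and false positive rate are the comment's own assumed numbers, not measured figures:

```python
# Assumed figures from the comment: ~1 terrorist per billion travellers,
# a 1% false positive rate, and a 50% correct-identification rate.
travellers = 1_000_000_000
terrorists = 1            # assumed prevalence: ~1 per billion
hit_rate = 0.50           # chance the system flags a real terrorist
false_positive = 0.01     # chance the system flags an innocent traveller

# Expected counts flagged by the system.
flagged_innocent = (travellers - terrorists) * false_positive
flagged_terrorists = terrorists * hit_rate

print(f"innocents flagged:  {flagged_innocent:,.0f}")   # ~10 million
print(f"terrorists flagged: {flagged_terrorists}")      # 0.5

# Probability that a flagged person is actually a terrorist.
precision = flagged_terrorists / (flagged_terrorists + flagged_innocent)
print(f"precision: {precision:.2e}")
```

Even with these generous assumptions, a flagged traveller has roughly a 1-in-20-million chance of being a terrorist, which is the comment's point about why the false positive rate dominates everything else.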

  • by TekReggard ( 552826 ) on Tuesday September 02, 2003 @03:27PM (#6852679)
    I think the concept of facial recognition is being distorted from a tool that assists in the confirmation of possible criminals and terrorists into a single device that does all the work. The idea that it can still bring in a 61% accuracy rate is pretty good if you compare it to previous technologies. When you combine that with on-the-ground security and other systems, it only makes it easier for airport security, or other government or commerce locations, to keep a tight hold on who comes in and out of their systems. Think of it this way: they have facial recognition working at entrances, and a few places along the way to security checkpoints. It picks up 3 positives out of 5000 people for terrorists or criminals (this is just a guess). They send that information to security at checkpoints with a picture from the camera, and whatever might be in a database. The security will be able to check to see if it was accurate at the checkpoint, and make a decision based on that for whether or not to check this person more thoroughly, stop them, arrest them, etc... whatever the case may be. If they can tell from the photograph that it was probably a false positive, they can just avoid it altogether, or pick them for a random security check. No one said if it comes up with a false positive that person will be automatically picked up and thrown in jail. It sounds like a reasonable tool to help identify people they need to check more closely, nothing more.
    • Yes! Mod this up! These systems aren't meant to replace the security personnel already in the airport. It's simply a tool to make their jobs a little easier...

      Funny how the tinfoil-hat crowd automatically assumes the worst when something like this pops up.
      • Because there's several issues:

        1) They aren't any more effective than human personnel, and unless they have an unrealistically low false-positive rate, they actually generate more work than they're worth

        2) As anyone who's been to (or worked in) a DMV knows, there are basically 2 human responses to computerized systems like this: a) you ignore the computer whenever it disagrees with you or b) you always obey the computer, no matter what. Unless you've got fantastically high success rates, b is exactly what the t

  • by B5_geek ( 638928 ) on Tuesday September 02, 2003 @03:28PM (#6852684)

    As with most biometric systems, this only ever works reliably in a lab.

    Remember the fingerprint system that got fooled by gelatine gummis?

    I wonder when these dot-bomb ideas will stop popping back up, and more credible research will get the much needed funds.

    There is only one thing that has ever been able to recognize the human face; other humans. (And we do a rather poor job of it too after 10 million years of evolution!!!)

    Proof: Take your average ignorant North American, (like myself) and ask him to tell the difference between 3 different Asian individuals. There is a good chance that we would fail that test because we are not used to (or mentally trained to) spot the difference.

    {I love using myself for proof, it's so scientific}
  • Once again... (Score:5, Insightful)

    by CGP314 ( 672613 ) <> on Tuesday September 02, 2003 @03:28PM (#6852685) Homepage
    A government test in 2002 found that face-recognition systems scored correct matches more than 90% of the time when used for such one-to-one identifications.

    Once again, the false positives are not given. That is the number that really matters in a society where you can be held in prison indefinitely without a trial or access to a lawyer.
  • by mnmlst ( 599134 ) on Tuesday September 02, 2003 @03:31PM (#6852707) Homepage Journal
    Oddly, I read the article (unusual for a Slashdotter) here and it seems to imply that these companies were marketing their products for the limited use of trying to catch people with forged identity documents. Rather than just having a Customs officer compare the photograph to the face next to that photograph, the software could chime in with "Yeah, that's her alright." It looks like the security people at Logan Airport deployed these products in bulk. I wonder whose bright idea it was to try and use these at randumb? Perhaps a zealous salesperson or an overenthusiastic security manager? I also noticed the company spokesperson sounded a bit "hedged," like the company is trying to state, "Gosh, this product was never meant to be USED the way this customer is using it." The part left unsaid by the spokesperson was, "We told them this wouldn't work..." On a side note, let's not even consider how abysmally this software must perform when terrorists are deliberately disguising their faces.
    • by MightyTribble ( 126109 ) on Tuesday September 02, 2003 @03:59PM (#6852957)
      Yes, indeed. Visage is very fond of saying their system is designed for 1:1 comparisons, not database searches, and that it has a 90% success rate.

      There are 2 problems with this, though:

      The first is the false-positive rate. Visage is saying that, nine times out of ten, they can tell if the person being presented for inspection matches the photo. But what if they incorrectly flag one out of every fifteen users as *not* matching the picture? More work for Border control, that's what. The Mark One Eyeball is still the fastest, cheapest, best tool for comparing photos to people.

      Second, it pays no mind to *false* papers with *correct* photographs. Sure, their fancy system will say "Yup, the person pictured is standing in front of you!" but if the underlying documentation is fake, so what?

      Visage is a private company chasing lucrative federal dollars. All they need to do is create a product good enough to persuade Federal agencies to buy it - they don't actually need to make sure it does anything useful.
  • "Fails"? (Score:2, Interesting)

    by bouis ( 198138 )
    Sounds impressive, actually. If there are 4-5 hijackers, and each has a 61% chance of being noticed, then the odds are good that at least one will be and the plot will be foiled.

    Also, what's the worry about false positives? If and when they happen, it's a simple matter to clear up a person's real identity. It's not like they shoot first and ask questions later.
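
A quick check of the "odds are good" claim above, assuming (as the comment implicitly does) that each hijacker is spotted independently with probability 0.61; in practice the misses are probably correlated (same cameras, same lighting, same disguises), so these numbers are an upper bound:

```python
# Probability that at least one of n hijackers is spotted, if each is
# independently detected with probability 0.61 (the Logan test's hit rate).
p_spot = 0.61

for n in (1, 4, 5, 10):
    p_at_least_one = 1 - (1 - p_spot) ** n
    print(f"{n:2d} hijackers -> {p_at_least_one:.1%} chance of spotting at least one")
```

Under the independence assumption, four hijackers give roughly a 98% chance that at least one is flagged, which is the intuition behind the comment.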
    • If and when they happen, it's a simple matter to clear up a person's real identity. It's not like they shoot first and ask questions later.

      Not quite, but they can hold you indefinitely, without access to a lawyer, without notifying your family, without even charging you. Better, but not by much. That's OK, because terrorists are ++ungood and we'll trample any rights needed to preserve freedom.
  • by edwilli ( 197728 ) on Tuesday September 02, 2003 @03:33PM (#6852727) Homepage
    I'm not sure it's even worth looking at the data if we can't have some idea of the level of false positives. If you can find 1 out of every 1000 criminals that walk by, it seems like it might be worth it?

    What you don't want is to harass innocent people. If we can avoid that, I don't see where the problem is.

    Have you kicked your kitten lately? []
  • by Dark Coder ( 66759 ) on Tuesday September 02, 2003 @03:34PM (#6852737)
    Oh dear...

    First, a poster of someone else's face (facial recognition evasion).

    Second, the gooey fingerprint duplicator,

    now this walk-by signature hacker on a PDA?

    What would be next?

    Hijacking an iris pattern (by simply staring at the bathroom mirror)?

    Stolen DNA pattern?

    There is no solid defense against stolen biometric parameters, which can never be revoked.
      The recognition system doesn't have to be optical - it could create a contour mapping of one's face instead. Never mind the fact that a guy walking through the airport holding up a poster in front of his face just MIGHT arouse suspicion.

      In high-trust systems it would probably use at least 2 biometric parameters + some type of password scheme.

      Your email and website URLs are childishly easy to parse.
  • Assuming that the remaining 39% were false-negatives, then I think that the system worked incredibly well.

    Think about it in terms of spam. No one solution will stop 100% of the spam destined for your e-mail address. It takes a combination of methods (and even then, you can only approach 100%, never achieve it).

    The same attitude should be taken in airports. A system should not be dropped because it's not 100% effective. It should be used to strengthen existing and future security.
    • Re:Not too shabby (Score:2, Insightful)

      by mike_mgo ( 589966 )
      But as others have mentioned, the issue of false positives is also critical, just like in anti-spam software. If there are so many false positives that airport security is running around checking every other businessman, then the system is nearly useless.
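The parent's point is really the base-rate problem: wanted faces are so rare in an airport crowd that even a modest false-positive rate buries the true hits. A sketch with invented numbers (the article gives no actual false-positive rate to plug in):

```python
# Base-rate sketch: all figures below are invented for illustration.
hit_rate = 0.61             # the reported detection rate
false_positive_rate = 0.01  # assumed: 1% of innocents flagged
wanted = 10                 # assumed wanted individuals...
travelers = 1_000_000       # ...among a million travelers

true_alarms = wanted * hit_rate
false_alarms = (travelers - wanted) * false_positive_rate
print(f"Alarms that are real: {true_alarms / (true_alarms + false_alarms):.2%}")
```

Under these assumptions, fewer than one alarm in a thousand points at an actual wanted person; everyone else being checked is an innocent businessman.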
      Depending on how this is used, 61% is almost certainly far too low to be of any use - if it's used to back up passport identification, as the article suggests, then you're talking about a customs official knowing that there's only a (roughly) 50-50 chance that the computer is correct, and they'll correctly ignore it in favor of their own judgement. No point in having it in that circumstance.

      For picking faces out of a crowd, the 61% success figure is totally meaningless without the false positive rate, so we

  • by Mrs. Grundy ( 680212 ) on Tuesday September 02, 2003 @03:35PM (#6852740) Homepage
    I wonder why the inaccuracy of this system wasn't well known before it was put up in a public place. Did it perform much better under the controlled environment of the lab? The article states that it works well in a one-to-one test, but they knew that this isn't how it would be used in this case. It seems likely that if this failed so miserably in real life, it couldn't have been that great when they were developing it. Does this speak of a certain desperation on the part of law enforcement to 'do something', or at least to appear to be doing something? Or maybe a hopefulness on the part of the company developing it that they might just get lucky. In fact, if they were paid by the government to deploy this test even though it seems likely they knew it would fail, maybe they did get lucky. Who paid for all this anyway?
  • It's not surprising when the cameras used by some systems have quite a low resolution.

    You need very high-quality images for recognition to work well. Try OCR-ing a badly skewed, very low-resolution scan - and that's just text.

    With facial recognition you have to worry about shadows, different angles, glasses, changing hairstyles, facial hair and so on....
  • My disguise (Score:5, Funny)

    by Mighty LoPan ( 633225 ) <ednozzle&yahoo,com> on Tuesday September 02, 2003 @03:45PM (#6852832)
    Okay, I admit it. I single-handedly foiled big brother's plan by marching around Logan with novelty glasses and a giant foam cowboy hat.
  • by J4 ( 449 ) on Tuesday September 02, 2003 @03:47PM (#6852848) Homepage
    Who comes up with harebrained schemes like this anyway?
    It sounds like something a couple of potheads thought up.

    Engr1: Dude, you know what'd be awesome? We could make a widget that recognizes faces, then we could put it at the door so we'd know if it was the pizza guy knocking.

    Engr2: Whoaaa dude, that'd be awesome. Pass the caffeine.
  • If facial recognition technology is used alone to try to catch criminals, of course it will fail. However, when used in conjunction with other systems, even a low success rate can be helpful.

    I have no idea how the system works, but my guess is that it was spitting out "definitely yes" or "definitely no" when it should instead be offering an estimate. "73% chance that this guy is the same person as this felon who is on the lam".

    It's not the kind of thing that stands up in court, but it is the kind of thi

    1. The number of false positives (as just about every other poster mentioned).
    2. If the people scanned made ANY attempts to alter their appearance. If you get a 61% matching rate with no one attempting to be in disguise, it says very little.

    That being said, the test itself doesn't sound very good. It's quite easy to fake tests of software, especially if you have a limited pool of data, or know what you're looking for ahead of time. It's possible the result would be even worse in actual usage.

  • Twins? (Score:5, Interesting)

    by jhughes ( 85890 ) on Tuesday September 02, 2003 @03:57PM (#6852930) Homepage
    I haven't followed facial recognition too much but...wouldn't twins have an issue with this?
    I'm an identical twin, I've had my lights punched out by someone who thought I was him (Thanks bro..grumble)...

    Anyone able to tell me how this would differentiate between siblings that look very very much alike?
    • I'm an identical twin, and here's something that works:

      Grow facial hair (or, if you both have facial hair, shave).

      We looked identical when we were both sporting goatees. Now that I'm clean shaven, I rarely get mistaken for my brother, just taken for someone who looks a whole lot like him.

      Of course, if you're an identical female twin reading this comment, well...if you grow facial hair, people will *definitely* be able to tell you apart!
    • Re:Twins? (Score:4, Funny)

      by Pvt_Waldo ( 459439 ) on Tuesday September 02, 2003 @04:19PM (#6853149)
      I bet it was at that point where you said, "It was my twin brother, really!" when the guy decided to take a swing at you :^)
  • Thank god.

    I'll be happy not to have the computers tracking my face, thank you very much. Too much of a Philip K. Dick idea, I think.

    But what about the system installed in London, England?

  • London (Score:2, Flamebait)

    But what about the system installed in London, England?

    That one doesn't track faces per se. It indexes on bad teeth.
  • Well, I think these face-recognition techniques rely on too much standardization to be very effective at this spy game.

    However, it may be useful with a passport picture. In fact, last week, the Canadian government advised passport applicants they will be refused if the picture doesn't conform to a new requirement: it is strictly forbidden to smile in your passport picture. I don't know the name of the face-recognition software they use, but it is likely it may be completely bypassed by too mu

  • by dh003i ( 203189 ) <{dh003i} {at} {}> on Tuesday September 02, 2003 @04:04PM (#6853004) Homepage Journal
    Just thought I'd chime in here. Technology is neutral. It can be used for good or ill. Facial recognition technology can be a great thing, if used properly for constructive purposes. For example, it could be used to help with identity recognition at ATM machines.

    Yes, these technologies are failing a lot. But, just a couple of years ago, people would have scoffed at the idea that computers could even begin to accomplish some kind of face recognition. This technology is in its infancy. I don't think you can blame a technology that's just gotten off the ground for not being perfect.

    Let's criticize improper uses of this technology, not the technology itself.
    • For example, it could be used to help with identity recognition at ATM machines.

      Yes, but it would suck if 40% of the time, I wasn't able to withdraw funds because it didn't correctly identify me.

      I think the whole point of this article is that the technology doesn't work. It can't do a good job with face recognition. A 60% success rate works well for some things -- It's a really great batting average, for example -- but for crime detection/prevention, I'm guessing that's pretty lousy and not cost effectiv
  • The computer simply does not recognize members of the "attractively challenged" brigade...
  • by Cyberllama ( 113628 ) on Tuesday September 02, 2003 @04:32PM (#6853309)
    I dislike the notion of being watched, categorized, and monitored everywhere I go. At this point, facial recognition systems have proven to be relatively inaccurate, and thus they have failed to gain widespread acceptance.

    Proposals for facial recognition systems continue to be shot down because of their inaccuracy, but why does it have to be their inaccuracy that is the sticking point? Shouldn't the fact that they constitute a massive invasion of privacy be all the argument we need?

    If we continue to use the "accuracy" argument over and over, then what happens when a system that is proven to be fully accurate comes out?

    Facial recognition systems aren't a bad idea because they're inaccurate; they're simply a bad idea - and that is what we should focus on.
  • by Rick Richardson ( 87058 ) on Tuesday September 02, 2003 @05:00PM (#6853587) Homepage
    Taken from the ACLU web site:

    According to the Logan report, which was written by an independent security contractor, "the number of system-generated false positives was excessive, and as a result, the operator's workload is taxing and strenuous, requiring constant undivided attention and periodic relief, which amounts to a staffing minimum of two persons for one workstation."
  • by donkiemaster ( 654606 ) on Tuesday September 02, 2003 @05:04PM (#6853634)
    With quantum computers, problems like this will be a thing of the past. We will just say "locate all criminals" and then the quantum computer will do it, because they know things without actually knowing them. We will tell them to find the criminals, but they will have already found them, before the criminals even knew they were coming. Quantum computers will also be good for telling us what TV to watch, and then rather than just watching the TV it will tell us whether the show we were going to watch was any good. Saves time that way. And then they never really had to make the show, because the quantum computer already knew if we were going to like it, so there is no real point in actually spending money to make the show. At least, this is my understanding of how quantum computers work. I think you can also do floating point math up to 10 digits too, but I wouldn't hold my breath.
  • by josephgrossberg ( 67732 ) on Tuesday September 02, 2003 @05:55PM (#6854097) Homepage Journal
    Yeah, it was a bad idea to begin with, but let's give Boston and Tampa (Bay?) some credit.

    The government saw one of its "law enforcement" / "war on terrorism" programs was ineffective and (gasp!) dropped it.

    Isn't this exactly what we're *not* supposed to see from this bloated, non-responsive, heavy-handed bureaucracy/police-state that the libertarians/progressives bash all the time?

    They had a trial run of a new, controversial idea and it didn't work. Isn't that exactly the sort of innovation and creativity people claim "big government" sorely lacks?
  • by r_glen ( 679664 ) on Tuesday September 02, 2003 @06:08PM (#6854185)
    The problem is NOT that the technology isn't there yet. Sloppy coding is to blame.
    I've taken the liberty of fixing it. Now we can finally replace those hard-working security screeners:
    BOOL IsLikelyTerrorist (PASSENGER_INFO *passenger)
    {
        if (( passenger->skin_color != RGB(255,255,255) ) && ( strlen(passenger->name) > 30 ))
            return TRUE;
        return FALSE;
    }
    No thanks needed; I'm just glad I could help my country.
  • by hey ( 83763 ) on Tuesday September 02, 2003 @06:36PM (#6854391) Journal
    Recently there was a story in the press [] about a new rule for Canadian passports: no smiles. Now we know why: to normalize the faces for facial recognition.

"Hello again, Peabody here..." -- Mister Peabody