RSA: An Unusual Approach to User Authentication: Behavioral Biometrics (Video)
Tim Lord: What’s your name and your title?
Neil Costigan: I am Neil Costigan and I am the CEO at BehavioSec.
Tim Lord: Okay. And BehavioSec is not an American company?
Neil Costigan: No. We are Scandinavian. We are a university spin-out from a university in the far north of Sweden, up in the dim, dark north in Lapland.
Tim Lord: Okay, and your company focuses on biometrics? How would you describe biometrics to a smart person who isn't in the field?
Neil Costigan: Well, what we do is behavioral biometrics. What we are saying is that how a person interacts with their computer, website or phone has a behavioral pattern that can uniquely identify them, and is therefore like a biometric. We are not looking at the physical thing you are doing, just the attributes of how you act; we make a statistical pattern of how you do things and monitor that.
Tim Lord: So not like iris identification or fingerprints; it's more about statistical patterns of typing.
Neil Costigan: Yeah. The point is, you can't take the biometric we have and reproduce a person. You can't take our fingerprint or profile and then pretend to be that person, so it is kind of on the good side of Big Brother, if that makes any sense.
Tim Lord: Sure. So what are some examples? You mentioned typing speed.
Neil Costigan: Yeah, it would be typing rhythm. We are looking not at what you type; it is not the password you are typing, it is not that it is "Neil from Ireland", it is the speed from the N to the E, how fast you go from one key to the next. So it is key flight, key press, sequence speeds, pressure on the keys, and that in a simple sense is keystroke dynamics. In the case of a mouse, or indeed a touchscreen like a smartphone, it is the pressure, the swipe, the angles, how fast you go across the sequence, whether you hover over a button with your mouse. There are small behavioral patterns that are unique to you, and that lets us compare one person to another.
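To make those features concrete, here is a minimal sketch (Python, and ours rather than BehavioSec's) of the kind of timing profile he describes: key dwell times, flight times between keys, and digraph speeds such as N-to-E. The event format and feature names are assumptions made for illustration only.

# Illustrative sketch of the keystroke-dynamics features described above:
# dwell time (how long a key is held), flight time (gap between keys), and
# digraph timing (e.g. how fast you go from N to E).
# The event format and feature names are assumptions, not BehavioSec's API.
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: str
    down_ms: float   # timestamp of key press (milliseconds)
    up_ms: float     # timestamp of key release (milliseconds)

def keystroke_features(events):
    """Compute a simple timing profile from a sequence of key events."""
    dwells = [e.up_ms - e.down_ms for e in events]              # press duration
    flights = [b.down_ms - a.up_ms                              # release-to-press gap
               for a, b in zip(events, events[1:])]
    digraphs = {f"{a.key}->{b.key}": b.down_ms - a.down_ms      # e.g. N->E speed
                for a, b in zip(events, events[1:])}
    return {"mean_dwell": sum(dwells) / len(dwells),
            "mean_flight": sum(flights) / len(flights) if flights else 0.0,
            "digraph_times": digraphs}

# Example: the name "neil" typed with slightly uneven rhythm.
sample = [KeyEvent("n", 0, 90), KeyEvent("e", 130, 210),
          KeyEvent("i", 280, 350), KeyEvent("l", 420, 500)]
print(keystroke_features(sample))

A real system would compare such a profile statistically against an enrolled one rather than use raw means, but the raw ingredients are the same.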
Tim Lord: Now things like that, that sort of ongoing behavior, that seems different from where we often see things like passwords used, or PIN numbers for bank account, things like that.
Neil Costigan: Exactly. It falls into the area of what we would describe as continuous authentication, or active authentication. It is after the gatekeeper, after the door is open; with a lot of systems, once you are in, you have proven yourself and that is it. What we are saying is that you can constantly look at what is going on, and we publish a score, the likelihood that this person is who they are supposed to be, all the time. So it is continuous. And if the score is low, then you may be asked to produce the secret password, the one-time token or the smart card. So this is used very much as an additional layer that is constantly running and watching.
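A rough sketch of that continuous flow, again as our own illustration rather than BehavioSec's actual model: keep scoring the session, stay silent while the score is high, and ask for a stronger credential (password, one-time token, smart card) only when it drops. The similarity measure and threshold below are placeholders.

# Toy continuous-authentication loop: score the session continuously and
# trigger step-up authentication when confidence falls below a threshold.
# The similarity measure and threshold are placeholders for illustration.
THRESHOLD = 0.7

def behavior_score(observed, enrolled):
    """Toy similarity score in [0, 1] comparing current timing to the enrolled profile."""
    diff = abs(observed["mean_dwell"] - enrolled["mean_dwell"])
    return max(0.0, 1.0 - diff / enrolled["mean_dwell"])

def check_session(observed, enrolled):
    score = behavior_score(observed, enrolled)
    if score >= THRESHOLD:
        return "continue"    # behavior matches: stay logged in silently
    return "step_up"         # behavior doubtful: ask for token / password / smart card

print(check_session({"mean_dwell": 95.0}, {"mean_dwell": 90.0}))   # -> continue
print(check_session({"mean_dwell": 40.0}, {"mean_dwell": 90.0}))   # -> step_up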
Tim Lord: Now you sell your software to companies, like a bank? Does that mean then that the bank has a subscription with you, or they buy a complete package that they then use?
Neil Costigan: Well, the way the software works, the data is really not personal, so we can host it in the cloud; indeed we have services running in Amazon, where you can ask, "hey, is this pattern this entity?" and it will give you back an answer. So it is not a personal identifier. But the banks tend to be quite conservative, and for privacy reasons, and also for performance, scalability and ownership reasons, they tend to take our software and put it in-house behind the firewall.
Now, there would be advantages to everybody collaborating on this and protecting against fraudsters on a larger scale in the cloud, and that may come down the line. But right now our customers tend to install it in the bank, on the website, kind of behind the internet bank or behind the app, and do the matching and comparisons there.
Tim Lord: Now if someone wanted to break the system and compromise it, is there a point where they could grab this data from the wire and replay it, saying, here is what the pattern looks like, even though it is not being produced by a person?
Neil Costigan: We do a kind of ticketing system, where you get one shot at a particular pattern; you get a token to do it. And also the software at the back can look at some of the attributes: we can tell if the frequencies are too uniform, so it is likely to be a bot or an attempt to reproduce the pattern exactly; or perhaps it is too fast, so it is likely to be an automated attack. Those are some of the secondary attributes that help us track whether it is a human.
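As a toy illustration of those secondary checks, a timing sample whose intervals are suspiciously uniform or suspiciously fast can be flagged as likely automated before the pattern is even compared. The thresholds here are invented for the example.

# Flag a sample as likely scripted: key intervals that are too uniform
# (robotic regularity) or too fast (faster than human typing).
# Threshold values are made up for illustration.
import statistics

def looks_automated(intervals_ms, min_stdev_ms=5.0, min_mean_ms=40.0):
    """Return True if the timing pattern looks scripted rather than human."""
    if len(intervals_ms) < 2:
        return False
    too_uniform = statistics.stdev(intervals_ms) < min_stdev_ms   # replay / bot suspicion
    too_fast = statistics.mean(intervals_ms) < min_mean_ms        # automated speed
    return too_uniform or too_fast

print(looks_automated([120, 95, 140, 110, 132]))    # False: human-looking jitter
print(looks_automated([100, 100, 100, 100, 100]))   # True: intervals too uniform
print(looks_automated([10, 12, 9, 11, 10]))         # True: far too fast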
Tim Lord: Now it sounds like it is still a startup company in some ways, but you have had some success as far as selling it?
Neil Costigan: Yeah. The idea itself, this whole idea of keystroke dynamics, is quite old. I mean, it was used in the Second World War: listening in on the wires, you could tell who was keying the Morse code. So the field is old, but I think we are at the generation where computing power, and the number of sensors available, means you don't need custom keyboards for precision; the touchscreens and the mouse have the precision, and the amount of information they sense is huge, so we have got an awful lot more input for the biometrics.
And finally, instead of getting somebody to type the same sentence over and over again, we go after their normal behavior: whatever you type, your own password, your own user name, not "the quick brown fox" but whatever you want to type. So the technology is old enough.
Now, we are a startup. It was a university spin-out, so the founding date is a bit hazy, but we have been at it seriously for the last two years, since we raised some funding, put in some permanent staff and expanded the team.
We are Scandinavian. We have had great success in the local market, which tends to be quite good at adopting new technologies. The traditional customers for security, the early adopters, tend to be the banks and the military, the kind of people who are normally risk averse and normally very, very slow to do things. We've been working on it quite a lot. We've got all the banks in Scandinavia, with a massive rollout of our phone technology: inside their internet banking apps, all the banks have our behavioral biometric technology embedded. And so we are taking that now and going outside Scandinavia.
Tim Lord: What about people who are uncomfortable with everything they type being viewed by an omniscient presence up above?
Neil Costigan: I suggest to our customers that they make sure people know, in a comfortable way, that we are watching your behavior for your benefit, just like they log your IP address and the time of day when you do transactions, and you would expect this. We take the rhythm of your typing, not what you are typing. And very often, even in the most basic case, it is the PIN: they know you are typing the PIN, they are going to check the PIN. The fact that they are able to use a four or six-digit PIN rather than a really hard password is because you have this rhythm. It is not really the same as knowing what you are typing. It is in a context which is suitable; that is the security step.
Tim Lord: So in scanning behavior, like you said before, phones have lots of sensors, so you don't even need any extra hardware to do this; it is entirely data analysis.
Neil Costigan: No, and that is also part of the reason there has been a huge interest in this. We are using off-the-shelf hardware, and the guiding light is: no extra sensors, no extra hardware, no extra hardware costs. There is a balance in all this. I mean, you can get hugely impressive security solutions that cost an awful lot of money because of the hardware and are very, very complex to use because of that very complexity.
What we do is balance that usability, cost and security, so generally the end user isn't involved. It is transparent. We are not doing a training stage, we are not changing the user experience, we are not shipping anything out, we are not giving people something to lose. It is all in the app and behind it. So the extra layer is a real benefit both to the consumer, who gets a simple system that is more secure, and to the people rolling it out, for whom it is cost effective, with all the benefits they want.
Tim Lord: One more thing. You mentioned that there is some DARPA involvement? Can you talk about that just briefly?
Neil Costigan: DARPA, in their wisdom, put out a program about a year or 18 months ago called Active Authentication, the DARPA AA program, where they had this vision that the desktop of the future, for the DoD and subsequently for the rest of the world, would have not just the gatekeeper but also active authentication: all the things you do would help identify you. We spotted this almost by accident, and then waved our hands from the far north of Sweden, saying we have been doing this for so long that we would call ourselves the experts in it.
And so they asked us to put in proposals: how can this area be enhanced, what are the open research problems, what is stopping this idea from being real? And so we worked on it. Again, the issue with those things in security tends to be the ability to quantify them: some metrics, new definitions. A biometric is normally described by false accept and false reject rates; continuous authentication needs a time dimension as well. And it would help if all the vendors and the assessors used the same terminology, so we proposed open terminology and common data formats; a lot of this stuff would need to be interoperable.
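One way to read the time-dimension point, as our own illustration rather than anything the DARPA program specifies: besides false accept and false reject rates, a continuous-authentication evaluation can report how long an impostor session runs before the score first drops below the step-up threshold.

# Illustrative metric for continuous authentication: seconds of impostor
# activity before the first sub-threshold score (None = never detected,
# i.e. a false accept for that session). The definition is ours, for
# illustration only.
def time_to_detect(scores, threshold, step_s=1.0):
    """Return seconds until the first score below threshold, or None."""
    for i, score in enumerate(scores):
        if score < threshold:
            return i * step_s
    return None

impostor_session = [0.9, 0.8, 0.75, 0.6, 0.4]             # scores sampled once per second
print(time_to_detect(impostor_session, threshold=0.7))    # -> 3.0 seconds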
So if we have a solution and somebody else has one, a customer would be much better off if they can use both, and indeed a lot of this stacks. We are doing gesture-based and form-based; if somebody else has a voice biometric, joining those two things together makes sense. So openness, open standards, open terminology, open data formats, I think, or we believe, encourage the whole sector and benefit everybody.
Tim Lord: So are these open formats, open standards that you are talking about, are they published and available some place that someone could investigate and examine?
Neil Costigan: To the best of my knowledge, right now they are open; we submitted them to DARPA and DARPA published them. Together we have all submitted a number of papers to IEEE. There is a special edition of the IEEE Computer publication, I can't remember the exact name, going out in May, that is going to document and show this off. And there is a large biometrics conference in Tampa in September, where all the participants in the program are jointly presenting and showing how it works and how it works together.
And the whole intent is that this stuff is open, it's published, and it's out there to encourage more research, more commercial companies to get involved, and collaboration. Even today, at this show, a lot of people have come up saying, we know you are in the DARPA program, we have something similar, how do we work together? So it is DARPA seeding research, seeding collaboration and seeding the future, I think.
Tim Lord: Could you please speak a few words of Swedish for us?
Neil Costigan: [Swedish].
Tim Lord: That sounds like a hard thing to learn.
Neil Costigan: I think so, my daughter doesn’t think so, so we’ll see where it goes. (smile)
Assuming you will always type the same way. (Score:5, Interesting)
Re:Assuming you will always type the same way. (Score:5, Funny)
On the plus side, however, this will lock you out if you try to write a drunken facebook wall post to one of your exes...
Re:Assuming you will always type the same way. (Score:4, Insightful)
" Hopefully, there would be alternate authentication methods built in"
And then, I would question the security improvement of behavioral authentication. If I'm going to login and I'm an attacker, I'll just use the alternate authentication then.
Reminds me of https://wellsoffice.wellsfargo.com/ceoportal/signon/loader.jsp [wellsfargo.com]
Re: (Score:1)
I don't type the same way when I am at the desktop, on the couch with the laptop used as a movie viewer, or with the other netbook and its remapped cheapo German keyboard...
And a keylogger can log the timing too; look at the Unix script command (whose default output file is called typescript) and its timing options, it's trivial.
Re: (Score:2)
On the minus side, as always: "The problem with biometrics is keeping the body parts alive."
Now it means having to keep just enough alive that it can act like the original.
Re: (Score:2)
So you're suggesting we run some sort of DNA biometric test based on the vomit that hits the keyboard?
Fail out the gate! (Score:5, Interesting)
I have experienced behavioral-biometric denial of service. Humans are just too erratic; imagine this:
Your front door is locked using this method. All of a sudden you are outside, a thug walks by making obvious threats, and you start running inside to get away or get your gun, and the door locks your ass out.
You are using an email service and you start looking for a job, and the sudden increase in email traffic and/or login presence causes the service to block your account temporarily because of behavioral changes. (This actually happened to me for a short time.)
I was in the middle of waiting for an actual offer letter when this occurred... very frustrating!
Re: (Score:2)
I have this mental image that in the future some people will be discriminated against because they cannot solve CAPTCHAs. Maybe there will be a job for those who are more human than normal humans.
By the way, try imposing a sudden spurt of activity on your credit card. Likely you will find it blocked. That happens to me all the time, so I can believe your emails were blocked as well.
Re: (Score:2)
All of a sudden you are outside and a thug walks by making obvious threats and you start running inside to get away or get your gun
It is amazing how guns are differently perceived among countries. This scenario is just science fiction for me.
New authentication options (Score:5, Funny)
1) SHA1 password
2) Enterprise LDAPS
3) Tourette's
Smells like an academic spinoff (Score:5, Insightful)
I've encountered lots of projects over the years that sound neat on paper and have enough meat to flesh out a thesis-sized research project, but don't quite have the universal applicability that translates to widespread practical (and financial) success in the real world.
Two problems jump right out at me:
1. Instead of having to remember a sequence of characters, a user now has to remember and replicate a set of obscure behavioral quirks. Or actually they don't, because it's supposed to be innate. But just as a signature isn't identical every time, the quirky typing won't be either, leading to possible authentication failures, unless the authentication method is forgiving enough to take this into account. ... which leads us to
2. It's open to mimicry, particularly if it's forgiving enough to account for natural variability. Authenticate enough times around an observant person with a knack for forgery and they can pick up on the patterns. A little bit of practice, and those rhythm and style quirks can be copied. Even easier if they can record video and/or audio with a mobile device.
If the mimicry is successful, it's a lot harder to learn a new set of unconscious quirks than to just memorize a new password.
Overall, the method seems academically interesting but not feasible in practice, except perhaps in a limited set of circumstances.
Re:Smells like an academic spinoff (Score:5, Interesting)
This is not so much an authentication method as a heuristic used to decide whether or not to ask for additional credentials. It's exactly analogous to the way security questions work for online banking. If it recognizes you, there's a good chance you are who you say you are and your password is considered sufficient. But, if it doesn't recognize you, that isn't necessarily indicative of an impostor, just that it needs to ask for more information (in the form of a token, smartcard, security question, etc) before it can be confident you are who you say you are.
A "yes' from this this is acceptance, but a "no" is not a complete rejection. It just makes you jump through an extra hoop or two.
Worked for us for millions of logins already (Score:3)
In that implementation, at least, the keyboard rhythm is one of SEVERAL factors that are considered. A sprained finger probably wouldn't keep you out, unless you were also a) far from home and b) using a different computer than you normally do. All three factors combined would make it seem likely that it was someone else trying to access your account.
It will never be reliable enough... (Score:4, Interesting)
What happens if I am sick? My mental acuity is not the same when my head is pounding with a headache... My reactions are slowed. Even if you can account for the difference in attentiveness between the start of the work day and the end, will you be able to recognize me when someone wakes me at 3am to troubleshoot?
Even without sickness and sleepiness, anything that can affect my mood can bring some minor changes to my typing habits. Even if they use cameras to measure eye movement, mood will be a factor. Think of how well you type (or how you would expect to) during major life-changing events such as marriage, divorce, the birth of children or the death of parents. Can it even account for the difference between the day you get promoted (or at least praised) and the day your boss chews you out?
Then there are physical changes... Anything from a paper cut to carpal tunnel syndrome, or breaking a bone and getting a cast will seriously impact your typing.
Finally, what happens when your keyboard (or mouse) breaks and you need to get a new one. Even if it is the same model, a new one will generally have stiffer keys and buttons. You would be screwed if it had a different layout of keys or if it was a model of a different size. As for smart phones and tablets, what happens when you buy a new phone?
I'm sorry, I do not believe that this can be reliable enough. Even though I am somewhat impressed with analytics software's ability to determine people's behaviour, that works on the masses with a margin of error; there will always be a few fringe cases that do not fit the mold. For authentication you need to be right, all the time, and I do not see that being possible.
Re:It will never be reliable enough... (Score:4, Insightful)
I posted this before, but I'll summarize here:
If this matches, it's likely that you are who you say you are. If this doesn't match, it just asks for additional factors of authentication (security questions, smartcards, etc). It is not a replacement for any other form of authentication.
Re: (Score:1)
Which lands us with some other form of authentication, since this one will not be relevant to an attacker. You just purposely fail it and get asked an easier question.
Re: (Score:1)
Except that it's less about keeping the wrong people out and more about making it easier for the right people to get in. The masses don't like multi-factor authentication because, frankly, it's way more of a pain than just typing in a password. This sort of technology encourages adoption of more secure methods because, assuming it works well, Mr. Bank gets more security, and, because it is much more user friendly, they won't get a ton of calls from their users bitching about how much of a pain it is to log in.
Re: (Score:2)
Which lands us with some other form of authentication, since this one will not be relevant to an attacker. You just purposely fail it and get asked an easier question.
What makes you think the fallback questions will be easier?
Re: (Score:2)
It doesn't need to be reliable enough to work 100%. At a certain accuracy level it could be enough to trigger secondary authentication.
I tend to walk away from my computer at work for trivial reasons, and I don't always lock the screen. So I started thinking about this a few years ago. I was thinking Bluetooth triangulation might be good, but that could be defeated by leaving your keys on your desk or a few other means. So I thought, what if the computer could detect my keyboard rhythm to a certain level?
Re: (Score:1)
This kind of monitoring would be terrible to rely on for actual authentication. However, it could be very useful for things like displaying the % match of typing patterns of the person you think you're talking to via IM (a particular bash.org quote comes to mind), or just sending a notification somewhere to say that behavioral patterns suddenly don't match anymore and a real person should go check it out.
The key is using it not as an authoritative authentication measure, but as additional information that can be factored in.
Re:What happens if you are sick? (Score:2)
What if you don't know you are sick and this detects it? An interesting way for Microsoft or Apple to monetize this would be to patent an Alzheimer's detection algorithm... quickly.
About time (Score:5, Informative)
http://psycnet.apa.org/journals/rev/6/4/345/ [apa.org]
Prior art (Score:2)
Rick Joyce and Gopal Gupta - Identity Authentication Based on Keystroke Latencies [pace.edu], 1990
F Monrose, A Rubin - Authentication via Keystroke Dynamics [ucdavis.edu], 1997
Arkady G. Zilberman - US Patent 6442692: Security method and apparatus employing authentication by keystroke dynamics [google.com], 1998 (I think some of the claims in this patent could be invalidated because of previous disclosure in the 1990 and 1997 papers)
An old idea (Score:3)
Back in the Morse code days, people used to ID senders through their keying style. This was fairly routinely used (and abused) in the military. For example, when the Japanese Navy went to attack Pearl Harbor, the normal radio operators were kept behind and sent messages from (IIRC) the Kuril Islands, in case the US was tracking them as belonging to the carriers (which I don't believe we were).
Re: (Score:3)
See also the important pause between spoken words in Rudyard Kipling's "The Great Game".
Re: (Score:2)
The idea of ID by keyboard style was used in science fiction in the '60s and '70s by multiple authors. Heinlein, "The Moon is a Harsh Mistress"? When I tried googling, though, I found descriptions from 2012, 2010, 2009, 2003, and 1989. See also the important pause between spoken words in Rudyard Kipling's "The Great Game".
How about Tron's "It felt like Flynn"?
Aside: They really need to make "The Moon is a Harsh Mistress" into a movie, or a mini-series.
[Detects one-handed lingerie browsing] (Score:4, Funny)
My Laptop: "Yep, that's him..."
Military intel used to do this to radio operators (Score:2)
With enough analysis, military intelligence could tell exactly which enemy radio operator was banging out Morse code into their radio, based on things like rhythm, speed, and how hard the key was struck. They call this metric the R/T operator's "fist".
Re: (Score:2)
Exactly. Even if you're not very good at sending or receiving Morse, you will have a distinctive "fist" - just as distinctive as your handwriting or the sound of your voice. As you get better, your speed and accuracy will improve but your fist will sound just the same.
Machine-sent Morse is as weirdly unintelligible as synthesized speech, and for much the same reason - the inflections are missing or wrong.
That's not a new technique (Score:1)
This method has been on the market at least since 2007: https://de.wikipedia.org/wiki/Psylock [wikipedia.org] (German Wikipedia; there's apparently no English version of that page)
and Strongbox has been doing it for about that long (Score:2)
In that implementation, at least, the keyboard rhythm is one of SEVERAL factors that are considered. A sprained finger probably wouldn't keep you out, unless you were also a) far from home and b) using a different computer than you normally do. All three factors combined would make it seem likely that it was someone else trying to access your account. Just one factor alone wouldn't trigger anything.
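As a sketch of the multi-signal idea in this comment (no single anomaly locks you out; only the combination does), here is a toy risk score. The signals, weights and threshold are invented for illustration and are not Strongbox's actual rules.

# Combine several weak signals; require more than one before asking for
# extra verification. Weights and threshold are made up for this example.
def needs_extra_verification(rhythm_mismatch, far_from_home, unknown_device):
    risk = (0.4 if rhythm_mismatch else 0.0) \
         + (0.3 if far_from_home else 0.0) \
         + (0.3 if unknown_device else 0.0)
    return risk >= 0.8   # one factor alone never trips it

print(needs_extra_verification(True, False, False))   # False: sprained finger only
print(needs_extra_verification(True, True, True))     # True: all three together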
Re: (Score:1)
Of course a) and b) are strongly correlated. If I'm at home, I usually use my desktop computer; if I'm far away from home, I'll certainly not use that; I'll typically use my laptop. Now if I type differently on my laptop than on my desktop (not unlikely, since the keyboard is noticeably different), that means I would not be able to get into a Strongbox site when abroad.
Happy surprise. Like your wife's haircut. (Score:2)
Now if I type differently on my laptop than on my desktop (not unlikely, since the keyboard is noticeably different)
That was one of the very first things I wanted to test, in the proof-of-concept stage. I asked someone who normally uses a laptop to instead use MY desktop keyboard. So they were going from their familiar laptop to an unfamiliar desktop keyboard. I was glad to see that with the elements we were measuring, it still looked like the same person - even on a totally different type of keyboard.
Understand this is similar to using hair (style and color) as factors in recognizing someone you know. If you see s
Re: (Score:1)
What about different keyboard layouts (e.g. someone normally using Dvorak using a Qwerty keyboard on another computer)?
Great question (Score:2)
I broke my arm (Score:1)
What do I do now?
Tracking (Score:3)
This technique is quite old, but it's not the typing you should be focusing on but more general computer usage. Think of it like an Intrusion Detection System: anything that would constitute abnormal behavior. Example:
Mar 1 18:05:57 localhost - User started web browser application
Mar 1 18:06:12 localhost - User opened 17 tabs to various porn sites
Mar 1 18:08:20 localhost - User closed browser
Mar 1 18:08:24 localhost - Microphone picking up sobbing noises
Mar 1 18:08:26 localhost - User identity verified.
Wait until malware coders get hold of this... (Score:1)
Future security awareness advice: "If you type without rhythm, then you won't attract a worm..."
My credit union has had this for several years (Score:1)
It has only failed to log me in a couple of times over the years, and all it does when it fails is make you answer your security questions.
old tech, and not that useful (Score:2)
This is decades old technology, and there's a reason it hasn't caught on: it has potentially high false negatives and high false positives.
Behavioral measures are useful for forensics, but they are not useful for authentication.
...good for surveillance, bad for authentication (Score:1)
Reliability is the issue with playful artificial k (Score:2)
Lappland? (Score:1)
I've never heard of Lappland, but I have heard of Lapland.
behavior is unreliable (Score:2)
Dogwalker (Score:2)
A "short" story that sort of takes this is into account. (also, it was only published 23 years ago).
http://books.google.com/books?id=FLNCovxKl7IC&pg=PA160&lpg=PA160&dq=orson+scott+card+dogwalker&source=bl&ots=a2pcvnSmFx&sig=xIKvpnSdJ01xoxMt2SbkG7XKphM&hl=en&sa=X&ei=OB8yUb-bCuLbyQHW24HAAQ&ved=0CDgQ6AEwAQ#v=onepage&q=orson%20scott%20card%20dogwalker&f=false [google.com]