A Vision For a World Free of CAPTCHAs
An anonymous reader writes "Slate argues that we're going about verifying humans on the Web all wrong: 'As Alan Turing laid out in the 1950 paper that postulated his test, the goal is to determine whether a computer can behave like a human, not perform tasks that a human can. The reason CAPTCHAs have a term limit is that they measure ability, not behavior. ... the random, circuitous way that people interact with Web pages — the scrolling and highlighting and typing and retyping — would be very difficult for a bot to mimic. A system that could capture the way humans interact with forms algorithmically could eventually relieve humans of the need to prove anything altogether.' Seems smart, if an algorithm could actually do that."
Re:Just a Thought... (Score:3, Informative)
Factoring an integer has exactly one correct answer, so trial and error doesn't work. Scrolling and clicking tempos have many acceptable answers, so trial and error does work.
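The asymmetry can be sketched with a toy comparison (hypothetical checks, invented for illustration): a factoring check accepts exactly one answer, while a behavioral tempo check necessarily accepts a broad band of inputs, so a bot sampling random "human-ish" values lands inside the accepted region almost immediately.

```python
import random

def check_factors(n: int, p: int, q: int) -> bool:
    """Only one unordered pair of nontrivial factors passes."""
    return p > 1 and q > 1 and p * q == n

def check_tempo(keystroke_gaps_ms: list) -> bool:
    """Any average keystroke gap inside a broad 'human' band passes."""
    avg = sum(keystroke_gaps_ms) / len(keystroke_gaps_ms)
    return 50 <= avg <= 500

# A bot guessing random plausible gaps passes the behavioral check
# on the first try, because the accepted region is huge.
random.seed(0)
gaps = [random.uniform(80, 300) for _ in range(20)]
```

Tightening the band just starts rejecting real humans, which is the commenter's point.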
Spam Karma? (Score:3, Informative)
It seems like the old Spam Karma plugin for WordPress did this. It weighed how long a visitor had been on the page against how much they had typed, how fast they typed, and a bunch of other factors before it ever fell back to a CAPTCHA. Back when I used WordPress, I remember it being pretty accurate, too.
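A minimal sketch of that kind of scoring, assuming just the two factors the comment names (time on page vs. amount typed). This is a hypothetical heuristic for illustration, not Spam Karma's actual algorithm, and the thresholds are invented:

```python
def spam_score(seconds_on_page: float, chars_typed: int) -> float:
    """Return a score in [0, 1]; higher means more bot-like.

    Hypothetical heuristic: a long comment submitted almost
    instantly, or typed impossibly fast, looks automated.
    """
    score = 0.0
    cps = (chars_typed / seconds_on_page) if seconds_on_page > 0 else float("inf")
    if cps > 15:          # far faster than sustained human typing
        score += 0.6
    if seconds_on_page < 2 and chars_typed > 50:
        score += 0.4      # long text, near-zero time on page
    return min(score, 1.0)
```

A real plugin would combine many more signals (scroll events, focus changes, referrer) and only show a CAPTCHA when the combined score is ambiguous.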
Re:Response Times (Score:3, Informative)
These guys have botnets, and with networks like Tor you can't limit access by IP. Besides, if you've got a CAPTCHA that is being attacked and you want to rate-limit by IP, you need to route everything through a single point to do the counting, which completely breaks your load balancing. That chokepoint then becomes a DoS target.
Basically, the attacker has more machines, more IP addresses and more time than the target.
Even if I only have one machine, that's fine: I attack 10 or 100 sites instead of just yours. Or I use a network like Tor and pick random exit nodes. The only problem? All of my compatriots will be doing the same.
The target won't see any real decrease in attacks; it will only lose legitimate users (corporate customers, and anyone connecting from shared IPs at home, dorms, schools, or libraries) who happen to sit behind the same addresses as the attackers.
Re:Just a Thought... (Score:3, Informative)
The anonymous poster makes a good counterargument against the idea that the algorithm must be easily defeated: just because you have an algorithm that detects human behavior does not mean you have an algorithm that emulates the behavior it detects.
That's vaguely clever, but it doesn't really pass the sniff test. While "one-way" or "trapdoor" functions may or may not exist, good candidates appear to be rare; that's why it's such a big deal when computer scientists identify a new possible trapdoor function. The chance that some randomly chosen process (for example, verifying human mouse gestures on a webpage) happens to be one-way is monumentally small.
Trapdoor functions came to prominence in cryptography in the mid-1970s with the publication of asymmetric (or public key) encryption techniques by Diffie, Hellman, and Merkle. Indeed, Diffie and Hellman first coined the term (Diffie and Hellman, 1976). Several function classes have been proposed, and it soon became obvious that trapdoor functions are harder to find than was initially thought.
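The classic candidate one-way function from that 1976 work is modular exponentiation: computing g^x mod p is cheap even for enormous exponents, while recovering x (the discrete logarithm) is believed to require brute-force-like effort for well-chosen parameters. A toy sketch with deliberately tiny numbers (real deployments use moduli thousands of bits long):

```python
# Toy illustration of a candidate one-way function: modular
# exponentiation, the basis of Diffie-Hellman key exchange.
p = 2_147_483_647   # the Mersenne prime 2**31 - 1
g = 16807           # a primitive root of p (7**5)
secret_x = 123_456

# Forward direction: fast, via square-and-multiply (built-in pow).
public_y = pow(g, secret_x, p)

# Inverse direction: for parameters this small, brute force works,
# which is exactly why real parameters are made astronomically larger.
def discrete_log_bruteforce(g, y, p, limit):
    acc = 1  # g**0
    for x in range(limit):
        if acc == y:
            return x
        acc = (acc * g) % p
    return None
```

The asymmetry between the one-line forward computation and the exhaustive search backward is the entire point; at realistic sizes the search space makes the inverse infeasible.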
http://en.wikipedia.org/wiki/Trap_door_function [wikipedia.org]