
Graphics Cards: the Future of Online Authentication?

Gunkerty Jeb writes "Researchers working on the 'physically unclonable functions found in standard PC components (PUFFIN) project' announced last week that widely used graphics processors could be the next step in online authentication. The project seeks to find uniquely identifiable characteristics of hardware in common computers, mobile devices, laptops and consumer electronics. The researchers realized that apparently identical graphics processors are actually different in subtle, unforgeable ways. A piece of software developed by the researchers is capable of discerning these fine differences. The order of magnitude of these differences is so minute, in fact, that manufacturing equipment is incapable of manipulating or replicating them. Thus, the fine-grained manufacturing differences can act as a sort of a key to reliably distinguish each of the processors from one another. The implication of this discovery is that such differences can be used as physically unclonable features to securely link the graphics cards, and by extension, the computers in which they reside and the persons using them, to specific online accounts."
  • by Anonymous Coward on Tuesday October 02, 2012 @04:00PM (#41530831)

    see subject.

    • by NevarMore ( 248971 ) on Tuesday October 02, 2012 @04:18PM (#41531089) Homepage Journal

      Not entirely true. Good security is based on 3 things:
        - something only you have (your graphics card, a physical key)
        - something only you know (a password)
        - something only you are (biometrics, typing patterns)

      As it stands today you usually have one of those things, the password. Adding in something difficult to spoof as the summary suggests is an improvement. So now you have to have a password and a graphics card with certain flaws.

      I agree with your sentiments though. This is an interesting idea but seems awkward to implement.

      • I thought best practice was not to rely so much on "something only you are". A lot of biometric identifiers, such as fingerprints, have been replicated, and such identifiers that have been compromised can't be revoked and reissued so easily.
        • by Altrag ( 195300 ) on Tuesday October 02, 2012 @05:06PM (#41531609)

          That's why you have multiple methods:

          - Something you have can be stolen.
          - Something you know can be coerced from you, retrieved via social engineering (e.g. knowing your mother's maiden name), or otherwise extracted.
          - Something you are can be duplicated by replicating you (or at least, the portion of you that the scanner cares about.)

          It's still not perfect -- it's entirely possible that somebody will just kidnap you while you've got your physical token on you, which covers two of the three. And unless you're extremely stubborn and motivated, it probably wouldn't be hard to coerce most people's passwords either.

          The easiest from a computer perspective is the password -- that's why it's the most commonly used.

          Security tokens are rapidly becoming available for many systems (especially with the advent of cell phone authenticators since everybody already has a cell phone -- you don't need to purchase/obtain and carry around however many additional trinkets.)

          Biometrics is harder. First of all, biometrics itself isn't extremely accurate. It's good enough to narrow down the possibilities, but for really secure applications you still want a person to go in and confirm (or pick from a list, as in a police database search) to ensure that you've got a match. Not that people aren't fallible as well, but at least there's someone to blame.

          Secondly, biometric scanners aren't all that common yet. If touch screens reach a high enough density then perhaps they could be used for fingerprint ID. Cameras are likely already good enough to be used for retinal scans, but that would require the user to position the camera at the correct angle and whatnot, which is pretty implausible if they're just loosely holding it in front of them (that's why real retinal scanners, including your optometrist's tools, have headrests -- they keep your eyes in roughly the correct position while it's scanning).

          So we've got one... we're moving towards two... I think three-tier authentication is a while away yet, though.

          • everybody already has a cell phone

            Not strictly everybody. In my aunt's family of five, only three have cell phones. The other two rely on the house's POTS phone. And even then, not all cell phones can run "apps". Good luck getting an authenticator application to run on a prepaid flip phone without paying for both a sent and a received text message.

            Cameras are likely already good enough to be used for retinal scans, but it would require the user to position the camera at the correct angle and whatnot which is pretty implausible

            I've read good things about iris scans. On a device with a front-facing camera, having the user stare at four randomly positioned icons in the correct order would help get the eyes to the right angle and distinguish a live iris from a printout.

            • by Altrag ( 195300 )

              Not strictly everybody.

              Yes, I was generalizing quite a bit. But not all services use authentication tokens yet either. Both sides of the equation are increasing, however, so it's only a matter of time.

              I've read good things about iris scans. On a device with a front-facing camera, having the user stare at four randomly positioned icons in the correct order would help get the eyes to the right angle and distinguish a live iris from a printout.

              Admittedly I don't follow biometrics news (beyond what gets posted to Slashdot, of course), but of the two points you've listed there:
              - Front-facing camera requirement. These are pretty common -- almost all laptops have them, and many smart phones do as well. The bigger question is how much leeway can be given with respect

              • There is another problem. How can you guarantee that the picture is even coming from the camera, and not just a recording? Unless you have physical control of the computer recording the video, that video could just be a rerun of a verification made some time ago.

          • by bpkiwi ( 1190575 )
            I think you are missing a critical point about biometric identifiers, however. A password can be changed an infinite number of times; a token can be replaced an infinite number of times. A fingerprint? Well, once you have changed it ten times you are out of luck.

            Biometrics are just "something you have" but with limited ability to replace. It's a weak token at best.
            • That's a moderately good argument against using only a fingerprint (and there are others, like limited enrollment). But it's not an argument against using biometrics as part of a larger authentication system -- you only need to be able to revoke one of the required tokens to restore the security of the system.

              Biometrics are a useful addition to an authentication system not just for the user/admin benefits (hard to forget, hard to share) but also because the methods by which they are lost or duplicated are significant

          • by sjames ( 1099 )

            On the other hand, the better authentication is (but inevitably short of perfect), the more thoroughly screwed you tend to be when someone DOES spoof it.

      • Re: (Score:3, Interesting)

        by sexconker ( 1179573 )

        Not entirely true. Good security is based on 3 things:

        - something only you have (your graphics card, a physical key)

        - something only you know (a password)

        - something only you are (biometrics, typing patterns)

        As it stands today you usually have one of those things, the password. Adding in something difficult to spoof as the summary suggests is an improvement. So now you have to have a password and a graphics card with certain flaws.

        I agree with your sentiments though. This is an interesting idea but seems awkward to implement.

        From the perspective of the one doing the verification, that's something you know, something you know, and something you know.
        Nobody comes out and physically inspects your graphics card or looks at your thumb print or asks you to present a key fob.
        They all ask for the numbers that programs or devices output. Key fobs generate a specific code at a given time. Biometric scanners generate a hash given a specific input or any similar input. This GPU scanning program will do the same. These things are hard for an

        • Actually they are saying that GPUs are good candidates for PUFs (physically unclonable functions). This means that the GPU would not have one "fingerprint" but a unique function which is specific to it. The standard way to use this is with a sort of challenge-response protocol where intercepting any of the messages doesn't help impersonate the user later. PUFs are the physical analog of one-way functions. It may be possible to hack the verifier and then impersonate users to that verifier only, or to hack
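
          Roughly, that challenge-response idea looks like the following minimal Python sketch. The puf_response() function and the one-time enrollment table are stand-ins of mine, not anything the PUFFIN material specifies, and a real PUF's answers are noisy, so a deployed scheme would run them through error correction before comparing.

            import hmac, os

            # Hypothetical enrollment: the verifier measures the device's PUF once and
            # stores a pile of (challenge, expected response) pairs, never reusing one.
            def enroll(puf_response, n_pairs=100):
                table = {}
                for _ in range(n_pairs):
                    challenge = os.urandom(16)
                    table[challenge] = puf_response(challenge)
                return table

            # At login the verifier spends one unused pair; replaying an old exchange
            # doesn't help an attacker answer the *next* challenge.
            def verify(table, puf_response):
                challenge, expected = table.popitem()
                return hmac.compare_digest(expected, puf_response(challenge))
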
      • Sounds to me like it causes a bigger problem than it would solve. The problem with using a built-in graphics card is that all your online accounts would suddenly be tied to the ONE device with that graphics card. You wouldn't be able to log in from any other device, and that includes any new devices you buy to replace old ones. I hope I'm misunderstanding something 'cause that sounds like a useless technique in a networked world.
        • Or they could store more than one hardware fingerprint, just like you can have more than one key in your authorized_keys file.
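
          Server side, that could be as simple as a list per account, the way ~/.ssh/authorized_keys holds several public keys. A quick Python sketch, with made-up fingerprint values and an is_known_device() helper I'm assuming:

            import hmac

            # One account, several registered hardware fingerprints -- lose a card,
            # revoke just that entry and register the replacement.
            registered = {
                "alice": [b"fingerprint-of-desktop-gpu", b"fingerprint-of-laptop-gpu"],
            }

            def is_known_device(user, fingerprint):
                return any(hmac.compare_digest(fingerprint, fp)
                           for fp in registered.get(user, []))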

      • by Kjella ( 173770 )

        Not entirely true. Good security is based on 3 things:
        - something only you have (your graphics card, a physical key)
        - something only you know (a password)
        - something only you are (biometrics, typing patterns)

        Good authentication is based on 3 things. Good security depends on a lot more, like not getting hacked so they can go crazy with your credentials. My online bank uses two-factor authentication for each unknown/big transfer, so the integrity of my bank account is pretty good, but pretty much all confidentiality is out the window if they can piggyback on your connection, and if the security is only at the gate then the rest is gone too. I'm not concerned about my authentication tokens, they're fairly safe. It's the dev

      • There are really only two things. Something you claim to be, and something you can present as evidence of that.

        "Something you have" could be either, but it's not really separate.

      • I still haven't heard a good explanation for why all 3 things are not, essentially, "something you know". Until we switch back to analog, all of them are going to be encoded at some level as digital data and sent as part of authentication, right?

        Or am I missing something?

    • But this wouldn't work for me. My evga graphics card is FTW flawless!

    • I don't think so. That's why I don't carry a debit card. Oh wait. What I'm saying doesn't actually make sense, because the card is only one factor of a two-factor authentication scheme. Silly me.
  • by SGDarkKnight ( 253157 ) on Tuesday October 02, 2012 @04:00PM (#41530843)

    I could see this being a good thing, and a bad thing. If online accounts are using hardware to determine the user account, what's to stop someone from just "borrowing" your hardware and connecting to your account? Sure, they could still have usernames, passwords and such as backup, but then what would be the point of doing the hardware authentication? Plus how much of a pain in the ass would it be to upgrade your computer and notify the online account to expect changes in your hardware for the next time you log in?

    Bah, I think I'm rambling now... need coffee... or beer... beer sounds better.

    • by 0racle ( 667029 ) on Tuesday October 02, 2012 @04:07PM (#41530953)
      I often buy my video cards second hand off eBay. I wonder whose accounts I'd be able to get into one day doing that.
      • TFA doesn't mention how they calculate these metrics, but (maybe naively) I assume they're deduced by measuring differences in performance for a given task?

        This raises the question: what happens if the performance of your graphics card changes -- say your GPU overheats or the fan gets clogged up with dust? Surely that will change the results of the 'authentication' process?
    • by sumdumass ( 711423 ) on Tuesday October 02, 2012 @04:07PM (#41530955) Journal

      Or how much of a pain would it be for me to clone your hardware's uniqueness and drop it into a virtual machine whose software stands in for the hardware?

      Now instead of tricking you into installing malware, I just need to convince you to create an account.

    • by mangobrain ( 877223 ) on Tuesday October 02, 2012 @04:10PM (#41530981) Homepage

      I was thinking the exact same things. Identifying the hardware is fundamentally different from identifying the person currently using it, and being able to state unequivocally that they are authorising whatever action is taking place. Plus, as you said, hardware gets upgraded. Even worse, though, is that hardware also fails; particularly high-end GPUs nearing the end of a life spent being slightly too hot. Unexpected hardware failure could leave users with no overlap in the usable life of old & new components, meaning they cannot log in to existing accounts in order to register the fingerprint of the new hardware. Also, unless there's a hidden cache of documents I'm missing somewhere, I can't find any details of what these "unclonable functions" actually are, just that they exist. Are they robust against simple replay attacks?

      This all smells like a bad idea to me; something cooked up by a bunch of theorists with very little grounding in practicality. Not sure what part of this could be a "good thing", to be honest.

      • Yep, the statistical likelihood of a graphics card failing suddenly and catastrophically rises to a near certainty once you register its fingerprint. It's the Murphy's Misery Principle: the likelihood of something going wrong is proportional to the amount of misery it would cause. Many graphics cards fail but keep operating, just with glitches. I imagine this might change the fingerprint.
    • It's a cheap way to do two-factor authentication. You need your password, and you also need your graphics card. If either of them is lost or changes, then you have a much more difficult re-identification process. This system has the same vulnerabilities that any two-factor authentication scheme has, but fewer than many deployed systems. Many banks already use cookies or something to "register" your computer, and ask you extra questions when you log on from a different machine or clear the cookies. Some se
  • While the card's "identity" may be different, it doesn't matter if something can stand in for the hardware and provide a false ID.

    • by viperidaenz ( 2515578 ) on Tuesday October 02, 2012 @04:08PM (#41530959)
      ... which is something explicitly mentioned in TFA.

      The more difficult question to answer at this point, she said, is whether someone could use software to emulate the differences in behavior between graphical processing units. Lange said the key is finding a way to guarantee, in an authentication process, that the party attempting to authenticate a user is communicating with an actual GPU and not software attempting to replicate its behavior and uniqueness.

    • It can be a man-on-the-side attack, too.

      The attacker just needs to have something running on your machine that they can use from their machine to provide the answers to your bank.

      This is not technically an interposition attack; it's a referral attack, similar to the captcha-breaking systems which proxy a captcha to a human wanting to look at porn: the human solves the captcha, gets the porn, and is happy, while the system proxying the captcha has used the solution to attack an unrelated system normally requ

      • by Lehk228 ( 705449 )
        Not so much a MITM as a remote zombie: just use remote control to make victims' PCs do the dirty work on themselves. Bonus points: it is much harder for the victim to claim fraud, since IP logs and hardware fingerprinting show it was done from their PC. For even nastier crimeware, have it wait until certain activities are detected, such as Facebook posting or email reading, so the user was provably at their computer at the time of the transaction. Further extended, have the malware delivered thro
  • I have a home Linux machine, my wife's machine, my laptop and my work machine.

    How can I share my authentication amongst them?

    • Maybe similar to what you would do with SSH keys and an online repository like GitHub? Have different keys on each machine, but all linked to the account?

  • by Hentes ( 2461350 ) on Tuesday October 02, 2012 @04:04PM (#41530911)

    Why not just admit that they've found the unbreakable DRM? Online authentication is a solved problem.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      The order of magnitude of these differences is so minute, in fact, that manufacturing equipment is incapable of manipulating or replicating them.

      Don't worry; if it's well-defined enough for software to use, it's well-defined enough to emulate.

      There is no unbreakable DRM.

      • if it's well-defined enough for software to use, it's well-defined enough to emulate.

        Unless current computers aren't fast enough to emulate it in the time that the party on the other end of the network connection demands. Try running PBKDF2 in hardware vs. in software.
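
        To get a feel for that software cost, the Python standard library's hashlib.pbkdf2_hmac can be timed directly; the iteration count below is just an example:

          import hashlib, os, time

          salt = os.urandom(16)
          start = time.perf_counter()
          # One PBKDF2-HMAC-SHA256 derivation with a deliberately high iteration
          # count -- timing it gives a concrete number to compare against a
          # hardware implementation doing the same work.
          key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 1_000_000)
          print("derived %s... in %.2fs" % (key.hex()[:16], time.perf_counter() - start))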

        • by 1u3hr ( 530656 )
          What if your network is lagging? Satellite connection? You have to have a fallback to more traditional verification.
          • The timing may not be important to the server; it's important to the program that constructs the hardware identifier. I'd imagine that the end result would just be a hash value.

            • by 1u3hr ( 530656 )

              it's important to the program that constructs the hardware identifier. I'd imagine that the end result would just be a hash value.

              If the program that does the verification is on the client, then it's liable to be compromised. Or just replaced by a keygen.

          • You're like me [slashdot.org] in that you like to think up edge cases. Trouble is, the common case is more profitable to the market than the edge cases. I'll deal with each:

            What if your network is lagging?

            Then you are not in the audience for a real-time online multiplayer video game.

            Satellite connection?

            Then you are not in the audience for a real-time online multiplayer video game. Heck you're probably not even in the audience for a single-player game that's a large download because of the single digit GB/mo cap that satellite ISPs apply to home subscribers.

    • DRM, yes, there is no other use case for this. But unbreakable? No more than any DRM ever -- reliant on a 'chain of trust' consisting of hardware, rootkitted operating systems, apps and the vendor, at every step distrusting the user. I wonder how far this will get this time.

    • Why not just admit that they've found the unbreakable DRM?

      Because they haven't. Software can still be disassembled and stripped of authentication routines. This just adds another layer of bullshit to the cake of lies. Repeat after me: Client side security is a lie. Client side security is a lie...

  • by aaaaaaargh! ( 1150173 ) on Tuesday October 02, 2012 @04:05PM (#41530929)

    You can feed false information to the software that reads the characteristics of a graphics card, just as you can fake a MAC address. I fail to see a substantial difference.

    • Re:Why not RTFA? (Score:4, Informative)

      by Anonymous Coward on Tuesday October 02, 2012 @04:15PM (#41531049)

      You can feed false information to the software that reads the characteristics of a graphics card, just as you can fake a MAC address. I fail to see a substantial difference.

      "The more difficult question to answer at this point, she said, is whether someone could use software to emulate the differences in behavior between graphical processing units. Lange said the key is finding a way to guarantee, in an authentication process, that the party attempting to authenticate a user is communicating with an actual GPU and not software attempting to replicate its behavior and uniqueness. Lange went on to admit they aren’t quite there yet, which is why the product is not finished."

      • I don't see them "being there" anytime soon either. Any hardware can be emulated; it's just a matter of how many resources the crackers can put into it -- it doesn't have to be a basement geek, it could very well be China/NSA/KGB/whatever.

  • by Anonymous Coward on Tuesday October 02, 2012 @04:07PM (#41530949)

    If this fingerprint is orders of magnitude beneath manufacturing controls, are the researchers sure that it persists over long time frames?

    Will that graphics card have the same fingerprint the first day it is purchased as it does 2 years later after putting in hundreds of hours at high temperatures playing accelerated games?

  • From:

    http://puffin.eu.org/WP1.html [eu.org]

    the best I can figure is they're doing something like shutting off memory refresh and seeing what the cells look like. That's the best source of random mfgr "stuff" I can think of.

    Other than that, I'm mystified how they're doing it. There just shouldn't be that much mfgr variation.

    It could be that there's only a couple bits of randomness (like they're reading out the model number and calling it good). The fact they aren't advertising the details implies the details

    • by slew ( 2918 ) on Tuesday October 02, 2012 @05:26PM (#41531817)

      FWIW: If you read WP2 & WP3, I think they are just attempting to read some of the SRAM from inside the GPU as a source of what they call a "PUF" (physically unclonable function). They hope to sprinkle some error-correction code and some magic crypto dust over the uninitialized SRAM pattern to create a number that will be usable for attestation (basically to assure that it is the machine that you think it is).

      This idea isn't new. A quick Google search shows papers about using SRAMs as both PUFs and random numbers going back to 2007 (they called them FERNs): http://people.cs.umass.edu/~kevinfu/papers/holcomb-FERNS-RFIDSec07.pdf [umass.edu]

      The major problems with this stuff is that...

      Once you power up your system, something is gonna want to use that SRAM (GPU vendors aren't in the business of leaving big chunks of SRAM that they don't use lying around for researchers to discover), so you have to take a snapshot after power-up but before anything wants to use the GPU. This makes many avenues of attack available (e.g., you have to put that fingerprint somewhere, because the GPU will shortly trample all over the SRAM it came from).

      Second is the stability issue. Although some parts of the uninitialized SRAM are going to be statistically stable (powering up to 1 or 0 pretty reliably), others are going to be pretty random (in fact, other researchers are looking for highly unstable bits in SRAM power-up to be able to extract a random number for a nonce). Across temperature, and over time as the parts age, these bits will change (some stable ones will become random and some random ones may exhibit a strong bias one way or another). Without extensive characterization over age and temperature, this would be pretty unstable to use as a definitive ID.

      Third, when GPU vendors notice that people are accessing SRAM before initialization, they will start wiping the memory on boot. This is both to prevent this third-party ID usage model (because nobody wants to repeat the Intel CPUID fiasco) and because, now that GPUs are being used for general-purpose computing, any kind of SRAM retention across power-up is a security risk. On a related note, there are in fact other researchers attempting to use SRAM retention to create a reasonably secure clock (google TARDIS: Time and Remanence Decay in SRAM).

      If I had to speculate, about the only reasonable model for this (assuming the GPU vendors don't co-opt it or shut them out) is to create some sort of "ticket" system. Distill a timestamp and a challenge value with the PUF (and maybe even the "random" part of the SRAM for salt) down to a ticket using some cryptomagic. That ticket would be valid for a while, and you'd have to create a new ticket before it expired. Over a short enough time and temperature regime, a security system might be convinced that this temporary ticket is an acceptable substitute credential, but it would not really replace an actual authentication technique.
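
      A back-of-the-envelope version of that ticket, as a Python sketch: it assumes the PUF reading has already been error-corrected down to a stable byte string (puf_key), and the field layout and lifetime are my own invention, not anything from the PUFFIN pages.

        import hmac, hashlib, time

        TICKET_LIFETIME = 300  # seconds; arbitrary choice for the sketch

        def make_ticket(puf_key, challenge):
            issued = int(time.time())
            msg = challenge + issued.to_bytes(8, "big")
            tag = hmac.new(puf_key, msg, hashlib.sha256).digest()
            return {"challenge": challenge, "issued": issued, "tag": tag}

        def check_ticket(puf_key, ticket):
            if time.time() - ticket["issued"] > TICKET_LIFETIME:
                return False  # expired -- the client has to mint a fresh one
            msg = ticket["challenge"] + ticket["issued"].to_bytes(8, "big")
            expected = hmac.new(puf_key, msg, hashlib.sha256).digest()
            return hmac.compare_digest(expected, ticket["tag"])

      Which also makes the weakness plain: whoever checks the ticket has to hold the same PUF-derived key, so it's a shared secret with extra steps rather than a replacement for real authentication.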

      This stuff has also been researched extensively for 5 years or so. I don't know what these folks are really bringing to the table (other than that they are looking at GPUs for big blocks of SRAM). Why be so secret? Maybe it's because they want to keep that funding coming. A quick Google search showed that someone even wrote an undergrad paper on the subject of SRAM PUFs in 2009... http://www.wpi.edu/Pubs/E-project/Available/E-project-031709-141338/unrestricted/mqp_sram.pdf [wpi.edu]

      • by kent.dickey ( 685796 ) on Tuesday October 02, 2012 @06:02PM (#41532095)

        The WPI report confirms what most everyone suspects: Reading from an uninitialized SRAM returns mostly noise, about 50/50 (but not exactly) 1's and 0's, and highly dependent on temperature. I think what they're saying is something like "Look at uninitialized memory, whose values are apparently random 1's and 0's, and somehow compute a unique fingerprint that is stable for this device, but different from all other devices". I'm not sure that's actually possible. I can't think of anything on chips that would produce "random"-looking data and which wasn't highly temperature dependent.

        Even if a clever algorithm could "fingerprint" an SRAM device, others have already pointed out all the ways to break this. It's simply a slightly more complex MAC address, and will likely be easy to effectively clone. It's like printing a password on paper in special red ink that only you have, and then saying no one can log in to your system (by typing the password) since they can't replicate that red ink. Umm, the special red ink is a red herring. All you need is the password.

        I don't think there's really anything here. There's no details at the PUFFIN site.
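
        For what it's worth, the "stable bit" trick in the SRAM-PUF papers linked above is roughly the following Python sketch; the dump format (lists of 0/1 bits from repeated power-up reads) is an assumption on my part.

          import hashlib

          def fingerprint(dumps):
              # dumps: several power-up reads of the same SRAM, each a list of 0/1 bits.
              # Keep only the positions that agree across every read...
              n = len(dumps[0])
              stable = [i for i in range(n) if all(d[i] == dumps[0][i] for d in dumps)]
              # ...and hash both the positions and their values into a fingerprint.
              material = ",".join(str(i) for i in stable) + "|" + \
                         "".join(str(dumps[0][i]) for i in stable)
              return hashlib.sha256(material.encode()).hexdigest()

        The published schemes then add error-correcting "helper data" so the result survives exactly the temperature- and age-induced bit flips described here; without that step the objection stands.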

  • It's not a good idea to use the particularities of a hardware production process as the theoretical basis for authentication.
  • If you have ISPs give everyone a fixed IP address, you get ID down to the house level. Have cell phones use fixed IP addresses too. That gets most of the world IDed fairly well and it doesn't require a fancy new API to allow a web site to pull some hardware ID from your computer - it's the same as the address they're sending data to.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      I'm registering 192.168.1.1 as myself... Please don't anyone use it...

    • So, that means I can only use a single device?
      I can't share any computers either?

  • I'm interested, but sceptical.

    I don't need to clone the hardware, if it is just the source of some data. I can simply replay your data on my machine, no matter what the hardware is. You can't prevent that - if you could prevent software manipulation, you could skip the whole hardware step and embed your key in the software.

    Hardware as authentication only works if actual calculations are done on the hardware (Smartcards, SecureID, etc.) or you are able to interface with the hardware (RFID chips, keycards, et

  • I have upgraded my video card twice this year alone; do you seriously expect me to jump through hoops of bullshit just to get my software running again? I own more than one game, and one song... douches

  • That makes sense. (Score:5, Insightful)

    by overshoot ( 39700 ) on Tuesday October 02, 2012 @04:29PM (#41531223)

    Every time I upgrade my graphics card, all of my games stop working.

    I'm sure that there's something wrong with this, but I can't put my finger on it.

    • The obvious mistake is that you're buying games which include this online authentication.

      There's a reason for me not owning a single Ubi game past Assassin's Creed.
  • That's cool in a nerdy sort of way. Ten years out of date, though. I guess they didn't look at what's already available, what used to be available and is no longer used, and why. This sentence puts it ten years out of date: "link the graphics cards, and by extension, the computers in which they reside and the persons using them, to specific online accounts". One person, one account! Commodity software that's been widely available for many years already ties one account to one human user, across multiple devices,
  • I wonder if the specific parameters used to identify a card (note: not a user or a machine...) can change as the card ages and wears. Heat/cold cycles, failing bits in memory, changes/updates in drivers, malware infecting drivers or firmware... (that last would be -real- fun... suddenly you are not you.)
  • This is the computer equivalent of biometrics and has all of the same security issues as biometrics for people.

    Sure, the graphics card can't be cloned, just like you can't clone a finger or retinal print. However, if the authenticating system is compromised, it becomes really, really hard to establish your credentials again -- although replacing a graphics card is easier than replacing a finger or eyeball.

    See The issues with biometric systems [biometricnewsportal.com] (the first thing that popped up on google for me)

  • ...and your 16-year-old babysitter's boyfriend sits down at your computer while you're out to dinner, and your premise for security is out the window. The simple fact of the matter is you can NEVER be sure the person on the other end of a computer connection is who they say they are. Once you assume that, the rest of your security procedures become rather simple.

    My bank allows me to move money from one of my accounts to another of my accounts. That's it. The worst that can happen is someone hacks in and moves al
  • So you can distinguish between two supposedly-identical graphics cards. Ok, yeah, I guess that's neat. One hacker test point for you. But you're really reaching for applications of this knowledge, aren't you? Dude, give in: it has no useful applications. That's ok. Be happy about what you did anyway, use it to impress some chick in a bar ("hey baby, did you know I can tell your Radeon from another Radeon?"), and go on to the next project.

    • Short answer - you can't. If I render a picture on a properly working card it will be identical on another card on the same software setup. That is the whole point of computers - given the same initial state and inputs they generate the same output.

      What they must be doing is pushing the cards into working incorrectly (overclocking, undervolting...), as different cards of the same model will fail differently outside of their normal operating conditions.

  • fingerprints (Score:4, Interesting)

    by PopeRatzo ( 965947 ) on Tuesday October 02, 2012 @06:44PM (#41532463) Journal

    Why is the first thing I thought about when I read this "another way for the MPAA/RIAA to track down copyright violators so they can send drone strikes"?

  • So they want to use the single most unreliable hardware component in my PC to identify it and potentially control whether I have access to my online resources?

    Over the years, the graphics card is the one thing that consistently ends up cooking itself. Never mind that something as simple and common as a firmware version change or a driver version change can and does modify its behaviour.

  • by Sebastopol ( 189276 ) on Tuesday October 02, 2012 @08:00PM (#41533049) Homepage

    The actual website indicates it hasn't even been done yet, and is lighter on details than white bread.

    It is complete BS; the website has no details and tons of press releases. Here is how much work they have done so far -- about a dozen lines of text:

    http://puffin.eu.org/WP1.html [eu.org]
    http://puffin.eu.org/WP2.html [eu.org]
    http://puffin.eu.org/WP3.html [eu.org]

    I think they posted the release in hopes of letting the online community discuss ideas, and will then harvest those.

    Lame.

    • I think they posted the release in hopes of letting the online community discuss ideas, and will then harvest those.

      I think I speak for the whole "Online Community" when I say that our idea is "Die in a fire."

  • The moment I have to authenticate myself in order to use the Internet, beyond the ppp username/password in my DSL router, that'll be the moment I stop using the Internet.

    What's wrong with these people that they have this insatiable urge that everything and everybody needs to be identified and authenticated?

    I am so sick of that.

    What a waste of time and resources. Why don't these "researchers" actually do something useful?

    • by tbird81 ( 946205 )

      In fairness, they were probably playing around with video cards, setting up a LAN or something and overclocking them, and discovered identical brand cards had different errors. They thought "this seems cool" and investigated and experimented a bit more and found out you can identify cards with it.

      They then thought "I can get funding for this if I make it topical... Mmm, stops terrorism? No. Reverses global warming? No. Predicts economy? No."
      Then the epiphany "It can stop haxoring! That's how we'll get fundi

    • by cdrguru ( 88047 )

      Well, if you ever access any sort of financial system, be it a bank, trading account, Ebay, etc. you might be concerned that you are the one doing it rather than someone else doing it "for" you. Similarly, if folks in a particularly nasty government start getting messages implying that the sender is going to be using a rifle to start taking down members of the government, you might like it if there was a solid way of saying that it wasn't you even though the messages claim to be from you.

      Even simpler - how

  • My guess is they overclock the GPU until it fails (in this context "fails" means it stops working 100% correctly, not that it catches fire), and then check what failure has occurred. Do the textures generated on GPU core 15 start dropping bit 12 when running at 933MHz? That sort of stuff.

    The propagation delay between and through components is very sensitive to differences in the process, and unlike overclocking memory or the CPU, overclocking a GPU will not make the whole system unstable. Of course, these sorts of failures are

  • Even if they made it so you can authenticate by video card, it doesn't mean it's forgery-proof; you can always hack the software to report the same result that someone else's video card generated and bypass the entire thing altogether.
  • I am really sorry, but I've got a book, printed in the mid-80s, that suggests nearly the same method for identifying hardware: turn off DRAM regeneration of a memory block for a while, then read the contents. These methods are really useful for, say, identifying a stolen notebook.

  • Sure, smartcards aren't 100% foolproof, but they're purpose-designed for this sort of thing, are tamper-resistant, have widespread support from a variety of vendors, are cheap (I recently bought a new USB token [with integrated smartcard] for 17 EUR), and there are standardized interfaces for communicating with them.

    For general online authentication, use something like OATH one-time passwords (such as produced by hardware tokens, Google Authenticator, or other compatible code generators). It makes password gu
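
    For reference, an OATH TOTP code (RFC 6238, what Google Authenticator implements) takes only a few lines of Python; the shared secret below is a placeholder:

      import hmac, hashlib, struct, time

      def totp(secret, period=30, digits=6):
          counter = int(time.time()) // period
          digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
          offset = digest[-1] & 0x0F                      # RFC 4226 dynamic truncation
          code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
          return str(code % 10 ** digits).zfill(digits)

      print(totp(b"example-shared-secret"))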

  • How is this substantially different from the unique CPU ID that Intel tried to introduce back in the PIII days? Everyone thought that was a Bad Idea because of privacy concerns.
