Encryption Security

Physical-layer Ethernet Encryption

Tekmage writes "Intel has just announced that they'll be shipping their Ethernet encryption co-processor in the fourth quarter. Definitely a must (IMHO) for anyone considering wireless networking."
  • First, I'd like to disagree that the key lengths are short. Triple-DES has a 168-bit key, which should be long enough.

    Triple-DES was designed very carefully to avoid problems with multiple encryptions. Cryptanalysts found that encrypting something twice with DES and two different keys is not significantly more difficult to break than a single encryption (Schneier; Applied Cryptography; Section 15.1). Triple-DES does encrypt-decrypt-encrypt, defeating the meet-in-the-middle attack.

    But if you use SSL with DES encryption (or even Triple-DES) over a physical layer that also uses DES or Triple-DES, would this have an adverse effect? I'm asking, not telling, as I don't know. Does the meet-in-the-middle attack work when encrypting different data (it would seem not, but IANACryptanalyst)?

    Any thoughts from someone who knows more than I do?

    -Dave
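The meet-in-the-middle attack discussed above is easy to demonstrate on a toy cipher. The sketch below is illustrative only: a made-up, cryptographically worthless 16-bit block cipher stands in for DES, but the structure of the attack is the same, and it recovers both keys of a double encryption with roughly 2^16 operations plus a table, instead of the 2^32 a naive exhaustive search would need:

```python
from collections import defaultdict

MASK = 0xFFFF  # toy 16-bit keys and blocks, standing in for DES's 56-bit keys

def toy_enc(k, m):
    # an invertible (but cryptographically worthless) toy block cipher
    return ((m + k) & MASK) ^ ((k * 0x9E37) & MASK)

def toy_dec(k, c):
    return ((c ^ ((k * 0x9E37) & MASK)) - k) & MASK

def double_enc(k1, k2, m):
    return toy_enc(k2, toy_enc(k1, m))

k1, k2 = 0x1234, 0xBEEF
pairs = [(m, double_enc(k1, k2, m)) for m in (0x0000, 0x5A5A)]

# Meet in the middle: tabulate every single encryption of the first
# plaintext, then decrypt the first ciphertext under every possible
# second key and look for values that "meet" in the middle.
m0, c0 = pairs[0]
middle = defaultdict(list)
for ka in range(MASK + 1):
    middle[toy_enc(ka, m0)].append(ka)

candidates = []
for kb in range(MASK + 1):
    for ka in middle.get(toy_dec(kb, c0), []):
        # use a second known pair to weed out false matches
        if double_enc(ka, kb, pairs[1][0]) == pairs[1][1]:
            candidates.append((ka, kb))

assert (k1, k2) in candidates
```

This is why double DES buys almost nothing over single DES, and why Triple-DES uses the encrypt-decrypt-encrypt construction, with an effective strength below its nominal 168-bit key length.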
  • And "peer" review is exactly what you'll probably get. If Intel has any hopes of selling some of these things to the US Dept. of Defense, certain other US gov't agencies, and some European gov't agencies, then they will submit it to a testing lab to get certified against the "Common Criteria" for information security. If you're not familiar with it, the "Common Criteria" (aka ISO IS 15408-1, -2, and -3) is the replacement for the old DoD "orange" book.
  • great, don't forget that Intel's Nightshade mboard, with the integrated NIC that does Wake on LAN, can also be put to sleep with an IP packet. these will most likely suffer the same problem. now all of your trusted IPsec Intel NICs are asleep. where did your network logging and IDS's go? oh yeah... nowhere. :) have a nice day, folks.
  • "Cryptology 101: you only need a cipher strong enough to last as long as you use it."

    IMHO a cipher can be considered "safe enough" as soon as other ways of obtaining the encrypted information (breaking into your house and taking your PC, forcing you to tell your passwords, you name it) become easier than breaking the encryption. Breaking the SSL key that you entrust your credit card info to is technically possible, but it still requires significantly more effort than just stealing your wallet or even writing down the numbers while you use your card in a shop, so that's safe as far as I'm concerned.

    Yes, you might have incredibly valuable information that you want absolutely no one to get... But if you pick encryption strong enough that whoever wants that information is better off doing something like getting you drugged and making you tell your password than brute-forcing the encryption, that's about as much as you can do. (Except for completely hiding the existence of secret information; see steganography.)
  • Okay, well how about this.

    1. Take known word - let's use "abracadabra" for illustration - and send it through the card.

    2. Observe encrypted results with network sniffer or similar device.

    3. Take same known word and send it through software version of the algorithm you are testing (DES, 3DES, whatever).

    4. Observe result of software encryption.

    5. If the result from step 2 equals the result from step 4, the chip is doing what it claims.

    Which brings up an interesting question. Since DES, 3DES, and the other encryption methods listed for this chip set are symmetric encryption algorithms (i.e., both sender and receiver use the same key to encrypt and decrypt), how do you get the same key into the sending and receiving cards without sending the key in clear text?

    I haven't read the IPSec standard yet, but the way this works for other hardware encryption systems I've worked with (like Automatic Teller Machines) is that someone has to initialize the encryption devices on both ends with a "key encryption key". This key is then used to send the first session key, and keys are randomly generated and exchanged on an ongoing basis using the previous key to encrypt the new one. (Of course one flaw that I often observe in this system is that the idiots running these networks choose 0123456789ABCDEF as the initial key, consistently).
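The key hierarchy described above (a manually installed key-encryption key bootstrapping a chain of session keys) can be sketched in a few lines. This is a sketch only: the hash-derived XOR keystream below stands in for the DES encryption a real ATM network would use, and all names are invented:

```python
import hashlib
import secrets

def keystream(key, n):
    # derive n pseudorandom bytes from a key (stand-in for a real cipher)
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def wrap(wrapping_key, key_to_send):
    # "encrypt" one key under another before putting it on the wire
    ks = keystream(wrapping_key, len(key_to_send))
    return bytes(a ^ b for a, b in zip(key_to_send, ks))

unwrap = wrap  # XOR with the same keystream is its own inverse

kek = secrets.token_bytes(16)   # installed out of band at both endpoints
k1 = secrets.token_bytes(16)    # first session key
blob = wrap(kek, k1)            # only the wrapped form crosses the network
assert unwrap(kek, blob) == k1

k2 = secrets.token_bytes(16)    # next session key, wrapped under the previous
assert unwrap(k1, wrap(k1, k2)) == k2
```

As the comment observes, the whole chain is only as strong as the initial key-encryption key, which is why a guessable initial key like 0123456789ABCDEF defeats everything that follows.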
  • DES is about to be obsoleted and replaced by AES. Will this chip be able to adopt new standards when they come out? It should be possible to build a chip which is a general Feistel network engine that could run both DES and newer algorithms such as Twofish, which is an AES candidate. This chip does not appear to have such flexibility.
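A generic Feistel engine of the sort imagined above is straightforward: the same loop performs both encryption and decryption, and swapping in a different round function and key schedule gives you a different cipher. A toy sketch, with SHA-256 as an arbitrary stand-in for a real round function like DES's:

```python
import hashlib

def round_fn(half, subkey):
    # the pluggable part: any keyed function works in a Feistel network
    return hashlib.sha256(subkey + half).digest()[:len(half)]

def feistel(block, subkeys):
    # one engine for both directions; 16-byte blocks split into two halves
    left, right = block[:8], block[8:]
    for k in subkeys:
        left, right = right, bytes(a ^ b for a, b in zip(left, round_fn(right, k)))
    return right + left  # undo the last swap

def encrypt(block, subkeys):
    return feistel(block, subkeys)

def decrypt(block, subkeys):
    # decryption is the same network run with the key schedule reversed
    return feistel(block, list(reversed(subkeys)))

subkeys = [bytes([i]) * 4 for i in range(16)]
msg = b"sixteen byte msg"
assert decrypt(encrypt(msg, subkeys), subkeys) == msg
```

A flexible hardware version would make the round function and key schedule configurable; a fixed-function DES chip cannot be repurposed this way.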

  • Uhhh.. Why not just print the CPUID on the side of the chip if it's so equivalent to a serial number?
  • The same reason that your driver's license number is also encoded on that little magnetic strip. The same reason that certain radiolocation tags *broadcast* their ID's. The same reason your cable TV box announces its ID number to your cable company.

    Convenience in retrieval. Period.
  • The IP addresses need to be outside the encrypted payload for routing. This encryption won't be done at each gateway. So a snoop would know where your packets are going.

    I think (but don't know, since I am too lazy to read the IPSec spec) that the layer 4 (UDP/TCP) packets will be encrypted, so a snoop will not know if you are going to TCP port 80.

  • I thought his post was rather amusing --

    and dead-on.
  • Could you be so kind as to inform those of us that don't know (like me) :)
  • This product is clearly designed for the corporate market, and it will likely get good play there. To answer some of the questions raised in this thread:

    Why do hardware encryption? Because it's fast. Purpose-designed hardware is almost always (okay, why hedge? make that always) faster than the same process in software. A hardware encryption chip will make the encryption process essentially invisible to the user. This is particularly important when the users are neanderthals like corporate lawyers and merger & acquisition types who think PGP is one of those new US television ratings.

    Will the FBI/CIA/NSA have a back door into this? What if they do? As I have already stated, this product is clearly aimed at the corporate market (who may want some sort of key escrow anyway). If you're worried about it, software encrypt whatever you're going to send first.

    Will Intel build IDs into these things? Of course! It's called a MAC address.

    Not sure why only Win2K support. Probably the IPSec calls in the IP stack.

    Mark
  • There are dozens of perfectly valid ways a CPU ID could be and *is* being used that have absolutely nothing to do with invading your privacy or destroying your life.

    Why do we have serial numbers in the first place? Vehicle Identification Numbers? Think about it from a Real World perspective and these things are hardly as evil as you seem to think they are.
  • by coyote-san ( 38515 ) on Wednesday September 15, 1999 @06:05AM (#1680017)
    Cryptology 101: you only need a cipher strong enough to last as long as you use it.

    IIRC, IPsec negotiates a session key for each connection. It definitely uses a different session key for each remote host - that's a simple 'pigeon-hole' proof, and it's easier to use per-session keys than to try to force the same key on all connections to each remote host.

    This means that your keys only have to remain secret for the duration of your TCP/IP connections. After you've closed the connection, an attacker who learns your session key will not be able to impersonate you, and that's one of the primary concerns with network encryption. There is still the risk that someone could scan your datastream for sensitive information, but nothing prevents you from using additional levels of encryption for passwords, sensitive files, etc.

    Even if you do slip up and copy an unencrypted sensitive file, IPsec algorithms are not lightweight. A couple spooks might be able to read your list of porn site passwords, but not the script kiddies (and not-so-kiddies) working for your competition and/or wife's divorce lawyer.
  • Several major universities are working on "Internet II". To techies this has been portrayed as a faster technology proving ground for the Internet, with limited access to the chosen developers at universities. However, I've heard it marketed as the future "secure commerce internet" by someone from a certain big blue American corporation. This person specifically claimed that host traffic would be verifiable, and thereby secure, and ISPs would be approved. Of course big brother is going to have a backdoor, and your chip ID is going to uniquely identify you on the internet.

    There is *some* reason for government to worry; if Internet technologies are going to be used for commerce, of course people should worry that a cyber-terrorist could bring the banking system to its knees. But is that really what is happening? Transaction processing at that level has been taking place on private networks for years, and on that scale private networks are not nearly so cost prohibitive. In the decades-long electronic banking era, have we had any national security level incidents? Not that I know of. So what if we now use TCP instead of SNA.

    In my opinion, big brother is looking to gain more control of the every-day Internet -- it's a reflex action -- and big business is looking to create barriers to entry for the new economy -- another reflex action -- and they are willing to sacrifice a lot of economic growth in order to get it. Just an opinion.
  • I agree with your post, except for your comments on virus prevention. Virus scanners are a fundamentally incomplete solution. Choice of an operating system does matter, but not for obscurity reasons. OpenBSD is secure because it has been designed (and verified) to be that way; it is not secure because it is "unusual". And if one has a secure platform, one should be able to run untrusted binaries (provided they are not run by a privileged user such as root).
  • Although most companies haven't realized it, it may be a matter of survival for them to have a secure communication system without any backdoors. There are two reasons for that. Espionage: companies might do active development in which other companies might be interested. There are also reported cases in which the NSA/CIA helped American companies retrieve industrial secrets from foreign countries. Crackers: if there is a backdoor, and the NSA (or whoever else) can use it, a cracker might use it too. This wouldn't be good at all for banks, credit card companies, eCommerce... There is a real need for secure cryptography, but it must be open sourced and widely checked and accepted by leading cryptographers.
  • Your method is pretty good, except that there might be allowable variation in the output of an algorithm which would still allow the decryptor to work; but in practice, if the chip says anything different, then we have a flag to inspect it closer... so your method still works. What I'm more concerned about is: what if the whole card is designed to respond to or send special packets? How can we check against this possible backdoor?

    Jeff
  • I can't trust *every* cracker out there who gets a copy of the source code, but I can trust *most* of them, including myself.

    Let's say you've got one bad apple at RedHat who slips a trojan into an encryption driver. How long do you think it will be before it is discovered? Not long? You're right.

    As soon as the code goes out the door people will start looking at it. I can do a diff against the same code acquired from different places. I can recompile the driver myself. I can inspect the code myself.

    So you see, open source DOES solve these problems.

  • "On the other hand, if the encryption is done at layer 2... They have to decrypt every single packet you send looking for gold..."
    Yeah, except there's a flaw in this logic: if you're encrypting everything at a low layer, the network has no idea where to send the packets because their destination addresses are all encrypted. So, the network hardware needs to be able to read the destination, which means the FBI, CIA, NSA, McDonalds can just check the source/destination on the packets the same way the network does.
    There is however a (not very important) case where link layer encryption could be useful, and that's if you suspect promiscuous client machines snooping on your own LAN. If you encrypt at the link layer, they would only know that the packet is headed for the router, but not know the final destination. So they would have to crack one of the routers instead, but it would be a wee bit safer... IMHO it would be a waste of resources, as switching the network would do almost the same trick, but link layer encryption isn't completely useless. You would still be vulnerable to traffic analysis, though.
  • Why physical layer security? This isn't physical layer security; the poster who thought it was was wrong. If you want to adhere to strict OSI layer definitions -- well, you're out of touch with modern networking reality, but if you do -- then this is Link Layer security.
    As we're talking about IPSEC it's Network Layer security. (Ref: IETF ipsec charter [ietf.org] -- "A security protocol in the network layer will be developed".) Technically, the chip could be used for Link Layer security, but the Intel article does not suggest any such use.
    Uhm, what is that comment about? If you go by the standards you're out of touch with modern networking? How is the OSI model obsolete?
    The OSI model is not really obsolete, but for classifying many systems it's really overkill. Some of the layers in the OSI model rarely correspond to any used protocols or functions, notably the Session and Presentation Layers. Therefore many people prefer the five-layer model of the TCP/IP-stack.
  • great, don't forget that Intel's Nightshade mboard, with the integrated NIC that does Wake on LAN, can also be put to sleep with an IP packet. these will most likely suffer the same problem. now all of your trusted IPsec Intel NICs are asleep. where did your network logging and IDS's go? oh yeah... nowhere. :) have a nice day, folks.
    Actually, having access to a cryptochip would possibly facilitate protection against this kind of attack. You could design a protocol for decently secure sleep-on-LAN using shared secrets (between client and management server). Thus only the management server would have to be trusted, and if it gets cracked you're done for anyway... (Note: I don't say that Intel does this, and it's probably not in the wake-on-LAN specs (I haven't checked, though), but it would be possible.)
  • By the way, this is very similar to how most *nix systems determine if the password you enter at login time is the same as the password in the passwd or "shadow" password file.
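The compare-without-decrypting trick the parent describes looks roughly like this. A sketch using a salted hash; the real crypt(3) uses a different, deliberately slower construction, but the principle of re-computing and comparing is the same:

```python
import hashlib
import secrets

def hash_password(password, salt=None):
    # store only salt + digest, as /etc/shadow does; the password itself
    # is never written anywhere
    salt = salt or secrets.token_hex(8)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

def check_password(attempt, salt, stored_digest):
    # re-run the same one-way function on the attempt and compare outputs;
    # nothing is ever decrypted
    _, digest = hash_password(attempt, salt)
    return secrets.compare_digest(digest, stored_digest)

salt, stored = hash_password("hunter2")
assert check_password("hunter2", salt, stored)
assert not check_password("wrong guess", salt, stored)
```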

  • I would point out that the Intel CPU ID is also popular among corporate asset managers that are looking for a better system to electronically track machines.

    The paranoia surrounding this issue is somewhat justified -- Intel did originally market the CPU ID as an 'Internet' feature.

  • It would be bizarre if this were the case. I would think that any provider of networking equipment in their right mind would need to support lots of stuff (Solaris, NetWare, NT4, NT3, Linux, etc.)

    But as others have said, there could be good technical and/or legal reasons for this.
  • That doesn't mean that they won't release specs soon.

  • With all of the other goings-on involving encryption, do you think the Justice Department will like this? It sounds like a very good way to keep law enforcement types from looking at your transmissions.
  • No encryption works forever, so what's the point of encryption at the physical layer?
  • speed and overhead

    Computer processors are getting faster all of the time too...so what is the point of 3d accelerators for video games?
  • Security features are for use in PC applications running the Microsoft Windows 2000 operating system only. Too bad it leaves Linux, *BSD, etc. out in the cold (assuming that we'd actually want to use it, but that's a different issue).
    ---
  • An encryption product from the company that brought us CPU ID numbers? No thank you.
  • by Anonymous Coward
    This might seem inflammatory, but it is probably reasonable for the Open Source community to start moving on seriously boycotting (via a black-hole list which is publicly accessible and highly visible) any hardware for which public information does not exist, or standards which are "Patent Protected" and cannot be legally distributed in Open Source.

    The Open Source community (and the whole Internet, frankly) shouldn't put up with this kind of blatant "you only get this if you use all of our software and pay us extra money" mentality.

  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Wednesday September 15, 1999 @05:13AM (#1680042) Homepage Journal
    Encryption is all about trusting nobody. Once you trust someone, you've lost your security. It's an easy matter (for someone who can write a device driver!) to make a module that will encrypt and decrypt packets as they go in and out of the ethernet interface. Why would you want to build something like this into hardware, particularly when the user requirements aren't known ahead of time?

    Some users might just be surfing, and they might decide that 64 bit encryption is good enough for them. They never buy anything on the web, but they might want to safeguard passwords and e-mail addresses against spammers or whatever. On the other hand, the U.S. Army might have a need for 1024 bit encryption because they don't want to take the slightest chance that their battle management network will be compromised. Would any hardware level encryption meet the needs of these different users?

    And then, what if there's a bug in the encryption? That bug might affect the actual security of the protocol, making the device completely worthless. Or it might just affect what devices you could connect to, making the product useful in a very limited way. AHA! you say, the solution is to make the hardware upgradable by burning a new program into a flash RAM. Well, why can't a virus do the same thing, except strip out all encryption totally?

    Finally, any algorithm implemented in silicon is unlikely to be peer-reviewed. If I am to have security, I want it built into a software package distributed under a license which requires source code to be available. I want everyone to check the code out, find the bugs, try to crack it. Once it stands up to all that, I'll use it. I can't just trust Intel to do my homework for me.


  • You mean Ft. Meade, MD.

    -Doug
  • This looks like Intel is just catching up to Red Creek's VPN [redcreek.com] adapter.

  • Not to split hairs, but the Intel description seems to indicate that the encryption is actually a chipset taking the IPSec encrypt/decrypt load off the OS and CPU, but that would be more network-layer stuff, not physical. Perhaps the title should be "Firmware-based encryption"?
  • 5 will get you 10 the chip design originated in... Hmmm... let me take a wild guess here... how about LANGLEY, VA. Nah, they wouldn't do something like that, would they?
  • Sounds like a good idea - hardware vendors are slowly starting to see the sense of open standards, but a kick up the backside wouldn't slow them down. Moderate up!
  • I have to add my skepticism to this discussion as well. Despite the fact that most of the critical comments already posted have been moderated down, I think some good points have been raised.

    While I think Intel is taking a step in the right direction by offloading the burden of network encryption onto hardware, the fact that the first rev only runs under Win2k and they're using export-restricted crypto bothers me. I suppose this does have its applications inside corporations (otherwise they wouldn't have brought the product to market). Obviously this product is not targeted at the Open Source community.

    The question on everyone's mind is: where's the Linux support? I don't know, but all I can say is that I hope it's coming. I would like to think that Intel's investment in RedHat was more than just to turn a profit and generate some good PR. Only time will tell on that one.

    Here's my biggest concern: Did they kowtow to the FBI and install some key escrow into this product? Corporations might like that sort of thing anyway... keeps the feds happy and potentially off their back. However, if they did, then I don't think I'll ever be buying this little gadget. I'll just continue sacrificing a tiny bit of CPU performance to use the crypto I like.
  • Intel really has nothing to worry about as of right now. Assuming that they start selling the product before any law is passed, they will almost certainly not be able to face any sort of charges, because it will be considered grandfathered -- old technology that was legal before new laws.

    Of course, the good lord knows Intel isn't going to do anything that isn't 100% risk-free, so I would not be terribly surprised if there is at the very least the potential for a back door that can be switched on at a later date. You'd have to look very closely at the technical specs to find it, I would imagine, though.
    --
    Matt Singerman
  • Why encrypt for wireless? Just use spread spectrum equipment. Hell, direct sequence is hard enough to grab, let alone frequency hopping...
  • I agree with your post, except for your comments on virus prevention. Virus scanners are a fundamentally incomplete solution. Choice of an operating system does matter, but not for obscurity reasons. OpenBSD is secure because it has been designed (and verified) to be that way; it is not secure because it is "unusual". And if one has a secure platform, one should be able to run untrusted binaries (provided they are not run by a privileged user such as root).
    Actually, I would recommend all of the measures listed, not just any one - a virus scanner is useful, if only for detecting sources of suspect software.

    We're in agreement about operating system choice - I meant that OpenBSD is unusual because a great deal of time has been spent making it secure. The fact that OpenBSD doesn't have as many people attempting to build attacks is just a handy side benefit. I should have stated that better in my original post.

    As far as untrusted binaries go, I agree, provided you have quotas set up properly, all interesting software is not user-modifiable (permissions & things like immutable), and you minimize work done as root (which many people are lax about).

    Another point about trusted code is that you want to get things like updates directly from the authors. Some people set up otherwise reasonable security & then install code off of the big archive sites without even testing a checksum. Ideally everything on your system would come with a digitally signed cryptographic hash for the archive.
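Checking a downloaded archive against a published digest is a one-function job. The sketch below hashes a file in chunks with SHA-256 (the function name and chunk size are arbitrary choices for illustration):

```python
import hashlib

def sha256_file(path, chunk_size=65536):
    # hash in chunks so a large archive never has to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing the result against a digest published (and ideally signed) by the author catches tampering on a mirror or in transit; comparing against a digest hosted right next to the archive does not.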

  • > Finally, any algorithm implemented in silicon is unlikely to be peer-reviewed

    While the rest of your comment was thoughtful and correct, the above statement is simply incorrect. I doubt that any hardware vendor would spend the time and money to create a hardware implementation of an encryption protocol that might be worthless. Algorithms implemented in hardware are likely to be better tested and reviewed than what you get in the latest software.

    Furthermore, key management issues are a major problem in software, because many programs do a horrible job of it. (And an encryption system is only as strong as the weakest link.) Hardware is more likely to have secure key management features.

    In reference to bugs in hardware encryption, that is just extremely unlikely.

    And it is entirely possible that a single hardware solution can support multiple levels of encryption. RC4 uses a variable length key, and so do several other peer-reviewed algorithms. From a practical standpoint, 1024 is way too much for a symmetric algorithm. 128 bits is safe from just about any brute force attack imaginable, and 256 bits is just inconceivable to crack with brute force.
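The brute-force claim above survives a quick back-of-the-envelope check (the guess rate is an arbitrary, deliberately generous assumption):

```python
# even at a trillion guesses per second, a 2**128 keyspace is untouchable
guesses_per_second = 10**12
seconds_per_year = 31_557_600
years_to_exhaust = 2**128 / (guesses_per_second * seconds_per_year)
assert years_to_exhaust > 10**18  # over a billion billion years
```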
  • Well, on that premise, what if there's a widely reported bug in the driver, which Redhat issues a patch for? The original bug is now gone, but a new one has been inserted (accidentally)... Are you going to recognize this? Probably not, until it's exploited. It's not like these things are commented like (THE FOLLOWING MODULE SENDS KEYSTROKES TO _ _ _)

    Unless you're a certified expert, you need to trust the source-code...
  • "That's probably why Intel chose to implement standard, peer-reviewed algorithm:"

    I have no quarrel with the algorithm they chose. It's the implementation that I don't trust. If I were to give you my very own implementation of RC4 that I wrote myself, would you use it? Why not? I might have screwed something up! Crypto algorithms are very tricky things to get right. Even if the algorithm is correct, there's all sorts of details involved with locking memory at the right times, erasing input buffers, avoiding buffer overruns, and key handling. The best way to try to crack something is to attack the implementation! It's very difficult for a single programmer or small group to get every last detail right, and if there's any gap at all you could lose all your security.

    You don't believe me? How big a deal is a 1-byte buffer overrun? Who cares about that? 1 byte is sometimes all that is necessary to crack a system wide open.

    Peer review of the *implementation* is absolutely required. There's absolutely nothing at all wrong with the algorithms that Microsoft uses for security on their operating system. The problem is an implementation that is closed to peer review. Just like this device from Intel.

  • First, triple DES has an effective 112 bit key length because of the same meet-in-the-middle attack you described (section 8.2 in Applied Cryptography).

    As for mixing SSL (which I think uses RC4, and possibly others) and DES, that's a hard question. It might turn out that the combination of the two algorithms, which are strong by themselves, makes a weak algorithm. Or it might be even stronger. It would depend on whether the two algorithms form a group, and there's no general method to determine that (AFAIK).
  • You make several good points, I'm going to use one of them to deal with the other.

    >This is particularly important when the users are neanderthals like corporate lawyers and merger & acquisition types who think PGP is one of those new US television ratings.



    >Will the FBI/CIA/NSA have a back door into this? What if they do? As I have already stated, this product is clearly aimed at the corporate market (who may want some sort of key escrow anyway). If you're worried about it, software encrypt whatever you're going to send first.

    You yourself have admitted that the target market for these devices isn't very techno-savvy. Yet those people have a very real need for privacy. Sometimes it is vital to have SECURE communications between people in the company. Imagine chipmaker XYZ decides that they're going to announce a 5% drop in prices tomorrow at noon. Company ZYX finds out and decides to steal their thunder. They announce a 10% reduction at 10am.

    XYZ has two choices: one, take one in the eye and let ZYX make the best of the situation; or two, tighten up the belt and make a price reduction of twice what they anticipated having to.

    IMPORTANT shit is what gets encrypted. Nobody cares what the company cafeteria is serving for lunch today.

    LK
  • If this is like most other IPsec accelerator chips I've seen, the chip just does the encryption and MAC, and doesn't make any suppositions about what the format of the packet is (which means that IPv6 support should be easy).

    -lee

  • I think the blasé attitude about the CPU ID is exactly the kind of complacency that Intel and others (including the government) are counting on. As we've seen in the past (with the social security number, for example), any identifying "key" that is established, capable of tying a person to information about them, has a tendency to expand beyond its original intent. We now have in the works proposals to establish a DNA database (supported by the likes of New York City's Mayor Giuliani), a national photo database (a la Image Data and the NSA), and god knows what else. In my opinion, ANY time a uniquely identifying key is proposed, especially for use in tracking information about PEOPLE, there had better be a damned good reason for it.
  • I have been evaluating IP Sec solutions for my job, and have found some interesting things along the way:

    Please be aware that IPSec solutions do not require automated rekeying of each connection, let alone negotiating a new symmetric key for each session. In fact, it is a requirement to allow manual keying. Thus, if someone did discover your session key, they could impersonate you.

    While anyone who would roll out IP Sec using manually distributed keys is taking a big risk of their key being discovered, you might be surprised to find that if you ever see someone demo IP Sec, they are actually using manual session keys.

    WHY? 1. It's simpler, and 2. The IKE (Internet Key Exchange) protocol is fairly new, and many vendors' IP Sec products don't work very well with other vendors' certificate authorities.

    Even if you decide to implement automated keying of the session, it is also theoretically possible to attack the Diffie-Hellman key exchange used to agree upon a session key, using a man-in-the-middle attack.

    Finally, if someone were to steal your digital certificate they could impersonate you again. Note: Some implementations of IP Sec client software DO NOT require a password to unlock the digital certificate. Yikes! Thus, if a user has your system, they are you to IP Sec.
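The Diffie-Hellman exchange mentioned above, and the reason it needs authentication to resist a man in the middle, can be sketched as follows. The parameters are toy-sized for illustration; real IKE groups use moduli of 1024 bits and up:

```python
import secrets

p = 2**127 - 1   # a Mersenne prime; far too small for real use
g = 5            # arbitrary generator for the sketch

a = secrets.randbelow(p - 2) + 2   # Alice's private exponent, never sent
b = secrets.randbelow(p - 2) + 2   # Bob's private exponent, never sent
A = pow(g, a, p)                   # public values, sent in the clear
B = pow(g, b, p)

# both sides compute the same shared secret from what crossed the wire
assert pow(B, a, p) == pow(A, b, p)
```

Nothing in the exchange itself tells Alice that B really came from Bob: a man in the middle can run one such exchange with each side and relay traffic. Binding the exchange to certificates, as IKE does, exists precisely to stop that.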

  • Yes, I agree there are valid reasons for serial numbers, but there are also dozens of invalid ways to use them. It all depends on how they are used. McDonald's or Burger King doesn't check my VIN every time I use their drive-through. Why should a web site keep track of my CPUID every time I visit?
  • > Of course big brother is going to
    > have a backdoor, and your chip ID
    > is going to uniquely identify you
    > on the internet.

    Thank God for open-source software. As long as we can write the networking code ourselves, we can circumvent the chip id, no?
  • To add to what ahne said:

    I didn't mean that the whole OSI model should be tossed, just that it shouldn't be applied strictly.

    Take IP and ATM: both "Layer 3." What does it mean if you have IP running over ATM, or ATM over IP? Or NETBIOS over IP over ATM on top of carrier pigeons? Is "Layer 2 Tunnelling Protocol" (L2TP) really Layer 2? Who cares!

    The OSI model is a great starting place for modeling protocols, but the era when it could be taken as "dogma" is pretty much gone.

  • There's absolutely nothing at all wrong with the algorithms that Microsoft uses for security on their operating system.

    Well, that's not quite true either. Some of MS's algorithms positively suck.

    Eg, the one that encrypts SMB passwords before passing them over the wire. Not only did it make Samba break, but it's also incredibly easy to work around; you can simply sniff and then replay the encrypted password, with none of the tried-and-trusted challenge/response protection.

    That is a poor algorithm, and MS are notorious for it.
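
    To make the distinction concrete, here is a generic sketch of a replayable static credential versus a nonce-based challenge/response. The hash choices and function names are invented for illustration; this is NOT the actual SMB algorithm.

```python
import hashlib
import hmac
import os

# Generic sketch: replayable static authentication vs. nonce-based
# challenge/response. Hash choices and function names are invented
# for illustration; this is NOT the actual SMB algorithm.

password_hash = hashlib.sha256(b"hunter2").digest()  # known to both ends

def login_static():
    # Broken: the identical value crosses the wire every session, so a
    # sniffer can replay it without ever learning the password.
    return password_hash

def login_challenge(nonce):
    # Better: the client proves knowledge of the secret bound to a
    # server-chosen nonce, so a captured response is useless later.
    return hmac.new(password_hash, nonce, hashlib.sha256).digest()

assert login_static() == login_static()            # replayable forever
n1, n2 = os.urandom(16), os.urandom(16)
assert login_challenge(n1) != login_challenge(n2)  # fresh every session
```

    A sniffer who captures `login_challenge(n1)` has nothing of value once the server issues a new nonce, which is the whole point of the challenge/response approach.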

  • I was all set to blast this as a silly thing to implement, until I read that they're implementing IP security. If it were just link-layer encryption, it would be a waste of time.

    Unfortunately, I don't see any mention of IP security in the chip for IP version 6 (and yes, IPsec is required in IPv6).

  • A number of people have commented that this is no better than using software-based encryption, and is possibly worse because of the relatively short key length.

    This really is not true. Every software-based encryption technology of which I am aware still allows the hypothetical spies to see WHERE you go. Since they can see where you go, they can theoretically narrow down the amount of traffic they want to try to crack.

    It's like this: if you were using https, and they wanted to know what the content of your corporate intranet was, they wouldn't have to waste their time trying to crack all the times you pulled down pages from slashdot.

    On the other hand, if the encryption is done at layer 2... they have to decrypt every single packet you send looking for gold, including all those pesky netbios broadcasts. It would take substantially longer, and they would not be able to ascertain any information from _who_ you were talking to.

    Oh yeah -- this scenario really only applies if you're concerned about hacks into your local ethernet.
  • Ack! They've got me doing it too! Hacks = cracks.

    *sigh*
  • Get a clue -- every network card ever made has a unique serial number. So does every car, television, mouse, lawnmower, or any other piece of electronic/hardware equipment ever made.

    If Intel was going to deliberately make it evil (read: insecure, compromised, etc.), someone would find out, and the hullabaloo would make the Pentium bug look like nothing.

  • by Anonymous Coward
    It is neither physical layer nor link-layer encryption. It is IPSec, which means network layer. That is actually better, since IPSec is end-to-end. (Think about it: would you rather have encryption just on your LAN, or all the way across the 'Net?)

    Also, 3Com came out with a similar card a few months back. I'm sure pretty much all the big NIC makers will have crypto coprocessors soon enough.
  • I wonder if you look at the silicon of that chip you see NSA Key written in the circuitry? :-)
  • I want everyone to check the code out, find the bugs, try to crack it. Once it stands up to all that, I'll use it. I can't just trust Intel to do my homework for me.

    But you can trust every cracker out there who can get a copy of the source code, right? Don't forget, not everyone who can read code wants to fix the bugs they find. Some people want to exploit them, and making the code available makes it that much easier.

    I think open source/free software is a fine and noble idea, but it isn't a panacea.
  • by Anonymous Coward
    That is an interesting point, as I was reading this after looking over one of the new QED MIPS chips and the PPC roadmap. What about being able to order a little coprocessor core along with the rest of your integrated pc-on-a-chip, like the ones MIPS and Moto/IBM and NatSemi seem to be coming out with? That would take the load off as well, and I would think this will only get easier with the move to modularize chips as much as possible, so that embedded parts get the advances of the mass-market stuff as soon as possible. Cool. You used to have to buy a mainframe to get an embedded crypto chip; I think they are a year or two away from being standard, as the peripheral stuff seems to be moving inexorably into the CPU itself.
  • You're correct about the strict OSI layer definition of physical layer.

    I was thinking about it more in terms of being a dedicated chip-set (physical hardware) when I submitted the article.
  • Unless there is a backdoor. Which is what the clipper proposal was about.

    This sounds different tho. But who knows...
  • by Ozwald ( 83516 ) on Wednesday September 15, 1999 @07:10AM (#1680078)
    This looks more like a request from Microsoft for security rather than a feature for Win2K OS.

    Remember the news about Microsoft's wireless campus LAN? Programmers at MS will be broadcasting huge amounts of private information into the atmosphere, including Windows 2000 source code. What is stopping someone from opening up shop nearby with a couple of receivers? And Windows 2000 is slow enough as it is, without on-the-fly driver-based encryption. Microsoft needs this card.

    Personally I would prefer hardware acceleration rather than the card doing everything.

    Ozwald
  • This could be good news, if Intel weren't a US company. But since it is forbidden in the US to export strong cryptography, this system will either not be exported (which I can't believe) or it won't be really secure. I hope that the US government will someday realize that their strict crypto policy helps European and Asian companies gain advantages in the fast-growing security market.
  • It all depends on how they are used.

    But are they being misused? Call me uninformed, but I can't think of a single instance where a device's serial number or a car's VIN has been used in a privacy-compromising or otherwise "bad" way against its owner.

    Also, what makes you think web sites *can*, much less will, collect this CPU ID for every visit you make? The only way this is possible is if the web site codes up an ActiveX or Java applet that you run with "full permissions". (A normal Java/ActiveX object would not work without making changes to the underlying engine.)

    If someone writes such an applet, and you agree to start it with full permissions, then I suppose your CPU ID could be recovered that once. Similarly, you could easily download a program that could pull this CPU ID and start secretly tracking you based on this number. Let's face it, though, if I were going to write such a trojan, there are probably a few better ways I could think to get sensitive information from your PC and track you than a CPU serial number.

    Additionally I suppose it's possible for all of the JVM vendors to add in "hooks" in their implementations to allow applets in the "sandbox" to recover something as sensitive as the CPU ID, but why in God's name would they choose to do that? There's a reason the Java/ActiveX sandbox exists. It wouldn't make any sense for those same people to start putting in all sorts of back doors.

    Anyways, this whole CPU ID thing has been argued over and over already, and since you still seem to think they're inherently evil, I doubt that my comments here will make a dent, so you won't hear from me any more on this thread.
  • Possibly. But then I'd expect it to sound more like "optimized for Windows 2000" or possibly "designed for Windows 2000", rather than "feature available only for..."

    But I've been wrong before :-)
    ---

  • which can be easily changed in many cards...

    any sysadmin that believes that a MAC address is a unique ID is asking for trouble....

    https://www.mav.net/teddyr/syousif/ [mav.net]
  • The flippant version of this question is "why should I trust Intel?" The real question is: how do I verify that the chip only does the algorithms it's supposed to and nothing else?

    Is there a way to plot out the chip by inspecting the silicon? Is there room for the card to transmit information to be sniffed (like hiding things in bad packets)? Can we show that the algorithm the chip implements does not leave loose information floating around (perhaps even in the initial choice of random numbers) which would allow additional information to be encoded on top of a good packet? Who is going to audit the hardware? How scalable are the key sizes?

    We probably can make good hardware encryption, but I would like to know that it is not an NSA trojan horse like Clipper was.

    Jeff



  • Why encrypt for wireless? Just use spread spectrum equipment. Hell, direct sequence is hard enough
    to grab, let alone frequency hopping...


    Bad idea. Spread spectrum is not hard to intercept when predictable hopping sequences or PN codes are used. There is off-the-shelf equipment that will do the job. A lot of it is sold to the folks at Fort Meade.


    I was interested in wireless LAN equipment until I found out that the available equipment either had no link layer encryption or had 40 bit link layer encryption. The IEEE wireless LAN standard even specified (optional) 40 bit encryption. The vendors seemed to have a cavalier attitude about security.

  • That's a fluke that shouldn't be counted on as being a way to spot security flaws! I'm positive that subtler intrusion methods either exist, or could exist, in such a way as to draw much less attention to themselves! :)
  • by the red pen ( 3138 ) on Wednesday September 15, 1999 @05:39AM (#1680090)
    1. Why should we believe this is secure? Where is the spec? Read the IPSec spec. It's wide open. RSA, DH, X.509, 3DES... this is not a "black box" system.
    2. Why physical layer security? This isn't physical-layer security. The poster who thought it was was wrong. If you want to adhere to strict OSI layer definitions -- well, you're out of touch with modern networking reality, but if you do -- then this is link-layer security.
    3. Why should we trust hardware? The NSA only trusts hardware. After you verify that it performs the correct operations, then you don't have to worry about someone hacking it -- even if they 0wnZ your box. Please don't waste your time with hair-splitting "what if" scenarios; we all know there's "always a way to circumvent security," but when it requires physical access to a box, it's much, much, harder.
    4. Hasn't this been done? Yes. IPSec is a standard. Lots of people are doing it. There is IPSec technology being built into the Linux IP stack. That means you can VPN to your pals with a RedCreek VPN or a Network Alchemy gateway or one of these Intel network cards.

    Please return to your regularly scheduled rants about FBI/NSA/CIA conspiracies.

    "On the other hand, if the encryption is done at layer 2... they have to decrypt every single packet you send looking for gold..."

    Yeah, except there's a flaw in this logic: if you're encrypting everything at a low layer, the network has no idea where to send the packets, because their destination addresses are all encrypted. So the network hardware needs to be able to read the destination, which means the FBI, CIA, NSA, or McDonald's can just check the source/destination on the packets the same way the network does.

    To get around anything like this you'd have to A) have a globally secure Internet that has absolutely no back-doors for the feds to sneak in (yeah, right) or B) a global VPN that connected every network with every other network directly over encrypted links, so you'd have to know beforehand where traffic was coming from and going to in order to snoop a specific connection -- which of course defeats the whole purpose of a routable network.

    -=-=-=-=-

  • by adamsc ( 985 ) on Wednesday September 15, 1999 @05:44AM (#1680092) Homepage
    It's an easy matter (for someone who can write a device driver!) to make a module that will encrypt and decrypt packets as they go in and out of the ethernet interface. Why would you want to build something like this into hardware, particularly when the user requirements aren't known ahead of time?
    Speed. IPSec can encrypt all traffic and, on a high speed link, the protocol overhead could easily be higher than a server's application overhead. Remember that Intel's gotten very interested in creating internet server farms, precisely the type of area where this is most useful.
    Finally, any algorithm implemented in silicon is unlikely to be peer-reviewed.
    That's probably why Intel chose to implement standard, peer-reviewed algorithms:

    Performance offload support for IETF IPSec standard mandated cryptographic algorithms
    Support for SHA1-HMAC, MD5-HMACs, DES, and 3DES

    If it's used for IPSec, its output can be compared to that of another IPSec implementation. If it's at a lower level, there's nothing preventing someone from hooking up a packet analyzer to their LAN (which, if you're paranoid at this level, you should be doing anyway - do you trust your crypto supplier more than Intel?).

    3DES is considered a safe conservative choice for strong encryption. IPSec is a reviewed protocol. MD5 and SHA-1 are also safe choices.

    That bug might affect the actual security of the protocol making the device completely worthless. Or it might just affect what devices you could connect to, making the product useful in a very limited way. AHA! you say the solution is to make the hardware upgradable by burning a new program into a flash RAM. Well, why can't a virus do the same thing, except strip out all encryption totally?

    Why couldn't the same attacks be launched against your existing system? It'd probably be easier to trojan your encryption system's libraries than to get write access to an EPROM's memory address. Besides, this is easily solved by using virus scanners, unusual operating systems (how many OpenBSD viruses are there?) and not running untrusted binaries.

    Of course, it would have been better to put some public-key system on the chip and require updates to be signed with an Intel secret key. Still, if you're worried about attacks at that level you'd be insane to use regular hardware in any case.
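
    On comparing the card's output to another implementation: for the HMAC algorithms listed above (SHA1-HMAC, MD5-HMAC), a reference value is trivial to compute in software and check byte-for-byte against what the hardware emits. A minimal sketch, using a made-up key and message rather than official test vectors:

```python
import hashlib
import hmac

# Compute reference MACs in software; a hardware offload fed the same
# key and message should produce byte-identical output.
# The key and message are arbitrary illustrative values, not standard
# test vectors.
key = b"sixteen-byte-key"
msg = b"payload to authenticate"

sha1_tag = hmac.new(key, msg, hashlib.sha1).hexdigest()
md5_tag = hmac.new(key, msg, hashlib.md5).hexdigest()

print("HMAC-SHA1:", sha1_tag)  # 40 hex digits
print("HMAC-MD5: ", md5_tag)   # 32 hex digits
```

    Any disagreement between the two outputs means either the card or the software library is wrong, which is exactly the kind of black-box check a paranoid admin can run with a packet analyzer.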

  • Don't read too much into that sentence. It probably parses as

    Losers running Windows 9x and NT4 are out of luck - this product will *not* run on these platforms unless marketing runs amok

    instead of

    We spit in the face of Linux users everywhere! Ha ha ha!

    A second concern might be US export restrictions. Windows allows you to write drivers that verify they're running on a US distribution, but that's impossible with Linux.

    (For security reasons, it's possible that someday each kernel will validate the modules it loads with cryptographic signatures, so Red Hat (to pick a name at random) could implement zoned distributions, but there's nothing to prevent Blue Jacket from rebuilding the same modules with its own keys. Encryption keys are private, source code is public.)
