Software Security

Security Focus Interviews Damien Miller

An anonymous reader writes "The upcoming version 4.3 of OpenSSH will add support for tunneling, allowing you to build a real VPN using OpenSSH without the need for any additional software. This is one of the features discussed in SecurityFocus' interview with OpenSSH developer Damien Miller. The interview touches on, among other things, public key crypto protocol details, timing-based attacks and anti-worm measures."
  • Yes and no. (Score:5, Interesting)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Wednesday December 21, 2005 @03:38AM (#14307119) Homepage Journal
    You are correct, but only as far as you go. It is possible to compress first and then encrypt. Indeed, this is generally regarded as the superior ordering, precisely because compression strips out the redundancy that cryptanalysis would otherwise be able to exploit in the ciphertext.
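    To make the ordering concrete, here is a minimal Python sketch of compress-then-encrypt. The XOR keystream is a deliberately toy stand-in for a real cipher (a real system would use a vetted, authenticated cipher); it is only here to show the order of operations.

    ```python
    import hashlib
    import zlib

    def keystream(key: bytes, n: int) -> bytes:
        # Toy keystream: SHA-256 in counter mode. Illustration only --
        # NOT a vetted cipher; do not use this for real encryption.
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def encrypt(key: bytes, plaintext: bytes) -> bytes:
        # Compress FIRST, then encrypt: the compressed stream has little
        # redundancy left for an attacker to find in the ciphertext.
        compressed = zlib.compress(plaintext)
        pad = keystream(key, len(compressed))
        return bytes(a ^ b for a, b in zip(compressed, pad))

    def decrypt(key: bytes, ciphertext: bytes) -> bytes:
        pad = keystream(key, len(ciphertext))
        compressed = bytes(a ^ b for a, b in zip(ciphertext, pad))
        return zlib.decompress(compressed)

    msg = b"attack at dawn " * 100
    ct = encrypt(b"secret", msg)
    assert decrypt(b"secret", ct) == msg
    assert len(ct) < len(msg)  # compression happened before encryption
    ```

    Note the size check at the end: if you encrypted first, the ciphertext would be incompressible and the bandwidth saving would be lost.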


    Secondly, cryptography is generally expensive on the CPU, but cryptographic processors exist. Motorola's processor unit (before they spun it off) had a very nice chip called the S1, which could encrypt or decrypt four streams in parallel. They had a very nice manual describing the complete protocol for communicating with it. Despite this, I have yet to see a Linux driver for it. A pity, regardless of what you think of the S1, simply because it would have been a good opportunity to win over those who do use such chips.


    TCP offload engines are also beginning to come into the picture. When TCP stacks didn't do a whole lot, it cost more to offload than you'd gain by having a co-processor. These days -- given the multitude of QoS protocols defined in papers, the staggering range of TCP congestion algorithms in Linux, and the complex interleaving of the Netfilter layers -- it almost has to be better to have all of that shoved onto a network processor.


    (Notice that I'm including more than just the basic operations here. It's the ENTIRE multitude of layers that is expensive. Linux supports Layer 7 filtering, virtual servers, DCCP. There's even an MPLS patch, if anyone cares to forward-port it to a recent kernel. IGMPv3 isn't cheap, cycle-wise. Nor is IPSec.)


    There is also the choice of crypto algorithm to consider. RSA is expensive, but ECC and NTRU are considerably cheaper. SHA-1 is much slower than Tiger and not clearly stronger; Whirlpool also beats SHA-1 on both speed and strength.


    I'll also mention that OpenSSH's implementation is sub-optimal, and that there are patches out there to make it faster. I mentioned those the last time OpenSSH became a hot topic. Even if the patches themselves aren't "good enough", they are surely evidence that the code can be tightened a great deal in places. If nothing else, slow code is more vulnerable to DoS attacks.

  • by Z00L00K ( 682162 ) on Wednesday December 21, 2005 @03:47AM (#14307135) Homepage Journal
    There is actually a point in locking out (blacklisting) IP addresses from which a brute-force attack is attempted, since those bots often try one site at a time and scan for known login/password pairs. It isn't that common for an attacker to use several different sources at the same time when attacking a site, unless it's a DoS attack.

    Blacklisting will at least make it harder for stupid bots.
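    A minimal sketch of that idea in Python -- count failed sshd logins per source IP and emit firewall rules once a threshold is crossed. The log format and threshold here are assumptions for illustration; real tools such as fail2ban or DenyHosts do this properly.

    ```python
    from collections import Counter

    THRESHOLD = 5  # assumed cutoff; tune for your site

    def ips_to_block(log_lines):
        # Assumes auth-log lines shaped like:
        #   "sshd: Failed password for root from 192.0.2.7 port 4711 ssh2"
        failures = Counter()
        for line in log_lines:
            if "Failed password" in line:
                ip = line.split(" from ")[1].split()[0]
                failures[ip] += 1
        return [ip for ip, n in failures.items() if n >= THRESHOLD]

    log = ["sshd: Failed password for root from 192.0.2.7 port 4711 ssh2"] * 6
    for ip in ips_to_block(log):
        # Emit a (hypothetical) block rule for each offending address
        print(f"iptables -A INPUT -s {ip} -j DROP")
    ```

    As the parent says, this only stops the stupid single-source bots -- a distributed attack walks right past a per-IP counter.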

  • by rodac ( 580415 ) on Wednesday December 21, 2005 @06:05AM (#14307498) Homepage
    SSH tunnels and VPNs can already be done today using ssh and pppd; I have used that for many years. It is still a stupid idea and useless for anything other than toy networks.

    SSH uses TCP as its transport. You should NOT transport TCP/IP on top of TCP. TCP over TCP has well-known and well-documented poor performance characteristics.

    Google for "TCP over TCP" to find any number of research papers on why this just doesn't work, or try running IP traffic yourself across an SSH tunnel and find out first-hand why TCP over TCP doesn't work well.

    Maybe, I hope, they plan to add a new SSH mode that uses UDP and will use UDP-SSH as the basis for the tunnel. That would work. But you can never use more than one TCP layer in any stack. If not (i.e. they plan to tunnel traffic atop a TCP ssh session), it will fail and they will learn.
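    For reference, the tun(4) forwarding coming in 4.3 does run over the ordinary TCP session, and is driven roughly like this (hostnames, addresses and interface commands are illustrative of a common Linux setup; both ends typically need root):

    ```shell
    # Server side: sshd_config must permit it
    #   PermitTunnel yes

    # Client: request tun device 0 on both ends over the SSH connection
    ssh -f -w 0:0 root@gateway.example.com true

    # Then assign addresses to the tun interfaces (client shown):
    ifconfig tun0 10.1.1.1 pointopoint 10.1.1.2 netmask 255.255.255.252
    # ...and on the server:
    # ifconfig tun0 10.1.1.2 pointopoint 10.1.1.1 netmask 255.255.255.252
    ```

    So the parent's objection stands for TCP payloads: the tunnel itself is carried inside the SSH TCP connection.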
  • Re:Yes and no. (Score:3, Interesting)

    by rodac ( 580415 ) on Wednesday December 21, 2005 @06:20AM (#14307528) Homepage
    You don't understand the problem. The problem is not the TCP overhead (which is negligible), nor can it be solved by TCP offload engines.
    The problem is that TCP over TCP just doesn't work, and it has well-understood and well-documented performance characteristics.

    IPsec (which does work), CIPE, and things like IPIP and GRE all have one thing in common: they do NOT use TCP as a transport. IF you use TCP as the transport for the tunnel and IF you transport TCP atop said tunnel, it will just not work.

    When tail packet loss occurs in TCP there will be a retransmission timeout; this stalls the tunnel and wreaks havoc on the retransmission clock of the upper-layer TCP, causing even worse stalls. Something that was well documented and understood many years ago.

    You just can not run TCP over TCP. It just doesn't work. An offload engine will not change this.
  • by bodger_uk ( 882864 ) on Wednesday December 21, 2005 @08:38AM (#14307910)
    Those hidden bits in full:

    Another statistic suggests that more than 80% of the SSH servers on the Internet run OpenSSH. I'm wondering if you have ever verified which versions they are running, and what the average behaviour of an OpenSSH administrator is. Do people update the server as soon as a new release is available?

    Damien Miller: Funny you mention this, we just completed another version survey with the assistance of Mark Uemura from OpenBSD Support Japan. The results of this should be going up on OpenSSH.com [openssh.com] soon.

    I don't have detailed OpenSSH version histories for usage surveys before last year's. Certainly the use of paleolithic versions (such as 2.x) is very infrequent, but beyond this it is difficult to tell how quickly users update - many vendors will keep relatively ancient versions (such as 3.1p1) on life-support with spot security fixes. This will avoid known security problems, but it doesn't give their users the benefit of any of the proactive work that we do, nor any of the new features.

    It is worth noting that OpenBSD, which has a very conservative policy on its stable trees, typically updates supported OpenBSD releases to the latest OpenSSH version when it is released.

    Being very popular means also being a good platform for a worm. Did you adopt any specific measures to fight automated attacks?

    Damien Miller: Privilege separation alone probably makes a worm targeting a bug in sshd impractical. An attacker would need to break into the unprivileged sshd process that deals with network communications and, because this just gives them access to an unprivileged and chrooted account, then exploit a second vulnerability to either break the privileged monitor sshd or escalate privilege via a kernel bug. This would add a fair amount of complexity, fragility and size to a worm - it would probably need to implement a fair chunk of the SSH protocol just to propagate.

    We also implemented self re-execution at the c2k4 Hackathon. This changes sshd so that instead of forking to accept a new connection, it executes a separate sshd process to handle it. This ensures that any run-time randomizations are reapplied to each new connection, including ProPolice/SSP stack canary values, shared library randomizations, malloc randomizations, stack gap randomizations, etc.

    Without re-exec, all sshd child processes would share the same randomizations. This would allow an attacker to exhaustively search for the right offsets and values for their exploit by making many connections (millions probably) to the server. With re-exec, each time they connect the values will all be different so there is no guarantee that they will ever stumble upon the right combination.

    Another security improvement, just introduced in openssh-4.2, was the "zlib@openssh.com" compression method. This was an idea that Markus Friedl had after the last zlib vulnerability was published.

    The SSH protocol has supported zlib compression for a long time, but the standard "zlib" method requires compression to be started early in the protocol: after key exchange, but (critically) before user authentication has successfully completed. This exposes the compression code to unauthenticated users.

    Our solution is to define a new compression method that still performs zlib compression, but delays its start until after user authentication has finished, so only authenticated users get to see it. This is another significant reduction in attack surface with effectively zero performance impact. This also makes the writing of a worm that targets the zlib code in OpenSSH impossible.
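    For what it's worth, the server side of this appears to be a one-line setting (assuming OpenSSH 4.2 or later):

    ```
    # sshd_config: disallow pre-authentication "zlib" compression and
    # only start compression after the user has authenticated
    Compression delayed
    ```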

    Did you develop any measure to fight timing based attacks?

    Damien Miller: There are two classes of timing attacks: one that matters, and one that is not so important.

    The not so important timing attacks allow active detection of which usernames are valid by differing timings i
