Security

Classic Computer Vulnerability Analysis Revisited

redtail writes "The original authors of the classic vulnerability analysis of Multics have revisited the lessons learned almost thirty years later. Their new paper, along with the original vulnerability analysis, is published here by IBM. The original vulnerability analysis inspired the self-inserting compiler back door described by Ken Thompson in his Turing Award Lecture."
  • It would be a great opportunity for IBM to look back on its roots and develop another OS that is built for security from the ground up. Too bad the price for that would be ungodly. They could have a new OS platform that would be innovative, yet built on the ideas that got IBM to where they are today. I would honestly like to see ANOTHER alternative to Microsoft, UNIX, and yes, Linux. Although I am a true believer in old-school UNIX, I would like to see a new platform that is interesting, bullet-proof, and innovative. Anyone else?
    • A mixture of OpenBSD and AIX? :)
    • Plan 9 [bell-labs.com] from Bell Labs.

      Have fun.
      • Plan 9 [bell-labs.com] from Bell Labs.

        Plan 9 was not designed from the ground up, and certainly not for security. Plan 9 had some features beyond the UNIX core, but it was certainly not a clean-sheet-of-paper job. The first version even came out with the typesetter and games programs that were long since obsolete under UNIX.

        The only O/S that I know of to be designed 'from the ground up' since VM-UNIX came out is Windows NT. UNIX was started before VMS but did not leave the research lab until after VMS launched. OS-X is simply a merger of NeXTStep and Mac-OS.

        Windows NT, the operating system, was designed from the ground up to meet the Orange Book B2 security requirements. That statement means less than it might once you find out what B2 means, i.e., almost nothing relevant to the real world. A B2 O/S cannot be connected to any sort of network and remain B2 secure; still interested?

        The point is that the design of the O/S is irrelevant unless the applications are also designed to be secure. There have been remarkably few security compromises of either UNIX or Windows NT; almost all the bug reports are in the layered applications. Take Outlook off Windows and Sendmail off Unix and the stats look oh so much better. Ten years ago I had a flame war with Eric Allman which later made it to the UNIX-Haters list; basically he said that he had finally got a grip on the bugs, and I pointed out that he still had no process and no clue when it came to security. Guess what, he still hasn't.

        There are plenty of good replacements for sendmail that do not introduce arbitrary Turing complete languages for arbitrary purposes. Unfortunately the UNIX world simply won't use them.

        There is a company working on a secure O/S, it requires secure hardware and is codenamed Palladium. You still want more security?

        • Then by your own description Windows NT wasn't designed from the ground up, either. Bits of Windows 3.x and BSD protocol stacks made it into NT. No modern OS has ever been developed secluded from the outside world. BeOS also shares many features, but not all, with UNIX OSs. Realtime and embedded OSs are probably the only ones that are even close to having been developed in a vacuum.

          As for Sendmail: Yes, there are lots of more secure replacements for sendmail, but most have far fewer features. The few that don't suffer from the lack of features suffer from sucky licenses, such as Dan Bernstein's license on all his software, where he's anal-retentive about the directory structure and exact functionality of any binaries produced. Many replacements are licensed under the GPL, which many companies still fear. The same is true of DNS and the venerable BIND. It's still vulnerable to attacks, much as it's always been, and although you no longer need to run it as root constantly, there's still potential for trouble.

          • Then by your own description Windows NT wasn't designed from the ground up, either. Bits of Windows 3.x and BSD protocol stacks made it into NT.

            The TCP/IP stack is not part of the NT kernel, nor is it a security subsystem. Microsoft used the reference code released in NET2 under BSD license.

            The kernel is designed to support B2-level security without multi-ring security support in the processor.

        • "A B2 O/S cannot be connected to any sort of network and remain B2 secure, still interested?"
          Not true. The network connections simply must be part of the defined security architecture. The DOCKMASTER system ran B2 Multics with Internet connections and was trusted by many commercial vendors to protect proprietary information they were sharing with Government evaluators.
          • "A B2 O/S cannot be connected to any sort of network and remain B2 secure, still interested?" Not true. The network connections simply must be part of the defined security architecture

            Which the Orange Book gives no information on analyzing.

            If you take the Orange Book seriously, then a B2 computer can only talk to other B2 computers...

    • Why? Well, first off, they are the first vendor to introduce Palladium. The proof [ibm.com] is in the pudding. Here are some nice crippled laptops [ibm.com] with the ultra-secure TCPA security subsystem clearly advertised. Also, here are some nice NetVista [ibm.com] workstations with the TCPA security 2.0 subsystem all nicely installed for the RIAA, oops, I mean your convenience.

      We all talk about HP coming out with DRM PCs this Christmas, yet IBM has been selling them to the public for several months already! The difference between the TCPA and Palladium standards is that TCPA has a protected boot sector, meaning that without a key you cannot boot any OS other than Windows XP. IBM is an enemy of Linux and should righteously be boycotted. They care more about the profits of software developers and Hollywood than open source.

    • I'm not a security expert, but I believe there are configurations of z/OS (formerly MVS on S/390) which have a B-level security rating.

  • This kinda makes you wonder how today's security problems / vulnerabilities / exploits will be viewed 30 years from now, and how these views will differ from today's.
  • Is this award named after the A.I. theoretician? I remember reading up on him while taking notes on William Gibson's Neuromancer.

    ""Wintermute is the recognition code for an AI. I've got the Turing Registry numbers. Artificial intelligence."

    Can anyone recommend any good books on the subject?

    • Yes! Do you want books about him or his contribution to CS? If you read Sipser's Introduction to the Theory of Computation you can get a lot of info about Turing Machines, which are perhaps the backbone of all computing. That's if you follow graph theory well. Books about his life I'm a little less knowledgeable about; I'd try the local library, bound to have something. I do know he was influential in the WW2 and post-WW2 period, and I believe he committed suicide because he was not accepted in society for being homosexual.
    • There was an excellent documentary film done on his life: http://us.imdb.com/Title?0115749
    • Re: Alan Turing? (Score:3, Informative)

      by Black Parrot ( 19622 )


      > Is this award named after the A.I. theoretician?

      s/A.I. theoretician/computer scientist/

      He did have an influence on AI (cf. "Turing test") and on the more general concept of intelligence-as-computation (whether natural or artificial), but we generally think of him for his more fundamental contributions to computer science (cf. "Turing machine").

      • s/A.I. theoretician/computer scientist/

        actually, shouldn't that be s/A.I. theoretician/biologist\/cryptographer\/computer scientist\/mathematician/? I know he was originally a biologist by trade and then was better known for his work at Bletchley Park, which was related to his CS theory but in a more von Neumann sort of way.
    • Re:Alan Turing? (Score:3, Interesting)

      You learned about Turing from a Gibson novel? Is this a troll? I am suddenly overcome by a strange and unusual desire to yell something that usually confronts newbies in #linux on undernet when they want help installing Mandrake.

      But I will resist it. I don't know what you mean by "the subject," so I'll try different angles.

      For a biography, try Hodges' Alan Turing: The Enigma [amazon.com]. I've not read it myself, but it has been very well received.

      For an intro to some of his most influential ideas, try Introduction to the Theory of Computation [amazon.com] by Sipser (the easiest book on the subject I've come across, but might be too hard anyway if you have no background in math or CS).

      For his ideas on AI, see his original paper [abelard.org] from 1950, which has long since been available online.

      Also, you could just do a Google search (and should! Resorting to this kind of off-topic question is usually only defensible when finding information is hard).
    • Turing was an early computer theoretician, responsible for many of the foundations of computing, like the concept of a language to describe an algorithm more complex than simple standard algebraic symbols, and the "Turing Machine." Of course there were others in the field, like von Neumann. He was also noted for being gay at a time when it was not only discriminated against but outright illegal. Finally, I believe there's a statue in England of him eating an apple; noteworthy because he committed suicide by eating a poisoned apple. Quite a morbid statue, huh?

      Yes, he did come up with the original Turing Test, but compared to his groundwork in computing, the Turing Test is like a book of science fiction: somewhat innovative, but mostly inspired by other people's (Asimov's) work.
  • ..."Unfortunately, the mainstream products of major vendors largely ignore these demonstrated technologies. In their defense most of the vendors would claim that the market-place is not prepared to pay for high assurance of security"...

    ..."In our opinion this is an unstable state of affairs. It is unthinkable that another thirty years will go by without one of the two occurrences: either there will be horrific cyber disasters that will deprive society of much of the value computers can provide, or the available technology will be delivered, and hopefully enhanced, in products that provide effective security. We sincerely hope it will the latter".

    Poll: who might they be referring to with these "major vendors"? Sadly, I think that is very true. These "major vendors" are digging a huge hole behind the average users by just kludging together cheap fixes when the system is fundamentally wrong. As a result, many will be in deep - unkludgeable - %&t in some 5-7 years when the system collapses.

    • These "major vendors" probably include Red Hat, SuSE, Debian, Sun Microsystems, SGI, IBM, FreeBSD vendors, OpenBSD vendors, NetBSD vendors, Microsoft and most other "vendors" of operating systems today. It may not be a popular opinion, but both Windows and *nix are insecure(others may debate which is _more_ insecure). Perhaps a system similar to plan 9 will become popular; perhaps new ideas will become a new _better_ system.

      It will be interesting to see what the most common OS will be in 30 years.

      --xPhase
    • there will be horrific cyber disasters ... the system collapses.

      You mean like Y2K???? Pfft, predictions of the electronic apocalypse / armageddon always amuse me. In 5-7 years there will probably STILL be people using Windows 98SE, vendors are STILL selling them, Msft is STILL making them available because it STILL brings in cash.

      • > In 5-7 years there will probably STILL be people using Windows 98SE, vendors are STILL selling them

        Exactly. And the most amusing part is that Windows 98 will be much more secure than the latest version. You fail to see my (very weak?) point: currently, every kludge added to Windows, for example - as an instant fix for a vulnerability - further weakens the package in the long run.

    • I think someone did try this... it was called Pr1me and it was quite an interesting system... to say the least. Full hardware protection, with ACLs and very restricted socket system.

      I know that the BLM (Bureau of Land Management) used it. I worked on one at college.

      The "unix" compatibility mode was anything but though.. and the C compiler was written by a chimp.

      Pan
    • Price of security (Score:3, Interesting)

      by 0x0d0a ( 568518 )
      Unfortunately, the mainstream products of major vendors largely ignore these demonstrated technologies. In their defense most of the vendors would claim that the market-place is not prepared to pay for high assurance of security

      Keep in mind that the price to pay for security is often not simply a higher initial purchase price.

      It can be in difficulty maintaining code. If you write something in Eiffel or SML, you avoid buffer overflow attacks on the stack, but you have a much smaller pool of programmers to hire from.

      It can be in performance. Java is the most popular "safe" (array and pointer deref checked) language, but you pay a severe performance hit when using Java over C/C++.

      It can be in convenience. I'm used to troubleshooting my system by just booting into single-user mode. If I was really secure, I'd have the bootloader passworded and an encrypted filesystem. But that's enough of an irritation to me that it's just not worth it.

      And it's very difficult to get a good, objective overview of security. Most security analysts don't really know all that much -- they're working off their own biases and feelings as well. They tend to try to sell companies on one-time-cost, backed-by-a-big-name products like firewalls or expensive IDS systems, because that's what companies want to hear. Also, *so* many products are so insecure that it's really painful and can feel futile to try to secure a system -- you might fix one problem with your device only to find that an IC manufacturer that's one of your vendors has some testing mode that breaks all your security guarantees.

      We need people willing to pay the price, a wider pool of knowledgeable security consultants, software written from scratch in a safe language with strong constraints on it, components that one can build secure products out of, and a decade to put everything into place before we can really get secure products.
    • I have to disagree. I think the average user is getting pretty much the security-to-cost trade-off they want. Any time a customer actually has to pay extra for security features, you see pretty fast how little they care about security. Security in the sense of this paper - being resistant to knowledgeable, determined insiders acting as hackers - could easily multiply the IT budget by 5. That would easily throw most companies' annual budgets well into the red. Quite simply, the data that would be stolen is likely worth far less than the cost of protecting it.

      People pay lip service to security because without paying lip service they could get sued. But few agencies care. AFAIK from news stories, things like FBI counterintelligence (which is about the most likely place imaginable for a knowledgeable, determined attack) don't really have this level of security in place. Let's say 200 computers in the country are set up with that kind of top-to-bottom security; my guess is that these are the 200 which probably deserve top-to-bottom security.

  • Isn't Multics the legendary debacle that never turned into a usable product? Wasn't it a Grand Unified Solve-Everything Architecture? Wasn't it some kind of apotheosis of top-down design and other Best Practices?

    Right! So, lesson number one from Multics is this: Don't do it that way. That, in fact, is the only lesson to be learned from Multics. You want more detail? Shoot the academics if they dare set foot off campus. Tucked away in their offices honing their algorithms, they do useful work -- but you can't ever, ever, ever let 'em influence the design of anything bigger than an algorithm.

    Multics was the Java of operating systems.

    • Multics was the Java of operating systems.

      What do you mean by this? Do you mean Multics was originally an operating system marketed for web interoperability but eventually found its foothold in the server application arena, and is now very successful both in this regard and as a web application platform? No? Didn't think so.

    • Hmm, I think you'd better run off and get back to class. Kindergarten class, that is.

      Look up the history of Multics. What came before Multics?
    • Wrong. (Score:5, Informative)

      by Animats ( 122034 ) on Monday September 09, 2002 @04:20PM (#4222738) Homepage
      Multics was one of the best operating systems ever in terms of reliability and security. Go read the papers. People are still reusing basic ideas from Multics, like good CPU schedulers. Ease of use was mediocre, but then, this was ten years before DOS.

      The big problem was that Multics was tied to a specific model of General Electric computer with custom security hardware. GE built some good early time-sharing systems in the 1960s, but sold off their computing business to Honeywell in the 1970s. Honeywell never marketed the Multics product line seriously, because it competed with other product lines that sold in bigger volume.

      • Multics was one of the best operating systems ever in terms of reliability and security.

        If it never saw the light of day, then this statement is definitely true.
        • Sorry lad, but it saw the light of day. As recently as ten years ago there was an ad in the Toronto paper for a Multics person for Bell Canada. Bell, it seems, preferred it to Unix...

          --dave (DRBrown.TSDC@Hi-Multics.ARPA) c-b

          • NSA ran DOCKMASTER, their external-access system, on MULTICS until 1998. The hardware was donated to the Computer Museum after its retirement.
    • > Isn't Multics the legendary debacle that never turned into a usable product?

      Nope. Multics begat Unix. Thompson took many of the lessons learned from Multics and used them in writing Unix.

      Brian Ellenberger
      • Ken Thompson stole some cool ideas from Multics, for example command-line plumbing and a hierarchical filesystem.

        Nevertheless, Unix was an entirely new and much less ambitious system. See Thompson's thoughts on the subject here [computer.org].


  • This Research Report consists of two invited papers for the Classic Papers section of the 18th Annual Computer Security Applications Conference (ACSAC) to be held 9-13 December 2002 in Las Vegas, NV. The papers will be available on the web after the conference at http://www.acsac.org/

    Uh, it appears that they're already on the web.
    • The papers may be posted, but you should still come and attend that session at the conference, for there you can get the full give and take.

      [I'll also note the full program is up for the conference, so you can see what other papers, sessions, and tutorials we'll be having. The conference web page is www.acsac.org [acsac.org]]

      Daniel

  • DoD used Multics for a long, long time. DOCKMASTER, the NSA's public access Multics machine, ran until 1998. They couldn't find something secure enough to replace it. Finally, Honeywell just wouldn't supply parts any more.
  • by Black Parrot ( 19622 ) on Monday September 09, 2002 @04:31PM (#4222822)


    > The original vulnerability analysis inspired the self-inserting compiler back door described by Ken Thompson in his Turing Award Lecture.

    This is a nifty concept, but I remain skeptical that it would ever work in practice, at least for an open-source product such as gcc.

    For starters, the compiler would have to detect that it was compiling itself. I.e., if you compile your favorite flight simulator and the resulting program is a compiler rather than a flight simulator, the game is up already.

    But recognizing itself isn't going to be such an easy task. For instance, if it just compressed the input and compared the result to a saved target, you could easily defeat it by something as simple as changing the names of identifiers in the source code. (If the match was done on just part of the code, you would merely need to do global string substitutions on the identifiers.)

    So AFAICT, the trojan would have to identify itself by looking at the structure or semantics of the source file. But that's going to be tricky too. If I add a feature to the compiler or fix a bug in it, recompile, and discover that the feature is missing or the bug is still present, the game is up (after a bit of vigorous head-scratching). So it looks to me like the trojan must not only detect structure or semantics reliably, it must also limit detection to a very small block of code, to reduce the risk that it will be modified and break the trojan. That block of code must be specific enough that the trojan is never triggered when compiling other programs, and non-arbitrary enough that no one will re-write it in a zeal of code clean-up.

    And, as others have pointed out in the past, the trojan has no way of slipping past if you use another compiler to compile it with. Even with two untrusted compilers, you can get a clean compile so long as the two compilers don't support each others' trojans.
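
    To make the discussion concrete, here is a rough sketch of what the self-reinserting logic would have to look like. This is not Thompson's actual code (which was never published in full); the fingerprint strings, the payload text, and the emit_object_code() stand-in are all made up for illustration.

      /* Toy illustration of the "trusting trust" trick -- not real gcc code.
       * A bugged compile step looks for two fingerprints in its input:
       * its own source, and the login program's source. */
      #include <stdio.h>
      #include <string.h>

      static const char compiler_fingerprint[] = "emit_object_code(";   /* made up */
      static const char login_fingerprint[]    = "verify_password(";    /* made up */

      /* Text the trojan splices into its victims (abbreviated here). */
      static const char self_reinserting_patch[] = "/* ...the text of bugged_compile() itself... */";
      static const char login_backdoor_patch[]   = "if (!strcmp(pw, \"kt\")) return 1;";

      static void emit_object_code(const char *src) { (void)src; /* real code generation goes here */ }

      static void bugged_compile(const char *src)
      {
          static char patched[1 << 20];

          if (strstr(src, compiler_fingerprint)) {
              /* Compiling the compiler: re-insert this whole routine, so the
               * binary stays bugged even after the source is cleaned up. */
              snprintf(patched, sizeof patched, "%s\n%s", src, self_reinserting_patch);
              src = patched;
          } else if (strstr(src, login_fingerprint)) {
              /* Compiling login: splice in a backdoor password check. */
              snprintf(patched, sizeof patched, "%s\n%s", src, login_backdoor_patch);
              src = patched;
          }
          emit_object_code(src);
      }

    Everything above (false triggers, maintenance breaking the match, renamed identifiers) comes down to how fragile those two strstr() fingerprints are.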

    • It's totally possible to write a virus/trojan that can modify gcc so that everything you compile has a trojan attached that monitors keyboard input and broadcasts possible usernames and passwords back to a server. Of course you would need permission to write to gcc, but it could still be done; by the time you notice your system broadcasting, it could be too late. If you're a super-paranoid security type it won't happen to you, but most people are not. Hell, most people run Win9x or XP in single-user mode.
      • I did not know one could run Win98 (and I assume Win95 and ME) in multi-user mode. Could you please elaborate a little more on this subject?
      • You are talking about the wrong thing. Such an infected compiler, if told to compile the compiler, would make a new program that had the keyboard monitor in it, but the new program would not have the keyboard-monitor *INSERTING* code in it. Using the new compiler might transmit your keystrokes, but would not infect other programs.

        The trick is to recognize that you are compiling the compiler and insert the *inserting* code. You can also infect other programs (and copy this infecting code when compiling the compiler) but that is trivial once you figure out this hard part.

        I agree with the original poster that this is interesting in theory but impossible in practice. I would expect it to quickly fail in that either the inserted code becomes non-effective, or the resulting compiler does not work or crashes.

    • Skepticism is irrelevant... it *did* work. I believe it simply checked the name of the output file... if it was creating an output target called 'gcc' (or the equivalent, whatever it had back then) then it compiled in the hack. I do not know how robust it was, but from what I remember reading, it worked, so it was obviously robust enough.

      • > Skepticism is irrelevant... it *did* work. I believe it simply checked the name of the output file... if it was creating an output target called 'gcc' (or the equivalent, whatever it had back then) then it compiled in the hack. I do not know how robust it was, but from what I remember reading, it worked, so it was obviously robust enough.

        There's a big difference between working for a demo and working in the face of active countermeasures by a well-informed security administrator. In the example you cite, it would be sufficient for the SA to rename the source file before compiling it.

        Also, notice that I'm not offering "skepticism" as "proof" of anything. I am skeptical that it would work in practice for very long against even fairly trivial countermeasures, but you'll notice that part of my post was an analysis of what it would take to make it work in the face of countermeasures on the user's part.

        It looks to me like the basic requirement is "a very stable block of code that is only used in this compiler". And the result is probabilistic, since we all know what "very stable block of code" means in practice. But presumably you could hide it fairly well if you could limit both detection and trojanized output to a single function, so that arbitrary changes elsewhere in the code would neither give a false trigger of the trojan nor disable it during ordinary maintenance. But even then there could be detections, e.g. if I decided to hack gcc to write a Pascal compiler and got strangeness in the output.

        What would be fun, as well as educational, would be to get volunteers to form a Red team and a Blue team, to see whether the Red team could design such a compiler that the Blue team couldn't trip up, and vice versa. I would probably bet on Blue, though that might depend on the details of the rules of the game.

        • Trojans and their ilk succeed when people do not know they exist. Do you go around renaming every source file? That's not as easy when you also have to rename every reference to it in the makefile and other related files. Making changes like the name of the output and input files can get annoying... why would you do it for EVERY compile you do just because MAYBE, just MAYBE, this compile contains a trojan based on the input filename.

          The answer is that you would not.

          Even technically savvy professionals can get taken in by a suitably complicated trojan. The saving grace usually is the fact that in this Internet age, we have instant communication disseminating known viruses, trojans, and hacks.

          But even a savvy individual who keeps up on the lists could be the first dupe. Targeted hacks aimed at a single company are even more difficult, since no mailing list will tell you about a hack that is going to be used nowhere else but on your network. A network administrator cannot scan the source of every file from every thing that gets inside their network... assuming the source is available, assuming it is not obfuscated.

          Would you refuse using a well known and useful tool just because you couldn't understand what one function did? What if that tool only activated and became a trojan if it detected YOUR domain on the local computer's reverse resolve?

          There is no security.

          • There is no security.

            Sure there is. You do things so that the attacker has to pass everybody's scrutiny to get to you at the same time you compartmentalize everything so as to minimize potential damage.

            One cheap shot is to just download from a random mirror. From a different system.
            Another cheap shot is to NEVER blindly apply ANY purported patches. One uncrackable IBM system was cracked by leaving behind an official looking IBM patch tape.
            Yet another cheap shot is to never have just one vendor.

            A well-known and useful tool is extremely unlikely to be targeted at you. Something claiming to be that tool is much more likely, particularly if you can be targeted to receive the "special package".
        • There's a big difference between working for a demo and working in the face of active countermeasures by a well-informed security administrator. In the example you cite, it would be sufficient for the SA to rename the source file before compiling it.

          The story I was told was that not only did it work while he was testing it, it tripped up people. He wrote this "bug" into the code, tested it, and then removed this version of the compiler.

          Some time later, one of his co-workers came up to him and said that they had noticed something strange with one of the programs they were building, and could he help them figure out what was wrong.

          He was working on this on a system, and somebody else must have been logged on and compiling things. Somehow the modified compiler was sitting on a system somewhere, and people were using it to write code. This was unintentional, but how sure are you that something similar hasn't happened to someone at Red Hat, Mandrake, Microsoft, Apple, Sun, IBM, HP, etc.? :-) You'd notice it if you were looking for it, I'm sure, but have YOU ever looked?

          Before you say this can't be done, this guy writes compilers by himself, which puts him in RMS's league. The person who told me this story (and he may have been embellishing this) worked closely with Ken Thompson, and would have had ample opportunity to hear the story firsthand.
    • So it looks to me like the trojan must not only detect structure or semantics reliably, it must also limit detection to a very small block of code

      If I had to think of one task that a compiler would be good at doing in a small block of code (by reusing functions already part of the source code of the compiler), that would probably be it.

      I mean, if you were trying to build this same kind of back door into a video game, you'd have to write a language and semantics analyzer to recognize all the semantics and language structure. But hey, for this compiler backdoor project, we've already got a handy engine for just that task.
    • For starters, the compiler would have to detect that it was compiling itself. I.e., if you compile your favorite flight simulator and the resulting program is a compiler rather than a flight simulator, the game is up already.

      I thought he had made it so that if the compiler detected a certain special string in the source code, it inserted the trojan-horse code into the compiler-input stream. Not only is this extremely easy to implement, but it inserts the code at the right point in the compiler's source code even if other parts of the compiler are modified. And it should be pretty easy to make the trigger string unique and innocuous-looking.

      So AFAICT, the trojan would have to identify itself by looking at the structure or semantics of the source file.

      I believe that making a computer understand the semantics of a source file would first require solving the Halting Problem.
    • I think you are missing the point. This is more of a theoretical argument than a real threat. I am pretty sure none of the compilers I have used had this kind of bug. It is a very clever and sophisticated attack, and of course it is hard to implement. The very interesting conclusion is that open-sourcing cannot help against this kind of attack. This attack moves the broken link in the security chain out of the source.

      It is not impossible, though. Realize that if you are actually planning to attack someone by this method, you must be a party trusted by your victim-to-be. You must be the party that provides the compiler, so you probably also have the manpower to implement this.

      Your suggestion of discovering this during a patch can be bypassed very simply by putting the trojan code into the initialization or cleanup part of the code, which nobody would dare to change because it would break compatibility. Also, we are probably talking about a well-established compiler that comes as the standard compiler with the OS itself. Those compilers have huge areas which are (believed to be) bug-free and so will typically never be patched.

      This kind of attack can be prevented though, by monitoring. "Secrets and Lies" by Bruce Schneier deals with this issue. If you cannot stop someone at the gates, you can always catch them inside :-)

    • > The original vulnerability analysis inspired the self-inserting compiler back door described by Ken Thompson in his Turing Award Lecture.

      This is a nifty concept, but I remain skeptical that it would ever work in practice, at least for an open-source produce such as gcc.

      Before you dismiss the possibility, why not read Thompson's lecture [acm.org]?

      So AFAICT, the trojan would have to identify itself by looking at the structure or semantics of the source file.

      Yes, he did, in a very clever way, which he explains. And his trick recognizes both the compiler and the login command.

      After describing the trick, he goes on to say, "First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere."

      Whether one could get away with this in gcc (or whatever) is questionable, but at some point every compile must be done with a binary. So if the first binary on a system was trojaned, it is possible that future ones could inherit the bug.

      • Whether one could get away with this in gcc (or whatever) is questionable, but at some point every compile must be done with a binary.

        Not necessarily. It's possible to understand the code that goes into a compiler, so it's theoretically possible to step through a compilation manually by looking through the source of the compiler and the code it's trying to compile. Obviously your first target would be the simplest compiler you could write, which you could then use to compile something more complete like GCC.

        Alternately, you could write a new compiler in a different language and compile the source using that. There's no perfect guarantee that this will protect you against a Trojan, but it can get the risk low enough to be acceptable to anyone but the most diehard paranoid. After all, what's the chance that somebody's Trojaned Java (or Perl, or Visual Basic, or Pascal, or Emacs Lisp) so that it will create a Trojan when it's used to compile gcc using your home-built C compiler?

    • I remain skeptical that it would ever work in practice, at least for an open-source product such as gcc.

      The technique works fine and is not difficult to do -- as long as it isn't GCC. And the rest of your post assumes that GCC is in use. How many other compilers are out there which have source available (you mention changing the contents of the source code) and are designed to be built by multiple arbitrary compilers (the GCC bootstrap mechanism)?

      Of those, how many are meant to be modified by the user? (Where "meant" == "it's possible".)

      So, reminding the rest of /. readers that most compilers in use are not gcc (unfortunately), we proceed:

      A compiler that detects special semantics would be easy to write, because you don't know what the semantics are; only I do. My compiler can perform some completely useless action with no side effects (i.e., you can't detect it), and while compiling itself, it would a) realize that its own code is being compiled, and b) throw out the useless actions, like any other optimizing compiler would.

      Remember that the special thing being detected doesn't have to be detectable at runtime, only at compile time. That opens the doors much wider.

      So, use GCC and prevent this trojan. :-) Otherwise, yes, it's very possible.
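
      For the curious, the "useless action with no side-effects" could be as simple as a watermark constant that nothing else ever references. The value and the function below are invented purely for the example; an ordinary optimizer throws the whole thing away.

        /* A do-nothing construct that an ordinary compiler optimizes away,
         * but that a bugged compiler could treat as "this is my own source".
         * The constant is made up for the example. */
        enum { COMPILER_WATERMARK = 0x5CA1AB1E };

        static void watermark(void)
        {
            if (0) {                               /* dead code to any optimizer   */
                volatile long w = COMPILER_WATERMARK;
                (void)w;                           /* no runtime effect whatsoever */
            }
        }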

    • So AFAICT, the trojan would have to identify itself by looking at the structure or semantics of the source file. But that's going to be tricky too.

      For a compiler? Really? Get real...

      That block of code must be specific enough that the trojan is never triggered when compiling other programs, and non-arbitrary enough that no one will re-write it in a zeal of code clean-up.

      You don't think that the GCC source base has enough such blocks that are relatively static as to uniquely identify it as GCC source? Especially when the identifier is a compiler itself? Many ANSI C features haven't changed and for a great part won't.

      How much do you know about programming to begin with? Did you read the article? Did you understand it? Who the fsck moderated you up?
    • > For instance, if it just compressed the input and compared the result to a saved target, you could easily defeat it by something as simple as changing the names of identifiers in the source code.

      _If_ you thought there was a Trojan in there to defeat. Until Ken's lecture nobody had thought of it (or if they had, they kept quiet).

      It wouldn't work _now_. Back when there was only one Unix, which had only one C compiler, used to compile the single login source, it could have worked. And apparently did.
  • Back doors (Score:3, Interesting)

    by ceswiedler ( 165311 ) <chris@swiedler.org> on Monday September 09, 2002 @04:32PM (#4222828)
    Ken's back door program is most definitely the most interesting hack ever devised, IMO. Read about it here [tuxedo.org].

    At the bottom, ESR claims that he "has heard two separate reports that suggest that the crocked login did make it out of Bell Labs, notably to BBN, and that it enabled at least one late-night login across the network by someone using the login name `kt'."
  • The lecture was given in 1983...

    Or am I missing something?

    • The speech references a paper written in 1974 that describes the hack in greater detail and expands on variations. The Multics article Ken referenced can be found HERE [ucdavis.edu], and the trapdoor he discusses is around page 52 or so.
  • Stacks (Score:3, Insightful)

    by Spazmania ( 174582 ) on Monday September 09, 2002 @05:01PM (#4223175) Homepage
    Third, stacks on the Multics processors grew in the positive direction, rather than the negative direction. This meant that if you actually accomplished a buffer overflow, you would be overwriting unused stack frames rather than your own return pointer, making exploitation much more difficult.

    How hard would it be to integrate this into GNU C and Linux? As I understand it, growing the heap from the bottom and growing the stack from the top, with the as-yet-unused space in the middle, is just a matter of convention. How much trouble would it be to reverse the two so that the heap grows from the top and the stack from the bottom?

    Seems like it ought to be a simple patch to the most vexing class of security problems we all experience.
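
    For reference, the exploitable pattern on a conventional downward-growing stack looks something like this minimal sketch (function and buffer names are made up). The point of the Multics arrangement is that the same overrun would walk into not-yet-used stack space instead of the saved return address.

      /* Classic stack-smash setup on a downward-growing stack.  The local
       * buffer sits below the saved frame pointer and return address, so a
       * write past its end climbs up into them.  With an upward-growing
       * (Multics-style) stack the overrun would spill into unused space. */
      #include <string.h>

      void handle_request(const char *input)
      {
          char buf[64];              /* lives in this function's stack frame */
          strcpy(buf, input);        /* no bounds check: >64 bytes of input  */
                                     /* overwrites the return address        */
      }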
    • Two words: interrupt stacks.

      -dB

    • You don't need to reverse the stack direction. All that matters is that arrays/strings are written in the same direction the stack grows. In other words, leave the stack the same and write the strings backward. An overflow would run off the stack, and it could not overwrite the return address of the routine and force execution of malicious code. Instead, it would just run into empty space reserved for the stack.

      It can't be that easy, can it? (It wouldn't stop all possible stack smashing attacks, but many.)

    • There have been *nix systems with upward growing stacks. I worked on two (Perkin-Elmer's Edition VII, based on Version 7, and XELOX, based on System V).

      There were several problems in making this work: (1) you need hardware support, (2) you end up fighting code that assumes stack growth is downwards.

      WRT the first, on the PE hardware, the stack *had* to grow up, because the memory management hardware wouldn't give a proper interrupt if you ran off the bottom of a page in the middle of an instruction, but would if you ran off the top of the page. So there was no choice.

      WRT the second, there was (and probably still is!) a lot of code in the UNIX utilities that assumed the order variables were put on the stack, and used that in figuring out when the end of an array had been reached. So even if it's trivial to add positive stacks to the Linux kernel, I'd bet you'll break an awful lot of applications.

      And that's assuming you go to the trouble to modify the compiler to generate upward growing stacks, and recompile everything to get the new prologues/epilogues.

      Ob-trivia: PE's computers were originally Interdata, which was bought by PE, which then spun them out as Concurrent Computer. Concurrent has gone through several iterations, and I'm quite certain the old systems are no longer sold.
    • Never trust a compiler that eats your source file:
      105. The statement has been truncated.
      120. An identifier or constant has been truncated.
      149. Text was deleted from this statement.
      Ah, I remember writing PL/I extensively on IBM mainframes. IIRC, the language could not be represented using any of the grammar types available at that time (as an example of the difficulty, there were no reserved keywords, "if" could be a variable name, and some people used it that way). The ad hoc parser in IBM's compiler did not like to give up, so when it got desperate it would delete text until it reduced the offending statement to something that would parse.

      You had to declare all your variables in advance, and if the statement that got truncated happened to be a declaration, it would typically be followed by dozens of pages of error messages about using undeclared variables.

      I did get my 15 minutes of fame when I found an error in the optimizer that caused it to emit illegal op codes...

  • What I found interesting was the reference to the Holy Grail... A1.

    They referenced GEMSOS and a VMS variant as having reached that point. I thought it was only theoretical. Doesn't A1 cert require mathematical proof of security (as well as the usual auditing)?
    • If you check out the Historical Evaluated Products List at http://www.radium.ncsc.mil/tpep/epl/historical.html [ncsc.mil], you'll see that quite a few products made it to the A1 rating:

      • Boeing's MLS LAN Secure Network Server System
      • Boeing's MLS LAN Network Component MDIA, Version 2
      • Gemini's Gemini Trusted Network Processor
      • Honeywell's SCOMP Version STOP Release 2.1

      The BLACKER system also made A1, but wasn't a commercial product. The VMS variant was designed for A1, but I'm not sure if it ever completed evaluation (I don't see it on the EPL).

      Yes, a mathematical proof of security was required, at least for the mandatory access control policy (folks typically didn't model DAC).

      Daniel

    • Re:A1 (Score:2, Interesting)

      by oldunixguy ( 607536 )
      A1 requires a mathematical proof of the model, and also requires that you show conformance between the source code and the model. It's not easy, but it's not proving the code correct.

      To quote from http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html: "The distinguishing feature of systems in this class is the analysis derived from formal design specification and verification techniques and the resulting high degree of assurance that the TCB is correctly implemented. This assurance is developmental in nature, starting with a formal model of the security policy and a formal top-level specification (FTLS) of the design. In keeping with the extensive design and development analysis of the TCB required of systems in class (A1), more stringent configuration management is required and procedures are established for securely distributing the system to sites. A system security administrator is supported."

      There haven't been many A1 systems: GEMSOS, SCOMP, and Boeing's MLS LAN are the only ones that completed (see http://www.radium.ncsc.mil/tpep/epl/historical.html for the full list of every product). The VAX project referred to in the paper never got the A1 approval, although it was clearly designed to meet the requirements.

  • One thing I find missing for C programming is a generally accepted security tool. In the rush to get things out the door with new and whizbang features, little is done on security tools. This is true of both commercial and OpenSource environments. People still use gets, people still take things from the environment unsafely. Taintperl is great; why isn't there TaintC or the equivalent? Why is there not a single tool that has a database of past exploits and knows how to detect at least a subset of these? Yes, I realize that a program can never replace a good set of eyes looking at a program, but I also keep my compiler warnings cranked up high to detect "I'm a dumb-ass" mistakes. Why can't we hook this into the optimizer, since it must do lexical and data analysis anyway? Get these mistakes caught early before they get stuck in released-in-the-wild code. Why is sling still such vapor? AFAIK, they still haven't updated lint to work with C++ code yet; we're still in some ways in the sticks-and-stones era of tools for programming.

    And no flame wars on language choice, please.
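
    To make the gets/environment point concrete, here is the kind of thing a hypothetical "TaintC" (or even a decent checker bolted onto the optimizer) ought to flag automatically; the variable names and sizes are just for illustration.

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      int main(void)
      {
          char name[64];
          char path[256];

          /* gets(name);  -- unbounded read, the classic hole; never use it. */
          if (!fgets(name, sizeof name, stdin))       /* bounded replacement */
              return 1;

          /* Environment variables are untrusted ("tainted") input too. */
          const char *home = getenv("HOME");
          if (home == NULL || strlen(home) >= sizeof path - 16)
              return 1;                               /* validate before use */
          snprintf(path, sizeof path, "%s/.config", home);

          return 0;
      }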
    • There are tools that find many of the exploits through source code analysis. For example, ITS4 (available from http://www.cigital.com) finds some, as does RATS (available from http://www.securesoftware.com). Coincidentally, the ITS4 paper won the "best paper" award at ACSAC (http://www.acsac.org) a few years ago... the same conference where the Multics paper that's the subject of this discussion will be presented in December!
  • How many people contribute Open Source or Free Software to the world? How many opportunities have been created by the existence of successful, minimally reviewed GNU/Linux distributions for professional cracker community members to quietly introduce bugs? How many sites now run Red Hat, for example? How many of those installations are mission or security critical?

    The security problems pointed at (ever so esoterically) in the cited articles are vast, serious, real, and pressing. They are made worse by vendors who pursue dubious features and broken, overly complex architectures rather than remembering to KISS and empower customers to differentiate their installations and (especially in a disaster) manage their own source.

    The good news: using plenty of available technologies and techniques to reduce the RISKS here can, as a side effect, radically improve the overall quality of the Free/Open Source software being vended these days. We need fewer, but much better features, managed by much better software engineering practices.

    That (in my opinion), rather than panicking over security issues, is where our various corporate friends ought to focus in response to this and similar articles.

    -t

  • Even then, they understood that bugs should be published openly. To quote section 1.3.1: "Concealment of such penetrations does nothing to deter a sophisticated penetrator and can in fact impede technical interchange and delay the development of a proper solutions. A system which contains vulnerabilities cannot be protected by keeping those vulnerabilities secret. It can only be protected by the constraining (sic) of physical access to the system."

  • KeyKOS [upenn.edu] solved a lot of the problems this paper describes in the 80s, and its descendant EROS [eros-os.org] is solving them today (and open source, too!).

    Unfortunately, in the 80s people were so infatuated with micros that secure timesharing wasn't a big market, and today people have been living with insecure systems so long they have stopped caring.
  • Facts, anyone? (Score:3, Informative)

    by bensonm ( 604149 ) <benson@@@basistech...com> on Monday September 09, 2002 @09:11PM (#4224765)
    Multics had significant commercial success, both in secure timesharing applications in the US and in Europe. In the end, Honeywell placed its bets elsewhere, and Multics withered away. To those of us who worked in it, the sneering comments about 'top down debacle' are an ongoing demonstration of Gresham's law as it applies to information on the Internet. Ignorance is never, seemingly, an impediment to a smart-ass comment.

    Try using, perchance, a system in which all the command line arguments were consistent and predictable, and the command names were meaningful. Or, for that matter, a system in which the fundamental data access model was mapping into memory. Or in which there are more security domains than 'all-powerful-root' and 'everyone else'.

    Unix was born as an effort to get some approximation of Multics onto minicomputer hardware. It worked pretty well. The authors of Unix weren't too fond of our rather structured development process. They didn't need the security and reliability that we did all that work to try to get, and they did want heaps of functionality from unpaid grad students in no time flat.

    Over the years, many of Multics' ideas have slowly leaked back into Unix: dynamic linking, memory mapping, command args with names and not just letters, etc. No surprise: they were good ideas, and Unix has absorbed them as processor power, memory prices, and the slow pace of rediscovery of the wheel have allowed. There's quite a platoon of Multics alumni in the industry, applying the lessons we learned, good and bad, wherever we go.
    • About 1980 Paul Green and (I think) other multicians started a successful fault-tolerant computer company called Stratus, an effective competitor to Tandem. VOS, the OS, was clearly inspired by Multics and had, or rather has, a consistency and reliability unparalleled in Unix.

      Its Multics traits include PL/1 as the systems programming language (actually a subset), transparent networking (attach to any device or file anywhere, rather like Apollo Domain), an ancient Emacs port (or clone?) and other useful features like a transactional file system with indexing.

      Lots of these expensive machines were sold to banks and other demanding users in the late 80s, there are plenty still around and indeed they're still available [stratus.com].

      The boxes were large and well engineered. Quality seemed to be taken very seriously since any crash was a major event - I only remember one OS crash bug in 6 years of working with them. Machines were connected via dialup to the service centre, and core dumps uploaded for analysis automatically (security settings permitting).

      All items of hardware - disks, CPU, memory, networking cards - were duplicated (except the real-time clock!) and monitored for failures. Any failed device would be removed from service and could be replaced while the system was running - 'go on, pick a board to unplug' demos were always popular.

      Coming back to Unix after that platform was a pretty rude shock. What horribly eccentric and buggy shell commands! Why does it keep crashing? In many ways I'd quite like to have one today; certainly something like Java would complement it quite nicely.
  • Remember what he did? Disassembly was one important thing... reverse engineering being a more appropriate name for it.

    Face it... if it was assembled, it can be disassembled. Once disassembled, the secrets implanted by a compiler or programmer are revealed. DeCSS is a good example of disassembly finding something hidden.

    Worried about your login program being trojaned? Open it up in a disassembler and check near the login part of the code. Hell, open the program up in a debugger and just step through the code. That would trivially show you any "if password = backdoorpassword" statements. In many ways this is even faster than looking at the source itself. For those paranoid about the OS itself, there is even hardware you can buy that'll debug the OS itself.
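
    In source, a crude backdoor of that sort is nothing more than an extra comparison; compiled, both the magic string and the extra strcmp() call tend to stand out in a disassembly (or even in the output of strings(1)). The password and the verify_hash() helper below are, of course, invented for the sketch.

      #include <string.h>

      extern int verify_hash(const char *typed, const char *stored_hash);  /* hypothetical helper */

      int check_password(const char *typed, const char *stored_hash)
      {
          if (strcmp(typed, "s3cretbackdoor") == 0)    /* the trojan branch       */
              return 1;                                /* always lets this one in */
          return verify_hash(typed, stored_hash);      /* the legitimate check    */
      }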

    What really worries me isn't this kinda security issue... what worries me is not being able to legally look for these kinds of security issues at all. (read: DMCA)

  • From the introduction of the new paper: "However, the bottom line conclusion was that 'restructuring is essential' around a verifiable 'security kernel' before using Multics (or any other system) in an open environment (as in today's internet)".

    Paragraph 3: "We hypothesised that professional penetrators would find that distribution of malicious software would prove to be the attack of choice."

    Dropped in as a random sentence in paragraph 5, without much relation to the surrounding text: "Multics source code was readable by any user on the MIT site, much as source code for open source systems is generally available today."

    From the conclusion of the new paper: "In nearly thirty years since the report, it has been demonstrated that the technology direction that was speculative at the time can actually be implemented and provides an effective solution to the problem of malicious software employed by well-motivated professionals. [...] And customers have said they have never been offered mainstream commercial products that give them such a choice, so they are left with several ineffective solutions [...]".

    If you look at what is actually being demonstrated in the original paper, it is that a monitor is needed that governs every memory access, without bypasses. The new paper admits that the VAX was the first to have this, and of course all modern MMUs do exactly that.

    So what's the big deal? Why was this published at this time? If you read carefully, you can correctly conclude that we don't need no steenkin' Palladium to get secure systems today; we already have all the hardware features needed. However, if you skim the new paper, you could be led to believe that Palladium is absolutely essential...
    • This poster is completely correct. There's nothing magic about hardware. It's particularly hard to tamper with, and that's it.

      If someone wanted to relaunch the Multics gate/ring inter-domain call mechanism on x86 (say, to give a meaningful alternative to setuid), I'd recommend against using the incredible hairball built into the x86 architecture for the purpose. You can't 'read the source', and it would take an immense amount of testing.

      Instead, I'd suggest isolating a very carefully written and reviewed microkernel to take responsibility for validating inter-protection-domain calls and copying parameters.
      • Actually, x86's architecture for call gates and segment descriptors isn't that bad. To some extent, they are always needed; even the Linux kernel (which uses only paging, no segmentation) has to supply correct code, data and stack segment descriptors, and has to provide interrupt gates (very similar to call gates) for syscalls and hardware interrupts.

        The only drawbacks are that you have only four rings and that it's non-portable as hell. But it's very well documented, and it works, too.

        A setuid program is the exact software equivalent of a call gate, BTW. You need a certain privilege level to use the call gate, and after that, the routine uses the (possibly higher) privilege level set by the call gate. Exactly the same as having a setuid program that's executable only by a certain group. It's a very common (and useful) concept in both hard- and software.

        There's nothing fundamentally wrong with the concept of setuid programs on Unix; you use them to check untrusted input before submitting it to higher privileged programs that need to trust it. The problem is that a lot of setuid programs don't do their job of checking input properly, or worse, don't treat environment variables as untrusted input as well.
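
        As a minimal sketch of that discipline (illustrative only -- a real setuid program needs far more care, and the specific checks below are just examples): scrub the inherited environment, validate argv, do the one privileged thing the gate exists for, and give the privilege back immediately afterwards.

          #include <stdlib.h>
          #include <string.h>
          #include <unistd.h>

          int main(int argc, char **argv)
          {
              /* Environment variables are untrusted input: drop dangerous ones. */
              unsetenv("IFS");
              unsetenv("LD_PRELOAD");
              unsetenv("LD_LIBRARY_PATH");

              /* argv is untrusted input: accept only what we expect. */
              if (argc != 2 || strlen(argv[1]) > 32 || strchr(argv[1], '/'))
                  return 1;

              /* ... perform the single privileged operation this gate exists for ... */

              /* Give the elevated privilege back as soon as it is no longer needed. */
              if (setuid(getuid()) != 0)
                  return 1;
              return 0;
          }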

        (Aside: I've sometimes thought that Unix's way of connecting applications to devices (read/write, seek, ioctl, that's it) is still so successful because it mimics the way a CPU is connected to peripheral devices in hardware: a data bus, an address bus and a few out-of-band control lines.)
  • I hope not to get trolled for posting a reference that is probably not too hard to find on search engines. However, if any of the readers of this would like to see a much broader range of information on Multics, I can recommend:
    The reference web site for Multics history [multicians.org]
