Classic Computer Vulnerability Analysis Revisited 173
redtail writes "The original authors of the classic vulnerability analysis of Multics have
revisited the lessons learned almost thirty years later. Their new
paper, along with the original vulnerability analysis, is published here
by IBM. The original vulnerability analysis inspired the self-inserting
compiler back door described by Ken Thompson in his Turing
Award Lecture.
"
I see an opportunity for IBM (Score:2, Funny)
Re:I see an opportunity for IBM (Score:1)
Re:I see an opportunity for IBM (Score:2)
Have fun.
Re:I see an opportunity for IBM (Score:3, Insightful)
Plan-9 was not designed from the ground up and certainly not for security. Plan-9 had some features beyond the UNIX core but it was certainly not a clean sheet of paper job. The first version even came out with the typesetter and games programs that were long since obsolete under UNIX.
The only O/S that I know of to be designed 'from the ground up' since VM-UNIX came out is Windows NT. UNIX was started before VMS but did not leave the research lab until after VMS launched. OS-X is simply a merger of NeXTStep and Mac-OS.
Windows NT, the operating system, was designed from the ground up to meet the Orange Book B2 security requirements. That statement means less than it might once you find out what B2 means, i.e. almost nothing relevant to the real world. A B2 O/S cannot be connected to any sort of network and remain B2 secure. Still interested?
The point is that the design of the O/S is irrelevant unless the applications are also designed to be secure. There have been remarkably few security compromises of either UNIX or Windows NT themselves; almost all the bug reports are in the layered applications. Take Outlook off Windows and Sendmail off Unix and the stats look oh so much better. Ten years ago I had a flame war with Eric Allman which later made it to the UNIX-Haters list; basically he said that he had finally got a grip on the bugs, and I pointed out that he still had no process and no clue when it came to security. Guess what, he still hasn't.
There are plenty of good replacements for sendmail that do not introduce arbitrary Turing complete languages for arbitrary purposes. Unfortunately the UNIX world simply won't use them.
There is a company working on a secure O/S, it requires secure hardware and is codenamed Palladium. You still want more security?
Re:I see an opportunity for IBM (Score:1)
As for Sendmail: yes, there are lots of more secure replacements for sendmail, but most have far fewer features. The few that don't suffer from a lack of features suffer from sucky licenses -- such as Dan Bernstein's license on all his software, where he's anal retentive about the directory structure and exact functionality of any binaries produced. Many replacements are licensed under the GPL, which many companies still fear. The same is true of DNS and the venerable BIND. It's still vulnerable to attacks much as it's always been, and although you no longer need to run it as root constantly, there's still potential for trouble.
Re:I see an opportunity for IBM (Score:2)
The TCP/IP stack is not part of the NT kernel, nor is it a security subsystem. Microsoft used the reference code released in NET2 under BSD license.
The kernel is designed to support B2-level security without multi-ring security support in the processor.
Re:I see an opportunity for IBM (Score:1)
Not true. The network connections simply must be part of the defined security architecture. The DOCKMASTER system ran B2 Multics with Internet connections and was trusted by many commercial vendors to protect proprietary information they were sharing with Government evaluators.
Re:I see an opportunity for IBM (Score:2)
Which the Orange book gives no information on the analysis of.
If you take Orange book seriously then a B2 computer can only talk to other B2 computers...
Re:I see an opportunity for IBM (Score:2)
Yeah, yeah, if you take a legalistic view you can kinda sorta say you are compliant, perhaps, but in the process you have pretty much demonstrated why B2 and the Orange book mean very little.
There is a reason the guidelines are generally considered obsolete.
I remember reading through the guidelines with Ron Rivest and someone pointing out that the guidelines state that cryptographic enforcement mechanisms don't count. Ron summed up the response of the room by calling the requirement 'disappointing'.
The point is that evaluation criteria are only a guide to the security of an O/S, particularly when the criteria fail to consider major developments since they were written - e.g. networking.
Windows NT was written to be B2 secure and does provide a pretty good foundation for secure applications, that has not prevented the terminally clueless from building what they have on top.
Re:I see an opportunity for IBM (Score:2)
I said that they designed WNT to be B2 compliant, not that the B2 compliant configuration would be worth anything; VMS was not any use in B2 config either.
It would be kind of surprising if anyone continued to work on Orange Book certification now that it has been replaced by the Common Criteria.
There is not a tremendous incentive for anyone to get CC certification, since the US Govt does not enforce the requirement in most procurement.
Wrong, boycott them (Score:2)
We all talk about HP coming out with DRM PCs this Christmas, yet IBM has been selling them to the public for several months already! The difference between the TCPA and the Palladium standard is that the TCPA has a protected boot sector, meaning that without a key you cannot boot any OS other than Windows XP. IBM is an enemy of Linux and should righteously be boycotted. They care more about the profits of software developers and Hollywood than about open source.
Re:I see an opportunity for IBM (Score:2)
Re:I see an opportunity for IBM (Score:1)
Re:I see an opportunity for IBM (Score:2)
Re:I see an opportunity for IBM (Score:2)
They had a real opportunity at the time, too: audio under Windows NT wasn't possible (crappy driver model/support), 95/98 was unstable (and don't get me started on Macs). *EVERYONE* was looking for a new platform and the consensus was "if it works we'll buy it". If they had gotten any one of the 3 major sequencers on BeOS at this exact moment, they would have ruled the roost. But when Win2k came out and was stable, the window of opportunity closed, and that's how BeOS lost the music industry.
Re:I see an opportunity for IBM (Score:1)
Thirty years from now (Score:1)
Alan Turing? (Score:1)
""Wintermute is the recognition code for an AI. I've got the Turing Registry numbers. Artificial intelligence."
Can anyone recommend any good books on the subject?
Re:Alan Turing? (Score:1)
Re:Alan Turing? (Score:2)
Re: Alan Turing? (Score:3, Informative)
> Is this award named after the A.I. theoretician?
s/A.I. theoretician/computer scientist/
He did have an influence on AI (cf. "Turing test") and on the more general concept of intelligence-as-computation (whether natural or artificial), but we generally think of him for his more fundamental contributions to computer science (cf. "Turing machine").
Re: Alan Turing? (Score:2)
Actually, shouldn't that be s/A.I. theoretician/biologist\/cryptographer\/computer scientist\/mathematician/? I know he was originally a biologist by trade and then was better known for his Bletchley Park work, which was related to his CS theory but in a more von Neumann sort of way.
Re:Alan Turing? (Score:3, Interesting)
But I will resist it. I don't know what you mean by "the subject," so I'll try different angles.
For a biography, try Hodges' Alan Turing: The Enigma [amazon.com]. I've not read it myself, but it has been very well received.
For an intro to some of his most influential ideas, try Introduction to the Theory of Computation [amazon.com] by Sipser (the easiest book on the subject I've come across, but might be too hard anyway if you have no background in math or CS).
For his ideas on AI, see his original paper [abelard.org] from 1950, which has long since been available online.
Also, you could just do a Google search (and should! Resorting to this kind of off-topic question is usually only defensible when finding information is hard).
Re:Alan Turing? (Score:1)
Yes, AI among other things (Score:2)
Yes, he did come up with the original Turing Test, but compared to his groundwork in computing, the Turing test is like a book of science fiction: somewhat innovative, but mostly inspired by other people's (Asimov's) work.
Re:Alan Turing? (Score:1)
Re:Alan Turing? (Score:1)
http:/
Links missing in my previous post.
Preview wasn't working...
...last few lines conclude it well (Score:2, Interesting)
Poll: who might they be referring to with these "major vendors"? Sadly, I think that is very true. These "major vendors" are digging a huge hole beneath the average users by just kludging together cheap fixes when the system is fundamentally wrong. As a result, many will be in deep - unkludgeable - %&t in some 5-7 years when the system collapses.
Re:...last few lines conclude it well (Score:1)
It will be interesting to see what the most common OS will be in 30 years.
--xPhase
Re:...last few lines conclude it well (Score:1)
You mean like Y2K???? Pfft, predictions of the electronic apocalypse / armageddon always amuse me. In 5-7 years there will probably STILL be people using Windows 98SE, vendors are STILL selling it, and Msft is STILL making it available because it STILL brings in cash.
Re:...last few lines conclude it well (Score:2)
Exactly. And the most amusing part is that Windows 98 will be much more secure than the latest version. You fail to see my (very weak?) point: currently, every kludge added to Windows, for example - as an instant fix for a vulnerability - further weakens the package in the long run.
Re:...last few lines conclude it well (Score:2)
I know that the BLM (Bureau of Land Management) used it. I worked on one at college.
The "unix" compatibility mode was anything but though.. and the C compiler was written by a chimp.
Pan
Price of security (Score:3, Interesting)
Keep in mind that the price to pay for security is often not simply a higher initial purchase price.
It can be in difficulty maintaining code. If you write something in Eiffel or SML, you avoid buffer overflow attacks on the stack, but you have a much smaller pool of programmers to hire from.
It can be in performance. Java is the most popular "safe" (array and pointer deref checked) language, but you pay a severe performance hit when using Java over C/C++.
It can be in convenience. I'm used to troubleshooting my system by just booting into single-user mode. If I was really secure, I'd have the bootloader passworded and an encrypted filesystem. But that's enough of an irritation to me that it's just not worth it.
And it's very difficult to get a good, objective overview of security. Most security analysts don't really know all that much -- they're working off their own biases and feelings as well. They tend to try to sell companies on one-time-cost, backed-by-a-big-name products like firewalls or expensive IDS systems, because that's what companies want to hear. Also, *so* many products are so insecure that it's really painful and can feel futile to try to secure a system -- you might fix one problem with your device only to find that an IC manufacturer that's one of your vendors has some testing mode that breaks all your security guarantees.
We need people willing to pay the price, a wider bed of knowledgeable security consultants, software written from scratch in a safe language, with strong constraints on it, components that one can build secure products out of, and a decade to put everything into place before we can really get secure products.
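The buffer-overflow point above can be made concrete. Here is a minimal sketch in C (the function name `safe_copy` is illustrative, not from any particular codebase) contrasting the classic unchecked pattern with a bounds-checked copy of the kind that "safe" languages enforce automatically:

```c
#include <assert.h>
#include <string.h>

/* Classic vulnerable pattern: no bounds check, so attacker-controlled
 * input longer than the buffer overwrites adjacent stack memory:
 *
 *     char buf[16];
 *     strcpy(buf, attacker_input);   // smashes the stack if >= 16 bytes
 *
 * A checked copy truncates instead of overflowing: */
void safe_copy(char *dst, size_t dstlen, const char *src) {
    if (dstlen == 0)
        return;
    strncpy(dst, src, dstlen - 1);  /* copy at most dstlen-1 bytes */
    dst[dstlen - 1] = '\0';         /* always NUL-terminate */
}
```

Eiffel, SML, or Java perform the equivalent of this length check on every array access; in C the programmer must remember it at every call site, which is exactly where the bugs creep in.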
Re:...last few lines conclude it well (Score:2)
People pay lip service to security because without paying lip service they could get sued. But few agencies care. AFAIK from news stories, things like FBI counterintelligence (which is about the most likely place imaginable for a knowledgeable, determined attack) don't really have this level of security in place. Let's say 200 computers in the country are set up with that kind of top-to-bottom security; my guess is that these are the 200 which probably deserve top-to-bottom security.
Re:...last few lines conclude it well (Score:2)
> and changes it a bit to fit a new function...what you call kludge
Re:...last few lines conclude it well (Score:1)
Uhhh... Multics?! Yeah, there's a lesson there... (Score:1, Interesting)
Isn't Multics the legendary debacle that never turned into a usable product? Wasn't it a Grand Unified Solve-Everything Architecture? Wasn't it some kind of apotheosis of top-down design and other Best Practices?
Right! So, lesson number one from Multics is this: Don't do it that way. That, in fact, is the only lesson to be learned from Multics. You want more detail? Shoot the academics if they dare set foot off campus. Tucked away in their offices honing their algorithms, they do useful work -- but you can't ever, ever, ever let 'em influence the design of anything bigger than an algorithm.
Multics was the Java of operating systems.
Re:Uhhh... Multics?! Yeah, there's a lesson there. (Score:2)
Multics was the Java of operating systems.
What do you mean by this? Do you mean Multics was originally an operating system marketed for web interoperability but eventually found its foothold in the server application arena, and is now very successful both in this regard and as a web application platform? No? Didn't think so.
Re:Uhhh... Multics?! Yeah, there's a lesson there. (Score:1, Flamebait)
Look up the history of Multics. What came before Multics?
Wrong. (Score:5, Informative)
The big problem was that Multics was tied to a specific model of General Electric computer with custom security hardware. GE built some good early time-sharing systems in the 1960s, but sold off their computing business to Honeywell in the 1970s. Honeywell never marketed the Multics product line seriously, because it competed with other product lines that sold in bigger volume.
Re:Wrong. (Score:2)
If it never saw the light of day, then this statement is definitely true.
Re: Wrong. (Score:1)
--dave (DRBrown.TSDC@Hi-Multics.ARPA) c-b
Re: Wrong. (Score:2)
Re:Uhhh... Multics?! Yeah, there's a lesson there. (Score:2)
Nope. Multics begat Unix. Thompson took many of the lessons learned from Multics and used them in writing Unix.
Brian Ellenberger
Compare: B begat C. C is not B. (Score:1)
Ken Thompson stole some cool ideas from Multics, like for example command-line plumbing and a hierarchical filesystem.
Nevertheless, Unix was an entirely new and much less ambitious system. See Thompson's thoughts on the subject here [computer.org].
Cat out of the bag! (Score:2)
Uh, it appears that they're already on the web.
Re:Cat out of the bag! (Score:1)
[I'll also note the full program is up for the conference, so you can see what other papers, sessions, and tutorials we'll be having. The conference web page is www.acsac.org [acsac.org]]
Daniel
DoD use of Multics (Score:2)
Self-inserting compiler. (Score:4, Interesting)
> The original vulnerability analysis inspired the self-inserting compiler back door described by Ken Thompson in his Turing Award Lecture.
This is a nifty concept, but I remain skeptical that it would ever work in practice, at least for an open-source product such as gcc.
For starters, the compiler would have to detect that it was compiling itself. That is, if you compile your favorite flight simulator and the resulting program is a compiler rather than a flight simulator, the game is up already.
But recognizing itself isn't going to be such an easy task. For instance, if it just compressed the input and compared the result to a saved target, you could easily defeat it by something as simple as changing the names of identifiers in the source code. (If the match was done on just part of the code, you would merely need to do global string substitutions on the identifiers.)
So AFAICT, the trojan would have to identify itself by looking at the structure or semantics of the source file. But that's going to be tricky too. If I add a feature to the compiler or fix a bug in it, recompile, and discover that the feature is missing or the bug is still present, the game is up (after a bit of vigorous head-scratching). So it looks to me like the trojan must not only detect structure or semantics reliably, it must also limit detection to a very small block of code, to reduce the risk that it will be modified and break the trojan. That block of code must be specific enough that the trojan is never triggered when compiling other programs, and non-arbitrary enough that no one will re-write it in a zeal of code clean-up.
And, as others have pointed out in the past, the trojan has no way of slipping past if you use another compiler to compile it with. Even with two untrusted compilers, you can get a clean compile so long as the two compilers don't support each others' trojans.
Re:Self-inserting compiler. (Score:1)
Re:Self-inserting compiler. (Score:1, Interesting)
Re:Self-inserting compiler. (Score:1)
Re:Self-inserting compiler. (Score:2)
The trick is to recognize that you are compiling the compiler and insert the *inserting* code. You can also infect other programs (and copy this infecting code when compiling the compiler) but that is trivial once you figure out this hard part.
I agree with the original poster that this is interesting in theory but impossible in practice. I would expect it to quickly fail in that either the inserted code becomes ineffective, or the resulting compiler does not work or crashes.
Re:Self-inserting compiler. (Score:3, Insightful)
Re: Self-inserting compiler. (Score:2)
> Skepticism is irrelevant... it *did* work. I believe it simply checked the name of the output file... if it was creating an output target called 'gcc' (or the equivalent, whatever it had back then) then it compiled in the hack. I do not know how robust it was, but from what I remember reading, it worked, so it was obviously robust enough.
There's a big difference between working for a demo and working in the face of active countermeasures by a well-informed security administrator. In the example you cite, it would be sufficient for the SA to rename the source file before compiling it.
Also, notice that I'm not offering "skepticism" as "proof" of anything. I am skeptical that it would work in practice for very long against even fairly trivial countermeasures, but you'll notice that part of my post was an analysis of what it would take to make it work in the face of countermeasures on the user's part.
It looks to me like the basic requirement is "a very stable block of code that is only used in this compiler". And the result is probabilistic, since we all know what "very stable block of code" means in practice. But presumably you could hide it fairly well if you could limit both detection and trojanized output to a single function, so that arbitrary changes elsewhere in the code would neither give a false trigger of the trojan nor disable it during ordinary maintenance. But even then it could be detected, e.g. if I decided to hack gcc to write a Pascal compiler and got strangeness in the output.
What would be fun, as well as educational, would be to get volunteers to form a Red team and a Blue team, to see whether the Red team could design such a compiler that the Blue team couldn't trip up, and vice versa. I would probably bet on Blue, though that might depend on the details of the rules of the game.
Re: Self-inserting compiler. (Score:2)
The answer is that you would not.
Even technically savvy professionals can get taken in by a suitably complicated trojan. The saving grace usually is the fact that in this Internet age, we have instant communication disseminating known viruses, trojans, and hacks.
But even a savvy individual who keeps up on the lists could be the first dupe. Targeted hacks aimed at a single company are even more difficult, since no mailing list will tell you about a hack that is going to be used nowhere else but on your network. A network administrator cannot scan the source of everything that gets inside their network... assuming the source is available, and assuming it is not obfuscated.
Would you refuse using a well known and useful tool just because you couldn't understand what one function did? What if that tool only activated and became a trojan if it detected YOUR domain on the local computer's reverse resolve?
There is no security.
Re: Self-inserting compiler. (Score:2)
Sure there is. You do things so that the attacker has to pass everybody's scrutiny to get to you at the same time you compartmentalize everything so as to minimize potential damage.
One cheap shot is to just download from a random mirror. From a different system.
Another cheap shot is to NEVER blindly apply ANY purported patches. One "uncrackable" IBM system was cracked by leaving behind an official-looking IBM patch tape.
Yet another cheap shot is to never have just one vendor.
A well-known and useful tool is extremely unlikely to be targeted at you. Something claiming to be that tool is much more likely, particularly if you can be targeted to receive the "special package".
Re: Self-inserting compiler. (Score:1)
The story I was told was that not only did it work while he was testing it, it tripped up people. He wrote this "bug" into the code, tested it, and then removed this version of the compiler.
Some time later, one of his co-workers came up to him and said that they had noticed something strange with one of the programs they were building, and could he help them figure out what was wrong.
He was working on this on one system, and somebody else must have been logged on and compiling things. Somehow the modified compiler was sitting on a system somewhere, and people were using it to write code. This was unintentional, but how sure are you that something similar hasn't happened to someone at RedHat, Mandrake, Microsoft, Apple, Sun, IBM, HP etc?
Before you say this can't be done: this guy writes compilers by himself, which puts him in RMS's league. The person who told me this story (and he may have been embellishing it) worked closely with Ken Thompson, and would have had ample opportunity to hear the story first hand.
Re:Self-inserting compiler. (Score:1)
If I had to think of one task that a compiler would be good at doing in a small block of code (by reusing functions already part of the compiler's own source), recognizing its own source would probably be it.
I mean, if you were trying to build this same kind of back door into a video game, you'd have to write a language and semantics analyzer to recognize all the semantics and language structure. But hey, for this compiler backdoor project, we've already got a handy engine for just that task.
Re:Self-inserting compiler. (Score:2)
I thought that he had made it so that if the compiler detected a certain special string in the source code that it inserted the trojan-horse code into the compiler-input stream. Not only is this extremely easy to implement, but it inserts the code at the right point in the compiler's source code even if other parts of the compiler are modified. And it should be pretty easy to make the trigger string unique and innocuous-looking.
So AFAICT, the trojan would have to identify itself by looking at the structure or semantics of the source file.
I believe that making a computer understand the semantics of a source file would first require solving the Halting Problem.
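The trigger-string idea described above can be sketched as a toy. This is a simulation only, not Thompson's actual code: the "compiler" is a pass-through, and the marker and payload strings are made up for illustration.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy model of the trigger-string mechanism: the "compiler" copies its
 * input unchanged, except that when the source contains an
 * innocuous-looking marker it splices extra (back-door) code into the
 * input stream. No semantic analysis or Halting-Problem magic needed. */
#define TRIGGER "/* pat-v7 */"   /* hypothetical innocuous marker */
#define PAYLOAD "backdoor();"    /* hypothetical injected code    */

/* Returns a newly allocated "compiled" output; caller frees. */
char *toy_compile(const char *src) {
    char *out = malloc(strlen(src) + strlen(PAYLOAD) + 1);
    strcpy(out, src);
    if (strstr(src, TRIGGER) != NULL)  /* marker present in source?  */
        strcat(out, PAYLOAD);          /* splice in the payload      */
    return out;
}
```

Note that the check is pure string matching at compile time, so ordinary programs pass through untouched; only a source file carrying the marker gets the insertion.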
Re:Self-inserting compiler. (Score:2, Informative)
I think you are missing the point. This is more of a theoretical argument than a real threat; I am pretty sure none of the compilers I have used had this kind of bug. It is a very clever and sophisticated attack, and of course it is hard to implement. The very interesting conclusion is that open source cannot help against this kind of attack: it moves the broken link in the security chain out of the source.
It is not impossible, though. Realize that if you are actually planning to attack someone by this method, you must be a party trusted by your victim-to-be. You must be the party that provides the compiler, so you probably also have the manpower to implement this.
Your suggestion of discovering this during a patch can be bypassed very simply by putting the trojan code into the initialization or cleanup part of the code, which nobody would dare change because it would break compatibility. Also, we are probably talking about a well-established compiler that ships as the standard compiler with the OS itself. Such compilers have huge areas which are (believed to be) bug-free and so will typically never be patched.
This kind of attack can be prevented though, by monitoring. "Secrets and Lies" by Bruce Schneier deals with this issue. If you cannot stop someone at the gates, you can always catch them inside :-)
Re:Self-inserting compiler. (Score:1)
This is a nifty concept, but I remain skeptical that it would ever work in practice, at least for an open-source product such as gcc.
Before you dismiss the possibility, why not read Thompson's lecture [acm.org]?
So AFAICT, the trojan would have to identify itself by looking at the structure or semantics of the source file.
Yes, he did, in a very clever way, which he explains. And his trick recognizes both the compiler and the login command.
After describing the trick, he goes on to say, "First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere."
Whether one could get away with this in gcc (or whatever) is questionable, but at some point every compile must be done with a binary. So if the first binary on a system was trojaned, it is possible that future ones could inherit the bug.
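The two-stage step Thompson describes can be modeled in a few lines. This is a deliberately abstract simulation (the struct and flags are invented for illustration): "compiling" just propagates a bugged bit, the way the real trojan propagates itself.

```c
#include <assert.h>

/* Toy model of the fixed point Thompson describes. A compiler binary
 * is reduced to one bit: is it bugged? A bugged binary re-inserts the
 * bug whenever it recognizes that it is compiling the compiler itself,
 * even if the bug has been removed from the source. */
struct binary { int bugged; };

/* cc:              the compiler binary doing the compiling
 * src_is_compiler: does the source "look like" the compiler?
 * bug_in_source:   does the source still contain the trojan?       */
struct binary compile(struct binary cc, int src_is_compiler, int bug_in_source) {
    struct binary out;
    out.bugged = bug_in_source                    /* bug written in source */
              || (cc.bugged && src_is_compiler);  /* or re-inserted by cc  */
    return out;
}
```

The interesting property is the fixed point: once a bugged binary is installed as the official compiler, compiling perfectly clean compiler source still yields a bugged binary, forever. (The login-command half of the trick is not modeled here.)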
Re:Self-inserting compiler. (Score:2)
Not necessarily. It's possible to understand the code that goes into a compiler, so it's theoretically possible to step through a compilation manually by looking through the source of the compiler and the code it's trying to compile. Obviously your first target would be the simplest compiler you could write, which you could then use to compile something more complete like GCC.
Alternately, you could write a new compiler in a different language and compile the source using that. There's no perfect guarantee that this will protect you against a Trojan, but it can get the risk low enough to be acceptable to anyone but the most diehard paranoid. After all, what's the chance that somebody's Trojaned Java (or Perl, or Visual Basic, or Pascal, or Emacs Lisp) so that it will create a Trojan when it's used to compile gcc using your home-built C compiler?
Not that hard (Score:2)
The technique works fine and is not difficult to do -- as long as it isn't GCC. And the rest of your post assumes that GCC is in use. How many other compilers are out there which have source available (you mention changing the contents of the source code) and are designed to be built by multiple arbitrary compilers (the GCC bootstrap mechanism)?
Of those, how many are meant to be modified by the user? (Where "meant" == "it's possible".)
So, reminding the rest of /. readers that most compilers in use are not gcc (unfortunately), we proceed:
A compiler that detects special semantics would be easy to hide, because you don't know what the semantics are; only I do. My compiler can perform some completely useless action with no side effects (i.e., you can't detect it), and while compiling itself would (a) realize that its own code is being compiled, and (b) throw out the useless actions, like any other optimizing compiler would.
Remember that the special thing being detected doesn't have to be detectable at runtime, only at compile time. That opens the doors much wider.
So, use GCC and prevent this trojan. :-) Otherwise, yes, it's very possible.
Re:So wrong (Score:2)
When will /. get it through its skull that you don't have to use GCC to build GCC? And in fact, that most people don't use GCC to build GCC?
Re:Self-inserting compiler. (Score:2)
For a compiler? Really? Get real...
That block of code must be specific enough that the trojan is never triggered when compiling other programs, and non-arbitrary enough that no one will re-write it in a zeal of code clean-up.
You don't think that the GCC source base has enough such blocks that are relatively static as to uniquely identify it as GCC source? Especially when the identifier is a compiler itself? Many ANSI C features haven't changed and for the most part won't.
How much do you know about programming to begin with? Did you read the article? Did you understand it? Who the fsck moderated you up?
Re:Self-inserting compiler. (Score:2)
_If_ you thought there was a Trojan in there to defeat. Until Ken's lecture nobody had thought of it (or if they had, they kept quiet).
It wouldn't work _now_. Back when there was only one Unix, which had only one C compiler, used to compile the single login source, it could have worked. And apparently did.
Back doors (Score:3, Interesting)
At the bottom, ESR claims that he "has heard two separate reports that suggest that the crocked login did make it out of Bell Labs, notably to BBN, and that it enabled at least one late-night login across the network by someone using the login name `kt'."
30 years? (Score:1)
The lecture was given in 1983...
Or am I missing something?
Re:30 years? (Score:1)
Stacks (Score:3, Insightful)
How hard would it be to integrate this into GNU C and Linux? As I understand it, growing the heap from the bottom and growing the stack from the top, with the yet-unused space in the middle, is just a matter of convention. How much trouble would it be to reverse the two so that the heap grows from the top and the stack from the bottom?
Seems like it ought to be a simple patch to the most vexing class of security problems we all experience.
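You can probe which way your stack currently grows with a small C snippet. Caveat: comparing addresses of distinct objects is not defined by ISO C, so this is a platform idiom, not portable code; it works in practice on mainstream compilers and ABIs.

```c
#include <assert.h>
#include <stdint.h>

/* Compare the address of a local in a callee with one in its caller.
 * Returns -1 if the stack grows downward (the usual case on x86 and
 * ARM), +1 if it grows upward. The volatiles discourage the compiler
 * from optimizing the locals away. */
static int probe(volatile char *caller_local) {
    volatile char callee_local = 0;
    (void)callee_local;
    return ((uintptr_t)&callee_local < (uintptr_t)caller_local) ? -1 : 1;
}

int stack_direction(void) {
    volatile char caller_local = 0;
    return probe(&caller_local);
}
```

As the follow-ups below note, even if the kernel and compiler were changed to flip this direction, code that silently assumes downward growth would break.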
Re:Stacks (Score:2)
-dB
Re:Stacks (Score:2)
It can't be that easy, can it? (It wouldn't stop all possible stack smashing attacks, but many.)
Re:Stacks (Score:1)
There were several problems in making this work: (1) you need hardware support, (2) you end up fighting code that assumes stack growth is downwards.
WRT the first, on the PE hardware, the stack *had* to grow up, because the memory management hardware wouldn't give a proper interrupt if you ran off the bottom of a page in the middle of an instruction, but would if you ran off the top of the page. So there was no choice.
WRT the second, there was (and probably still is!) a lot of code in the UNIX utilities that assumed the order variables were put on the stack, and used that in figuring out when the end of an array had been reached. So even if it's trivial to add positive stacks to the Linux kernel, I'd bet you'll break an awful lot of applications.
And that's assuming you go to the trouble to modify the compiler to generate upward growing stacks, and recompile everything to get the new prologues/epilogues.
Ob-trivia: PE's computers were originally Interdata, which was bought by PE, which then spun them out as Concurrent Computer. Concurrent has gone through several iterations, and I'm quite certain the old systems are no longer sold.
Programming in PL/I for Better Security (Score:2, Funny)
Re:Programming in PL/I for Better Security (Score:2)
You had to declare all your variables in advance, and if the statement that got truncated happened to be a declaration, it would typically be followed by dozens of pages of error messages about using undeclared variables.
I did get my 15 minutes of fame when I found an error in the optimizer that caused it to emit illegal op codes...
A1 (Score:2)
They referenced GEMSOS and a VMS variant as having reached that point. I thought it was only theoretical. Doesn't A1 cert require mathematical proof of security (as well as the usual auditing)?
Re:A1 (Score:1)
The BLACKER system also made A1, but wasn't a commercial product. The VMS variant was designed for A1, but I'm not sure if it ever completed evaluation (I don't see it on the EPL).
Yes, a mathematical proof of security was required, at least for the mandatory access control policy (folks typically didn't model DAC).
Daniel
Re:A1 (Score:2, Interesting)
To quote from http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html: "The distinguishing feature of systems in this class is the analysis derived from formal design specification and verification techniques and the resulting high degree of assurance that the TCB is correctly implemented. This assurance is developmental in nature, starting with a formal model of the security policy and a formal top-level specification (FTLS) of the design. In keeping with the extensive design and development analysis of the TCB required of systems in class (A1), more stringent configuration management is required and procedures are established for securely distributing the system to sites. A system security administrator is supported."
There haven't been many A1 systems: GEMSOS, SCOMP, and Boeing's MLS LAN are the only ones that completed evaluation (see http://www.radium.ncsc.mil/tpep/epl/historical.html for the full list of every product). The VAX project referred to in the paper never got the A1 approval, although it was clearly designed to meet the requirements.
Security tools (Score:2)
And no flame wars on language choice, please.
Re:Security tools (Score:1)
that's just the tip of the iceberg (Score:1)
How many people contribute Open Source or Free Software to the world? How many opportunities have been created by the existence of successful, minimally reviewed GNU/Linux distributions for professional cracker community members to quietly introduce bugs? How many sites now run Red Hat, for example? How many of those installations are mission or security critical?
The security problems pointed at (ever so esoterically) in the cited articles are vast, serious, real, and pressing. They are made worse by vendors who pursue dubious features and broken, overly complex architectures rather than remembering to KISS and empower customers to differentiate their installations and (especially in a disaster) manage their own source.
The good news: using plenty of available technologies and techniques to reduce the RISKS here can, as a side effect, radically improve the overall quality of the Free/Open Source software being vended these days. We need fewer, but much better features, managed by much better software engineering practices.
That (in my opinion), rather than panicking over security issues, is where our various corporate friends ought to focus in response to this and similar articles.
-t
Security bugs should be openly published (Score:1)
Check out KeyKOS and EROS (Score:2)
Unfortunately, in the 80s people were so infatuated with micros that secure timesharing wasn't a big market, and today people have been living with insecure systems so long they have stopped caring.
Facts, anyone? (Score:3, Informative)
Stratus VOS is still around (Score:2)
Its Multics traits include PL/I as the systems programming language (actually a subset), transparent networking (attach to any device or file anywhere, rather like Apollo Domain), an ancient Emacs port (or clone?) and other useful features like a transactional file system with indexing.
Lots of these expensive machines were sold to banks and other demanding users in the late 80s, there are plenty still around and indeed they're still available [stratus.com].
The boxes were large and well engineered. Quality seemed to be taken very seriously since any crash was a major event - I only remember one OS crash bug in 6 years of working with them. Machines were connected via dialup to the service centre, and core dumps uploaded for analysis automatically (security settings permitting).
All items of hardware (disks, CPUs, memory, networking cards) were duplicated and monitored for failures - all except the real-time clock (!). Any failed device would be removed from service and could be replaced while the system was running - 'go on, pick a board to unplug' demos were always popular.
Coming back to Unix after that platform was a pretty rude shock. What horribly eccentric and buggy shell commands! Why does it keep crashing? In many ways I'd quite like to have one today; certainly something like Java would complement it quite nicely.
+ORC (Score:1)
Face it... if it was assembled, it can be disassembled. Once disassembled, the secrets implanted by a compiler or programmer are revealed. DeCSS is a good example of disassembly finding something hidden.
Worried about your login program being trojaned? Open it up in a disassembler and check near the login part of the code. Hell, open the program up in a debugger and just step through the code. That would trivially show you any "if password = backdoorpassword" statements. In many ways this is even faster than looking at the source itself. For those paranoid about the OS itself, there is even hardware you can buy that'll debug the os itself.
What really worries me isn't this kinda security issue... what worries me is not being able to legally look for these kind of security issues at all. (read: DMCA)
What's IBM's position on Palladium, I wonder? (Score:2)
Paragraph 3: "We hypothesised that professional penetrators would find that distribution of malicious software would prove to be the attack of choice."
Dropped in as a random sentence in paragraph 5, without much relation to the surrounding text: "Multics source code was readable by any user on the MIT site, much as source code for open source systems is generally available today."
From the conclusion of the new paper: "In nearly thirty years since the report, it has been demonstrated that the technology direction that was speculative at the time can actually be implemented and provides an effective solution to the problem of malicious software employed by well-motivated professionals. [...] And customers have said they have never been offered mainstream commercial products that give them such a choice, so they are left with several ineffective solutions [...]".
If you look at what is actually being demonstrated in the original paper, it is that a monitor is needed that governs every memory access, without bypasses. The new paper admits that the VAX was the first to have this, and of course all modern MMUs do exactly that.
So what's the big deal? Why was this published at this time? If you read carefully, you can correctly conclude that we don't need no steenkin' Palladium to get secure systems today; we already have all the hardware features needed. However, if you skim the new paper, you could be led to believe that Palladium is absolutely essential...
Re:What's IBM's position on Palladium, I wonder? (Score:1)
If someone wanted to relaunch the Multics gate/ring inter-domain call mechanism on X86, (say, to give a meaningful alternative to setuid) I'd recommend against using the incredible hair-ball for the purpose built into the X86 architecture. You can't 'read the source' and it would take an immense amount of testing.
Instead, I'd suggest isolating a very carefully written and reviewed microkernel to take responsibility for validating inter-protection-domain calls and copying parameters.
Re:What's IBM's position on Palladium, I wonder? (Score:2)
The only drawbacks are that you have only four rings and that it's non-portable as hell. But it's very well documented, and works, too.
A setuid program is the exact software equivalent of a call gate, BTW. You need a certain privilege level to use the call gate, and after that, the routine uses the (possibly higher) privilege level set by the call gate. Exactly the same as having a setuid program that's executable only by a certain group. It's a very common (and useful) concept in both hard- and software.
There's nothing fundamentally wrong with the concept of setuid programs on Unix; you use them to check untrusted input before submitting it to higher privileged programs that need to trust it. The problem is that a lot of setuid programs don't do their job of checking input properly, or worse, don't treat environment variables as untrusted input as well.
(Aside: I've sometimes thought that Unix's way to connect applications to devices (read/write, seek, ioctl, that's it) is still so successful because it mimics the way a CPU is connected to peripheral devices in hardware: a data bus, an address bus and a few out-of-band control lines.)
Resource for the curious (Score:2, Informative)
The reference web site for Multics history [multicians.org]
Er, uh, not exactly (Score:3, Insightful)
Big shock. AC does not read article. Weakly attempts a troll.
What? (Score:1)
2) *Nix can also be insecure, depending on the software, and more importantly the skills of the person running the show. However (next point),
3) The article is *not* an OS comparison.
4) The article is *about* security flaws.
5) Most anti-"M$" security complaints here are regarding how Outlook automatically allows binary attachments to run unauthorized (and there are no privileges to prevent anything).
6) The parent posted as an AC.
Re:How little has changed (Score:5, Interesting)
Indeed it is. However, if I recall correctly (the link is slashdotted, so I cannot check) the whole point of the paper is that this is a security "hole" (actually, it's not really a hole in itself, but more a way of ensuring that a hole is never discovered) that cannot be closed. It describes a way of inserting a trojan into a program without it being visible in the source; the bottom line being that you can never trust code you didn't write directly for the machine, from scratch, yourself. And if this sort of bug was implemented at the hardware microcode level, you could not even trust writing directly for the machine.
That summary does not do the paper justice. Read it yourself when someone has posted a mirror. It's fascinating, simple, and absolutely brilliant.
Re:How little has changed (Score:1)
Well, computers haven't changed much in 30 years, at least on the hardware level. Sure, they're faster and have more RAM and disk, but on the most basic level they still run the same: they require software to be compiled, with the needed bootstrapping code to get the program running. Software tech has grown leaps and bounds, but how the software talks to the hardware hasn't changed much.
Re:How little has changed (Score:2)
- Art of War
is still relevant thousands of years later. Some people just hit the nail on the head, that's all.
Technically, it's an Oblate Spheriod . . . (Score:1)
Pythagoras theorized in the 6th century BCE that the earth must be a sphere. However, those same "educated classes" continued to worship numerous astrological deities that today are recognized as false. But 200 years before Pythagoras, Isaiah 40:22 referred to the 'circle of the earth', using a Hebrew term 'chugh' which is defined as round or spherical object.
It is amazing that the Bible presented a scientifically sound statement, free from mythological ideas, thousands of years before the invention of UNIX . . . Quite amazing!
Oops . . . that's spelled SPHEROID! (Score:1)
Re:Greek cosmology (Score:2)
Re:I hate pdf (Score:2)