Linux Study Argues Monolithic OS Design Leads To Critical Exploits (osnews.com) 198
Long-time Slashdot reader Mike Bouma shares a paper (via OS News) making the case for "a small microkernel as the core of the trusted computing base, with OS services separated into mutually-protected components (servers) -- in contrast to 'monolithic' designs such as Linux, Windows or MacOS."
While intuitive, the benefits of the small trusted computing base have not been quantified to date. We address this by a study of critical Linux CVEs [PDF] where we examine whether they would be prevented or mitigated by a microkernel-based design. We find that almost all exploits are at least mitigated to less than critical severity, and 40% completely eliminated by an OS design based on a verified microkernel, such as seL4....
Our results provide very strong evidence that operating system structure has a strong effect on security. 96% of critical Linux exploits would not reach critical severity in a microkernel-based system, 57% would be reduced to low severity, the majority of which would be eliminated altogether if the system was based on a verified microkernel. Even without verification, a microkernel-based design alone would completely prevent 29% of exploits...
The conclusion is inevitable: From the security point of view, the monolithic OS design is flawed and a root cause of the majority of compromises. It is time for the world to move to an OS structure appropriate for 21st century security requirements.
Everything old is new again... (Score:5, Insightful)
Maybe Tanenbaum [wikipedia.org] was right. 26 years isn't that long for this debate to come back around again.
Re: (Score:2, Interesting)
He was right then. At the time CPUs were slow. Now we have pretty fast CPUs, faster than most people need. The overhead for a microkernel isn't so bad now.
Re:Everything old is new again... (Score:4, Insightful)
He was right then. At the time CPUs were slow. Now we have pretty fast CPUs, faster than most people need. The overhead for a microkernel isn't so bad now.
In a way, many people now are doing that. Running the kernel inside a virtual machine, on top of a hypervisor is likely less efficient than running against a microkernel.
Re: (Score:2, Insightful)
+1
A bunch of security flaws were published in the last year that were directly related to favoring extreme performance over security
Why I suggested Linus' profanity is from stress... (Score:2)
... resulting from monolithic design problems:
https://linux.slashdot.org/com... [slashdot.org]
https://www.mail-archive.com/f... [mail-archive.com]
https://slashdot.org/comments.... [slashdot.org]
"Some companies have long considered Smalltalk their "secret weapon" because they could upgrade their systems at least at the application level while the applications continued to run. I've been in computing so long and seen much better innovations like QNX and Smalltalk get passed by in favor of stuff like Linux and Java that I guess I don't expect good innov
Re: (Score:1)
You mean now we have fast enough CPUs with flaws (a.k.a. performance enhancements) that make reliable process isolation impossible? A microkernel won't protect you if it runs on an Intel CPU.
Re: (Score:3, Insightful)
Exactly what I was thinking. Still, Linux might never have happened if Torvalds took that advice.
"doesn't work at all" is a bug (Score:5, Insightful)
Security isn't just confidentiality. It's Confidentiality, Integrity, and Availability (CIA). If the machine isn't running, it isn't providing secure services to the users.
The micro-kernel architecture ala Tanenbaum fails the security requirement of Availability; micro-kernel systems don't provide what people need. People use Linux because the design works well for building what people need.
Re: (Score:2)
If I may add? It is also "provenance". Being able to trace where the data came from, or the software is a critical aspect of security that is too often obscured in the name of confidentiality. I've recently had just that discussion, walking through the complete lack of visibility of their work. Coupled with the complete lack of an API, it made it nearly impossible to design software to work with their systems because the code involved and its internal behavior was deliberately concealed.
Re: (Score:2)
More accurately, security in computing means systems being able to do only, and exactly, what they were intended to do. That means modular design and putting in only the exact bits you need, which can perform only the exact limited range of functions you require.
That of course is the security market; in the general consumer market, flexibility is a requirement. So: the right kind of product for the right kind of market, and the consumer market should not subsidise the security market.
Want it really secure,
Re: (Score:2)
Following this argument to its natural conclusion, only what exists today can possibly be secure because everything that does NOT exist today fails "the security requirement of Availability". Therefore we can conclude that all future systems are insecure for precisely the same reason you conclude that a "micro-kernel architecture ala Tanenbaum fails the security requirement of Availability."
Linux also failed to build "what people need" until it did. It's an ignorant argument useful for nothing beyond gettin
Exist since 1969. Fad in the early 80s, again 90s (Score:2, Informative)
Doesn't exist? Microkernels have existed since 1969. They were a fad, a buzzword, in the early 1980s, like blockchain is now. Then again in the 1990s there was a resurgence of microkernel articles in the trade mags and in academic research. Some of the largest companies tried microkernels. They found out it doesn't work. The services that run in kernel mode, within the kernel address space, on all successful kernels are there for a reason. Multiple separate kernel THREADS work, and even a monolithic kernel lik
50 years. Not time to market. Time to execute (Score:2)
Thanks for your thoughtful post.
The thing is, it's not about time to market. It's about time to execute an operation. Fifty years is enough time to bring something to market.
Memory access takes half a tick; Double Data Rate (DDR) RAM transfers data on both the rising and falling edges of the clock.
Context switching is an order of magnitude slower in theory, and two orders of magnitude slower in real life. As Meltdown and Spectre remind us, when switching context you have to flush all the caches, meaning everything is going to c
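That context-switch tax can be eyeballed with a toy benchmark: bounce a byte between two processes over a pipe and time the round trips. This is only a sketch, not a rigorous measurement; Python's multiprocessing machinery adds its own overhead on top of the raw switches.

```python
import multiprocessing as mp
import time

def echo(conn, n):
    # Child process: bounce every message straight back to the parent.
    for _ in range(n):
        conn.send_bytes(conn.recv_bytes())

def ipc_round_trip_seconds(n=2000):
    """Average seconds per 1-byte ping-pong between two processes.

    Each round trip forces at least two context switches, which is the
    kind of overhead a microkernel pays on every cross-server call.
    """
    parent, child = mp.Pipe()
    worker = mp.Process(target=echo, args=(child, n))
    worker.start()
    t0 = time.perf_counter()
    for _ in range(n):
        parent.send_bytes(b"x")
        parent.recv_bytes()
    elapsed = time.perf_counter() - t0
    worker.join()
    return elapsed / n

if __name__ == "__main__":
    print(f"{ipc_round_trip_seconds() * 1e6:.1f} microseconds per round trip")
```

On typical hardware this lands in the microseconds, against roughly a nanosecond for a cached in-process memory access, which is the gap being described.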
Re: (Score:3)
Following this argument to its natural conclusion, only what exists today can possibly be secure because everything that does NOT exist today fails "the security requirement of Availability".
Exactly. Raymorris grossly misstates the security criterion of Availability.
Availability is about outage: where the app is unavailable or malfunctions either directly due to an attack or indirectly due to features meant to thwart an attack. For example, if your account is locked as a result of 3 wrong passwords, that's a hit to availability. Which is why NIST 800-63 says to rate-limit password attempts rather than imposing lockouts.
Merely having to throw more hardware at it because of a linear change in run
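That NIST preference for throttling over hard lockouts can be sketched as a simple per-account exponential backoff. The class and parameter names below are invented for illustration, not taken from any real authentication library:

```python
import time

class AttemptThrottle:
    """Rate-limit failed password attempts with capped exponential backoff,
    instead of locking the account outright (the availability-friendly
    approach NIST SP 800-63 recommends). Sketch only."""

    def __init__(self, base_delay=1.0, cap=60.0):
        self.base_delay = base_delay
        self.cap = cap
        self.failures = {}  # user -> (consecutive failures, last attempt time)

    def wait_time(self, user, now=None):
        """Seconds the caller must still wait before the next attempt."""
        now = time.monotonic() if now is None else now
        fails, last = self.failures.get(user, (0, 0.0))
        if fails == 0:
            return 0.0
        # Delay doubles per failure but never exceeds the cap,
        # so a legitimate user is slowed down, not locked out.
        delay = min(self.base_delay * (2 ** (fails - 1)), self.cap)
        return max(0.0, (last + delay) - now)

    def record(self, user, success, now=None):
        now = time.monotonic() if now is None else now
        if success:
            self.failures.pop(user, None)
        else:
            fails, _ = self.failures.get(user, (0, 0.0))
            self.failures[user] = (fails + 1, now)
```

An attacker gets a handful of guesses per minute per account; the account itself never becomes unavailable.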
Re: (Score:2)
The strongest security argument you can make against microkernels is that it's a security failure to spend more money protecting an asset than the asset is worth.
This is especially the case if you can make the loss someone else's.
Re: (Score:2)
The micro-kernel architecture ala Tanenbaum fails the security requirement of Availability; micro-kernel systems don't provide what people need.
I would argue that with more cores available, this is no longer quite as true as it once was. Back when this debate was finalized, one core was the de facto standard for the majority of computers. Micro kernels took a HUGE hit when running on a single core. If the core operating system stayed on a single core and managed running processes on other cores, I suspect a lot of the latency can be avoided in relation to individual processes.
Sorta. How that ends up ... (Score:2)
Context switching with core pinning on a 12-core, hyperthreaded processor takes several microseconds in total: 1600ns for the actual switch, then a multiple of that for LLC, etc. Contrast in-context memory access of 1ns with a cache hit, up to 12ns for a cache miss. A micro-kernel with many cores is 100-1,000 times slower.
So then maybe you start making changes to the micro-kernel model in order to have it work well on physical processors. Maybe you ask "how CAN we use SMP to separate concerns, putting aside a
Re: (Score:2)
Hm. I am not classically trained. I have no religious attachment to THE microkernel design. I am commenting off of ideas I had back in the 90s but were impracticable due to consumer devices having a single core unless you had lots of cash.
Describing it as more of a hypervisor is along the lines of what I was thinking; although having a full independent kernel on each core is a bit much I think. Perhaps a very limited kernel, without drivers but with limited memory management capabilities is in order.
I like
Without drivers is exactly the thing (Score:2)
> Describing it as more of a hypervisor is along the lines of what I was thinking; although having a full independent kernel on each core is a bit much I think. Perhaps a very limited kernel, without drivers but with limited memory management capabilities is in order.
"Without drivers" is exactly how I use virtualization, and how I believe it's most often done at scale. The guests use virtio, not hardware drivers. The kernel is effectively divided into one thing that ONLY handles hardware, and completely
Re: (Score:2)
Very cool. It sounds like we are on the same page here.
Congrats, you found one application (Score:2, Insightful)
Congratulations on finding one oddball application for which someone decided to use a micro-kernel. Specifically where it's not a general-purpose computer, performance doesn't matter, and there's no need to run more than one application at a time.
The next time I'm trying to build a slow-as-hell access KVM on specially designed hardware, I'll consider a micro-kernel. Only, of course, if I can't use a GPL kernel because I'm trying to keep everything secret.
Re: Everything old is new again... (Score:1)
Perhaps HURD will take over & replace Linux!
More likely, some open source version of Google's Fuchsia OS will be the future.
How is Tanenbaum vs Torvalds relevant today? (Score:2)
IIRC, the debate was really over whether or not an OS should span different system processors to provide users with a similar experience/capabilities rather than just concentrate on the latest processors with a specific operating model. I don't see how it applies to the security debate of today.
I would consider the '86-'286-'386 to be different processors because of the 16 bit unprotected page ('86) versus 16 bit protected ('286) and 32 bit protected and flat ('386). The Minix micro-kernel was custom to e
Re: (Score:2)
NCSA Mosaic was first released in January 1993, 25 years and 8 months ago. Lynx only predated it by 6 months; its tenure on top was short indeed.
Home users didn't particularly discover the Internet until 1995.
Security was primitive. Even by 1996, most of DoD hadn't deployed firewalls yet and NAT's use was pretty rare.
Re: (Score:2)
> Regardless of the approach, don't forget that 25+ years ago, networking was quite primitive - most home users were using telephone line modems and businesses had closed networks (if they had them at all). Lynx was the web browser of choice (Mosaic was still a year or two away). Network/computer security was in its infancy (Sandra Bullock's "The Net" was a few years away). Email barely existed (I was working at IBM at the time and was able to get "myke@ibm.com" without anybody questioning it or there even being standards applied to email accounts).
When Linus was using Minix to create Linux, and for a short time after, the web and HTML didn't exist, let alone browsers such as lynx. If home users were using modems, it was to connect to BBSen or commercial systems like AOL and CompuServe. AOL didn't do internet email, and it was hard for any home user to get email. Just about everyone with email got it through work or college.
I worked at one of those rare businesses with email & a shared 56k internet connection in '92. I remember when Mosaic came out.
Re: Everything old is new again... (Score:3)
Tanenbaum was indeed right as far as the security is concerned.
Torvalds was indeed right as far as performance is concerned.
Maybe it's time to consider (again) L4.
Re: (Score:2)
First law of futurology: never predict what and when at the same time.
First law of making a billion dollars (or shipping a billion systems): always predict what and when at the same time.
Why Futurist Ray Kurzweil Isn't Worried About Technology Stealing Your Job [fortune.com] — 24 September 2017
Tanenbaum-Torvalds debate on monolithic vs micro (Score:3, Informative)
Tanenbaum - Torvalds debate [wikipedia.org]
The debate opened on January 29, 1992, when Tanenbaum first posted his criticism of the Linux kernel to comp.os.minix, noting how the monolithic design was detrimental to its abilities, in a post titled "LINUX is obsolete".[1] While he initially did not go into great technical detail to explain why he felt that the microkernel design was better, he did suggest that it was mostly related to portability, arguing that the Linux kernel was too closely tied to the x86 line of processors to be of any use in the future, as this architecture would be superseded by then. To put things into perspective, he mentioned how writing a monolithic kernel in 1991 was "a giant step back into the 1970s".
Since the criticism was posted in a public newsgroup, Torvalds was able to respond to it directly. He did so a day later, arguing that MINIX had inherent design flaws (naming the lack of multithreading as a specific example), while acknowledging that he found the microkernel design to be superior "from a theoretical and aesthetical" point of view.
Oh joy (Score:1)
Real-world examples (Score:5, Insightful)
Consider QNX and its vulnerabilities [cvedetails.com] (the entire software stack) and here's what we have for the Linux kernel [cvedetails.com] (again, kernel alone) whose source is ostensibly verified by millions of eyes.
And here's another almost shameful development: Linux and Open Source are all the rage amongst Open Source fans, yet for some reason it's been hinted that Google is transitioning from the monolithic Linux kernel (lacking an internal stable API/ABI) to its own microkernel, Fuchsia [wikipedia.org] (with a stable API/ABI).
Re: (Score:2, Insightful)
Utterly irrelevant to bring up a little embedded OS fit for cars and BlackBerrys and compare it to Linux. You have no point.
Re: (Score:2)
I remember running the 1.44 MB floppy QNX demo. http://toastytech.com/guis/qnxdemo.html [toastytech.com] It booted to a GUI, web browser, ppp stack and modem dialer and a few tiny utilities. QNX boasted its microkernel would stay in 486 internal cache.
Fuchsia is for computers too (Score:2)
Utterly irrelevant to bring up a little embedded OS fit for cars and BlackBerrys and compare it to Linux. You have no point.
You are misinformed. Google Fuchsia is being designed for embedded applications, mobile devices and computers.
It may very well replace also those closed modified Linux boxes used internally.
Re: (Score:2)
wasn't talking about Fuchsia but QNX, yet to be seen if Fuchsia can do anything it claims. Put your crystal ball away and we'll see someday what reality is.
Re: (Score:2)
wasn't talking about Fuchsia but QNX, yet to be seen if Fuchsia can do anything it claims. Put your crystal ball away and we'll see someday what reality is.
The reality is that the project developers are saying desktops are also a target.
Re:Real-world examples (Score:4, Informative)
Linux did NOT start out as an embedded OS.
Linux did NOT start out as real time operating system.
Huge difference between the two, not for the same purpose nor the same job.
Re: (Score:3, Insightful)
means nothing, there are reactors being "run" by logic arrays with no microprocessors at all.
get it through your head, embedded real time OS is different thing than Linux
Re: (Score:2)
the Wind River one is dead.
RTL does have some use...does anyone use RTL in real world besides for radio (not knocking that use)
Re: (Score:2)
yeah that's really cute. you can run HURD on some desktops too.
Re: (Score:2)
No, QNX is not fit for use as a general-purpose OS or server; it's a real-time OS. Corporations are not serving the internet, middleware, and databases from QNX servers.
It is not fit for the desktop either. Though yes, some here have loaded it onto a desktop, that proves exactly nothing; it's a toy. No one here is posting from a QNX desktop, and they're not using a QNX desktop at work.
Re:Real-world examples (Score:5, Insightful)
I think Google's transition from Linux to the Zircon microkernel (Fuchsia) is still quite some time in the future.
I believe that Google's reason for Fuchsia is mostly to do with Linux' security model being a bad match for Android. In Android, every app is running as its own Linux user. In Zircon, there is instead the "Job" abstraction that can contain processes, and access rights are based on capabilities [wikipedia.org] that the mainline Linux kernel does not have. ("Posix Capabilities" are not capabilities :-P)
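The distinction matters because a capability is an unforgeable reference that both names an object and carries its own rights, so authority can only be handed down and attenuated, never ambiently acquired via a user ID. A toy model, loosely in the spirit of Zircon handles (the names and rights bits here are invented for illustration, not Zircon's actual API):

```python
# Toy object-capability model. READ/WRITE/DUPLICATE are made-up rights.
READ, WRITE, DUPLICATE = 0b001, 0b010, 0b100

class Handle:
    """An unforgeable reference to an object plus a rights bitmask."""

    def __init__(self, obj, rights):
        self._obj = obj
        self.rights = rights

    def duplicate(self, rights):
        # Capability discipline: a copy may carry fewer rights, never more.
        if not self.rights & DUPLICATE:
            raise PermissionError("handle lacks DUPLICATE right")
        if rights & ~self.rights:
            raise PermissionError("cannot amplify rights")
        return Handle(self._obj, rights)

    def read(self):
        if not self.rights & READ:
            raise PermissionError("handle lacks READ right")
        return self._obj["data"]

    def write(self, value):
        if not self.rights & WRITE:
            raise PermissionError("handle lacks WRITE right")
        self._obj["data"] = value

# A parent job holds a full-rights handle and hands a child an
# attenuated, read-only copy -- no users or ambient authority involved.
obj = {"data": "secret"}
parent = Handle(obj, READ | WRITE | DUPLICATE)
child = parent.duplicate(READ)
```

The child can read but can neither write nor mint further copies, which is exactly the containment a per-user model struggles to express.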
Re: Real-world examples (Score:2)
Millions? I think the Linux kernel code is under the eyes of no more than 200 persons. And only a dozen in its full extent.
A really smart hacker could sneak a subtle vulnerability by spreading it in different code areas.
When the code counts by millions of lines, open source is not a direct implication of security.
Year of the Hurd desktop (Score:4, Insightful)
Re: (Score:2)
Actually, GNU/Hurd has seen relatively much development only in recent years and you can install and run it in command-line mode.
Userland GNU/Hurd is the same as GNU/Linux, so your statement does not really hold.
Re: (Score:2)
Stallman has completed more code than all of the idiots who complain about his "rhetoric" combined.
Unfortunately, anything can be hacked... (Score:2)
... all it takes is time and effort. The idea that computers can be "secure" while delivering the performance being demanded of them is a bit of bullshit. You can have slow and secure, or you can have blazing fast and honest. Many "security issues" are really just artifacts of hardware or software architecture.
The reality is security has to be designed from the get go from both a hardware and software standpoint, you can't just do it when fundamentally for most of x8
Re:Unfortunately, anything can be hacked... (Score:5, Informative)
Although I agree with your post's subject, I think the argument here is the degree to which things can be successfully hacked. I believe they also mischaracterize macOS... as it's a Darwin branch of BSD and much tinier in size (the kernel, not the kexts) than Linux or Windows 10/2016.
In this ideological world surmised by the cited paper's author, who I believe has an agenda of their own, any kernel with popularity is going to get bashed and hacked and crunched and messed with; this is inevitable. The author cites no evidence that a non-monolithic kernel with a comparable number of installations would be any more secure. Nada.
A nano kernel is the answer? If one is deployed, it's not very useful and has to be aided by other apps, a design forced largely by the chipset makers. If you look at motherboards 20, then 10 years ago, you'll note that the amount of discrete components is shrinking rapidly, replaced largely by SoCs.
Worse, kernel design has been somewhat forced by the whimsy of the Intel/AMD/NVidia cabals. In 2008, a decade ago, we had laptops, desktops, and servers. There were some portable devices, but diffuse and there were numerous architectural battles going on for how they would turn out.
They turned out like this: crazed IoT, myriad phones, laptops, desktops, pre-made servers, DIY architecture servers, based on Intel/AMD/NVidia, along with a minor share of IBM chips, and a superfluity of ARM versions, some of which are compatible.
If you're a developer, learning machine language is not high on your list. And so porting your valuable app to a target device is now what 1) enables that hardware architecture with functionality, while 2) common OS support provides a foundation for your app to run on. The chicken-and-egg problem is that a new family of devices needs a common substrate for apps to work. No apps, no functionality, no sales.
The argument about the # of CVEs justifying a monolithic kernel or something other than a monolithic kernel is more or less moot.
All this said, Intel and AMD and to a lesser extent ARM licensees are in deep crap because there are very serious fundamental architectural problems with their current designs. How many CVEs make up for that?
I believe the paper cited is deeply flawed.
Re: (Score:2)
I mean, anything can be hacked, sure. But some things are harder than others. It's why Windows 98 has more security issues than seLinux
Probably irrelevant (Score:5, Insightful)
These days, the largest security threat is probably web browsers: They usually have direct access to the most critical information a user has (passwords, all personal files under their user account, data from all the external services the user accesses, etc.) Under the very same OS user account, web browsers also download and run thousands of untrusted programs from random locations on the internet every day (we'll ignore the handful of hardcore geeks who run Noscript).
The boundary separating these two realms is enormous and incredibly convoluted, involves many layers of abstraction (some of which can be breached by a single misplaced bracket or quote character), and is enforced entirely by the web browser itself. It presents a massive attack surface that dwarfs even the most monolithic OS API.
Re: (Score:2)
Sadly, you're right on the money. For the vast majority of people almost all the important data they have will be backed up online somewhere and on their own computer they'll almost certainly have the password stored. If you have their Dropbox/iCloud/GDrive account you're pretty much home free. And if they don't have it stored, well install a trojan because they'll almost certainly enter the password into the web browser soon.
Re: (Score:2)
For this reason Chrome/Chromium is more or less sandboxed on every platform it runs --- and I say "more or less" because it has to be in a different way on each platform to take advantage of what each platform provides.
On some platforms such as MS Windows, the code for making it run somewhat securely can be quite convoluted.
Re: (Score:2)
It's good to know that servers, ATMs, IoT devices are all immune from security exploits because they don't have web browsers on them. Phew!
Re:Probably irrelevant (Score:4, Insightful)
the idea that a sandboxed in the browser website is a potential risk, and a compiled binary program installed with full access to the CPU/RAM/HDD and OS is somehow less of a risk
You do realize that modern web browsers compile downloaded programs to binary?
The whole problem is the nebulous nature of the browser "sandbox" you mention. New exploits for these are published on a daily basis. Even as these are fixed, all of the major browsers add new features and complexity at a breakneck pace (unlike most OSes). So sandbox exploits will keep appearing daily. Also unlike OSes, browser sandboxes are ill-defined and constantly in flux. OS security boundaries are usually clearly documented and defined.
Like I mentioned, most all of a user's important data reside within one OS account which also runs the browser, so the OS is of little help here.
(For servers as opposed to clients, replace "browser" with "web server, middleware stack and database" for a similar huge and ill-defined attack surface)
Re: (Score:2)
What are you talking about?
Any locally installed app run by a user will have 100% access to all of that user's data files, and could trivially upload them all to any website in the world. (Except maybe on Android and iOS, where it would first have to ask the user for "network access", and the user would click "yes" 99.9% of the time.)
The OS does not help protect users from locally installed programs at all.
HTML is an installed App (Score:2)
It lives in a very, very complex sandbox, with a huge attack surface. But it is just as much an installed app as anything else.
What is needed is Secure HTML. It would not support everything that flashes and spins. No JavaScript. Not much CSS. Very limited interactions with non-origin sites. Such a thing could be made reasonably secure, and still support the all-important Material Design.
But nobody cares.
As to an "App" having access to all a users files, you are thinking too much of *nix. The world ch
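The "Secure HTML" subset idea can be sketched with the standard library's HTML parser: accept only a small whitelist of tags and attributes and drop everything else, including script bodies. The tag and attribute lists below are arbitrary choices for illustration, and this is nowhere near a production-grade sanitizer:

```python
from html.parser import HTMLParser

# Hypothetical whitelist: no scripts, no event handlers, barely any CSS.
ALLOWED_TAGS = {"p", "a", "b", "i", "ul", "ol", "li", "h1", "h2", "br"}
ALLOWED_ATTRS = {"a": {"href"}}

class SecureHTML(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.suppress = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.suppress += 1      # drop the tag AND its contents
            return
        if tag not in ALLOWED_TAGS:
            return
        kept = [(k, v) for k, v in attrs
                if k in ALLOWED_ATTRS.get(tag, set())
                and not (v or "").lower().startswith("javascript:")]
        attr_str = "".join(f' {k}="{v}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.suppress = max(0, self.suppress - 1)
            return
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.suppress:
            self.out.append(data)

def sanitize(html):
    parser = SecureHTML()
    parser.feed(html)
    return "".join(parser.out)
```

Everything that flashes and spins simply disappears; the attack surface shrinks to what the whitelist admits.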
Re: (Score:2)
Exactly, lol, sorry I think you missed my meaning, I am arguing that installed programs are a security issue of a greater magnitude than the security risks posed by just visiting a website.
On a home computer, unless the installed program compromises the web browser it may well be less of a threat as these days fewer people store important financial data or passwords on their computer. And whilst a key logger might be installed, it is much easier to harvest login details or CC numbers by using the web browser and its ability to detect the fields being used, and the URLs on which this is occurring, than using a key logger.
Re: (Score:2)
the idea that a sandboxed in the browser website is a potential risk, and a compiled binary program installed with full access to the CPU/RAM/HDD and OS is somehow less of a risk or a equal risk is utter shite.
It may be a lesser risk to the operating system, but most people care about the effects of a compromise, wherever it happens, not the technical details of the compromise. A system-wide compromise has a greater potential effect, but poor web browser and web security leading to your credit card details being stolen has a sufficiently bad effect on the domestic user it is a significant threat. And the threat comes in two main broad categories - nefarious plugins, and nefarious websites (which have multiple sub
Don't let inconvenient facts get in the way (Score:1)
Re:Don't let inconvenient facts get in the way (Score:5, Interesting)
Having spent several thousand hours of my life dredging through Darwin's kernel interfaces, I can tell you the beating heart of Mach, the actual microkernel inside of MacOS, is literally dwarfed by the monstrous amounts of monolithic BSD and Mac bolt-ons.
In the end, I found the Mach aspect of Darwin served little purpose beyond making it more annoying to work in that kernel. It sure didn't slow me down in my task of modifying the kernel's page tables from user-space on an iPhone.
I love it when people who have no idea what they're talking about make such confident assertions.
Re: (Score:1)
MacOS (formerly OS X, not the weird crap that preceded it) was never a microkernel design. Nor were its Mach-based NeXTSTEP/OPENSTEP predecessors. Same goes for Tru64.
Let's get this clear: Any operating system that has filesystems or network stacks in kernel space is strictly monolithic. All this talk of 'microkernels' and 'hybrids' is tech press wank.
Unintended design consequences. (Score:2)
While monolithic kernel design results in more code being given access it does not need, it also mandates a homogeneity of code for the provided services. This helps ensure oversight of critical sections of code that could otherwise be poorly implemented, left unreviewed, or suffer high implementation fragmentation.
Microkernels are technically safer designs but the culture of code review is equally important.
Same shit, same real world problems. (Score:1)
The problem with microkernels is the same as it always was: performance, largely caused by the overheads intrinsic to NOT having access to the necessary data to perform some function and having to call out for it. Security auditing is a biggy there.
And if you think a microkernel is so wonderful, instead of doing pseudo-studies to prove they are so great, write one instead and prove it.
Re: (Score:2)
Monolithic kernels had a syscall performance advantage ... before the Meltdown patches were applied.
The security advantage of microkernels is that you could reduce the attack surface by making your Trusted Computing Base as small as possible.
The seL4 microkernel has been formally proven safe. That is possible because it is small enough, but it still took years. You can't do that with something as large as Linux.
The locus of OS research seems to have moved on towards multikernels for better performance on systems wit
Re: (Score:2)
> Today 6 core CPUs are common
No they aren't. They are readily available but they are rather unusual. They are restricted primarily to power users who are willing to pay more than average for a better product.
Most people are cheap bastards content with the cheapest available option.
That's how we got the dominance of Microsoft.
Re: (Score:2)
Today 6 core CPUs are common
No they aren't. They are readily available but they are rather unusual. They are restricted primarily to power users who are willing to pay more than average for a better product.
It's true, 6-core CPUs aren't common. On the other hand, 8-core CPUs are. AMD has been churning them out cheaply for years now. The budget-conscious user is highly likely to have an 8-core CPU as a result.
This is academic (Score:2)
This is academic until we have solid open-source operating systems based on a microkernel fit for general use - and we're almost there.
Genode (https://www.genode.org) is a really interesting project, a microkernel OS wherein the subsystems and policies are set up for resiliency and security. It actually can run on Linux, but it also works in conjunction with a variety of the L4 microkernels which go a long way towards solving the overhead problem characteristic of microkernels. It does seem to blur the li
Tanenbaum wins in the end (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
A corollary applies to monolithic applications (Score:3)
Web browsers rival operating systems in size and complexity, and are also hopelessly insecure. The main problem, shared with microkernels, is that the protection mechanisms available in common hardware don't allow efficient or convenient communication between protection domains, which are tied to address spaces. In order to cross the boundary, the address mappings must be flushed and reloaded, or at least manipulated, which are both very expensive operations. This makes any IPC very expensive, so the preferred means of communicating is by sharing memory, and for convenience and performance, nearly everything ends up in the same address space. Thus, the inevitable compromise of any part of these monolithic kernels and applications, is a compromise of the whole.
Without better hardware mechanisms for protection, ones that allow for efficient protection within the kernel and applications themselves, effective security will remain illusory. The furious and endless effort to hold the line against the flood of exploits will continue, futilely. It is an intractable problem unless we can shrink the protection domains to contain the effects of inevitable breaches. Capability-based addressing [wikipedia.org], as with CHERI [cam.ac.uk], offers one approach, and the Mill architecture [millcomputing.com] offers another (see the Memory, Security, and IPC talks specifically). Each represents a different set of trade-offs, which will limit applications. In any case, it is an area that needs work, so if there really are any nerds left on Slashdot, get to it, or at least help fund such efforts.
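The IPC-cost argument above can be made concrete with a rough measurement: passing a one-byte "message" through a kernel-mediated pipe versus touching the same byte in ordinary memory within one address space. This is a crude sketch in plain Python, not any microkernel API, and the pipe syscalls measure the user/kernel boundary cost rather than a full address-space switch, but the gap it shows is the one described above.

```python
import os
import time

def pipe_roundtrip_cost(n=20000):
    """Average cost of bouncing one byte through a kernel pipe.

    Each iteration performs two syscalls (write + read), crossing the
    user/kernel protection boundary; this stands in for the cost of
    domain-crossing IPC.
    """
    r, w = os.pipe()
    t0 = time.perf_counter()
    for _ in range(n):
        os.write(w, b"x")
        os.read(r, 1)
    elapsed = time.perf_counter() - t0
    os.close(r)
    os.close(w)
    return elapsed / n

def shared_memory_cost(n=20000):
    """Average cost of the same one-byte 'message' via plain memory:
    a store and a load in a single address space, no boundary crossing.
    """
    cell = bytearray(1)
    t0 = time.perf_counter()
    for _ in range(n):
        cell[0] = 120   # "send"
        _ = cell[0]     # "receive"
    return (time.perf_counter() - t0) / n

if __name__ == "__main__":
    print(f"pipe round trip: {pipe_roundtrip_cost() * 1e9:9.0f} ns")
    print(f"shared memory:   {shared_memory_cost() * 1e9:9.0f} ns")
```

On a typical machine the pipe path is orders of magnitude slower, which is why monolithic designs keep everything in one address space, and why microkernels such as seL4 put so much effort into IPC fast paths.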
Remember MkLinux? (Score:2)
Apple's first attempt to get an OS running on Mach was, of all things, a Linux port. [mklinux.org] I don't think the code has been updated since the 2.x kernel series, but if it were to be resurrected, then perhaps there's less of a need to "move on" than the article's author believes.
Myopic view of security (Score:2)
Will a microkernel mitigate application-level exploits? No. And really, the application level is the important level because the OS itself is a pretty useless source of user data.
Will a microkernel prevent a certain class of exploits? Probably. But if the platform is unsuitable for applications, then the question is moot.
Maybe they want to commercialize Mach 3? Mach 3 was supposed to be the best of both worlds, but apparently nobody actually believed that.
strong logic (Score:1)
So they found that a number of exploits against the Linux kernel would fail against a micro-kernel. Their conclusion: micro-kernel is safer.
Wait a moment: how do they rule out that new exploits would come up expressly targeting the micro-kernel while failing against the monolithic kernel?
Strong logic.
MacOS is microkernel (Score:2)
So STFU - Linux is the worst - get over it.
Today? (Score:1)
Minix?
https://en.m.wikipedia.org/wik... [wikipedia.org]
Compared to Windows... (Score:2)
Linux is still a microkernel.
Re: (Score:1)
The NT microkernel design came from Digital's (DEC) VMS operating system, which ran on VAX minicomputers. VAX/VMS was arguably the most secure OS of its time, and incredibly annoying to use because of that fact (everything required privileges, all files were versioned, horrible shell syntax, etc.).
Microsoft stole NT... because VMS rocked. (Score:4, Informative)
It's not quite accurate to say the "design came from Digital..." Dave Cutler, who worked on VMS V4, went to work for MS and built Windows NT (Windows/New Technology; note also WNT = VMS + 1, each letter incremented) based on the knowledge he'd acquired at Digital. Digital sued, and won.
The VAX/VMS system, later OpenVMS (because "Open" was a popular word, not because it was any more open than any other proprietary O/S, although you could get sources, originally on microfiche and later on CD) not only WAS but still IS one of the most secure systems. Banks, hospitals, medical facilities, and the government continue to use it today because of that.
You don't like the "horrible shell syntax"? No worries, Dave Kashtan from SRI/TGV/Cisco wrote Eunice, a Unix-style shell and tools, so you could have your favorite CLI environment without having to learn Digital Command Language (DCL). Dave and Ken Adelman (the guy who beat Barbra Streisand and created her eponymous "effect") used their knowledge of the VMS kernel and Eunice to write a TCP/IP networking stack that worked with the kernel at kernel speeds, beating out the inferior stacks from halfass developers like Process Software, Wollongong, and even Digital itself. (Of note: Carnegie Mellon University built an open-source stack called CMU-TEK that, once Tektronix released their claims on it, was free, you could build it yourself, and it was a great learning experience.)
The point of all this is that the VMS kernel was secure, is secure, but wasn't a microkernel at all. While the File Management System (FMS), the On-Disk Structure (ODS-2), and the Record Management System (RMS, essentially a file-based record management service) were exposed through the library of system calls, their implementations operated within the kernel.
The VAX processor in 1978 had five operating modes; putting aside PDP-11 compatibility mode, those were, in the onion-layer model, User, Supervisor, Executive, and Kernel. This was the first hardware processor to put into play the concepts we use today *EXCEPT* that it was totally enforced by hardware.
That includes an execute bit for page mapped memory. DECADES ahead of anyone else doing anything like that. /history
E
Re:Microsoft stole NT... because VMS rocked. (Score:5, Informative)
Yeah. The original NT microkernel was not VMS-derived, but was a salvage job from the design work that Cutler and team started for DEC PRISM, in their Portland R&D location.
DEC cut Cutler during the period when they re-targeted to Alpha as their RISC evolution. Gates swooped in on him to deliver the kernel of his vision for "32-bit OS/2" and break away from the control IBM held over the roadmap. IBM's OS/400 business was effectively gatekeeping a 32-bit PC OS, even as 32-bit CPUs were effectively mainstream.
Once NT was faced with competitive challenges and requirements to match earlier windows desktop use cases, the microkernel design fell rapidly to the wayside, with principal OS modules run as additional processes in Ring 0, or completely included as functions of the OS kernel. You saw this in graphics first, I think as early as 3.51, but it MIGHT have been 4.0.
Interesting diverging paths (Score:2)
Windows NT was Cutler's next time around. MacOS X (technically Mach) was Tevanian's first time.
Pretty interesting how things worked out.
Re: (Score:2)
Yeah. OSX really inherited the great bundle from NeXT, including a unified imaging model for graphics. They learned a lot, and did a great reboot, jettisoning parts that didn't work and upgrading others. Similar to NeXT hardware, where loading the whole OS and storage from a single R/W optical was ambitious, but ultimately impractical. The changes were for the good, mostly.
Re: (Score:2)
Once NT was faced with competitive challenges and requirements to match earlier windows desktop use cases, the microkernel design fell rapidly to the wayside, with principal OS modules run as additional processes in Ring 0, or completely included as functions of the OS kernel. You saw this in graphics first, I think as early as 3.51, but it MIGHT have been 4.0.
I remember developers loving that the graphics driver could die and *not* take down the system while allowing the user to restart it. But then Microsoft broke it.
Re: (Score:2)
MS were going for the workstation market. They couldn't match OpenGL performance, and HW partners bitched. MS bought SoftImage as part of this campaign to take over the segment, and also did a backroom deal for SGI to have Rick Belluzzo take the reins as CEO and introduce the Intel-based Visual Workstation line.
I still have a couple of IRIX 6.5 machines here as space heaters. Screw Windows.
Re: (Score:2)
DirectX began in the Windows 9x series. It was a way to bypass the Windows imaging and windowing models, and blit direct to hardware. Programmers at the time laughed and said, "Oh yeah, DOS!"
Re: (Score:1)
The VAX processor in 1978 had five operating modes, and putting aside PDP-11 compatibility mode, those were in the onion-layer model User, Executive, Supervisor, and Kernel. This was the first hardware processor to put into play the concepts we use today *EXCEPT* that it was totally enforced by hardware.
As with all such history, this is not correct. The VAX protection rings enforced by hardware were designed (and acknowledged by the developers) as a somewhat simplified version of those used in the amazing MULTICS system. MULTICS didn't sell as many processors as the VAX, but it was certainly "put in play" in the market.
Re: (Score:2)
So your argument against X (hardware protection) having been implemented in Y (the computers MULTICS ran on) even earlier than somewhere else is that Y is old? That's not exactly strengthening your case.
Furthermore, what Multics had to do with PDP-10 is beyond me. You sure you didn't get some crosstalk in your wiring?
Re: (Score:2)
There is a pretty good analysis at https://www.itprotoday.com/man... [itprotoday.com], consistent with my knowledge of VMS and of Microsoft's behavior at the time. Taking a team of 20 with him shows that David Cutler wanted to continue his previous work as a project for a new company. But his non-disclosure agreement with DEC, coupled with the amount of code and the functionality that was replicated whole, demonstrated theft.
Re: Broken clock still right twice a day (Score:3, Insightful)
NTOSKRNL runs its filesystems, object manager, executive, etc. in kernel space. In what way was this ever a microkernel design by any commonly accepted definition of the term?
Re: (Score:2)
> Looks like even Microsoft did something right with their micro kernel design.
Except their userland is crap. So that negates all of the benefits of the microkernel.
Re: (Score:2)
Looks like even Microsoft did something right with their micro kernel design.
Uhm...
...a monolithic system, such as Linux, Windows or MacOS...
Hell, they even moved the graphics subsystem and a web server into kernel space.
Re: (Score:2)
That is the last thing I would think of. I would first point out that Linux is emphatically not a microkernel, and Linus would never, ever concede that he was wrong in the micro- vs. macro-kernel argument, so that will not change. No, the first organisations I would think of would be GNU (HURD), FreeBSD, Apple (OS X) and others.
Re: (Score:2)
Re: (Score:2)
Maybe you were thinking of VMS?
Re: IP Camera Price In Bangladesh (Score:2)
Can you show us those cameras have no security hole punched in?
Could you please show us the code?