
Linux Study Argues Monolithic OS Design Leads To Critical Exploits (osnews.com)

Long-time Slashdot reader Mike Bouma shares a paper (via OS News) making the case for "a small microkernel as the core of the trusted computing base, with OS services separated into mutually-protected components (servers) -- in contrast to 'monolithic' designs such as Linux, Windows or MacOS." While intuitive, the benefits of the small trusted computing base have not been quantified to date. We address this by a study of critical Linux CVEs [PDF] where we examine whether they would be prevented or mitigated by a microkernel-based design. We find that almost all exploits are at least mitigated to less than critical severity, and 40% completely eliminated by an OS design based on a verified microkernel, such as seL4....

Our results provide very strong evidence that operating system structure has a strong effect on security. 96% of critical Linux exploits would not reach critical severity in a microkernel-based system, 57% would be reduced to low severity, the majority of which would be eliminated altogether if the system was based on a verified microkernel. Even without verification, a microkernel-based design alone would completely prevent 29% of exploits...

The conclusion is inevitable: From the security point of view, the monolithic OS design is flawed and a root cause of the majority of compromises. It is time for the world to move to an OS structure appropriate for 21st century security requirements.
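
As a rough illustration of the structure the paper advocates (this sketch is an editorial addition, not from the paper): in a microkernel system an OS service such as a driver lives in its own address space, so a bug that would be a kernel compromise in a monolithic design becomes a contained component failure. Ordinary POSIX processes stand in for proper microkernel servers here purely for demonstration.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Simulated driver bug: a wild write that would corrupt a monolithic
 * kernel, but here can only kill the isolated "driver" process. */
static void faulty_driver(void)
{
    volatile int *p = NULL;
    *p = 42;
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {              /* child acts as the isolated driver server */
        faulty_driver();
        _exit(0);
    }
    int status;
    waitpid(pid, &status, 0);    /* the "kernel" observes and survives the crash */
    if (WIFSIGNALED(status))
        printf("driver died with signal %d; rest of system still running\n",
               WTERMSIG(status));
    return 0;
}

A real microkernel such as seL4 would additionally restart the failed server and police its communication rights; the point of the sketch is only that separate hardware address spaces contain the damage.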


Comments Filter:
  • by Strider- ( 39683 ) on Saturday August 18, 2018 @03:40PM (#57150850)

    Maybe Tanenbaum [wikipedia.org] was right. 26 years isn't that long for this debate to come back around again.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      He was right then. At the time CPUs were slow. Now we have pretty fast CPUs, faster than most people need. The overhead for a microkernel isn't so bad now.

      • by Strider- ( 39683 ) on Saturday August 18, 2018 @03:48PM (#57150890)

        He was right then. At the time CPUs were slow. Now we have pretty fast CPUs, faster than most people need. The overhead for a microkernel isn't so bad now.

        In a way, many people are now doing that. Running the kernel inside a virtual machine, on top of a hypervisor, is likely less efficient than running against a microkernel.

      • by Anonymous Coward

        You mean now we have fast enough CPUs with flaws (a.k.a. performance enhancements) that make reliable process isolation impossible? A microkernel won't protect you if it runs on an Intel CPU.

    • Re: (Score:3, Insightful)

      by Serif ( 87265 )

      Exactly what I was thinking. Still, Linux might never have happened if Torvalds had taken that advice.

      • by raymorris ( 2726007 ) on Saturday August 18, 2018 @04:47PM (#57151110) Journal

        Security isn't just confidentiality. It's Confidentiality, Integrity, and Availability (CIA). If the machine isn't running, it isn't providing secure services to the users.

        The micro-kernel architecture ala Tanenbaum fails the security requirement of Availability; micro-kernel systems don't provide what people need. People use Linux because the design works well for building what people need.

        • If I may add: there is also "provenance". Being able to trace where the data, or the software, came from is a critical aspect of security that is too often obscured in the name of confidentiality. I've recently had just that discussion, walking through the complete lack of visibility into their work. Coupled with the complete lack of an API, it made it nearly impossible to design software to work with their systems, because the code involved and its internal behavior were deliberately concealed.

          • by rtb61 ( 674572 )

            More accurately, security in computing means a system being able to do exactly what it was intended to do, and nothing else. That means modular design: putting in only the exact bits you need, which can perform only the exact, limited range of functions you require.

            That, of course, is the security market; in the general consumer market, flexibility is a requirement. So: the right kind of product for the right kind of market, and the consumer market should not subsidise the security market.

            Want it really secure,

        • by dfghjk ( 711126 )

          Following this argument to its natural conclusion, only what exists today can possibly be secure because everything that does NOT exist today fails "the security requirement of Availability". Therefore we can conclude that all future systems are insecure for precisely the same reason you conclude that a "micro-kernel architecture ala Tanenbaum fails the security requirement of Availability."

          Linux also failed to build "what people need" until it did. It's an ignorant argument useful for nothing beyond gettin

          • Doesn't exist? Microkernels have existed since 1969. They were a fad, a buzzword, in the early 1980s, like blockchain is now. Then again in the 1990s there was a resurgence of microkernel articles in the trade mags, and academic research. Some of the largest companies tried microkernels. They found out it doesn't work. The services that run in kernel mode, within the kernel address space, on all successful kernels are there for a reason. Multiple separate kernel THREADS work, and even a monolithic kernel lik

          • Following this argument to its natural conclusion, only what exists today can possibly be secure because everything that does NOT exist today fails "the security requirement of Availability".

            Exactly. Raymorris grossly misstates the security criterion of Availability.

            Availability is about outage: where the app is unavailable or malfunctions either directly due to an attack or indirectly due to features meant to thwart an attack. For example, if your account is locked as a result of 3 wrong passwords, that's a hit to availability. Which is why NIST 800-63 says to rate-limit password attempts rather than imposing lockouts.

            Merely having to throw more hardware at it because of a linear change in run
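            As a hedged sketch of that rate-limiting guidance (the names and numbers below are illustrative, not taken from NIST 800-63): failed attempts are throttled within a rolling window rather than triggering a lockout, so the control can never be turned into a denial-of-service lever.

            #include <stdbool.h>
            #include <stdio.h>
            #include <time.h>

            enum { WINDOW_SECS = 60, MAX_FAILS = 5 };

            struct throttle { time_t window_start; int fails; };

            /* true if another password attempt may be checked right now */
            static bool attempt_allowed(struct throttle *t)
            {
                time_t now = time(NULL);
                if (now - t->window_start >= WINDOW_SECS) {
                    t->window_start = now;   /* window rolled over: reset, never lock */
                    t->fails = 0;
                }
                return t->fails < MAX_FAILS;
            }

            int main(void)
            {
                struct throttle t = { time(NULL), 0 };
                for (int i = 1; i <= 8; i++) {       /* simulate 8 bad passwords */
                    if (attempt_allowed(&t)) {
                        t.fails++;                   /* record the failure */
                        printf("attempt %d: checked (and failed)\n", i);
                    } else {
                        printf("attempt %d: throttled, retry later\n", i);
                    }
                }
                return 0;
            }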

            • by Agripa ( 139780 )

              The strongest security argument you can make against microkernels is that it's a security failure to spend more money protecting an asset than the asset is worth.

              This is especially the case if you can make the loss someone else's.

        • The micro-kernel architecture ala Tanenbaum fails the security requirement of Availability; micro-kernel systems don't provide what people need.

          I would argue that with more cores available, this is no longer quite as true as it once was. Back when this debate was finalized, one core was the de facto standard for the majority of computers. Micro kernels took a HUGE hit when running on a single core. If the core operating system stayed on a single core and managed running processes on other cores, I suspect a lot of the latency can be avoided in relation to individual processes.

          • Context switching with core pinning on a 12-core, hyperthreaded processor takes several microseconds in total: 1600ns for the actual switch, then a multiple of that for LLC effects, etc. Contrast in-context memory access of 1ns with a cache hit, up to 12ns for a cache miss. A micro-kernel with many cores is 100-1,000 times slower.

            So then maybe you start making changes to the micro-kernel model in order to have it work well on physical processors. Maybe you ask "how CAN we use SMP to separate concerns, putting aside a
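            For a feel for the numbers, here is a minimal micro-benchmark (an editorial sketch, assuming Linux/POSIX; pipes stand in for real microkernel IPC, which is usually faster): it bounces one byte between two processes and reports the round-trip cost, which dwarfs the 1-12ns memory accesses cited above.

            #include <stdint.h>
            #include <stdio.h>
            #include <sys/wait.h>
            #include <time.h>
            #include <unistd.h>

            #define ROUNDS 100000

            int main(void)
            {
                int a[2], b[2];              /* a: parent->child, b: child->parent */
                char c = 'x';
                pipe(a);
                pipe(b);
                if (fork() == 0) {           /* echo server in its own address space */
                    while (read(a[0], &c, 1) == 1)
                        write(b[1], &c, 1);
                    _exit(0);
                }
                struct timespec t0, t1;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int i = 0; i < ROUNDS; i++) {   /* one IPC round trip per loop */
                    write(a[1], &c, 1);
                    read(b[0], &c, 1);
                }
                clock_gettime(CLOCK_MONOTONIC, &t1);
                int64_t ns = (int64_t)(t1.tv_sec - t0.tv_sec) * 1000000000
                           + (t1.tv_nsec - t0.tv_nsec);
                printf("%.0f ns per cross-process round trip\n", (double)ns / ROUNDS);
                close(a[1]);                 /* EOF lets the child exit */
                wait(NULL);
                return 0;
            }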

            • Hm. I am not classically trained. I have no religious attachment to THE microkernel design. I am commenting off of ideas I had back in the 90s but were impracticable due to consumer devices having a single core unless you had lots of cash.

              Describing it as more of a hypervisor is along the lines of what I was thinking; although having a full independent kernel on each core is a bit much I think. Perhaps a very limited kernel, without drivers but with limited memory management capabilities is in order.

              I like

              • > Describing it as more of a hypervisor is along the lines of what I was thinking; although having a full independent kernel on each core is a bit much I think. Perhaps a very limited kernel, without drivers but with limited memory management capabilities is in order.

                "Without drivers" is exactly how I use virtualization, and how I believe it's most often done at scale. The guests use virtio, not hardware drivers. The kernel is effectively divided into one thing that ONLY handles hardware, and completely

    • by Anonymous Coward

      Perhaps HURD will take over & replace Linux!

      More likely, some open source version of Google's Fuchsia OS will be the future.

    • IIRC, the debate was really over whether or not an OS should span different system processors to provide users with a similar experience/capabilities rather than just concentrate on the latest processors with a specific operating model. I don't see how it applies to the security debate of today.

      I would consider the '86-'286-'386 to be different processors because of the 16-bit unprotected mode ('86) versus 16-bit protected ('286) and 32-bit protected and flat ('386). The Minix micro-kernel was custom to e

      • NCSA Mosaic was first released in January 1993, 25 years and 8 months ago. Lynx only predated it by 6 months; its tenure on top was short indeed.

        Home users didn't particularly discover the Internet until 1995.

        Security was primitive. Even by 1996, most of DoD hadn't deployed firewalls yet and NAT's use was pretty rare.

      • by tbuskey ( 135499 )

        > Regardless of the approach, don't forget that 25+ years ago, networking was quite primitive - most home users were using telephone line modems and businesses had closed networks (if they had them at all). Lynx was the web browser of choice (Mosaic was still a year or two away). Network/computer security was in its infancy (Sandra Bullock's "The Net" was a few years away). Email barely existed (I was working at IBM at the time and was able to get "myke@ibm.com" without anybody questioning it or there even being standards applied to email accounts).

        When Linus was using Minix to create Linux, and for a short time after, the web and HTML didn't exist, let alone browsers such as Lynx. If home users were using modems, it was to connect to BBSen or commercial systems like AOL and CompuServe. AOL didn't do internet email, and it was hard for any home user to get email. Just about everyone with email got it through work or college.

        I worked at one of those rare businesses with email & a shared 56k internet connection in '92. I remember when Mosaic came out.

    • Tanenbaum was indeed right as far as security is concerned.
      Torvalds was indeed right as far as performance is concerned.
      Maybe it's time to consider (again) L4.

    • by epine ( 68316 )

      Maybe Tanenbaum was right. 26 years isn't that long for this debate to come back around again.

      First law of futurology: never predict what and when at the same time.

      First law of making a billion dollars (or shipping a billion systems): always predict what and when at the same time.

      Why Futurist Ray Kurzweil Isn't Worried About Technology Stealing Your Job [fortune.com] — 24 September 2017

      Early on, I realized that timing is important to everything, from stock investing to romance—you've got to be in the right pl

  • by Anonymous Coward on Saturday August 18, 2018 @03:46PM (#57150880)

    Tanenbaum - Torvalds debate [wikipedia.org]

    The debate opened on January 29, 1992, when Tanenbaum first posted his criticism of the Linux kernel to comp.os.minix, noting how the monolithic design was detrimental to its abilities, in a post titled "LINUX is obsolete".[1] While he initially did not go into great technical detail to explain why he felt that the microkernel design was better, he did suggest that it was mostly related to portability, arguing that the Linux kernel was too closely tied to the x86 line of processors to be of any use in the future, as this architecture would be superseded by then. To put things into perspective, he mentioned how writing a monolithic kernel in 1991 was "a giant step back into the 1970s".

    Since the criticism was posted in a public newsgroup, Torvalds was able to respond to it directly. He did so a day later, arguing that MINIX has inherent design flaws (naming the lack of multithreading as a specific example), while acknowledging that he finds the microkernel design to be superior "from a theoretical and aesthetical" point of view.

  • by Anonymous Coward
    Hurry, someone finish Hurd.
  • by Artem S. Tashkinov ( 764309 ) on Saturday August 18, 2018 @03:49PM (#57150894) Homepage

    Consider QNX and its vulnerabilities [cvedetails.com] (for the entire software stack), and here's what we have for the Linux kernel [cvedetails.com] (the kernel alone), whose source is ostensibly verified by millions of eyes.

    And here's another almost shameful development: Linux and Open Source are all the rage amongst Open Source fans, yet for some reason it's been hinted that Google is transitioning from the monolithic Linux kernel (which lacks a stable internal API/ABI) to its own microkernel, Fuchsia [wikipedia.org] (with a stable API/ABI).

    • Re: (Score:2, Insightful)

      by iggymanz ( 596061 )

      Utterly irrelevant to bring up a little embedded OS fit for cars and blackberries and compare it to Linux. You have no point,

      • Blackberry 10 is also an embedded OS, or have I missed something? Also, QNX can be run perfectly well as an x86 desktop OS [qnx.com]. Yeah, a really limited, embedded-only OS.
      • Utterly irrelevant to bring up a little embedded OS fit for cars and blackberries and compare it to Linux. You have no point,

        You are misinformed. Google Fuchsia is being designed for embedded applications, mobile devices and computers.

        It may very well also replace those closed, modified Linux boxes used internally.

        • I wasn't talking about Fuchsia but QNX; it's yet to be seen whether Fuchsia can do anything it claims. Put your crystal ball away and we'll see someday what reality is.

          • I wasn't talking about Fuchsia but QNX; it's yet to be seen whether Fuchsia can do anything it claims. Put your crystal ball away and we'll see someday what reality is.

            The reality is that the project developers are saying desktops are also a target.

      • by q_e_t ( 5104099 )
        I've run it on the desktop. It was very nice.
    • by Misagon ( 1135 ) on Saturday August 18, 2018 @05:58PM (#57151400)

      I think Google's transition from Linux to the Zircon microkernel (Fuchsia) is still quite some time in the future.

      I believe that Google's reason for Fuchsia is mostly to do with Linux' security model being a bad match for Android. In Android, every app is running as its own Linux user. In Zircon, there is instead the "Job" abstraction that can contain processes, and access rights are based on capabilities [wikipedia.org] that the mainline Linux kernel does not have. ("Posix Capabilities" are not capabilities :-P)
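      A hedged aside on what "capabilities" mean in practice: the closest everyday POSIX analogue is the file descriptor, an unforgeable handle you hold rather than a right looked up from your ambient user ID. The sketch below (an editorial illustration, not Zircon code) has a broker process delegate exactly one open file to a confined child over SCM_RIGHTS; Zircon's jobs and handles generalize this pattern.

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/uio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      /* Pass one open file descriptor across a UNIX socket. */
      static void send_fd(int sock, int fd)
      {
          char c = 0;
          struct iovec iov = { .iov_base = &c, .iov_len = 1 };
          char buf[CMSG_SPACE(sizeof fd)];
          struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                .msg_control = buf, .msg_controllen = sizeof buf };
          struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
          cm->cmsg_level = SOL_SOCKET;
          cm->cmsg_type = SCM_RIGHTS;         /* kernel duplicates the descriptor */
          cm->cmsg_len = CMSG_LEN(sizeof fd);
          memcpy(CMSG_DATA(cm), &fd, sizeof fd);
          sendmsg(sock, &msg, 0);
      }

      static int recv_fd(int sock)
      {
          char c;
          int fd = -1;
          struct iovec iov = { .iov_base = &c, .iov_len = 1 };
          char buf[CMSG_SPACE(sizeof fd)];
          struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                .msg_control = buf, .msg_controllen = sizeof buf };
          if (recvmsg(sock, &msg, 0) > 0)
              memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof fd);
          return fd;
      }

      int main(void)
      {
          int sv[2];
          socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
          if (fork() == 0) {                  /* the confined "app" */
              int fd = recv_fd(sv[1]);        /* may use only what it was handed */
              char line[64];
              ssize_t n = read(fd, line, sizeof line - 1);
              if (n > 0) { line[n] = 0; printf("app read: %s", line); }
              _exit(0);
          }
          /* The broker holds the authority and delegates one capability.
           * /etc/hostname is just an example; any readable file works. */
          int fd = open("/etc/hostname", O_RDONLY);
          send_fd(sv[0], fd);
          wait(NULL);
          return 0;
      }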

    • Millions? I think the Linux kernel code is under the eyes of no more than 200 people, and only a dozen see it in its full extent.

      A really smart hacker could sneak in a subtle vulnerability by spreading it across different code areas.

      When the code runs to millions of lines, open source does not directly imply security.

  • by jfdavis668 ( 1414919 ) on Saturday August 18, 2018 @03:53PM (#57150922)
    After 28 years, you'd think someone would have finished the GNU Hurd kernel. By now, it is so old that it is probably full of potential exploits, too.
    • by Misagon ( 1135 )

      Actually, GNU/Hurd has seen quite a lot of development in recent years, and you can install it and run it in command-line mode.

      The GNU/Hurd userland is the same as GNU/Linux's, so your statement does not really hold.

  • ... all it takes is time and effort. The idea that computers can be "secure" while delivering the performance being demanded of them is a bit of bullshit. You can have slow and secure, or you can have blazing fast. Many "security issues" are really just artifacts of hardware or software architecture.

    The reality is that security has to be designed in from the get-go, from both a hardware and a software standpoint; you can't just do it when, fundamentally, for most of x8

    • by postbigbang ( 761081 ) on Saturday August 18, 2018 @04:45PM (#57151106)

      Although I agree with your post's subject, I think the argument here is about the degree to which things can be successfully hacked. I believe they also mischaracterize macOS, as it's a Darwin branch of BSD and much smaller (the kernel, not the kexts) than Linux or Windows 10/2016.

      In this ideological world surmised by someone who, I believe, has an agenda of their own (the cited paper), any kernel with popularity is going to get bashed and hacked and crunched and messed with; this is inevitable. The author cites no evidence that a non-monolithic kernel with a comparable number of installations would be any more secure. Nada.

      A nano kernel is the answer? If one is deployed, it's not very useful on its own and has to be aided by other apps, a design forced largely by the chipset makers. If you look at motherboards from 20 and then 10 years ago, you'll note that the number of discrete components is shrinking rapidly, replaced largely by SoCs.

      Worse, kernel design has been somewhat forced by the whimsy of the Intel/AMD/NVidia cabals. In 2008, a decade ago, we had laptops, desktops, and servers. There were some portable devices, but they were diffuse, and numerous architectural battles were going on over how they would turn out.

      They turned out like this: crazed IoT, myriad phones, laptops, desktops, pre-made servers, DIY architecture servers, based on Intel/AMD/NVidia, along with a minor share of IBM chips, and a superfluity of ARM versions, some of which are compatible.

      If you're a developer, learning machine language is not high on your list. And so porting your valuable app to a target device is now what 1) gives that hardware architecture its functionality, while 2) common OS support provides a foundation for your app to run on. The chicken-and-egg problem is that a new family of devices needs a common substrate for apps to work. No apps, no functionality, no sales.

      The argument about the # of CVEs justifying a monolithic kernel or something other than a monolithic kernel is more or less moot.

      All this said, Intel and AMD and to a lesser extent ARM licensees are in deep crap because there are very serious fundamental architectural problems with their current designs. How many CVEs make up for that?

      I believe the paper cited is deeply flawed.

    • I mean, anything can be hacked, sure. But some things are harder than others. It's why Windows 98 has more security issues than SELinux

  • by Waffle Iron ( 339739 ) on Saturday August 18, 2018 @04:11PM (#57150972)

    These days, the largest security threat is probably web browsers: they usually have direct access to the most critical information a user has (passwords, all personal files under their user account, data from all the external services the user accesses, etc.). Under the very same OS user account, web browsers also download and run thousands of untrusted programs from random locations on the internet every day (we'll ignore the handful of hardcore geeks who run NoScript).

    The boundary separating these two realms is enormous and incredibly convoluted, involves many layers of abstraction (some of which can be breached by a single misplaced bracket or quote character), and is enforced entirely by the web browser itself. It presents a massive attack surface that dwarfs even the most monolithic OS API.

    • by Kjella ( 173770 )

      Sadly, you're right on the money. For the vast majority of people, almost all the important data they have will be backed up online somewhere, and on their own computer they'll almost certainly have the password stored. If you have their Dropbox/iCloud/GDrive account, you're pretty much home free. And if they don't have it stored, well, install a trojan, because they'll almost certainly enter the password into the web browser soon.

    • by Misagon ( 1135 )

      For this reason Chrome/Chromium is more or less sandboxed on every platform it runs on --- and I say "more or less" because it has to be done in a different way on each platform, to take advantage of what each platform provides.
      On some platforms, such as MS Windows, the code for making it run somewhat securely can be quite convoluted.
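      For the Linux case, a hedged illustration of the primitive underneath (Chromium's real sandbox uses the far richer seccomp-BPF filter mode; this is only the minimal strict mode): after the prctl() call, the process can do little more than read, write, and exit, so a compromised renderer has very few system calls left to attack.

      #include <linux/seccomp.h>
      #include <stdio.h>
      #include <sys/prctl.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      int main(void)
      {
          printf("before lockdown: any syscall the user may make\n");
          fflush(stdout);                     /* flush while open()/brk() still work */
          prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);
          /* From here on, only read(), write(), exit() and sigreturn() are
           * allowed; anything else kills the process with SIGKILL. */
          const char msg[] = "after lockdown: write() still works\n";
          write(1, msg, sizeof msg - 1);
          syscall(SYS_exit, 0);               /* glibc _exit() would use exit_group */
          return 0;                           /* never reached */
      }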

    • by Balial ( 39889 )

      It's good to know that servers, ATMs, IoT devices are all immune from security exploits because they don't have web browsers on them. Phew!

      • by q_e_t ( 5104099 )
        ATMs don't tend to have people logging onto them directly, so the compromise will typically come from inside the bank (all bets off), except for some really specialist hacks that come to light now and then. IoT, well, still no direct user access, but their security can be cut down since they are low-power devices, and it is assumed that the lack of direct user access will save them. The risk from IoT seems to be more about what ongoing hacks can be launched from them, in terms of gross financial damage, unless you have IoT door l
  • For example:
    - MacOS has a microkernel
    - OpenBSD is a monolithic design and is probably the most secure OS
    • MacOS is based on the Mach microkernel, yet somehow its Darwin kernel is mostly monolithic. What gives? Perhaps microkernels created too many difficulties for practical use?
    • by DamnOregonian ( 963763 ) on Saturday August 18, 2018 @04:40PM (#57151090)
      Don't let your ignorance get in the way of your mouth, either.

      Having spent several thousand hours of my life dredging through Darwin's kernel interfaces, I can tell you the beating heart of Mach, the actual microkernel inside of MacOS, is literally dwarfed by the monstrous amounts of monolithic BSD and Mac bolt-ons.

      In the end, I found the Mach aspect of Darwin served little purpose beyond making it more annoying to work in that Kernel. It sure didn't slow me down in my task of modifying the Kernel's page tables from user-space on an iPhone.

      I love it when people who have no idea what they're talking about make such confident assertions.
    • by Anonymous Coward

      MacOS (formerly OS X, not the weird crap that preceded it) was never a microkernel design. Nor were its Mach-based NEXT/OPENSTEP predecessors. Same goes for Tru64.

      Let's get this clear: Any operating system that has filesystems or network stacks in kernel space is strictly monolithic. All this talk of 'microkernels' and 'hybrids' is tech press wank.

    • Hmm... cherry-pick much? Lots of critics justify calling MacOS monolithic despite the Mach kernel, but not a peep about OpenBSD, which IS monolithic and which IS the most secure OS
  • While monolithic kernel design results in more code being given access than it needs, it also mandates a homogeneity of code for the provided services. This helps ensure oversight of critical sections of code that could otherwise be poorly implemented, left unreviewed, or suffer high implementation fragmentation.

    Microkernels are technically safer designs but the culture of code review is equally important.

  • by Anonymous Coward

    The problem with microkernels is the same as it always was: performance. It is largely caused by the overheads intrinsic to NOT having access to the data necessary to perform some function and having to call out for it. Security auditing is a biggy there.

    And if you think a microkernel is so wonderful, instead of doing pseudo-studies to prove they are so great, write one and prove it.

    • by Misagon ( 1135 )

      Monolithic kernels had a syscall performance advantage ... before the Meltdown patches were applied.

      The security advantage of microkernels is that you could reduce the attack surface by making your Trusted Computing Base as small as possible.
      The seL4 microkernel has been formally proven safe. That is possible because it is small enough, but it still took years. You can't do that with something as large as Linux.

      The focus of OS research seems to have moved on towards multikernels, for better performance on systems wit

  • This is academic until we have solid open-source operating systems based on a microkernel fit for general use - and we're almost there.

    Genode (https://www.genode.org) is a really interesting project, a microkernel OS wherein the subsystems and policies are set up for resiliency and security. It actually can run on Linux, but it also works in conjunction with a variety of the L4 microkernels which go a long way towards solving the overhead problem characteristic of microkernels. It does seem to blur the li

  • by KonoWatakushi ( 910213 ) on Saturday August 18, 2018 @06:23PM (#57151504)

    Web browsers rival operating systems in size and complexity, and are also hopelessly insecure. The main problem, shared with microkernels, is that the protection mechanisms available in common hardware don't allow efficient or convenient communication between protection domains, which are tied to address spaces. In order to cross the boundary, the address mappings must be flushed and reloaded, or at least manipulated, which are both very expensive operations. This makes any IPC very expensive, so the preferred means of communicating is by sharing memory, and for convenience and performance, nearly everything ends up in the same address space. Thus, the inevitable compromise of any part of these monolithic kernels and applications is a compromise of the whole.

    Without better hardware mechanisms for protection that allow for efficient protection within the kernel and applications themselves, effective security will remain illusory. The furious and endless effort will continue in a futile attempt to hold the line against the flood of exploits. It is an intractable problem, unless we can shrink the protection domains to contain the effects of inevitable breaches. Capability-based addressing [wikipedia.org] as with CHERI [cam.ac.uk] offers one approach, and the Mill architecture [millcomputing.com] offers another. (See the Memory, Security, and IPC talks specifically.) Each represents a different set of trade-offs, which will limit applications. In any case, it is an area that needs work, so if there really are any nerds left on Slashdot, get to it, or at least help fund such efforts.
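
    To make the cost asymmetry concrete, here is a minimal sketch (an editorial addition, assuming Linux/POSIX): once a shared mapping exists, "communication" is a plain store visible to the other protection domain with no kernel crossing at all, which is exactly why everything drifts into one address space, and why a compromise then spreads.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* one page that stays visible to both processes after fork() */
        volatile int *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (fork() == 0) {
            shared[0] = 1234;     /* "send" is a single store: no kernel crossing */
            _exit(0);
        }
        wait(NULL);
        printf("received %d with zero syscalls after setup\n", shared[0]);
        return 0;
    }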

  • Apple's first attempt to get an OS running on Mach was, of all things, a Linux port. [mklinux.org] I don't think the code has been updated since the 2.x kernel series, but if it were to be resurrected, then perhaps there's less of a need to "move on" than the article's author believes.

  • Will a microkernel mitigate application-level exploits? No. And really, the application level is the important level because the OS itself is a pretty useless source of user data.

    Will a microkernel prevent a certain class of exploits? Probably. But if the platform is unsuitable for applications, then the question is moot.

    Maybe they want to commercialize Mach 3? Mach 3 was supposed to be BoBW (best of both worlds), but apparently nobody actually believed that.

  • So they found that a number of exploits against the Linux kernel would fail against a micro-kernel. Their conclusion: micro-kernel is safer.

    Wait a moment: how do they rule out that new exploits would come up expressly targeting the micro-kernel and failing against the monolithic kernel?

    Strong logic.

  • So STFU - Linux is the worst - get over it.

  • Linux is still a microkernel.

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...