How Are Sysadmins Handling Spectre/Meltdown Patches? (hpe.com)

Esther Schindler (Slashdot reader #16,185) writes that the Spectre and Meltdown vulnerabilities have become "a serious distraction" for sysadmins trying to apply patches and keep up with new fixes, sharing an HPE article described as "what other sysadmins have done so far, as well as their current plans and long-term strategy, not to mention how to communicate progress to management." Everyone has applied patches. But that sounds ever so simple. Ron, an IT admin, summarizes the situation succinctly: "More like applied, applied another, removed, I think re-applied, I give up, and have no clue where I am anymore." That is, sysadmins are ready to apply patches -- when a patch exists. "I applied the patches for Meltdown but I am still waiting for Spectre patches from manufacturers," explains an IT pro named Nick... Vendors have released, pulled back, re-released, and re-pulled back patches, explains Chase, a network administrator. "Everyone is so concerned by this that they rushed code out without testing it enough, leading to what I've heard referred to as 'speculative reboots'..."

The confusion -- and rumored performance hits -- are causing some sysadmins to adopt a "watch carefully" and "wait and see" approach... "The problem is that the patches don't come at no cost in terms of performance. In fact, some patches have warnings about the potential side effects," says Sandra, who recently retired from 30 years of sysadmin work. "Projections of how badly performance will be affected range from 'You won't notice it' to 'significantly impacted.'" Plus, IT staff have to look into whether the patches themselves could break something. They're looking for vulnerabilities and running tests to evaluate how patched systems might break down or be open to other problems.
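For admins trying to turn "you won't notice it" versus "significantly impacted" into a number for their own hardware, one low-effort option is a before-and-after microbenchmark. The sketch below is a minimal illustration, not a proper benchmark: the file path and iteration count are arbitrary choices, and it only exercises syscall-heavy work, which is where the Meltdown/KPTI overhead shows up most clearly.

```python
import os
import time

def stat_loop_us(path="/etc/hostname", iterations=200_000):
    """Average microseconds per stat() call over a syscall-heavy loop.

    KPTI (the Linux Meltdown mitigation) adds cost to every kernel
    entry/exit, so a tight syscall loop exaggerates the effect. The path
    just needs to be any existing file; the count is arbitrary.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        os.stat(path)                 # one stat() syscall per iteration
    elapsed = time.perf_counter() - start
    return elapsed / iterations * 1e6

if __name__ == "__main__":
    print(f"{stat_loop_us():.2f} us per stat() call")
```

Run it on the same box before and after patching and compare. Real workloads enter the kernel far less often than a pure stat() loop, so treat the delta as an upper bound rather than a forecast.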

The article concludes that "everyone knows that Spectre and Meltdown patches are just Band-Aids," with some now looking at buying new servers. One university systems engineer says "I would be curious to see what the new performance figures for Intel vs. AMD (vs. ARM?) turn out to be."
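For anyone in the "no clue where I am anymore" camp, recent Linux kernels (4.15 and later, plus distro backports) report their own view of mitigation state under /sys/devices/system/cpu/vulnerabilities, which makes it easy to see what a box actually ended up with after the apply/remove/re-apply cycle. A minimal sketch, assuming that sysfs interface is present:

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_report():
    """Print the kernel's reported Spectre/Meltdown mitigation status.

    Each file (e.g. meltdown, spectre_v1, spectre_v2) holds a one-line
    status string such as 'Mitigation: PTI' or 'Vulnerable'. Requires a
    kernel new enough to expose this directory.
    """
    if not VULN_DIR.is_dir():
        print("vulnerabilities sysfs directory not found (kernel too old?)")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        status = entry.read_text().strip()
        print(f"{entry.name:20s} {status}")

if __name__ == "__main__":
    mitigation_report()
```

Note that this only reflects kernel-side mitigations; microcode and hypervisor patch levels still have to be tracked separately.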

Comments Filter:
  • That's how (Score:5, Insightful)

    by Artem S. Tashkinov ( 764309 ) on Sunday February 25, 2018 @04:41AM (#56183989) Homepage

    Both vulnerabilities are blown out of proportion, and you need to rush to actively fix them only when your platform runs untrusted code, which is mostly relevant for VPSes/clouds/etc.

    When you only run your own trusted code (say, a DB or an HTTP server), there's little if any need to patch urgently. Of course, this implies that your authentication process is properly secured; when it's not, an intruder might as well find other local unpatched vulnerabilities.

    • by AmiMoJo ( 196126 )

      Trust isn't binary. No code is fully trusted. There is a whole spectrum from core kernel security functions in an open source OS to random Javascript served up by ads.

      Most people run some proprietary software. Most people have not carefully security-audited all their open source software. That's why operating systems have features to isolate tasks, protect the kernel, and manage hardware access rights.

      For most people the Meltdown patch is essential. Exploits are already in the wild.

      • by Z00L00K ( 682162 )

        And it doesn't help that some of the fixes out there have created problems, been recalled, and then been replaced with modified versions.

        As a sysadmin, when uptime is important and the servers aren't in a highly exposed position, it's better to wait and see that things are stable before patching the systems in a panic.

        In addition to this, what about network equipment like routers and switches? Aren't they vulnerable as well? Maybe not to the same extent, but some are.

      • Re:That's how (Score:5, Informative)

        by tlhIngan ( 30335 ) <slashdot.worf@net> on Sunday February 25, 2018 @06:45AM (#56184141)

        Trust isn't binary. No code is fully trusted. There is a whole spectrum from core kernel security functions in an open source OS to random Javascript served up by ads.

        Most people run some proprietary software. Most people have not carefully security-audited all their open source software. That's why operating systems have features to isolate tasks, protect the kernel, and manage hardware access rights.

        For most people the Meltdown patch is essential. Exploits are already in the wild.

        No, it's overhyped. Perhaps if you're running a VM host and intermix publicly accessible services with internal services, then you will want to worry about Meltdown and Spectre potentially letting the public VM grab data from the secure VM. Of course, the other solution you can use is to separate the machines physically, so someone exploiting Meltdown on your public VM only gets access to the other public VMs.

        Here, the threat is not from the software on the VM, but from someone finding an exploit in the software and exploiting it. But there is nothing you can run that will get you access to the other private servers, especially with proper firewalling in place.

        For single-server machines, the patches aren't as useful: if you break into the server via an exploit and then get root, having patched it against Meltdown means nothing, since you can access kernel memory much more easily anyway.

        Plus, there are plenty of user-mode Meltdown mitigations out there. The whole JavaScript exploit route is now largely useless because all the major browsers have made it so "high resolution timers" aren't so high-resolution; they're around the 1 ms range, which is fine for scripts but too coarse to actually carry out a Meltdown exploit (the timing difference between a cached and an uncached access is tiny, and 1 ms resolution is not fine enough to tell them apart; a rough back-of-the-envelope version of this arithmetic appears after the comment thread).

        The goal is to recognize that the problem is localized to one machine, and it inadvertently allows processes to read memory they're not supposed to. For a VM server, this is bad, since it means one VM can read the memory of another VM. For a cloud service provider, this is disastrous, since it means an evil VM can read other customers' data.

        Within a company, it's a lot less serious if you already have proper network segregation in place, don't mix internal and externally accessible VMs on the same machine, and take other precautions. In a non-VM situation, it's a non-event: exploiting the service grants you access to the machine, and once that happens, it can be assumed you can access the entire filesystem and everything else accessible to the machine anyway.

      • You're talking about trust in code. The GP is talking about trust in logged-in users.

        There's no patch for the vulnerability you describe.

    • When you only run your own trusted code (say a DB or an HTTP server), there's little if any need to patch them urgently.

      Any server which is remotely accessible, which is all of them, could potentially be vulnerable to some kind of remote code injection flaw in one of its public-facing services. So no. Absolutely no.

  • After decades of struggling with virus scanners that insisted on slowly, laboriously scanning every .h file on every access during every compile, the insistence of sysadmins on braindead security policies has already wasted months of my life. I guess my only question is: what's different now? Is it, perhaps, that they themselves would also be bothered by it this time?

    Go do your f'ing job and install the patches from hell, I say. And if the drop in performance bothers you, maybe we can finally talk about turning down the virus scanner to a normal level of security.

    • What caused them to run the virus scanner in the first place?
    • Go do your f'ing job and install the patches from hell, I say. And if the drop in performance bothers you, maybe we can finally talk about turning down the virus scanner to a normal level of security.

      You do realize that it's the users that are always clamoring for more power? Sysadmins were happy with the 68k.

      • As much as I love the connectedness of the internet, I also kind of miss the old days of DOS gaming, Win3.11, the early days of Linux and hardware without built-in backdoors ala Intel ME.

        Maybe it's just nostalgia and rose-tinted glasses, but life was a lot less complicated then.

        • Maybe it's just nostalgia and rose-tinted glasses, but life was a lot less complicated then.

          The complications were different. You had to fiddle with Xmodem or Zmodem or UUCP, you had to know the AT command set, and you had to know arcane technical details of the PC's ISA bus just to get a sound card working.

  • I thought the sysadmin had pretty much been eliminated in favor of outsourcing IT and making the developers do it themselves. It's a prime area for cost cutting: good sysadmins aren't cheap, and you won't notice they're gone because they tend to automate their jobs.
  • by swb ( 14022 ) on Sunday February 25, 2018 @08:38AM (#56184441)

    I guess what I'm referring to is digging into every single patch to try to figure out what the fuck it actually patches. And if you *do* get some kind of detail on what a specific patch actually fixes, is the information meaningful enough to decide whether you *should* apply this specific patch (relevance, risk, etc)?

    Is it easier or harder now with so many vendors releasing "rollup" patches which contain multiple patches, some of which are all-inclusive and some of which require some previous rollup installed? Now picking and choosing specific patches is more or less out the door.

    And then there's the question of whether the vendor even gives you any control over patches, or just automatically pushes patch(es) at you in some form or other. And of course let's not forget support -- will the vendor provide any support if you are missing patches, or do you have to have them all installed anyway?

    I guess what I see this boiling down to is "Who cares?" Install all the latest available patches and hope for the best. Only a full-time dedicated patch admin for a narrow product silo has the time/energy/understanding to break down the compound patching environment into something coherent, and is probably also the only one to have a patch management system that gives them granular control over which patches get installed and which don't.

    Also, based on the last few years of software quality, we're all beta testers anyway. Pretty much everything released is beta quality and only hits true stability and reliability right around the time the new version ships, having finally tamed its worst initial bugs.

  • Panic, patch, patch, panic, remove patches, reapply patches, panic, remove patches, deal with screaming users, patch, curse Intel....
  • Our company needed a new web server in any case, so I figured it was time to take Ryzen for a spin. Very happy with it!

  • As some have undoubtedly pointed out, the true vulnerability of your system depends on exposure and the type of code run on the system, but the idea of patching only a certain segment is less than appetizing. Any business of any size has a test/QA tier, maybe a dev tier, and finally a production line. Obviously you patch one set first, test/QA or dev, let your developers abuse the hell out of it, and then roll it out to production. The company I currently work for actually has QA engineers who follo
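As a back-of-the-envelope check on the timer-resolution argument made in the thread above: the latency figures below are rough, order-of-magnitude assumptions, not measurements of any particular CPU, and the point is only the ratio between them.

```python
# Rough numbers only; actual latencies vary widely by CPU and memory system.
CACHE_HIT_NS = 1              # assumed L1 cache hit, ~1 ns
CACHE_MISS_NS = 100           # assumed main-memory access, ~100 ns
COARSE_TIMER_NS = 1_000_000   # ~1 ms browser timer after the mitigations

signal_ns = CACHE_MISS_NS - CACHE_HIT_NS
ratio = COARSE_TIMER_NS / signal_ns

print(f"hit/miss difference to detect : {signal_ns} ns")
print(f"coarsened timer granularity   : {COARSE_TIMER_NS} ns")
print(f"granularity / signal          : {ratio:,.0f}x")
```

With the measurable signal several orders of magnitude below one clock tick, a single probe tells an attacker essentially nothing, which is what the coarsened browser timers (plus added jitter) rely on. Browser vendors still treat this as a mitigation rather than a complete fix, since clever amplification of the signal remains a concern.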
