Time to End Microsoft's Patch Tuesday?
buzzardsbay writes "TechTarget's resident security curmudgeon, Dennis Fisher, is calling for an end to Microsoft's monthly security patching cycle. Fisher points out that 'a hacker only needs one unpatched system, one little crack in the fence in order to launch a major attack on a given network. The sheer volume of the patches Microsoft releases each month makes it quite difficult for even the most conscientious IT department to get every patch out to all of the affected systems in a reasonable amount of time.'"
I have always wondered... (Score:5, Interesting)
Re: (Score:2)
That pissed me off a couple days ago. I had stuff downloading overnight and scheduled SageTV recordings that got interrupted. I woke up to my computer at the login screen and thought the power must have gone out. Then the friendly green shield kindly informed me that it rebooted without my permission.
That's the cue for me to disable the Automatic Updates service. The idea is good but the implementation is awful.
Re: (Score:2)
Because if it's the latter, no thanks. I'd rather download the updates so they're quick to apply, then do the actual application on my own terms.
Re:I have always wondered... (Score:5, Interesting)
The reason Windows updates require reboots is that open files cannot be replaced. So if a DLL is in use at the time of the update, it won't actually be installed until you reboot.
Unix systems, on the other hand, have decided that the name of a file (the thing the user has control over) is not what actually identifies a file; the location on disk (the inode) is the identifier. Hence Unix updates don't require reboots, and instead result in the problems you've mentioned.
I've always wondered how someone could consider the Unix design a good idea. Two different programs can open what they think is the same file, yet get completely different results. And yet some people don't seem to get why this is a really bad thing for shared libraries (or even files in general).
Re:I have always wondered... (Score:5, Informative)
I still love the ability to replace in-use libraries. The only problems that ever crop up are when you dynamically load another library, and that library disappears (Windows doesn't help here, either), or its API changes (although usually that results in a new library name, so you still get the old one). If you still have a library loaded when it gets deleted, you maintain a filehandle to it so its disk space is not reclaimed or reused. Shut down all applications still loading the old library, and then the disk space gets reclaimed.
I've updated X.org at least a couple times since the last time I restarted my X server. So I have a bunch of old libraries still sitting on my disk with no way to refer to them (well, there are ways to get them back involving funky lsof/proc tricks, but let's not go there). Nothing will overwrite them. But, when I feel I have the time, I can shut down all my X apps, restart my X server, and free up all that space. But I don't need to take down mysql, apache, or anything not X-based to do so.
I don't get how anyone could consider this a bad idea. The only times it falls over is when people don't follow convention (change your library number when changing APIs!), or in cases that Windows will fall over, too (dynamically loading libraries that don't exist anymore - although that usually doesn't crash as hopefully most people catch the error return and handle it). Otherwise, it maximises the uptime of your server, so that you only need to restart programs that actually use your library when you want to.
(PS - thanks for this thread - it answers a question my wife posed - why her windows machine rebooted overnight when she was in the middle of sorting digital photos to send to be printed, and there was no power outage.)
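The replace-while-open behaviour described in this subthread can be sketched in a few lines of Python. This is a hypothetical demo only, assuming a POSIX filesystem (the behaviour is different on Windows); the file names are made up:

```python
import os
import tempfile

# An already-open file descriptor keeps referring to the old inode even
# after a new file is renamed over the same name.
workdir = tempfile.mkdtemp()
lib = os.path.join(workdir, "libdemo.so.1")

with open(lib, "w") as f:
    f.write("old version")

reader = open(lib)            # simulates a long-running process holding the library open

# "Install the update": write the new version and atomically rename it into place.
new = lib + ".new"
with open(new, "w") as f:
    f.write("new version")
os.replace(new, lib)          # atomic rename on POSIX

old_view = reader.read()      # old inode, still alive via the open descriptor
new_view = open(lib).read()   # the name now points at the new inode
reader.close()                # last reference gone; the old inode's space is reclaimed
```

Nothing running is disturbed by the rename; only processes that reopen the name see the new version, which is exactly why the X.org example above works.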
Re: (Score:3, Informative)
In case you're interested, since starting this thread I did some googling and came up with a solution for both XP Pro and Home.
how to [ejabs.com]
registry entries [microsoft.com] (works with XP Home as well)
I guess this has been an issue for about 3 years for people, but it never bugged me bad enough to fix it until I st
Re: (Score:3, Interesting)
For example, you can create a temporary file by opening it (with create option), then deleting its name while keeping the file open.
Your file will be available as long as you don't close it, and will vanish automatically when you close the file, your program crashes, the system reboots, or whatever.
No more TEMP directory filling with crap, no need for a program that removes old tmpfiles left when a program crashes, etc.
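The open-then-unlink idiom described above can be sketched in Python (again assuming a POSIX system; the path is illustrative):

```python
import os
import tempfile

# Delete the name immediately, keep using the open file; the kernel
# reclaims the space as soon as the last descriptor is closed.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "scratch.tmp")

f = open(path, "w+")    # create the file...
os.unlink(path)         # ...and delete its name right away

f.write("intermediate results")
f.seek(0)
data = f.read()                        # still fully usable

name_gone = not os.path.exists(path)   # no name left in the directory
f.close()                              # storage released here (or on crash/reboot)
```

This is essentially what `tempfile.TemporaryFile` does for you on POSIX platforms.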
Re: (Score:3, Interesting)
This is not a "trick". A file in Unix exists independent of its name(s). Each file has 1 name when created, but you can delete the name or add more names. When the number of names drops to zero, the file is deleted as soon as every process that has it open closes it. As long as it is open, it is a fully functional file that occupies space.
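A quick illustration of names versus files, in Python (a sketch assuming a POSIX system; `name_a`/`name_b` are made-up names):

```python
import os
import tempfile

# Create a file, give it a second name with a hard link, and watch the
# link count (st_nlink) track the number of names, not the number of files.
workdir = tempfile.mkdtemp()
a = os.path.join(workdir, "name_a")
b = os.path.join(workdir, "name_b")

with open(a, "w") as f:
    f.write("same file")
os.link(a, b)                            # a second name for the same inode

nlinks = os.stat(a).st_nlink             # 2: two names, one file
same_inode = os.stat(a).st_ino == os.stat(b).st_ino

os.unlink(a)                             # delete one name; the file survives
still_readable = open(b).read()          # still reachable through the other name
```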
Re: (Score:2)
That's because most of them have discovered how to reboot their systems to ensure that there are no long-running processes using a different version of the same library.
It's good because even if you have an unkillable process zombie'ing around your system, you can still replace a file, reboot, and have things work without having to actually go to single user mode and do things manually. It'
Re: (Score:2)
I've always wondered how someone could consider the Unix design a good idea. Two different programs can open what they think is the same file, yet get completely different results. And yet some people don't seem to get why this is a really bad thing for shared libraries (or even files in general).
It's because you only ever deal with pointers to files (inodes) and the inodes are what are cached when a file is open.
I consider it a friendly feature when you're debugging vital system libraries. The system stays alive and repairable even if you have just installed a bad libc. You can always find out which instance of a library a running process is linked against anyway. No big deal.
Re: (Score:3, Interesting)
The possibility to update without rebooting is great. The problems you mention are very rare. In fact I have only seen that kind of problem once, in the 10 years I have been using Unix systems. And the case where I saw it, it was not even two programs using different versions, but rather one program being started while it was in the middle of being updated causing it to end up with different versions of the diff
Re:I have always wondered... (Score:4, Informative)
Windows Update tries to replace each file, and when it succeeds everything is fine. When it can't (the file is in use), it puts the new file on disk under a different name, adds a "rename" operation to a list, and continues with the next file. At the end, if the list is not empty, it requests a reboot. At reboot the list is processed (the new files are renamed over the old ones) and then emptied.
But merely stopping an application and closing the file that was in use will not make it rename that file and remove it from the list. You will need to reboot.
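As a rough illustration only, the replace-or-stage scheme described above can be simulated in Python. The `apply_updates`, `in_use`, and `reboot` names are invented for this sketch and are not Windows APIs (the real mechanism uses `MoveFileEx` with a delay-until-reboot flag and the `PendingFileRenameOperations` registry value):

```python
import os
import tempfile

def apply_updates(updates, in_use, pending):
    """updates: {path: new_contents}; in_use: predicate; pending: rename list."""
    for path, contents in updates.items():
        if not in_use(path):
            with open(path, "w") as f:        # direct replacement succeeds
                f.write(contents)
        else:
            staged = path + ".staged"
            with open(staged, "w") as f:      # stage under a different name
                f.write(contents)
            pending.append((staged, path))    # rename at next "reboot"
    return pending

def reboot(pending):
    # Processed at boot: rename staged files over the old ones, empty the list.
    while pending:
        staged, path = pending.pop(0)
        os.replace(staged, path)

# Demo: one file is replaceable, one is "in use".
workdir = tempfile.mkdtemp()
free = os.path.join(workdir, "free.dll")
busy = os.path.join(workdir, "busy.dll")
for p in (free, busy):
    with open(p, "w") as f:
        f.write("old")

pending = apply_updates({free: "new", busy: "new"}, lambda p: p == busy, [])
before_reboot = open(busy).read()   # still the old file until "reboot"
reboot(pending)
after_reboot = open(busy).read()
```

This also shows why merely closing the busy file doesn't help: nothing re-runs the rename list until the reboot step does.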
Re: (Score:2)
I really just want it to install everything. Only thing I don't like is the reboots. Should really be clearly visible option in the AU panel. Thanks for your help. Can't believe some of the other responses I've gotten though...
Re: (Score:2)
That's not really what I want. I don't want to be nagged. It's not really "automatic" then, right? And I don't want the nag yellow shield in the tray all the time. I just want some decent configuration options for it, like "just fscking do it and quit bothering and interrupting me". Guess if you have Pro and open gpedit.msc and hack some options you can get that, but that's no good to me. I'd rather di
Re:I have always wondered... (Score:5, Insightful)
If the updates come out on a random schedule, as done before, you cannot plan ahead for the testing required to ensure the updates don't break functionality.
That's the Problem (Score:5, Insightful)
Your comment is accurate, and gets to the heart of the problem. The current system minimizes cost, at the expense of security.
The pundit would rather companies get more staff, do rolling testing, etc., whatever it takes - to maximize security.
Now, as a non-user of Microsoft products and a victim of attacks by unpatched machines, some of them corporate, it's clear that the current strategy just shifts the costs off of the companies and onto me. If it just crashed their networks I couldn't care less. But it's more than that.
So I need to side with the proposal - the users need to improve their security. They can do this by having rolling patches from Microsoft or picking a more secure product to use. I don't care how they do it, but they need to stop expecting me to pay for their poor performance.
Unfortunately, liability is poorly defined in this realm, otherwise I could theoretically sue for damages, and their insurance company would make sure they were in good shape or charge them through the roof for being in bad shape.
Re: (Score:2)
By making patches available as soon as possible, administrators can schedule testing and patching at the most convenient time (maybe weekends are preferred for rolling out patches), and they can decide which patches to fast-track.
Liability issue (Score:2)
A business that knowingly delays deployment of a security-related patch for reasons of convenience, and then suffers loss of customer data or other harm that the patch would have prevented, should not be so certain it is invulnerable to lawsuits from injured third parties.
Re: (Score:3, Insightful)
And more importantly the current system shifts cost off of those with poor security and onto everybody else. Since there's no downside for those doing the shifting, it is a good state of affairs for them. The trouble is with all those insecure goats, the commons are becoming bare.
Re:I have always wondered... (Score:5, Insightful)
Re: (Score:2)
With a predictable schedule, you can schedule resources up front to make sure you're actually addressing things without letting the major projects drop.
Life in a large company is a different world.
Plus, once you've announced the patches, you've increased the threat exposure by several orders of ma
Re: (Score:2)
Then again, since most home users don't update their computers as it is, that's what's happening already.
Re: (Score:2)
I have seen one example of how this could work at a university where I used to be employed. They disabled automatic Windows updates by default, and they had some 3rd party software that pushed o
Re: (Score:2)
Why don't they just release patches as they make them? Is there a specific reason they hold them all until "Patch Tuesday"?
My guess is precisely to keep it a manageable, once a month job. I don't see how a patch-a-day is going to make IT's life any easier (although it would be a good excuse to hire more staff.)
Re: (Score:2)
On the other hand, if you're a Microsoft hater, you might think Microsoft is using this to hide how many vulnerabilities Windows has. If users had to reboot 7 times for this week's patches over the course of a month instead of just once a month, they
Re: (Score:2, Insightful)
This is a stupid idea though. It saves the administrators some hassle, but if Microsoft is putting out a patch for a vulnerability then don't you think that maybe, just maybe, the hackers already know about the vulnerability and are actively exploiting it? Why should I have to wait a month for a patch to a critical vulnerability just because some company's IT department only wants to work one day a mo
Re: (Score:2)
That's a nonsensical argument. You could make the same argument for any piece of software at any time, so it's a useless factor in your analysis of how critical the issues addressed by any particular patch are.
Each individual user should be dec
Re:I have always wondered... (Score:5, Insightful)
As for why they do them that way now, their large corporate customers asked them to. In large corporate settings there are often lots and lots of in-house-developed applications the company runs. Each time a new patch comes out, the IT dept must go through a lengthy (sometimes several weeks) process of testing the new patch, on test beds of the various models/configurations of computers the company uses, to make sure it doesn't break any of those apps, or any other purchased applications. They often run into many bugs/conflicts that MS doesn't in their testing.
If MS comes out with a patch, the company starts testing it out, then 3 days later MS comes out with another patch, the big corp now has multiple cycles of testing trying to go on at the same time, using up tons of IT resources, backing things up in the pipeline. If their testing cycle is 2 weeks, and MS releases 6 patches during those two weeks, the pipeline is now filled up with 12 weeks worth of throughput. Not fun.
If, on the other hand, MS releases on a regularly scheduled day each month, the company can easily run their test suite just a single time, freeing up IT resources, and also letting them plan for the patches/testing, rather than being surprised and having to pull folks off of other projects to work on testing if MS suddenly goes on a streak of releasing several patches in a row.
Re: (Score:2)
here's where your thinking diverges from reality.
Re: (Score:2)
Generally the problem is that a patch changes some behavior the software counted on. Usually this is undocumented behavior of an API (but then, most behavior of most APIs is undocumented: there are just too many corner cases). Sometimes a patch actually removes a documented feature of a
Re: (Score:2)
I've heard people say they don't update because they get sick of downloading patches, don't think the patches are that important, etc. Maybe it's because almost all the patches are for "critical vulnerabilities". It's like crying wolf after a while; the term becomes meaningless.
Re: (Score:2)
Try managing patches for a hundred thousand systems and you'll understand why less frequency is more important.
So, why not release them as they make them? Two factors
1) Your average large IT department is going to schedule the deployment,
2) Once the patch goes out, all you have to do is look at what it changed and you know there was an exploit in the previous version. As a result, looking at patches actually makes it easier to find exploits.
So combining the two, i
Volume of patches won't get better (Score:3, Interesting)
So the sheer volume of daily patches would make this better?
Now, MS should take a clue from Apple and have a lot more "rollup" packages than they currently do.
Re:Volume of patches won't get better (Score:4, Insightful)
The real question is when they are going to patch the patch system. The 100% CPU svchost bug is killing me, and KB916089 (and its predecessor) doesn't do squat.
Seriously. (Score:2)
Re: (Score:2)
Actually, SVCHOST at 100% doesn't necessarily mean it's the bug this KB article is talking about. SVCHOST, as the name suggests, is a host process for services. Any service that it hosts could cause the CPU utilization.
You need to do
Re: (Score:2)
The author of the article is an idiot, or has never administered massively patched software, if he thinks more frequent releases would make things easier.
If there is any testing, the majority of it would be redundant between patch sets, to make sure critical things weren't inadvertently broken. Say that takes 1 day per patch set: if there are 10 patch sets in a month instead of 1, you just spent 10 days.
That being said, while a release-when-done actually make an administ
I hate "rollups". (Score:2)
But a far FAR better solution is Debian's approach. I get TINY patches and ONLY for what is specifically running on my system.
It is so much simpler and easier to test with that approach.
Particularly when compared to things like WinXP's sp2 (with firewall) approach.
Re: (Score:2)
According to Microsoft's own developers, the Windows codebase is filled with circular dependencies. Think the backup department of the government department of redundancies.
Until they actually break compatibility, rewrite the codebase instead of just porting Windows XP to the Win2k3 kernel, and make the whole system modular, you're going to get massive patches.
How does this help (Score:2)
And why are you relying on MS to keep your network secure?
SUS (Score:2, Insightful)
Re: (Score:2)
Testing is a huge issue. Rolling out the patches isn't. If the testing takes 2 weeks, and MS releases a new patch every other day for 10 days, they don't want to suddenly have 10 weeks' worth of testing in the pipeline. They want to do it all at once.
Reality check:
Often hackers do come out with new novel exploits for un
The Real Reason for Patch Day (Score:4, Insightful)
Re: (Score:2, Funny)
Dennis Fisher fails to grok.
True. Patch Tuesday will arrive when waiting is filled.
Not MS' problem (Score:2)
Re: (Score:2, Insightful)
Patching is dangerous. It is not for the foolhardy or the ignorant. Your IT department is there to protect you from the "just do it" mentality. Trust them, and when they whine about problems in the process, take heed.
Our systems have been taken down twice this year due to bad patches from good old MS. Patches that we in IT were FORCED to deploy before proper testing. Guess who has control of the process in our organization now?
Re: (Score:2)
Again I say this is the fault of IT; in your case, the managers that "FORCED" you to deploy before testing. The article is mainly concerned with volume: limit the number of patches. That's great, just leave vulnerabilities unpatched.
And what's so heinous about it?
Where I come from you are held accountable for your performance. And that includes no protection by blaming the consultant, if you hire a consultant and he fails, you AND the consultant are terminated.
Patch Tuesday (Score:2, Insightful)
The Problem is Volume (Score:2)
It sounds to me like the only real solution is to make better code so that you do not have to release patches as often. It might just be an inevitability that IT must live with.
I call Bullshit on the Red Bull (Score:3, Insightful)
I call bullshit on this anecdotal bit of trivia. Is the author of the article actually suggesting that some companies rush to test the new Winblows patches all through the night on Tuesday so that the patches are ready to deploy on Wednesday? This sounds like a fresh, steaming load of bullshit. What places actually force their employees to work ridiculous hours like this just because of an arbitrary vendor schedule? I would not work at such a place, regardless of the amount of free pizza or Red Bull available.
My point is that this bit of exaggeration in the article has no basis in fact; it should be supported by quotes from someone who actually enforces this policy at their IT department.
Re:I call Bullshit on the Red Bull (Score:4, Informative)
You may be right. My previous job was with a company that did a lot of VAR stuff, including various email systems. It didn't matter to us what you wanted: Notes, Exchange, Unix, anti-virus, anti-spam; we could sell you whatever combination you wanted. I didn't work with Exchange, but the Exchange guys told me that in the past they used to rush out and patch systems with every "critical" Microsoft patch release, until they applied some patch that totally broke Exchange. The patch had nothing to do with Exchange, but it broke it, and it took hours to fix the broken servers. After that fiasco we regarded all Microsoft patches as suspect, and we had a group in another state whose job, in part, was to test new patches on Exchange servers and see if Exchange still worked. It didn't matter to us how "critical" Microsoft considered a patch; we didn't patch any of our Exchange servers until our test group gave the OK, which was usually a month later.
Mod parent up Re:I call Bullshit on the Red Bull (Score:2)
Workstation patches roll out in enterprises of any size via WSUS or similar. As far as testing of workstation patches goes, that's Microsoft's job. You hold the w/s patches for a few days on your WSUS server, wait to see if there are any issues, and if not, let them roll. If we had to test w/s patches on a per-patch basis, we wouldn't be able to run the enterprise. If we were patching w/s's outside of a WSUS-ish service, w/s's wouldn't get patched at all.
So, WSUS manages the roll-out of patches to workstations, and
Fix The Real Problem (Score:3, Insightful)
The real problem is that the patching system Microsoft chose is highly disruptive. Too many patches still demand user attention even when applied remotely by an administrator. And though less often than before, too many still require a reboot, which is a larger disruption to the user's work. Should Microsoft consider changing how patching is done so that it isn't so "hands on" and doesn't pester users and administrators to take action? Improve patching to the point where patches can be applied painlessly from the IT center, and "Patch Whateverday" goes away.
My Thoughts (Score:5, Informative)
I am the sysadmin responsible for ensuring that our roughly 1800 desktops and notebooks get the latest updates. Microsoft's strategy is the very least of my concerns. The patches show up on WSUS the Wednesday morning after they are released. I read up on them, noting any "caveats" in the KB articles, and inform our help desk if I find anything significant. Then I set my approvals and decline any superseded updates. The clients check in and install the updates overnight. I am not sure where all this talk about long nights with Red Bull comes into play. For mission-critical systems, we withhold approval for that group for a week or so, until we are confident there are no undisclosed "caveats." Super simple.
I like having a regular schedule for updates, but I wouldn't mind a little more frequency. Why not the first and third Tuesday of every month? Sounds reasonable to me.
Now if it were only that easy for all the other software vendors out there, like Adobe (Acrobat/Flash), Sun (Java), and so on. Where are their enterprise patch-management solutions? Why can't I configure my Java clients to check in to one of my servers to automatically apply security updates? Instead I have to spend more money on a 3rd-party patch-management solution, and I haven't found one yet that is as reliable and simple as WSUS.
Re: (Score:2)
Don't those get logged?
Can't you come up with a tally of how many machines have needed hand care, and how long it's taken, and say "whether or not it's supposed to work, in these cases it ACTUALLY didn't work"?
They should have a patch hour (Score:2)
Leave it! (Score:2)
I also notice that the fourth Tuesday of each month sometimes has other non-critical releases.
Billg's Response (Score:2, Funny)
Corporations have no one to blame but themselves (Score:2)
If corporations were better at updating their software (and det
Re:Corporations have no one to blame but themselve (Score:2)
No (Score:3, Insightful)
1. You have to patch within a short period of release
2. One patch may break any functionality, so you must test all of it
3. If Microsoft releases patches all the time, you must test all the functionality all the time
In 99% of the companies out there, that's just not going to happen. I love getting daily patches; my desktop or home server isn't a critical business machine. I'm mostly interested in avoiding someone hacking it so that I have to set it up again, far more than in avoiding a broken patch. At the very least, a broken patch leaves the machine in a "known broken" state that can hopefully be fixed by another patch, whereas a decent virus infection might end in a reinstall. For many, a corporate machine being down means you're down: sales lost, salaries rolling, nothing getting done. Sometimes data gets stolen, but most of the time the cost is downtime, whether it's broken software or infected software. Quite often the solution is the same: roll back to a known good state (after you've figured out how not to get reinfected). Under those conditions I see why they prefer a mad scramble every Patch Tuesday to a mad scramble all the time.
It's in my diary.. (Score:3, Informative)
This is the way it goes..
Friday: Look at the advance notification to get an idea of the scale of the patches. Once or twice a year there are none... yippee!
Wednesday: In the morning we closely analyse the patches to figure out the impact on our organisation. Servers and clients are differently impacted so we look at this to see if we will need to patch servers. Patches are tested on some representative computer systems.
Thursday: raise the inevitable paperwork for any system changes and monitor for any issues.
Friday: Check for issues with the patches and then authorise for client distribution via WSUS.
Saturday: If necessary, patch those servers that are vulnerable. Claim overtime. Yippee.
We know in advance when this is coming up. We can make plans. We ensure that someone always looks at the patches on Wednesday morning and does the analysis. It's a monthly event that we don't miss. This works pretty well.
Sure, sometimes you need to apply an out-of-cycle patch. These are rare, but Microsoft seems to understand that they are needed. If we miss one, we'll always pick up on it again later.
Yeah, hardcore sysadmins might like to patch and reboot PCs every couple of days or so, but most sysadmins have other things to worry about than constant patching, and in my view Microsoft have the balance about right. (One of the few things I like about them!)
I've already ended "Patch Tuesday" (Score:2)
After repeated experiences of the extremely awful "check for updates" software, which would lock out other apps from starting, including the task manager, I switched it from "notify" to "off". Now I'll do it manually on my own, thanks.
It still puts a x-shield icon on the bar, and I can live with that, but if it nags me one more time that the automatic update is switched off, it will find that it can't live with that because I will hunt down that clippy-like code and make it non-executable.
Re: (Score:3, Informative)
That's not true. They're released before the patches come out. Microsoft provides vulnerability information through a webpage now.
All the more reason to ditch Patch Tuesday and just release patches when they are ready. As I have repeatedly pointed out elsewhere recently, if you want to install the patches monthly, you can wait for some arbitrary day of the month and then install them.
This is how Microsoft schedules patch
Re: (Score:2)
Obviously take
Re:End Patch Tuesday (Score:5, Insightful)
Re: (Score:2)
What makes them sitting ducks? Probably a lot of factors. Throwing patches at machines is a coping strategy, not a solution.
Oh, really? (Score:2)
And they're not necessarily trivial issues. The multiple security-related bugs in today's update, for instance, include a potential remote exploit, a privilege-escalation exploit, and a somewhat-remote DoS attack.
Re: (Score:2)
There's of course the option of releasing it both ways and people can click an option in Windows Update which they prefer.
Of course, if you release patches both ways, it leaves the bulk updaters more vulnerable, since hackers could reverse-engineer exploits from the earlier release.
So the immediate patches should be only for known exploits already in the wild, and bulk for the rest.
Re: (Score:2, Interesting)
I haven't patched anything from MS in years, but as far as I recall there was always some downtime due to reboots after applying a patch. I think MS had to release patches monthly, or else there would be more downtime. Now that Patch Tuesday goes to
Re: (Score:2, Informative)
As my weather radio keeps reminding me when there is a thunderstorm alert: "... and stay away from windows".
Re: (Score:2)
The corporate folks that wanted patch tuesday already have WSUS servers. That's not the issue.
Re: (Score:2)
more like patch them as they're found rather than wait until "dam repair tuesday"