Openness and Security on Campus 145
djeaux writes "The April issue of Syllabus includes an interview with Jeff Schiller, Network Manager at MIT, about openness and security in academic computing. Schiller has some interesting things to say about product liability for software, including an out for open source software, and he boils security down to a simple maxim: You must install patches. He also says that what makes security hard is that it's a 'negative deliverable.'"
Campuses need more openness (Score:5, Funny)
Re:Campuses need more openness (Score:2)
Do you really want to see the average MIT geek running naked around campus?
Forced patch upgrading (Score:1, Interesting)
For server side and data center machines, patches usually result in more problems since they break things that already work.
It's common practice in the mainframe world to skip every other patch/upgrade, as well as to let patches age for a while before applying them (to avoid getting a patch untested in the field).
Desktop users are more able to get and apply patches since their reliability requirements are much lower.
Simpler than that (Score:5, Insightful)
Those fences can be visible or invisible, incorporated or separated, but they will NEVER stop dishonest people. No fence will categorically keep out all burglars. No computer security (short of pulling all the plugs) will keep everyone off your computer. Openness and security can co-exist ONLY when everyone is trustworthy.
Re:Simpler than that (Score:2)
Security starts with physical security. If I have physical access, I can walk in, take the hard drive, and do whatever I want.
Re:Simpler than that (Score:2)
Re:Simpler than that (Score:1)
Re:Simpler than that (Score:1)
Re:Simpler than that (Score:3, Funny)
Re:Simpler than that (Score:5, Insightful)
You don't need security if everyone is trustworthy, and you can't have openness if everyone is not.
Just quibbling.
Re:Simpler than that (Score:2, Funny)
Shhhhhh. Don't let the OSS community hear that, it may discourage them.
Re:Simpler than that (Score:1)
Governments tend to have a firm grasp of security and trust... and even occasionally security without trust.
If you can trust the gatekeeper then you MAY not need to trust all who walk through the gate.
American culture. (Score:4, Interesting)
You understood openness correctly, but mis-understood security. A safe is secure, even if 500 people know the combo... as long as those people are trustworthy.
Interesting point.
But using the same example, what if an outsider pretended to be someone that one of those 500 people knew, found out details from that person, and used it to trick one of the other 500 people, etc...
One thing that struck me about American culture in general is that people seem to be a lot more trusting, and despite what a lot of Americans think, it IS a lot more of an open society than (probably most) other parts of the world.
Coming from South Africa to study in the US (between 1999 and 2001) was an eye-opening experience. I don't know how much things have changed since the 9-11 incident and so on, but back then I was amazed at how open and helpful people were, for example, getting student visas, a social security number, a driver's license at the DMV... all very smooth, despite the fact that I was a complete foreigner. In South Africa, it is often more difficult to get basic things like licenses processed as a citizen than it was to get them done as a foreign student in the USA! I don't know if it's just a different outlook people in the USA have, but dealing with South African bureaucracy has become even more painful since I returned to South Africa, remembering how comparatively smooth everything was in the US.
The same with campus security. I'm fairly sure that if someone wanted to be underhanded, they could fairly easily socially engineer situations to break security systems.
Re:American culture. (Score:1)
America is more open than a lot of other countries but it's still not the most open/'free' place in the world, then again nothing can beat the freedom of an uninhabited island.
Re:American culture. (Score:2)
Re:Simpler than that (Score:5, Insightful)
The sad truth is that you can't have openness if anyone is untrustworthy.
Re:Simpler than that (Score:2)
Re:Simpler than that (Score:1)
Re:Simpler than that (Score:4, Interesting)
I'm not entirely certain what you mean by that, but I don't think any "open" security details short of handing out keys and passwords should automatically destroy the security. It might make it a lot harder to keep everything going safely, but there are plenty of benefits too. I don't think security requires a "fence" if the thing behind the fence is safe. In the physical world, an invasion involves someone physically entering an area. In the electronic world, someone has to find some way to get the thing behind the fence to do something it wasn't intended to do.
1) If the thing behind the fence is extremely well-designed, it won't allow something like this.
2) If security is "closed", it's only secure because nobody understands it or because nobody has a chance to touch it.
That sounds a lot like locking yourself in a secret underground bomb shelter and calling yourself "secure".
Re:Simpler than that (Score:1)
BUT you have the added disadvantage of not being able to (YET?) categorically determine that Joe is Joe. Sue might be Joe, or Joe might be Jake.
In meatspace there are ways to say with certainty that Joe is Joe.
Re:Simpler than that (Score:3, Insightful)
Actually, in meatspace there are ways to impersonate someone. If you are holding something to be delivered only to Joe, Jake can get hold of fake IDs and a convincing story and make you believe he is Joe (unless you personally know Joe, that is).
Re:Simpler than that (Score:3, Insightful)
I emphasize: if the thing behind the [nonexistent] fence is very safe, no "fence" should be necessary. I define the fence as the thing that prevents people from having a chance to interact with the fenced item. In the real world, someone can use their strength to break through a fence or break through a wall within the fence. In the electronic world, there needs to be an actual mistake or problem before a similar thing can happen.
Re:Simpler than that (Score:3, Interesting)
Anyway, there's a way to have openness and security.
You put a table in a field and put a lot of nice candy on it. (the goodies, no fence)
Then you put an East German martial arts instructor in a Soviet-era uniform with an AK-74 and a German Shepherd on a short leash next to the table. (security)
Anyone can come and browse, but I guarantee you they won't take any candy without leaving a few dimes in the jar.
Security should be obvious, a
Re:Simpler than that (Score:3, Funny)
You forgot the razor wire, the minefield, the 18 foot tall concrete wall, and the anti-aircraft guns. Oh, and don't forget about the B-1 Bomber fleet with a heaping pile of MOABs... While we're at it, let's throw in some propaganda and tactical nukes and some chemical and biological--
Oh wait... This is just getting plain silly.
Firewalls, patches, and frequent monitoring for suspicious activities... yep... Along with a prayer, that's about the best you can do.
Re:Simpler than that (Score:2, Insightful)
Banks and military installations are hard targets for a reason, and yet are still penetrated occasionally... WHY?
Because there is added value to penetrating those systems. The average person isn't in any direct danger from the people who rob banks or break into military bases, and a bank isn't in any danger from someone who busts out a car window and steals a radio.
OTOH if you put up that sort of security around your ho
Which side of the fence is which? (Score:4, Insightful)
Some university administrations are concerned with protecting the rest of the Net from their students; others think that interferes too much with legitimate research. Some other poster commented that their university's policies are to be "open", but they block incoming Port 80 and Port 25 to student residence networks - meaning that students can't run their own web servers or mail servers, which is distinctly *not* openness.
Defeating security by munging URLs (Score:5, Insightful)
Patches? (Score:5, Funny)
Re:Patches? (Score:5, Insightful)
sPh
Re:Patches? (Score:3, Insightful)
Windows Update does break stuff, but it is not the only option for automatic or manual updates from Microsoft. They even offer a corporate version which doesn't rewrite policy every time you update; that policy rewriting is why most apps break when they do.
Re:Patches? (Score:2)
Damn I need to read more carefully
Re:Patches? (Score:3, Interesting)
And why do you say the patches "particularly [break] competitor's applications"? All this means to me is that Microsoft tests the patches thoroughly with their own software. I certainly wouldn't expect them to release patches that break their own software (that they know and can test) more than their competitors' software.
Re:Patches? (Score:5, Informative)
That's the worst I know of (since it was marked a security release, and since it affected so many sites), but I have certainly run across others.
And while I agree Microsoft can't test _every_ 3rd party app out there, I do think that given their 96% desktop market share (at that time; closer to 99% today) that they have a responsibility to test the leading apps of the leading functions, whether or not they are Microsoft's. Novell certainly used to do that.
sPh
Re:Patches? (Score:2)
Ever run a proprietary application you or another company wrote to interface with an MS SQL Server?
Re:Patches? (Score:2)
Re:Patches? (Score:2)
Re:Patches? (Score:2)
True, but in the long run what's better? Switching over to Linux and having no one to sue if your server gets hacked due to a security flaw? Or staying with Windows and having someone to take the heat when your server crashes from an update?
Linux is great and all, but if you don't have someone, who
Re:Patches? (Score:3, Insightful)
Please name the last time any organization of any size successfully sued Microsoft over a product liability issue. I'll even take FOAF references to orgs getting under-the-table reimbursements if that's all you have.
sPh
Re:Patches? (Score:3, Funny)
Don't expect any work from him for the rest of the day though. Just let him gibber quietly in the corner. It'll go away.
KFG
Nada (Score:1)
Negative Deliverable (Score:5, Insightful)
One thing that gives me pause... (Score:5, Insightful)
Nor would I applaud Automatic Update as a triumph for the end-user -- it delivers more than security fixes and can affect the stability of a machine. But the point about firewalls only being as good as the policy on employee laptops is a good one.
Re:One thing that gives me pause... (Score:3, Interesting)
[
but who's to say offhand that Triple-DES or
AES are better than Blowfish or plain DES
]
No one does. There is no proof, for any of the algorithms we've thought up yet, that there isn't a way to recover the plaintext faster than brute force.
It is possible DES is more secure than AES or Blowfish... we just don't know.
So like most things in business, it's a risk management issue. The chances are that encryption is your strongest link. You need to ensure you've got your weaker links covered: namely,
Pause (Score:2)
Jeff Schiller is an author of Kerberos, so obviously I would expect him to be reasonably knowledgeable on this.
Anyone even reasonably familiar with the details can say that 3DES is more secure than DES. DES's keyspace is too small, and has been so for several years.
That said, the algorithm behind DES and hence 3DES has withstood 3 decades of scrutiny. It is optimally strong against differential cryptanalysis because the IBM de
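The keyspace point above can be made concrete with a little arithmetic. A rough sketch follows; the 10**12 keys-per-second search rate is an assumed figure for a well-funded attacker, not a measured one:

```python
# Rough comparison of symmetric-cipher keyspaces: why 56-bit DES is
# considered brute-forceable while 112-bit (effective) 3DES and
# 128-bit AES are not. RATE is an assumed attacker capability.

RATE = 10**12                      # assumed keys tried per second
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_search(key_bits: int, rate: float = RATE) -> float:
    """Years to exhaust a keyspace of 2**key_bits keys at `rate` keys/sec."""
    return (2 ** key_bits) / rate / SECONDS_PER_YEAR

for name, bits in [("DES", 56), ("3DES (effective)", 112), ("AES-128", 128)]:
    print(f"{name:18s} 2**{bits:<3d} keys -> {years_to_search(bits):.3g} years")
```

At that assumed rate, exhausting DES's keyspace takes under a day, while 3DES and AES remain out of reach by dozens of orders of magnitude, which is the sense in which "DES's keyspace is too small".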
Software liability (Score:5, Insightful)
But, I fear that the commercial interests in this game, if they felt that Congress was backing them into a situation where they would have to accept liability, my guess is they would strenuously lobby that liability applies to everything, including open source, in an attempt to kill off open source. So that's the conundrum.
That was a very insightful quote regarding the worry I've been having of late. Given their way, lawyers, lobbyists, anti-open-source corporations and their political puppets will all rally to impose liability for software on the end developer.
If such a development happens, we could very well see software developers forced to buy "malpractice insurance" like doctors and other medical professionals. That alone would be enough to kill open source software, not to mention the plethora of ugly, frivolous lawsuits of the kind that have plagued the US medical system and escalated medical costs.
And just to play devil's advocate to his suggestion that free software developers not be held liable, since they're "giving away" their stuff: somebody could turn my analogy around and make outrageous claims like "exempting voluntary software developers from liability is like encouraging quacks to pursue their medical endeavours".
Re:Software liability (Score:2)
Except that it doesn't quite work like that. Liability is generally based on causality - if you make something happen, especially knowingly, you assume liability for the
Re:Software liability (Score:2, Insightful)
However, if I make plans for a car, call it a "concept", and give you (for free) the plans for it, and you make a car that then injures you, how much liability would I assume? Very little.
You actually think you wouldn't get sued by at least one person that tried to build the car? And remember, once that lawsuit starts you've already lost regardless of outcome if you aren't insured. Don't let the way you want the world to be cloud yo
Re:Software liability (Score:2)
I think not.
Re:Software liability (Score:1)
Re:Software liability (Score:2)
Does that mean that it would be harder to hold an OSS author liable?
Of course, that still leaves Red Hat and the like out in the cold.
Re:Software liability (Score:2)
Re:Software liability (Score:4, Insightful)
It's natural to assume that placing barriers or restrictions would hurt the vendors. Intuitively, anti-drug laws would hurt drug dealers, but in reality they drive the price up, and therefore the dealers' profits.
It's the same with software vendors. It would take more time to develop a quality product, and so it would eliminate most of the smaller developers. In effect, it would drive the price of software up across the board. Most consumers don't care about security or stability, they really don't. And developers would shy away from some of the most useful features for fear it could be considered a security problem. So the consumers are getting no real benefit, but paying a huge cost.
In the case of doctors, a patient's body would qualify, in computer terms, as "mission critical", meaning one problem is too many. So the patient loses if they see a quack. But, if a consumer gets bad software they reboot a few times a week, and maybe re-download some mp3s.
A better solution is if the vendors who actually do provide mission-critical software would provide guarantees. You can get a lot better guarantee from IBM or Oracle than MS, and enterprises recognize that.
What about me? (Score:3, Interesting)
The other thing that makes me laugh is "indemnification." I'm running around "indemnifying" multi-billion dollar corporations against lawsuits from people who might claim that our code violates their pa
well, duh! (Score:4, Insightful)
It doesn't matter that he has no knowledge of how to code a similar sploit himself, or that he could not admin your university WAN. It doesn't matter that university cut-backs mean you don't have enough money for a test LAN to make sure the latest buggy patches won't break business critical software/services or bring your servers to their knees. All that matters is that he can go on IRC and tell everyone how "k-rad 133t" he is.
Stupidity wants to be free!
All in One Box (Score:3, Interesting)
Re:All in One Box (Score:3, Interesting)
Firewalls work because they enforce a single point of entry with a single method of entry: none.
However, once you start asking for "features" like password-based logins, tunnelling, VPN, port forwarding, etc., then you increase the complexity, and therefore the likelihood that a human being will make a mistake and leave an invisible door open, or at least un-double-bolted.
There are three kinds of mistakes that can be made:
1. Forgetting to secure something in the long list of thin
Re:All in One Box (Score:3, Insightful)
Re:All in One Box (Score:1)
This sort of thing would be valuable even on more secure OS's like Linux or BSD. I'm not sure if any are available, but I know of none installed or enabled by default.
Give them a reason to patch (Score:5, Insightful)
I'm certain there are countless flaws in this idea. But hey, you don't post to slashdot without some risk of being shown what a moron you are right?
How about having DSL/cable companies give an incentive to customers whose computers do not become infected during the blitz of mass email worms and trojans? Something like a few bucks off your ISP bill, or some free software. Some kind of incentive for NOT getting infected, besides the fact that you don't have anything nasty on your computer.
It would benefit them in that it lowers their costs and increases their reliability if hundreds to thousands of their customers aren't sending out DoS traffic, etc.
Of course, there are issues such as privacy implications (how would they know you're infected or not) to hardware costs for the ISP.
Re:Give them a reason to patch (Score:2, Insightful)
Or how about making the ones who _do_ get infected pay an extra fee? After all, it's more fun to punish the people who cause damage than to reward those who don't.
It would benefit them in that it lowers their costs and increases their reliability if hundreds to thousands of their customers aren't sending out DoS traffic, etc.
Well, if it's against their ToS, th
Re:Give them a reason to patch (Score:2)
Only problem with punishing is that you lose customers; by rewarding the good ones you'll gain customers.
Re:Give them a reason to patch (Score:1)
Or they can kinda do what Comcast does with their cable internet/cable tv. Give a $10 credit for use of both.
Just charge $15 extra each month and give it back for those who don't get a virus.
Re:Give them a reason to patch (Score:3, Insightful)
Problem is that many times the "software" that comes with your DSL or cable modem is riddled with spyware... (Comcast's certainly is)
the cost of a HARDWARE front line NAT box that has all incomi
Play by campus rules (Score:2, Insightful)
I only agree somewhat with this article. (Score:1, Informative)
I think it would be irresponsible of a network/system administrator to NOT keep their systems up to date with the latest patches and fixes, along with using SSH and similar tools.
But at the same time I believe in having a firewall, though I do agree it will not solve all of your problems.
I don't believe in just patching your systems. I work at a top west coast university, and the academic computing department's a
Re:I only agree somewhat with this article. (Score:5, Insightful)
In my experience, there are basically two things that are *MOST* commonly seen in academic networks: internal or external parties trying to take advantage of (and misuse) the massive bandwidth that campuses have available, or someone trying to discover and manipulate potentially sensitive documents (such as grades).
I think firewalls have their place, you're right. But having been at the receiving end of a rather draconian installation/firewalling policy, for no apparent reason other than reducing work for the systems operators (and increasing it for students and supervisors in general), I think there should at least be a set of carefully monitored but open machines for people to just mess around with. It's a campus, a seat of learning. Sometimes, when you're trying to learn something, things break. Do you want to be too worried about breaking a piece of "mandated" software and risking getting your ass chewed, instead of experimenting?
Campuses have different security requirements and needs from commercial outfits, IMHO. Sometimes administrators just don't understand that and try to implement the same policies willy-nilly. Security isn't just about procedures and blanket firewalling.
Re:I only agree somewhat with this article. (Score:3, Insightful)
Re:I only agree somewhat with this article. (Score:3, Informative)
I believe in an open academic network for the students, faculty and researchers.
But for the administrative computing, where I work, which does all the data processing, there is no reason for an open network.
The funny thing is that the major research projects we have on campus have erected firewalls to protect themselves, and have basically told academic computing to g
Re:I only agree somewhat with this article. (Score:2)
I teach at a community college, which is different from MIT in many ways :-) One big difference is that we have a lot less funding. A result of this is that we have some security problems that happen simply because there aren't enough tech people to manage the number of machines we have. The figure I've heard bandied about is that if we were a major corporation, the r
most patches aren't trustworthy (Score:4, Insightful)
In the "real world", when there is a security threat, such as a gas leak, you call the repair person, who fixes it. This is the equivalent of "install patches".
Note that there is a level of confidence in calling the repair person: that they won't paste ads all over your living room, or install a wire-tap on your phone line, or a spycam in your bedroom.
Unfortunately, in the computer world, all too often the "patches" are used as trojans. They change user settings, put in spyware, break working code, etc.
So people are hesitant to apply patches, with good reason.
Re:most patches aren't trustworthy (Score:5, Insightful)
This is why the OSS model works better for security. I *can* run urpmi --update and trust that the results will be what I want. I can also look under the hood at exactly what gets updated and how. Or, I can download individual packages... or download things and compile them from source... or, if I want and have the skill and time, I can fix things myself.
Now, simply because there are alternatives, there is competitive pressure on the people who make autoupdaters to make them efficient, effective, and transparent--because, otherwise, people will stop using them.
Re: (Score:2, Insightful)
Patches work both ways... (Score:2)
Re:Patches work both ways... (Score:2)
Re:Patches work both ways... (Score:2)
If your autoupdater checks package signatures and the private signature keys are kept on machines that are only connected to the outside world via SneakerNet, MitM and server compromises only directly act as DoS attacks. Now, maybe an at
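The flow that comment describes can be sketched roughly as follows. This is a toy illustration only: `verify_signature` below fakes a signature by XORing the digest with key material, standing in for the real GPG/RSA public-key verification that actual package managers (apt, rpm, urpmi) perform; the package and key values are made up.

```python
# Sketch of signature-checked updates: the client ships with a pinned
# verification key, the mirror serves package + detached signature, and
# a bad or missing signature can only deny service, not inject code.
import hashlib

def verify_signature(digest: bytes, signature: bytes, public_key: bytes) -> bool:
    # Placeholder for real public-key verification (e.g. GPG). Here a
    # "signature" is just the digest XOR'd with key-derived material.
    keystream = hashlib.sha256(public_key).digest()
    return bytes(a ^ b for a, b in zip(signature, keystream)) == digest

def safe_install(package: bytes, signature: bytes, public_key: bytes) -> str:
    digest = hashlib.sha256(package).digest()
    if not verify_signature(digest, signature, public_key):
        # A tampering mirror or MitM lands here: installation is refused,
        # so the worst outcome is denial of service.
        return "REJECTED: bad signature, not installing"
    return "OK: signature valid, installing"

# Simulate the signing side (in reality this happens offline, on the
# SneakerNet-isolated signing machine).
PUBKEY = b"pinned-key-shipped-with-the-client"
pkg = b"pretend this is an RPM"
good_sig = bytes(a ^ b for a, b in zip(
    hashlib.sha256(pkg).digest(), hashlib.sha256(PUBKEY).digest()))

print(safe_install(pkg, good_sig, PUBKEY))         # accepted
print(safe_install(pkg + b"!", good_sig, PUBKEY))  # tampered: rejected
```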
Re:Patches work both ways... (Score:2)
Re:Patches work both ways... (Score:2)
Re:Patches work both ways... (Score:2)
Re:Patches work both ways... (Score:2)
Also, there isn't enough energy in the known universe to perform 2**2048 electron transitions or spin flips, so how do you propose an attacker keep track of state while brute-forcing a 4096-bit RSA key?
Now, there are known attacks that are much, much more efficient than brute force, but it will still take you millions of year
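A rough sanity check of that energy claim, using the Landauer limit (each irreversible bit operation costs at least kT ln 2 joules). The mass-energy figure for the observable universe is an assumed round number, and the arithmetic is done in log space since 2**2048 overflows a float:

```python
# Even at the thermodynamic minimum, merely counting through 2**2048
# states dwarfs any conceivable energy budget.
import math

K_BOLTZMANN = 1.380649e-23   # J/K
T_CMB = 2.7                  # K, coldest practical heat reservoir
LOG10_UNIVERSE_J = 69.6      # assumed: ~4e69 J mass-energy of observable universe

def log10_landauer_joules(bit_flips_log2: float) -> float:
    """log10 of the minimum joules for 2**bit_flips_log2 irreversible bit ops."""
    per_flip = K_BOLTZMANN * T_CMB * math.log(2)   # Landauer bound per flip
    return bit_flips_log2 * math.log10(2) + math.log10(per_flip)

for bits in (256, 2048):
    lg = log10_landauer_joules(bits)
    print(f"2**{bits} bit flips need >= 1e{lg:.0f} J "
          f"(universe mass-energy ~ 1e{LOG10_UNIVERSE_J:.0f} J)")
```

Counting through 2**2048 states needs on the order of 1e594 J against roughly 1e70 J available, which supports the parent's point by a margin of several hundred orders of magnitude.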
From the Article (Score:4, Insightful)
No. More secure, but not secure. For one thing, things will be overlooked. For another, there will always be things that were not known to be security holes at the time, but that will later turn out to be such.
``JS: I think Linux is much more secure than a lot of the other stuff that's out there, because so many people look at the source code--not everyone looks at it, but enough people do, so that problems get fixed earlier, rather than later.''
Many people look at the sources, but do they find the vulnerabilities? See also above.
In short, nothing is going to give you guaranteed security. Having said that, crackers will only go so far to break a system, so absolute security isn't even required. This makes any security measure useful, including firewalls (which JS argues against).
As a closing remark, despite these minor points, I found the article a very good read; JS seems to have his heart in the right place. Heh, it makes me frown every time people say "security" and mean "restrictions" (see also Microsoft and Trusted Computing).
Re: (Score:2)
I just HOPE (Score:4, Funny)
Re:I just HOPE (Score:2)
My campus is all security, no openness. (Score:5, Interesting)
For instance, the "start" button on every lab computer has been disabled--people only have access to the icons on the desktop. Furthermore, right-click context menus have been disabled.
On some public computers, even access to the address bar in IE is disabled--all you can do is follow the links from the homepage in IE.
When I took a Mathematica class in the physics lab, we used a heavily neutered version of Windows NT, with file permissions set unusably tight. Browsers would crash on startup because they didn't have write access to their cache files, virtual memory was disabled (!), and the like.
Network Services also has banned the use of BitTorrent on campus, causing consternation among people wanting to download contraband like, uh, Mandrake images.
This is the same campus where average packet loss on ResNet is 20-30%. Students play games over dialup because it's faster and more stable than ResNet.
Re:My campus is all security, no openness. (Score:2, Interesting)
I attend the University of Alabama in Huntsville, an engineering/research institution with enrollment around 15k. The Network Services people around here aren't really concerned about the value of openness to academia; in fact, most of their security is directed inward, against the students who have to use the machines.
Wow, sounds exactly the opposite to UNLV. I remember one department had a few NT lab machines that students often remotely accessed and filled the Desktop folders with shortcuts... made
Re:My campus is all security, no openness. (Score:5, Interesting)
Here at UA, everyone gets a real IP address: there is no NAT. There is a "traffic shaper" on resnet which limits upload speeds and blocks incoming connections on some of the lower service ports (80, 25, etc). Central computing blocks incoming connections to port 25 except for mailservers, but that is just to prevent open-relay spam. Other than that, there is no firewall.
Each college has its own labs. The arts and sciences labs are locked down one way, the engineering labs another way, c&ba another way, etc. In most cases students can't copy files to the hard drive or fiddle with the control panel, but other than that there is no real "lock down".
I work for one of the colleges on campus and we have been trying to get a firewall for our labs and faculty for years, but central computing won't allow it. They want the network to be open, not for academics' sake, but so that they can keep tabs on what everyone is doing. They think that if we put up a firewall it will keep THEM out too.
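As an aside, which ports a traffic shaper like the one described above actually blocks can be checked empirically with a small probe. A minimal sketch follows; the demonstration probes a local listener rather than any real campus host, and real-world results also depend on where you probe from:

```python
# Minimal TCP port-reachability probe: distinguishes "something answered"
# from "refused/filtered/timed out" for a given host and port.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # connection refused, filtered, or timed out
        return False

# Local demonstration: probe a port we are actually listening on.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))    # True: something is listening
listener.close()
```

Run against a residence-network machine from on and off campus, the difference in results shows exactly which inbound ports (80, 25, etc.) the shaper drops.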
Re:My campus is all security, no openness. (Score:1)
i.e. enough to keep honest people honest and make it difficult enough for the average criminal to move on to the next house.
My Campus (Score:1)
The School of Education had their lab computers locked down so hard, you had to login as a certain user to use the scanner, then logoff and login as a different user to use Photoshop. This is the way it was for almost every application. The lab assistant had to do the login for you. Many things were broken as in the above posting. This was all to keep the lab assistant from having to fix so many "bro
what seems like dumb admins to me.......... (Score:1)
Re:My campus is all security, no openness. (Score:1)
I'm probably stupid... (Score:2, Interesting)
I'd just suggest that the user's computer serve the white-hat worm for a day or two (kind of like BitTorrent), and then automatically delete it.
Is that a bad idea?
Re:I'm probably stupid... (Score:1)
Comment removed (Score:4, Interesting)
It's the same old saw (Score:4, Insightful)
It is always possible to make security problems at the design level, like forgetting to check an account balance before allowing a withdrawal in bank software, but humans are very good at thinking in those ways, and those kinds of problems are rare.
---------
Create a WAP server [chiralsoftware.net]
Comment removed (Score:4, Insightful)
But MS has a solution for this (Score:2)
words of wisdom (Score:1)
I definitely need to send that to the net admins here at school. I can surf the web, read e-mail, and use instant messaging. That's about it. Everything else is restricted on a dorm-to-dorm basis. So I can play games with people from my building, but my friends on the other side of campus are shit outta luck.
What a great article (Score:4, Interesting)
Maybe I'm visiting the wrong web sites, but it's great to hear these things from someone who's been on the cusp of network administration from the beginning.
S: So education is a part of this?
JS: Education is a part of this, both for the people who own personal computers and work with the data and for the people running these systems.
I can vouch for the end part of the article for sure, as I'm sure many Slashdot readers can. Right now I'm doing an Information Security Risk Assessment as part of a graduate level class that I'm taking. Fortunately, for the K-12 schools on which we perform these assessments we cover user education as part of an overall Information Security program. Also, it gives us the chance to see user education and awareness from their point of view, which helps us make the case for having user awareness training. A lot of end users don't realize that having a weak password is like giving away the key to your organization (or school in this case). I'll give you two guesses as to the biggest topic that we've discussed with the school corp. and the first one doesn't count ;)
You would not believe how woefully inadequate schools are when it comes to an Information Security Program. If you have the opportunity to help a school out, do it. It will help you learn something, help the school better themselves, and better the community by protecting the little ones' information.
My experience on MIT ResNet... (Score:3, Interesting)
In general, the MIT "firewalls are false security" mantra is a good thing, particularly at MIT where there is a high concentration of bright and inquisitive people. You can never count on the black- and grey-hats being on the other side of your firewall. You have to assume that the networks on both sides of your firewall are hostile. Each host must be a castle unto itself. This is simply a much more robust security model than "keep the bad guys over there".
On the other hand, shortly before MS started covering IIS on WindowsUpdate, the house had a rash of IIS exploits and RPC exploits. I asked for advice about setting up an OpenBSD firewall to only allow outgoing connections from most machines (and knocking holes in the firewall for MIT Network Security's vulnerability scanners). The response I got was basically "If you have to ask, we won't help you. Just patch everything and it will be fine." They didn't seem to realize that a sophomore can't just run around the house pestering everyone to keep their machines up to date. Basically, my powers were limited to waiting for problems and then finding the offender and saying "MIT is threatening to cut the entire house off from the Internet in two hours unless you do what I say now!". Sure, I sent out reminders and heads-up emails, but when they didn't listen and got compromised I would invariably be the one to do their OS reinstall, because if I didn't, half of them would just put the compromised machine back online without fixing anything.
This last year, MIT actually stepped out of the ivory tower and did some port-based filtering (firewalling) when tons of students came back from summer to take their computers out of storage. Many of the students would get compromised while updating, even if they patched as soon as they connected the machine to the Internet.
I think they also permanently firewall off their MS Windows-Athena computer cluster. (side note: the internal code name for the project to modify Windows to work with the rest of the Athena network was Pismere -- Latin for horse piss)
I also pestered MIT for about a month after RedHat released the ptrace bug kernel fix and they hadn't pushed the fix out to the official RedHat-Athena packages. Their position was that local root exploits weren't a problem since MIT gives the root password of most of the machines to students who ask. I pointed out that many departments and individual students set up machines so that absolutely anyone with an Athena account could SSH in as a normal user. There had been no warning emailed out that RedHat-Athena machines were still vulnerable to the ptrace local root exploit. Most of these machines' owners assumed that the problem had been taken care of by RedHat-Athena's daily automatic updates. It was by sheer luck that I looked at the file modification date on my friend's kernel and realized the modification date was long before the ptrace vulnerability had been discovered. After all, I had already checked that it was up to date on all of the patches MIT put out for RedHat-Athena.
In short, MIT network security policy is a strange patchwork of opinions.
Re:GENIUS! (Score:1)
I want to know how they are dealing with those issues! How can you "protect" a wide open environment with a large number of unpatched systems? What tools does he use? Or, has he simply written off the whole thing?