Education Security

Openness and Security on Campus 145

djeaux writes "The April issue of Syllabus includes an interview with Jeff Schiller, Network Manager at MIT, about openness and security in academic computing. Schiller has some interesting things to say about product liability for software, including an out for open source software, and he boils security down to a simple maxim: You must install patches. He also says that what makes security hard is that it's a 'negative deliverable.'"
  • by SnappingTurtle ( 688331 ) on Tuesday April 06, 2004 @03:25PM (#8783399) Homepage
    For beginners, streaking has totally gotta come back in style.
    • For beginners, streaking has totally gotta come back in style.

      Do you really want to see the average MIT geek running naked around campus?
    • by Anonymous Coward
      It is just a bad practice to upgrade to each and every patch released by a vendor.

      For server side and data center machines, patches usually result in more problems since they break things that already work.

      It's common practice in the mainframe world to skip every other patch/upgrade, as well as to let patches age for a while before applying them (to avoid getting a patch that's untested in the field).

      Desktop users are more able to get and apply patches since their reliability requirements are much lower.
  • Simpler than that (Score:5, Insightful)

    by stanmann ( 602645 ) on Tuesday April 06, 2004 @03:28PM (#8783442) Journal
    Security is simpler than that. Security requires fences, in the electronic world just as in the physical world.

    Those fences can be visible or invisible, incorporated or separated, but they will NEVER stop dishonest people. No fence will categorically keep out all burglars. No computer security (short of pulling all the plugs) will keep everyone off your computer. Openness and security can co-exist ONLY when everyone is trustworthy.
    • pull the plug on the computer

      Security starts with physical security. If I have physical access I can walk in, take the hard drive, and do whatever.
    • by lukewarmfusion ( 726141 ) on Tuesday April 06, 2004 @03:36PM (#8783539) Homepage Journal
      Openness and security are mutually exclusive (if I'm understanding your use of 'openness' correctly).

      You don't need security if everyone is trustworthy, and you can't have openness if everyone is not.

      Just quibbling.
      • Openness and security are mutually exclusive

        Shhhhhh. Don't let the OSS community hear that, it may discourage them.
      • You understood openness correctly, but misunderstood security. A safe is secure, even if 500 people know the combo... as long as those people are trustworthy.

        Governments tend to have a firm grasp of security and trust... and even occasionally security without trust.

        If you can trust the gatekeeper then you MAY not need to trust all who walk through the gate.
        • American culture. (Score:4, Interesting)

          by PlatinumInitiate ( 768660 ) on Tuesday April 06, 2004 @04:17PM (#8784102)

          You understood openness correctly, but misunderstood security. A safe is secure, even if 500 people know the combo... as long as those people are trustworthy.

          Interesting point.

          But using the same example, what if an outsider pretended to be someone that one of those 500 people knew, found out details from that person, and used them to trick one of the other 500 people, etc...

          One thing that struck me about American culture in general is that people seem to be a lot more trusting, and despite what a lot of Americans think, it IS a lot more of an open society than (probably most) other parts of the world.

          Coming from South Africa to study in the US (between 1999 and 2001) was an eye-opening experience. I don't know how much things have changed since the 9-11 incident and so on, but back then I was amazed at how open and helpful people were, for example, getting student visas, a social security number, a driver's license at the DMV... all very smooth, despite the fact that I was a complete foreigner. In South Africa, it is often more difficult to get basic things like licenses and so forth processed as a citizen than it was to get them done as a foreign student in the USA! I don't know if it's just a different outlook people in the USA have, but dealing with South African bureaucracy has become even more painful since I returned to South Africa, remembering how comparatively smooth everything was in the US.

          The same with campus security. I'm fairly sure that if someone wanted to be underhanded, they could fairly easily socially engineer situations to break security systems.

          • You can't get into legal trouble for certain things that other countries would kill you for, but many American citizens are more close-minded than the citizens of other countries, so instead of being killed by the police for something you get killed by your neighbor.

            America is more open than a lot of other countries but it's still not the most open/'free' place in the world, then again nothing can beat the freedom of an uninhabited island.
          • I think one of the reasons behind this is that being a foreigner isn't that unusual in the US. Here in California, for example, a major portion of our population is recent immigrants, legal and otherwise. When that's the case it's just natural not to think much about whether or not someone is a citizen.
      • by ColonelPanic ( 138077 ) on Tuesday April 06, 2004 @03:46PM (#8783678)
        You don't need security if everyone is trustworthy, and you can't have openness if everyone is not.

        The sad truth is that you can't have openness if anyone is untrustworthy.
    • Re:Simpler than that (Score:4, Interesting)

      by Rikus ( 765448 ) on Tuesday April 06, 2004 @03:43PM (#8783628)
      Openness and security can co-exist ONLY when everyone is trustworthy.
      I'm not entirely certain what you mean by that, but I don't think any openness about security details, short of handing out keys and passwords, should automatically destroy the security. It might make it a lot harder to keep everything going safely, but there are plenty of benefits too. I don't think security requires a "fence" if the thing behind the fence is safe. In the physical world, an invasion involves someone physically entering an area. In the electronic world, someone has to find some way to get the thing behind the fence to do something it wasn't intended to do.
      1) If the thing behind the fence is extremely well-designed, it won't allow something like this.
      2) If security is "closed", it's only secure because nobody understands it or because nobody has a chance to touch it.
      That sounds a lot like locking yourself in a secret underground bomb shelter and calling yourself "secure".
      • In the electronic world, just as in the real world, you can always go over or through the fence.

        BUT you have the added disadvantage of not being able to (YET?) categorically determine that Joe is Joe. Sue might be Joe, or Joe might be Jake.

        In meatspace there are ways to say with certainty that Joe is Joe.
        • In meatspace there are ways to say with certainty that Joe is Joe

          Actually, in meatspace there are ways to impersonate someone. If you are holding something to be delivered only to Joe, Jake can get ahold of fake IDs and a convincing story and make you believe he is Joe (unless you personally know Joe, that is).
        • by Rikus ( 765448 )
          ...you can always go over or through the fence

          I emphasize: if the thing behind the [nonexistent] fence is very safe, no "fence" should be necessary. I define the fence as the thing that prevents people from having a chance to interact with the fenced item. In the real world, someone can use their strength to break through a fence or break through a wall within the fence. In the electronic world, there needs to be an actual mistake or problem before a similar thing can happen.

          ...not being able to (YET?)
          • Speaking of stolen items: there's a reason people call them "fenced".

            Anyway, there's a way to have openness and security.

            You put a table in a field and put a lot of nice candy on it. (the goodies, no fence)

            Then you put an East German martial arts instructor in a Soviet-era uniform with an AK-74 and a German shepherd on a short leash next to the table. (security)

            Anyone can come and browse, but I guarantee you they won't take any candy without leaving a few dimes in the jar.

            Security should be obvious, a
    • Security requires fences

      You forgot the razor wire, the minefield, the 18 foot tall concrete wall, and the anti-aircraft guns. Oh, and don't forget about the B-1 Bomber fleet with a heaping pile of MOABs... While we're at it, let's throw in some propaganda and tactical nukes and some chemical and biological--

      Oh wait... This is just getting plain silly.

      Firewalls, patches, and frequent monitoring for suspicious activities... yep... Along with a prayer, that's about the best you can do.

      • by stanmann ( 602645 )
        You don't want or need bank/military security unless you are a bank or military.


        Banks and military installations are hard targets for a reason, and yet they are still penetrated occasionally... WHY?

        Because there is added value to penetrating those systems. The average person isn't in any direct danger from the people who rob banks or break into military bases... and a bank isn't in any danger from someone who busts out a car window and steals a radio.

        OTOH if you put up that sort of security around your house
    • by billstewart ( 78916 ) on Tuesday April 06, 2004 @05:47PM (#8785348) Journal
      One of the canonical Internet security threats was always "some college student with lots of resources and technical skill and too much time on their hands" attacking your system. If you're running the Internet security for a university, a firewall is not going to keep that kind of threat _out_, because the students are already _inside_. (Ok, it'll discourage students from other colleges from hacking your college, but the most motivated threats are already inside your firewall.) Protecting administration computers is a different problem from protecting student computers, faculty computers, and shared workspace computers. Some of this can be helped by appropriate partitioning, and Schiller's point about keeping all the machines patched and as secure as possible is critical.

      Some university administrations are concerned with protecting the rest of the Net from their students; others think that interferes too much with legitimate research. Some other poster commented that their university's policies are to be "open", but they block incoming Port 80 and Port 25 to student residence networks - meaning that students can't run their own web servers or mail servers, which is distinctly *not* openness.

  • by tcopeland ( 32225 ) * <tom&thomasleecopeland,com> on Tuesday April 06, 2004 @03:30PM (#8783461) Homepage
    From the interview:

    S: Are there any other weaknesses to keep in mind, particularly when accessing data on the Web?
    JS: This gets into engineering implementations. The devil is in the details. Let me give you an example. There's a Web site out there--I won't identify them--that offers survey services. You can set up surveys and revisit them to see the data collected or to edit them. But if you look closely at the actual URL in the little bar at the top of your browser, you will see some long number.

    A few of us wanted to know, "Well, wonder what happens if we go into that title bar there where the URL is and just add one to that number?" And we did so, and all of a sudden we were looking at somebody else's survey, and seeing their answers. The devil is in the details.
    Yup. Each HTTP request needs to be checked separately for privilege violations. Not doing so is like opening your internal API to anyone who wants to call it... next thing you know, someone is injecting SQL and your database is executing a "DROP TABLE users". Yikes.
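    A minimal sketch in Java of the per-request check being described; the SurveyHandler class and the surveys table/columns are hypothetical, invented for illustration, not taken from the article:

        // Hypothetical handler: every request re-checks that the authenticated
        // user actually owns the survey ID taken from the URL.
        import java.sql.*;

        public class SurveyHandler {
            private final Connection db;

            public SurveyHandler(Connection db) { this.db = db; }

            // surveyId comes straight from the URL, so treat it as hostile input.
            public ResultSet fetchSurvey(long surveyId, long authenticatedUserId)
                    throws SQLException {
                // Parameterized query: the input can't become "DROP TABLE users".
                PreparedStatement ps = db.prepareStatement(
                    "SELECT question, answer FROM surveys WHERE id = ? AND owner_id = ?");
                ps.setLong(1, surveyId);
                // The ownership clause is the crucial part: without "owner_id = ?",
                // adding one to the number in the URL shows someone else's survey.
                ps.setLong(2, authenticatedUserId);
                return ps.executeQuery();
            }
        }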
  • Patches? (Score:5, Funny)

    by Swamii ( 594522 ) on Tuesday April 06, 2004 @03:30PM (#8783470) Homepage
    I read in a magazine recently that a Microsoft exec said Windows users would be "much safer" if we all would just download software patches from Windows Update. According to the article, no one took him seriously.
    • Re:Patches? (Score:5, Insightful)

      by sphealey ( 2855 ) on Tuesday April 06, 2004 @03:41PM (#8783612)
      I read in a magazine recently that a Microsoft exec said Windows users would be "much safer" if we all would just download software patches from Windows Update. According to the article, no one took him seriously.
      Well, there's that little problem where Microsoft patches tend to break other applications, particularly competitors' applications. Which makes automatic patching a bit of a concern when mission-critical apps get broken.

      sPh

      • Re:Patches? (Score:3, Insightful)

        by Vancorps ( 746090 )
        I would seriously hope no one uses Windows Update to patch a mission-critical server. In such environments you have an on-site SUS server. You apply the patch to your testing server and if it's successful you use SUS to push the patch out.

        Windows Update does break stuff, but it is not the only option for automatic or manual updates from Microsoft. They even offer a corporate version which doesn't rewrite policy every time you update, which is why most apps break when they do

      • Re:Patches? (Score:3, Interesting)

        by harvardian ( 140312 )
        You know, I've read this argument a couple of times here on Slashdot, and I've never in my life heard of this happening to anybody I know. Can somebody provide an example?

        And why do you say the patches "particularly [break] competitors' applications"? All this means to me is that Microsoft tests the patches thoroughly with their own software. I certainly wouldn't expect them to release patches that break their own software (that they know and can test) more than their competitors' software.
        • Re:Patches? (Score:5, Informative)

          by sphealey ( 2855 ) on Tuesday April 06, 2004 @04:51PM (#8784579)
          The canonical example is Windows NT Service Pack 6, which broke Lotus Notes (both server and client). Note (ha ha) that Notes had at that time both the largest market share and by far the largest installed base of any corporate e-mail system. Microsoft denied the problem for about 6 weeks, then suddenly released SP6a with no explanation.

          That's the worst I know of (since it was marked a security release, and since it affected so many sites), but I have certainly run across others.

          And while I agree Microsoft can't test _every_ 3rd party app out there, I do think that given their 96% desktop market share (at that time; closer to 99% today) they have a responsibility to test the leading apps of the leading functions, whether or not they are Microsoft's. Novell certainly used to do that.

          sPh
        • Can somebody provide an example?

          Ever run a proprietary application you or another company wrote to interface with an MS SQL Server?

        • There was a service pack a couple of years back (I forget if it was XP or 2000) that effectively disabled use of a proxy server... so those machines couldn't get out to get the fixed SP a couple of days later.
        • I can't totally prove it, because I can't tell which of about 3 different MS patches did the dirty deed, and I'm not particularly interested in de-installing them to hunt down the issue, but over the course of about six weeks my HP printer (officejet v40) driver software rotted and died. Re-installing the driver software didn't help at all, same symptoms. I don't use the device that much, so it's impossible to pinpoint exactly when the driver got hosed up. I do know that I didn't install anything else du
      • Well, there's that little problem where Microsoft patches tend to break other applications, particularly competitors' applications. Which makes automatic patching a bit of a concern when mission-critical apps get broken.

        True, but in the long run what's better? Switching over to Linux and having no one to sue if your server gets hacked due to a security flaw? Or staying with Windows and having someone to take the heat when your server crashes from an update?

        Linux is great and all, but if you don't have someone, who

        • Re:Patches? (Score:3, Insightful)

          by sphealey ( 2855 )
          True, but in the long run what's better? Switching over to Linux and having no one to sue if your server gets hacked due to a security flaw?
          OK, now it's my turn ;-)

          Please name the last time any organization of any size successfully sued Microsoft over a product liability issue. I'll even take FOAF references to orgs getting under-the-table reimbursements if that's all you have.

          sPh

    • Re:Patches? (Score:3, Funny)

      by kfg ( 145172 )
      Wanna have some fun? Just walk up quietly behind your sysadmin and say, in a mild voice, "Windows patch."

      Don't expect any work from him for the rest of the day though. Just let him gibber quietly in the corner. It'll go away.

      KFG
  • It's only a 'negative deliverable' if it's on the company's negative agenda. Security isn't hard; TOTAL security, now that's a neg-a-tive.
  • by re-Verse ( 121709 ) on Tuesday April 06, 2004 @03:34PM (#8783511) Homepage Journal
    People have to accept security as a regular part of life. There are LOTS of negative deliverables we subscribe to in our lives, and pay quite handsomely for. Off the top of my head, I think of auto insurance. I mean - yeah, we see nothing making it better... but we know very well the hell that may arise if we don't have it.
  • by Sheetrock ( 152993 ) on Tuesday April 06, 2004 @03:34PM (#8783514) Homepage Journal
    Anybody that can give an answer that quickly about which cryptographic algorithms one should use, without reflecting on the strengths and weaknesses inherent in them, worries me a bit. Sure, most of the focus should be on making access simpler and easier in practical situations, but who's to say offhand that Triple-DES or AES are better than Blowfish or plain DES?

    Nor would I applaud Automatic Update as a triumph for the end-user -- it delivers more than security fixes and can affect the stability of a machine. But the point about firewalls only being as good as the policy on employee laptops is a good one.

    • but who's to say offhand that Triple-DES or AES are better than Blowfish or plain DES

      No one does. There is no proof, for any algorithm we've thought up yet, that there isn't a way to recover the encrypted text faster than brute force.

      It is possible DES is more secure than AES or Blowfish... we just don't know.

      So like most things in business, it's a risk management issue. The chances are that encryption is your strongest link. You need to ensure you've got your weaker links covered: namely,
    • by fw3 ( 523647 ) *

      who's to say offhand that Triple-DES or AES are better than Blowfish or plain DES?

      Jeff Schiller, obviously; as an author of Kerberos I would expect him to be reasonably knowledgeable on this.

      Anyone even reasonably familiar with the details can say that 3DES is more secure than DES. DES's keyspace is too small and has been so for several years.

      That said, the algorithm behind DES and hence 3DES has withstood 3 decades of scrutiny. It is optimally strong against differential cryptanalysis because the IBM de
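      Rough arithmetic behind the keyspace point; the 10^9 keys-per-second test rate below is an assumption picked for illustration, not a benchmark of any real cracker:

          // Back-of-envelope brute-force times for DES vs. 3DES key sizes.
          public class KeyspaceMath {
              public static void main(String[] args) {
                  double ratePerSec = 1e9;               // assumed: 10^9 key tests/sec
                  double secPerYear = 3600.0 * 24 * 365;
                  // On average you find the key after searching half the keyspace.
                  double desYears  = Math.pow(2, 56)  / 2 / ratePerSec / secPerYear;
                  double tdesYears = Math.pow(2, 112) / 2 / ratePerSec / secPerYear;
                  System.out.printf("DES  (56-bit key):        ~%.1f years%n", desYears);
                  System.out.printf("3DES (112-bit effective): ~%.1e years%n", tdesYears);
              }
          }

      At the assumed rate, single DES falls in about a year (dedicated hardware like the EFF's 1998 DES cracker did far better), while 3DES's effective keyspace puts the same attack at around 10^16 years.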

  • Software liability (Score:5, Insightful)

    by GillBates0 ( 664202 ) on Tuesday April 06, 2004 @03:35PM (#8783515) Homepage Journal
    JS: Now, the problem is that if you decide to put liability upon software authors, you destroy open source--because those people can't tolerate any liability. So, if I were king, I would rule that if you're selling software then you bear a certain liability; but if you're giving it away in open source, then you don't.

    But, I fear that the commercial interests in this game, if they felt that Congress was backing them into a situation where they would have to accept liability, my guess is they would strenuously lobby that liability applies to everything, including open source, in an attempt to kill off open source. So that's the conundrum.

    That was a very insightful quote regarding the worry I've been having of late. Given their way, lawyers, lobbyists, anti-opensource corporations and their political puppets will all rally to impose liability for software on the end-developer.

    If such a development happens, we could very well see software developers forced to buy "malpractice insurance" like doctors/medical professionals - that alone will be enough to kill opensource software, not to mention the plethora of ugly, frivolous lawsuits which've plagued the US medical system and escalated medical costs.

    And just to play devil's advocate to his suggestion that free software developers not be held liable - since they're "giving away" their stuff: somebody could turn my analogy around and make outrageous claims like "exempting voluntary software developers from liability is like encouraging quacks to pursue their medical endeavours".

    • If such a development happens, we could very well see software developers forced to buy "malpractice insurance" like doctors/medical professionals - that alone will be enough to kill opensource software, not to mention the plethora of ugly, frivolous lawsuits which've plagued the US medical system and escalated medical costs.

      Except that it doesn't quite work like that. Liability is generally based on causality - if you make something happen, especially knowingly, you assume liability for the
      • Without weighing in on the larger debate, you actually believe this?:

        However, if I make plans for a car, call it a "concept", and give you (for free) the plans for it, and you make a car that then injures you, how much liability would I assume? Very little.

        You actually think you wouldn't get sued by at least one person that tried to build the car? And remember, once that lawsuit starts you've already lost regardless of outcome if you aren't insured. Don't let the way you want the world to be cloud your judgment.
      • You actually think you wouldn't get sued by at least one person that tried to build the car? ... from plans they obtained for free?

        I think not.

    • Software liability won't hurt Open Source developers because almost every open source license specifically says they offer no warranty: use at your own risk. If anything it would hurt non-open source vendors, like MS, since they would require the insurance, but an individual developer working on Open Source in Norway/Australia/Russia certainly doesn't. In Open Source no one really owns the product, so you don't know who would be liable. Ultimately this shouldn't be something politicians should have to get
    • I also assume that many ways of getting OSS don't even qualify as a contract because the end user provides no compensation.

      Does that mean that it would be harder to hold an OSS author liable?

      Of course, that still leaves Red Hat and the like out in the cold.
    • Well said. IMHO this is the biggest threat to FOSS nowadays.
    • by jadavis ( 473492 ) on Tuesday April 06, 2004 @04:28PM (#8784219)
      One interesting point about the liability issue is that proprietary software developers would benefit greatly from liability laws, and consumers would probably suffer.

      It's natural to assume that placing barriers or restrictions would hurt the vendors. Intuitively, anti-drug laws would hurt drug dealers, but in reality they drive the price up, and therefore the dealers' profits.

      It's the same with software vendors. It would take more time to develop a quality product, and so it would eliminate most of the smaller developers. In effect, it would drive the price of software up across the board. Most consumers don't care about security or stability, they really don't. And developers would shy away from some of the most useful features for fear it could be considered a security problem. So the consumers are getting no real benefit, but paying a huge cost.

      In the case of doctors, a patient's body would qualify, in computer terms, as "mission critical", meaning one problem is too many. So the patient loses if they see a quack. But, if a consumer gets bad software they reboot a few times a week, and maybe re-download some mp3s.

      A better solution is if the vendors who actually do provide mission-critical software would provide guarantees. You can get a lot better guarantee from IBM or Oracle than MS, and enterprises recognize that.
    • What about me? (Score:3, Interesting)

      by Bozdune ( 68800 )
      My little company tries to make money selling software, but I'll tell you what, I sure can't afford to shoulder liability for our mistakes. If you make me liable, I'm out of business. You use my software at your own risk, and if for some reason it becomes impossible for me to say that to you, I'm through.

      The other thing that makes me laugh is "indemnification." I'm running around "indemnifying" multi-billion dollar corporations against lawsuits from people who might claim that our code violates their patents.
  • well, duh! (Score:4, Insightful)

    by evenprime ( 324363 ) on Tuesday April 06, 2004 @03:35PM (#8783517) Homepage Journal
    Of *course* you have to install patches. There is a bored 11 year old out there somewhere who thinks he can prove he's "133t" by downloading a sploit off of packetstorm and owning your box.

    It doesn't matter that he has no knowledge of how to code a similar sploit himself, or that he could not admin your university WAN. It doesn't matter that university cut-backs mean you don't have enough money for a test LAN to make sure the latest buggy patches won't break business critical software/services or bring your servers to their knees. All that matters is that he can go on IRC and tell everyone how "k-rad 133t" he is.

    Stupidity wants to be free! :(
  • All in One Box (Score:3, Interesting)

    by Wedge1212 ( 591767 ) on Tuesday April 06, 2004 @03:36PM (#8783537)
    It would be perfect to have an operating system that was secure out of the box (due to built-in features), like the world's greatest personal firewall. However I just don't see this as being a likely solution. I think an operating system should have a basic firewall, like XP or any Linux distro. But to ask a software developer to focus a ton of time on making me a bullet-proof firewall instead of making the OS more stable just doesn't make sense. As stated in the article, there's only so much development time and then you have to get your product out the door or you're going to have some pissed off users. I would want (in the case of OSes) the company to spend the majority of their time making the OS stable, and a little bit of firewall is nice. But I would much rather use another means of securing my network instead of using 2,000 personal firewalls.
    • Re:All in One Box (Score:3, Interesting)

      by blair1q ( 305137 )
      The answer is to simplify.

      Firewalls work because they enforce a single point of entry with a single method of entry: none.

      However, once you start asking for "features" like password-based logins, tunnelling, VPN, port forwarding, etc., then you increase the complexity, and therefore the likelihood that a human being will make a mistake and leave an invisible door open, or at least un-double-bolted.

      There are three kinds of mistakes that can be made:
      1. Forgetting to secure something in the long list of thin
    • Re:All in One Box (Score:3, Insightful)

      by SCHecklerX ( 229973 )
      'personal firewalls' are the wrong solution. The proper solution is to not run unnecessary services out of the box in the first place. Really, NONE. If a user needs to run a particular service, then they should know how to enable it and how to secure it. But to run things as part of a default install is silly. It's bad enough in the windows world that netbios is always-on (RPC vulns anybody?).
      • Personal firewalls do provide an important service that most don't appreciate, a personal level of egress filtering, based on applications rather than packets or ports.

        This sort of thing would be valuable even on more secure OS's like Linux or BSD. I'm not sure if any are available, but I know of none installed or enabled by default.
  • by sdjunky ( 586961 ) on Tuesday April 06, 2004 @03:37PM (#8783548)
    He also says that what makes security hard is that it's a 'negative deliverable.'"

    I'm certain there are countless flaws in this idea. But hey, you don't post to Slashdot without some risk of being shown what a moron you are, right?

    How about having DSL/Cable companies give an incentive to customers whose computers do not become infected during the blitz of mass email worms and trojans. Something ranging from a few bucks off of your ISP bill to free software. Some kind of incentive for NOT getting infected besides the fact that you don't have anything on your computer.

    It would benefit them in that it lowers their costs and increases their reliability if hundreds to thousands of their customers aren't sending DoS traffic, etc.

    Of course, there are issues such as privacy implications (how would they know you're infected or not) to hardware costs for the ISP.
    • How about having DSL/Cable companies give an incentive to customers whose computers do not become infected during the blitz of mass email worms and trojans.

      Or how about making the ones who _do_ get infected pay an extra fee? After all, it's more fun to punish the people who cause damage than to reward those who don't.

      It would benefit them in that it lowers their costs and increases their reliability if hundreds to thousands of their customers aren't sending DoS traffic, etc.

      Well, if it's against their ToS, th
      • Or how about making the ones who _do_ get infected pay an extra fee? After all, it's more fun to punish the people who cause damage than to reward those who don't.

        Only problem with punishing is that you lose customers; by rewarding the good ones you'll gain customers.
      • Or how about making the ones who _do_ get infected pay an extra fee? After all, it's more fun to punish the people who cause damage than to reward those who don't.

        Or they can kinda do what Comcast does with their cable internet/cable tv. Give a $10 credit for use of both.
        Just charge $15 extra each month and give it back for those who don't get a virus.
    • How about cable and DSL providers simply give away a $29.00 SMC Barricade with every connection, avoiding 90% of the network-crippling viruses, and then give away one of the free virus scan programs? It's a tiny step that would cost almost nothing to them and make a huge difference to their network manageability.

      Problem is that many times the "software" that comes with your DSL and cable modem is riddled with spyware... (Comcast's certainly is)

      the cost of a HARDWARE front line NAT box that has all incomi
  • My stance is that you're essentially playing baseball in your neighbor's yard. He won't change the way you play the game, or change the rules necessarily, but he sure is going to limit how far you can hit the ball. Like the Green Monster at Fenway.
  • I think firewalls, or more precisely NATs, have their place in addition to patching your system.

    I think it would be irresponsible of a network/system administrator NOT to keep their systems up to date with the latest patches and fixes, along with using SSH and similar tools.

    But at the same time I believe in having a firewall, though I do agree it will not solve all of your problems.

    I don't believe in just patching your systems. I work at a top west coast university, and the academic computing department's a
    • by psycho_tinman ( 313601 ) on Tuesday April 06, 2004 @03:49PM (#8783713) Journal

      In my experience, there are basically two things that are *MOST* commonly seen in academic networks: either internal or external parties trying to take advantage of (and misuse) the massive bandwidth that campuses have available, or someone trying to discover and manipulate potentially sensitive documents (such as grades).

      I think firewalls have their place, you're right. But having been at the receiving end of a rather draconian installation/firewalling policy for no apparent reason other than reducing work for the systems operators (and increasing work for students and supervisors in general), I'm thinking that there should at least be a set of carefully monitored, but open, machines for people to just mess around with. It's a campus, a seat of learning. Sometimes, when you're trying to learn something, things break. Do you want to be too worried about breaking a piece of "mandated" software and risking getting your ass chewed, instead of experimenting?

      Campuses have different security requirements and needs from commercial outfits, IMHO. Sometimes, administrators just don't understand that and try to implement the same policies willy nilly. Security isn't just about procedures and blanket firewalling.

      • Mod parent up. Most of the networking people who now implement policies that reduce their workload but cripple students' ability to explore gained their skills from similar exploration years ago.

      • Well, I probably should have been more specific in what I wrote. I was in a hurry to eat lunch (free Chinese food from the Windows server admins).

        I believe in an open academic network for the students, faculty and researchers.

        But for administrative computing, where I work, which does all the data processing, there is no reason for an open network.

        The funny thing is that the major research projects we have on campus have erected firewalls to protect themselves. And basically have told academic computing to g
      • Schiller does a good job of explaining that a lot of the stuff he's talking about is particular to his campus, which is, after all, atypical.

        I teach at a community college, which is different from MIT in many ways :-) One big difference is that we have a lot less funding. A result of this is that we have some security problems that happen simply because there aren't enough tech people to manage the number of machines we have. The figure I've heard bandied about is that if we were a major corporation, the r

  • by foosballhound ( 769065 ) on Tuesday April 06, 2004 @03:39PM (#8783579)
    >> You must install patches.

    In the "real world", when there is a security threat, such as a gas leak, you call the repair person, who fixes it.

    This is the equivalent of "install patches".

    Note that there is a level of confidence in calling the repair person: that they won't paste ads all over your living room, or install a wire-tap on your phone line, or a spycam in your bedroom.

    Unfortunately, in the computer world, all too often the "patches" are used as trojans.

    They change user settings, put in spyware, break working code, etc.

    So, people are hesitant to apply patches, with good reason.

    • by Entropius ( 188861 ) on Tuesday April 06, 2004 @03:53PM (#8783757)
      I don't think anyone objects to installing patches. What I, and others, object to is being railroaded into other things while I install them. If I own a house with a natural gas system, I don't want to sign a contract that says "you must call our technicians to fix any problems with your gas"--especially if I happen to know how to fix such things myself, or know someone else who does.

      This is why the OSS model works better for security. I *can* run urpmi --update and trust that the results will be what I want. I can also look under the hood at exactly what gets updated and how. Or, I can download individual packages... or download things and compile them from source... or, if I want and have the skill and time, I can fix things myself.

      Now, simply because there are alternatives, there is competitive pressure on the people who make autoupdaters to make them efficient, effective, and transparent--because, otherwise, people will stop using them.

  • Re: (Score:2, Insightful)

    Comment removed based on user account deletion
  • Yes, installing patches does help security, but what else creates more bugs and holes? Patches. I think the key here is that you want your patch to make fewer holes than your code originally had. At least this way, no one knows where they are right off the bat.
    • More important: what happens when the patch servers are violated? Attacking those servers directly or performing man-in-the-middle attacks against them would become extremely useful. If everybody is allowing their computers to automatically install said updates, it gets ugly. Auto-updates do make things better, but they are not a panacea by any means, and they provide a method of infection. Granted, security in an open college setting is very much different than the server world I'm used to setting up (where we have c
      • More important: what happens when the patch servers are violated? Attacking those servers directly or performing man-in-the-middle attacks against them would become extremely useful. If everybody is allowing their computers to automatically install said updates, it gets ugly.

        If your autoupdater checks package signatures and the private signature keys are kept on machines that are only connected to the outside world via SneakerNet, MitM and server compromises only directly act as DoS attacks. Now, maybe an at

        • The problem with the signature is it doesn't change and has high value to compromise. Granted, I agree that test boxes and a local patch server are the only way to go in production. I would disagree that the patch servers are not as effective a target as a worm. Worms are noisy; people know about them; they set off all sorts of security apparatus. Now with a patch server you could even be selective as to who gets the trojans. By definition it would be "wanted" traffic, so the alarms won't go off. Even wit
          • But my point was that the trojans won't get installed because the signatures won't validate. The private keys are a high value target... that's why you never even put a NIC in the box holding the private keys. Sure, it's a pain in the butt to move packages around via sneakernet, but it forces someone to have physical access to the machine in order to compromise the private keys.
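            A minimal sketch of what "checks package signatures" means in practice, using Java's standard java.security API; the file layout, key encoding, and the SHA256withRSA choice are illustrative assumptions, not a description of any particular updater:

                // Verify a detached RSA signature over a package file. A package
                // tampered with in transit (MitM, compromised mirror) fails the
                // check and isn't installed, degrading the attack to a DoS.
                import java.nio.file.*;
                import java.security.*;
                import java.security.spec.X509EncodedKeySpec;

                public class PackageVerifier {
                    public static boolean verify(Path pkgFile, Path sigFile,
                            byte[] publicKeyDer) throws Exception {
                        PublicKey pub = KeyFactory.getInstance("RSA")
                                .generatePublic(new X509EncodedKeySpec(publicKeyDer));
                        Signature sig = Signature.getInstance("SHA256withRSA");
                        sig.initVerify(pub);                     // only the PUBLIC key ships
                        sig.update(Files.readAllBytes(pkgFile)); // hash the package contents
                        return sig.verify(Files.readAllBytes(sigFile));
                    }
                }

            The matching private key never has to touch a networked machine, which is the point about keeping it reachable only by SneakerNet.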

              • You're missing the point: those private keys are high value, making brute-forcing them worthwhile, as they don't change. You can play at sneakernet to the high holy signing box all you want, but it doesn't stop the fact that you can remake a private key given computing power and time, since everybody knows the public key. Yes, auto-update is useful, but it's not the panacea that people think it is.
                • Umm... private keys do change... it's idiotic not to set expiration dates on private keys. Package updates can change the public key used to verify packages on a regular basis.

                  Also, there isn't enough energy in the known universe to perform 2**2048 electron transitions or spin flips, so how do you propose an attacker keep track of state while brute-forcing a 4096-bit RSA key?

                  Now, there are known attacks that are much much much more efficient than brute forcing, but it will still take you millions of year
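                  Back-of-envelope numbers behind the energy claim, using the Landauer limit (kT ln 2 joules to flip one bit at temperature T); an order-of-magnitude sketch under idealized assumptions, not a precise physical bound:

                      // Minimum thermodynamic energy to count through 2^2048 states at 300 K.
                      public class BruteForceEnergy {
                          public static void main(String[] args) {
                              double kBoltzmann = 1.380649e-23;                 // J/K
                              double perFlipJ = kBoltzmann * 300 * Math.log(2); // ~2.9e-21 J
                              // 2^2048 overflows a double, so work in log10.
                              double log10Flips  = 2048 * Math.log10(2);        // ~616.5
                              double log10Energy = log10Flips + Math.log10(perFlipJ);
                              System.out.printf("Counting 2^2048 states: ~10^%.0f J%n",
                                                log10Energy);
                              // The observable universe's mass-energy is very roughly
                              // 10^70 J, so the count falls short by ~500 orders of magnitude.
                          }
                      }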

  • From the Article (Score:4, Insightful)

    by RAMMS+EIN ( 578166 ) on Tuesday April 06, 2004 @03:43PM (#8783639) Homepage Journal
    ``JS: The reason it doesn't crash all that often is because system software developers took some time and effort to make that the case. If they would take the time and effort to make it be secure, it would be secure.''

    No. More secure, but not secure. For one thing, things will be overlooked. For another, there will always be things that were not known to be security holes at the time, but that will later turn out to be such.

    ``JS: I think Linux is much more secure than a lot of the other stuff that's out there, because so many people look at the source code--not everyone looks at it, but enough people do, so that problems get fixed earlier, rather than later.''

    Many people look at the sources, but do they find the vulnerabilities? See also above.

    In short, nothing is going to give you guaranteed security. Having said that, crackers will only go so far to break a system, so absolute security isn't even required. This makes any security measure useful, including firewalls (which JS argues against).

    As a closing remark, despite these minor points, I found the article a very good read; JS seems to have his heart in the right place. Heh, it makes me frown every time people say "security" and mean "restrictions" (see also MicroSoft and Trusted Computing).
  • I just HOPE (Score:4, Funny)

    by Prince Vegeta SSJ4 ( 718736 ) on Tuesday April 06, 2004 @03:46PM (#8783672)
    they make the Girls Dorm open source
  • by Entropius ( 188861 ) on Tuesday April 06, 2004 @03:46PM (#8783680)
    I attend the University of Alabama in Huntsville, an engineering/research institution with enrollment around 15k. The Network Services people around here aren't really concerned about the value of openness to academia; in fact, most of their security is directed inward, against the students who have to use the machines.

    For instance, the "start" button on every lab computer has been disabled--people only have access to the icons on the desktop. Furthermore, right-click context menus have been disabled.

    On some public computers, even access to the address bar in IE is disabled--all you can do is follow the links from the homepage in IE.

    When I took a Mathematica class in the physics lab, we used a heavily neutered version of Windows NT, with file permissions set unusably tight. Browsers would crash on startup because they didn't have write access to their cache files, virtual memory was disabled (!), and the like.

    Network Services also has banned the use of BitTorrent on campus, causing consternation among people wanting to download contraband like, uh, Mandrake images.

    This is the same campus where average packet loss on ResNet is 20-30%. Students play games over dialup because it's faster and more stable than ResNet.
    • I attend the University of Alabama in Huntsville, an engineering/research institution with enrollment around 15k. The Network Services people around here aren't really concerned about the value of openness to academia; in fact, most of their security is directed inward, against the students who have to use the machines.

      Wow, sounds exactly the opposite to UNLV. I remember one department had a few NT lab machines that students often remotely accessed and filled the Desktop folders with shortcuts... made

    • by SpaFF ( 18764 ) on Tuesday April 06, 2004 @03:59PM (#8783827) Homepage
      I attend the University of Alabama in Tuscaloosa. It's funny that two campuses in the same University system would take different approaches to security.

      Here at UA, everyone gets a real IP address: there is no NAT. There is a "traffic shaper" on resnet which limits upload speeds and blocks incoming connections on some of the lower service ports (80, 25, etc). Central computing blocks incoming connections to port 25 except for mailservers, but that is just to prevent open-relay spam. Other than that, there is no firewall.

      Each college has its own labs. The arts and sciences labs are locked down one way, the engineering labs another way, c&ba another way, etc. In most cases students can't copy files to the hard drive or fiddle with the control panel, but other than that there is no real "lock down".

      I work for one of the colleges on campus and we have been trying to get a firewall for our labs and faculty for years, but central computing won't allow it. They want the network to be open, not for academics' sake, but so that they can keep tabs on what everyone is doing. They think that if we put up a firewall it will keep THEM out too.

    • That is all well and good, BUT bypassing that security is possible. There is an optimal security level and IMO it is lower than that. The optimal security level for an academic or personal system should be the electronic equivalent of a no trespassing sign and an 8-foot chain-link or wood fence.

      i.e. enough to keep honest people honest and make it difficult enough for the average criminal to move on to the next house.
    • I used to teach in the School of Business at Florida State University, my wife taught in Education at FSU.

      The School of Education had their lab computers locked down so hard, you had to login as a certain user to use the scanner, then logoff and login as a different user to use Photoshop. This is the way it was for almost every application. The lab assistant had to do the login for you. Many things were broken as in the above posting. This was all to keep the lab assistant from having to fix so many "bro
    • I attend Northern Michigan University. We have a campus resnet that has live IPs and DNS names My_Tower_for_example [nmu.edu], and if you go look, port 80 is open, as are 8080 and 22, but port 21 and the Samba and NFS ports are closed. They have also blocked ping packets on campus, so I would ping Google for you but I can't; it would probably be in the neighborhood of 500ms+, which is just a little bit higher than one would expect for a college campus. Some of the buildings on campus are behind NAT as well. So I
  • ... but what would be wrong with using a security flaw to send a virus or worm that fixes said flaw? It would be extremely cheap to implement, and nearly transparent to the end user.

    I'd just suggest that the user's computer serve the white-hat worm for a day or two (kind of like BitTorrent), and then automatically delete it.

    Is that a bad idea?

    • That's already been done, and the press had a field day with it, saying how bad it was that a virus could go in and repair a hole that was created by another virus and then delete itself. The end result is that it dies out faster because it deletes itself. If it were to spread itself repeatedly, and stay in action to constantly repair whatever it is that keeps getting changed via other malware/viruses/end users, then it might be a good thing. But where's the percentage in this? If it stays, you get some yahoo who w
    • Comment removed (Score:4, Interesting)

      by account_deleted ( 4530225 ) on Tuesday April 06, 2004 @04:31PM (#8784268)
      Comment removed based on user account deletion
  • by ChiralSoftware ( 743411 ) <info@chiralsoftware.net> on Tuesday April 06, 2004 @04:11PM (#8784013) Homepage
    "Security is about patches." That statement implies the belief that security flaws are inevitable, an inherent part of having software. This simply isn't true. We should not accept such thinking. If a product doesn't have security holes the day it is released, it still won't have security holes a thousand years from now, patches or not. The question is, how do we ship products without holes? The reasons we have security holes in products are not because developers are stupid or careless, or because the business side of the company wants to ship the product now. No, the reason we have holes is because we're still using horrible software development tools which make security problems almost inevitable. Humans just can't think like C compilers and if we write a long enough program in plain old C, we end up with buffer overflows and lack of bounds checking on things. If we used safer tools like Java, which don't have buffers and which store data in structures which know their own size (collections), the vast majority of vulnerabilities would never even be created. If a user sends malicious input to a Java process, we know that no matter how broken the Java is, that malicious input can't stomp on memory and be executed, no matter what, because the JVM and the bytecode verifier don't allow it to. That is the kind of assurance that software should have.

    It is always possible to create security problems at the design level, like forgetting to check an account balance before allowing a withdrawal in bank software, but humans are very good at thinking in those ways, and those kinds of problems are rare.

    ---------
    Create a WAP server [chiralsoftware.net]
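    A tiny illustration of the bounds-checking point made above; the buffer sizes are arbitrary:

        // The same over-long copy that silently smashes adjacent memory in C is
        // stopped by the JVM at the first out-of-range index.
        public class BoundsDemo {
            public static void main(String[] args) {
                byte[] buffer = new byte[16];
                byte[] input  = new byte[64];   // "attacker" sends 64 bytes
                try {
                    for (int i = 0; i < input.length; i++) {
                        buffer[i] = input[i];   // the C version overflows here
                    }
                } catch (ArrayIndexOutOfBoundsException e) {
                    // A recoverable error instead of a classic buffer overflow.
                    System.out.println("Rejected over-long input: " + e.getMessage());
                }
            }
        }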

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Tuesday April 06, 2004 @04:11PM (#8784017)
    Comment removed based on user account deletion
  • As long as you set up your Active Directory forest correctly, you can leave certain areas open and secure others. I cannot believe they didn't immediately think of an MS solution to a security problem.
  • JS: There is one good technique, and it's the only one that's effective. No firewall, no port blocking--none of that will work. The solution is that you must install patches.

    I definitely need to send that to the net admins here at school. I can surf the web, read e-mail, and use instant messaging. That's about it. Everything else is restricted on a dorm-to-dorm basis. So I can play games with people from my building, but my friends on the other side of campus are shit outta luck.
  • What a great article (Score:4, Interesting)

    by jkitchel ( 615599 ) <jacob_kitchel@[ ]mail.com ['hot' in gap]> on Tuesday April 06, 2004 @04:59PM (#8784712)

    Maybe I'm visiting the wrong web sites, but it's great to hear these things from someone who's been at the forefront of network administration from the beginning.

    S: So education is a part of this?
    JS: Education is a part of this, both for the people who own personal computers and work with the data and for the people running these systems.

    I can vouch for the end part of the article for sure, as I'm sure many Slashdot readers can. Right now I'm doing an Information Security Risk Assessment as part of a graduate level class that I'm taking. Fortunately, for the K-12 schools on which we perform these assessments we cover user education as part of an overall Information Security program. Also, it gives us the chance to see user education and awareness from their point of view, which helps us make the case for having user awareness training. A lot of end users don't realize that having a weak password is like giving away the key to your organization (or school in this case). I'll give you two guesses as to the biggest topic that we've discussed with the school corp. and the first one doesn't count ;)

    You would not believe how woefully inadequate schools are when it comes to an Information Security Program. If you have the opportunity to help a school out, do it. It will help you learn something, help the school better themselves, and better the community by protecting the little ones' information.

  • by karlm ( 158591 ) on Wednesday April 07, 2004 @04:15AM (#8790072) Homepage
    I was one of two network contacts for my fraternity. Basically, I volunteered for some minor network administration in the house so that we all got a free T1.

    In general, the MIT "firewalls are false security" mantra is a good thing, particularly at MIT where there is a high concentration of bright and inquisitive people. You can never count on the black- and grey-hats being on the other side of your firewall. You have to assume that the networks on both sides of your firewall are hostile. Each host must be a castle unto itself. This is simply a much more robust security model than "keep the bad guys over there".

    On the other hand, shortly before MS started covering IIS on WindowsUpdate, the house had a rash of IIS exploits and RPC exploits. I asked for advice about setting up an OpenBSD firewall to only allow outgoing connections from most machines (and knocking holes in the firewall for MIT Network Security's vulnerability scanners). The response I got was basically "If you have to ask, we won't help you. Just patch everything and it will be fine." They didn't seem to realize that a sophomore can't just run around the house pestering everyone to keep their machines up to date. Basically, my powers were limited to waiting for problems and then finding the offender and saying "MIT is threatening to cut the entire house off from the Internet in two hours unless you do what I say now!". Sure, I sent out reminders and heads-up emails, but when they didn't listen and got compromised I would invariably be the one to do their OS reinstall, because if I didn't, half of them would just put the compromised machine back online without fixing anything.

    This last year, MIT actually stepped out of the ivory tower and did some port-based filtering (firewalling) when tons of students came back from summer to take their computers out of storage. Many of the students would get compromised while updating, even if they patched as soon as they connected the machine to the Internet.

    I think they also permanently firewall off their MS Windows-Athena computer cluster. (side note: the internal code name for the project to modify Windows to work with the rest of the Athena network was Pismere -- Latin for horse piss)

    I also pestered MIT for about a month after RedHat released the ptrace bug kernel fix and they hadn't pushed the fix out to the official RedHat-Athena packages. Their position was that local root exploits weren't a problem, since MIT gives the root password for most of the machines to students who ask. I pointed out that many departments and individual students set up machines so that absolutely anyone with an Athena account could SSH in as a normal user. There had been no warning emailed out that RedHat-Athena machines were still vulnerable to the ptrace local root exploit. Most of these machines' owners assumed that the problem had been taken care of by RedHat-Athena's daily automatic updates. It was by sheer luck that I looked at the file modification date on my friend's kernel and realized the modification date was long before the ptrace vulnerability had been discovered. After all, I had already checked that it was up to date on all of the patches MIT put out for RedHat-Athena.

    In short, MIT network security policy is a strange patchwork of opinions.
