Tear Down the Firewall

lousyd writes "'What's the best firewall for servers?' asked one Slashdot poster. 'Give up the firewall,' answers Security Pipeline columnist Stuart Berman. Through creatively separating server functions into different, isolated servers, and assigning them to a three-tiered system of security levels, his company has almost completely eliminated the need for (and headache of) network firewalls. 'Taking that crutch away has forced us to rethink our security model,' Berman says. The cost of the added servers is greatly minimized by making them virtual servers on the same machine, using Xen. With the new security-enhanced XenSE, this might become easier and more practical. What has you chained to your firewall?"
  • Band-aid (Score:2, Informative)

    by egoriot ( 853407 )
    Firewalls are such a band-aid solution to the problem of unknown processes running on your own computers. The right way to solve the problem of rejecting incoming and outgoing requests is to make it easy to see which processes are accepting and making connections on which ports.
    • Re:Band-aid (Score:2, Insightful)

      by Badanov ( 518690 )
      Firewalls are such a band-aid solution to the problem of unknown processes running on your own computers. The right way to solve the problem of rejecting incoming and outgoing requests is to make it easy to see which processes are accepting and making connections on which ports.

      Which is what netstat -at and firewalls do...

    • How is that not already easy?
      netstat -lp --inet --numeric-ports
      I'm sure there's a Windows equivalent if you use that.
      • Re:Of course but (Score:5, Informative)

        by NeoThermic ( 732100 ) on Saturday July 09, 2005 @03:13PM (#13021906) Homepage Journal
        And for windows:

        netstat -v -o -n -b -a

        (you can omit -v for a quicker display)

        NeoThermic
      • netstat -lp --inet --numeric-ports

        Actually, it would be more along the lines of "lsof -i -n"

        Just knowing the ports won't tell you which process is responsible for them.

        I'm sure there's a Windows equivalent if you use that

        Sysinternals [sysinternals.com] has some very nifty freeware that gives this info and more.

        Still, I'd rather keep the firewall. I like granularity. What if I want to limit access to certain source IPs? Limiting them at the application level might still leave you open to buffer-overrun kinds of vulnerabilities...
    • Re:Band-aid (Score:3, Interesting)

      by matth ( 22742 )
      Ummm, yeah, I know what services are running. Some (like SSH) I only want to be reachable from certain IP addresses. Some (like on Windows) are needed for the machine to talk to Domain Controllers (but you certainly don't want joe-smith talking to your machine on port 139). So yeah, there are a lot of reasons to use firewalls!
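      A minimal sketch of that kind of policy with Linux iptables (the trusted subnet is just an example, not from the thread):

        # Allow SSH only from the admin subnet; drop it from everywhere else
        iptables -A INPUT -p tcp --dport 22 -s 192.168.10.0/24 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j DROP
        # Same idea for NetBIOS session traffic on a domain segment
        iptables -A INPUT -p tcp --dport 139 -s 192.168.10.0/24 -j ACCEPT
        iptables -A INPUT -p tcp --dport 139 -j DROP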
    • Re:Band-aid (Score:4, Insightful)

      by Ingolfke ( 515826 ) on Saturday July 09, 2005 @03:21PM (#13021949) Journal
      You're looking at this from a server perspective. It's quite possible you don't want certain traffic on your NETWORK. I don't want people scanning my networks.
    • Firewalls are such a band-aid solution to the problem of unknown processes running on your own computers.

      Firewalls have nothing to do with processes running on computers. They are for filtering network packets.

      The right way to solve the problem of rejecting incoming and outgoing requests is to make it easy to see which processes are accepting and making connections on which ports.

      No, this is the right way of identifying which processes are sending and receiving information, not whether or not those processes...

  • Nice logic, but (Score:5, Insightful)

    by gcnaddict ( 841664 ) on Saturday July 09, 2005 @03:01PM (#13021847)
    obviously, if you can rethink your security model AND keep up a well-maintained firewall, you will likely be better off :) How hard can it be to do BOTH, not one or the other?
    • Re:Nice logic, but (Score:4, Insightful)

      by m50d ( 797211 ) on Saturday July 09, 2005 @03:41PM (#13022048) Homepage Journal
      If you have a good security model, the only processes listening will be the ones that need to be accessible. At that point, what good would a firewall do?
      • Two checks are better than one; gives more chance to catch a compromise.
        • Re:Nice logic, but (Score:3, Insightful)

          by timeOday ( 582209 )
          Not always - one strong defense might be better than the same defense plus a weak one. There's a diluting effect, both technical (because defenses can create more vulnerabilities) but more importantly the human factor, because people only have so much time and attention to devote to these things, to learning them and keeping them up to date.

          Working in a bureaucracy, I've found that new rules are either ignored or obeyed at the expense of attention to old ones. Time, attention, and willingness to comply...

      • Re:Nice logic, but (Score:5, Insightful)

        by Hatta ( 162192 ) on Saturday July 09, 2005 @04:42PM (#13022362) Journal
        If you have a good security model, the only processes listening will be the ones that need to be accessible. At that point, what good would a firewall do?

        Well, you could control who the processes can listen to. There's no reason an internal web server should be visible to the entire Internet. Or even for publicly accessible sites, if all your customers are in the US it may make good sense to deny connections from, say, Romania.
      • insightful?

        OK, first issue. If you run any *significant* services, you have ports that need to be accessible by your machines, but nobody else's. The best example is database servers. My database runs on a separate machine. My webservers need to access it, but NOBODY else does. The database's access control is not enough, I don't even want anyone outside my network to see those ports, let alone try to muck with them.

        Second issue. There are always new exploits coming up for the software you *do* have to expose...
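        As a sketch, on the database box itself (or in an ACL in front of it), that policy might look like this with iptables; the web-server addresses and the MySQL port are illustrative, not from the post:

          # Only the web servers may reach the database port; everyone else is dropped
          iptables -A INPUT -p tcp --dport 3306 -s 10.0.1.10 -j ACCEPT
          iptables -A INPUT -p tcp --dport 3306 -s 10.0.1.11 -j ACCEPT
          iptables -A INPUT -p tcp --dport 3306 -j DROP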
    • Yeah ... this does seem like a solution looking for a problem, doesn't it? Kind of like deciding that, well, if I just start eating right and exercising regularly, I won't need my health insurance anymore.
  • Sigh... (Score:4, Interesting)

    by EQ ( 28372 ) on Saturday July 09, 2005 @03:02PM (#13021853) Homepage Journal
    Let me try selling THIS to my boss, with the Cisco guys whispering sweet nothings in his ear about PIX firewalls and all this wonderful "solution in a box".

    Or is this another Flavor of the Month event?
  • by maxrate ( 886773 ) on Saturday July 09, 2005 @03:02PM (#13021855)
    My servers (mostly W2K and Red Hat) are not behind a physical firewall. I restrict what can be done by blocking ports that aren't needed, and I keep the boxes up-to-date.

    Realistically, I'm not sure there is much more I can really do other than logging in and checking things out whenever I can (which is often).

    It's worked well for me (so far), and I've had servers directly on the Internet since 1999. I got hit with Code Red on a server once.

  • "Simple" ACLS (Score:5, Interesting)

    by wcdw ( 179126 ) on Saturday July 09, 2005 @03:04PM (#13021863) Homepage
    By defining simple ACLs, we further isolate our backend servers.

    Personally, I've never found ACLs as easy (or as flexible) as other firewall solutions. But in any event, ACLs are firewalls, call them what you will....
    • Sure, but ACLs come free with your router. Why buy another piece of hardware? Firewalls are another network component that can break and another device you have to train your staff to use.

      If you know what's behind your firewall (and if you're running a bunch of web servers, you better know what's there) then there's no need for a firewall.
    • And router ACLs in most (all?) cases aren't stateful, so you need to allow access in both directions as opposed to one direction with state retained.

      If you can't set the source port for an application, and only the destination port, you might be leaving much bigger holes than you would by throwing a firewall into the mix between subnets.
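      For comparison, a stateful netfilter setup only has to spell out the initiating direction, because connection tracking admits the replies (a sketch; the server address is hypothetical):

        # Replies to tracked connections come back automatically...
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # ...so only the initiating direction needs an explicit rule
        iptables -A FORWARD -p tcp -d 10.0.2.5 --dport 80 -m state --state NEW -j ACCEPT
        iptables -P FORWARD DROP

      A stateless router ACL would instead have to permit the high-port return traffic explicitly in the other direction.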
  • by cerberus4696 ( 765520 ) on Saturday July 09, 2005 @03:05PM (#13021869)
    It's one thing to give up the firewall if all you have behind it is servers. It's quite another to give it up if you're protecting user workstations. While it's certainly possible to carefully arrange your external services such that they are secure, it's really only possible if you have absolute control over every single device behind the firewall.

    • The article discusses that. They deliberately leave the workstations exposed to the Net. The SERVERS are protected, and application-level firewalls are also used. The advantage is that they don't have users continually frustrated at being unable to access various services on the Net due to the firewall blocking everything, and their admins have less work to do opening and closing ports for end users' special purposes, which presumably results in fewer configuration errors and fewer security holes.

      The overall effect...
  • What is XenSE? (Score:5, Informative)

    by Lemming Mark ( 849014 ) on Saturday July 09, 2005 @03:06PM (#13021871) Homepage
    I was present at the XenSE meeting (that's me at the bottom of the list ;-) I'd like to clarify exactly what XenSE is and what it isn't:

    What XenSE isn't:
    * it's not Xen's "security issues team". It's not for patching exploits, etc.

    What XenSE is:
    * the "virtual machine monitor" equivalent of SELinux
    * mandatory access control for virtual machines
    - e.g. you might enforce some sort of information flow between virtual machines (e.g. "Top Secret" only talks to other "Top Secret")
    * enforced from the very lowest levels of the system, so should be very trustworthy

    The goal is that the complete XenSE system achieve a higher security rating than currently possible with SELinux alone. The initial prototype of the mandatory access controls has been supplied by IBM and is in the 3.0-testing tree right now. Fully achieving the project's security goals will take considerably longer (Xen 4.0 timeframe).
  • I have some Debian Linux desktops and NetBSD/FreeBSD servers on my network, along with a 133 MHz Windows 2000 machine with 32 MB of RAM for compiling my source in MinGW. (I didn't want to put Windows 2000 on my 300 MHz PII machine; that is for my FreeBSD server.) I can tell you that I need to keep my firewall. As a lazy admin, I can't worry about the adverse effects of not keeping up on the latest vulnerabilities on SecurityFocus. And no one should run a regular desktop machine (even Linux or *BSD) directly...
  • In general, firewalls can be compared to a tarpaulin stretched on four sticks above a house. It has an effect only if:
    • the roof is leaky
    • you want to make your yard free of rain
    • you own a number of houses, and want to ensure they will be free of rain even if the houses' caretakers are idiots

    In other words, firewalls are of any use only if:

    • you're defending a grossly insecure system (Windows?)
    • you have unprotected communication on a network
    • you want to enforce a policy

    The tarp does nothing for a sturdy roof...

    • by StupidKatz ( 467476 ) on Saturday July 09, 2005 @03:15PM (#13021921)
      I'm running all kinds of crud on the intranet that I don't want exposed to the Internet, such as NetBIOS on Windows and some permissive SAMBA shares on assorted servers.

      So, the services are running so that I can use them from the inside (with any device on the inside, without mucking with ACLs, additional equipment aside from a switch, etc.) without having the services exposed to the outside.

      Now, if you're running services which aren't being used by legitimate users at all... ;)
      • I'm running all kinds of crud on the intranet that I don't want exposed to the Internet

        I thought this would be done by NAT, with an internal network of nonroutable addresses. While a firewall box may also do NAT, they are quite separate functions.
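        For what it's worth, that arrangement is roughly a one-liner with iptables: the gateway masquerades an RFC 1918 internal net (interface and subnet here are examples):

          # Rewrite outbound traffic from the private LAN to the gateway's public address
          iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j MASQUERADE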


      • The article deals with that:

        "The servers and their respective applications sit in their own DMZ, protected by an Application-layer firewall. We organize servers into three tiers: The first tier consists of presentation servers such as Web and e-mail servers--these are the only servers accessible to end users. The second tier, made up of application and middleware servers, is in turn only accessible to the presentation servers. Finally, the third tier, consisting of the database servers, is only accessible
    • You argued against yourself.

      you're defending a grossly insecure system (Windows?) - Adequately securing a system and then replicating that security policy across a disparate group of servers all serving different functions is not an easy task. On top of that, systems and software have bugs; Windows, Linux, BSD, *nix all have exploits released at some point in time. With a firewall you can start by reducing your risk by limiting the traffic ON YOUR NETWORK, and may be able to do some packet inspection on top of that...
      • For unprotected communication: of course, in many cases the cost of encrypting everything is huge. That's why I said "if", not including any chastising. scp can clog a 300 MHz box on a 100 Mbit connection; in many cases you can't afford to encrypt that -- and this is exactly what firewalls are for.

        For enforcing a policy: If you control all the hosts, why would you care about people scanning you? An unsuccessful scan is an uninteresting thing due to the very fact of being scanned, and a successful one is o
    • by That's Unpossible! ( 722232 ) * on Saturday July 09, 2005 @03:39PM (#13022041)
      There is no way to attack bare kernel (ok, ping of death)

      OK, so then why did you mention that point if you are going to subsequently shoot it down with one example?

      firewalls do nothing to protect services which are already visible to the network

      Yes, higher-end firewalls can also scan the traffic on those open ports looking for exploits (a la IDS firewalls).

      And if you want to use the firewall to block off unneeded services, why in the hell are you running them in the first place?

      Are you serious? I have tons of services running on various servers that I do not want made available to the public, yet need to be available to (a) the other servers behind the firewall, and (b) trusted users that connect over our VPN... which, incidentally, is another function of a good firewall.

      The article and your post are pure lunacy. It is not that hard to maintain a firewall, and as long as you plan your internal networking with the assumption that the firewall will not stop a really good hacker, it is just one more layer of security.
      • "higher-end firewalls can also scan the traffic on those open ports looking for exploits"

        And why? So you know there are exploits being run against you? And this helps how? Your goal is to prevent exploits from being SUCCESSFUL, not from being run against you, since they will be run anyway. Check your firewall logs long enough for a big enough company, you'll see every exploit there is. So what?

        "I have tons of services running on various servers that I do not want made available to the public, yet need to
    • the roof is leaky

      Sometimes (i.e., with Windows), you already know the roof leaks and can't do a hell of a lot about it. Sometimes (an ssh vulnerability, for example), the roof leaks in places you don't know, and you'll only find out when you go to look at those irreplaceable and now water-destroyed family photos you kept in a corner of the attic.


      you want to make your yard free of rain

      Yes, I occasionally throw summer parties (a (legit) visiting laptop connecting to my WAP). Though I have a wonderful u
    • by Ungrounded Lightning ( 62228 ) on Saturday July 09, 2005 @05:19PM (#13022555) Journal
      [...] firewalls are of any use only if: [your server farm has one of this set of problems]

      Beg to differ.

      Firewalls unload the server from spending cycles on filtering rules and memory on surviving DDoS attacks, just to name two functions.

      If the servers must do their own filtering, and you have enough load that you need more than one to get everything done, offloading the filtering to a separate machine means that you need fewer servers. The gain is more than linear, too: keeping multiple servers synchronized (especially those changing database state due to the transactions they serve) is an extra load, which becomes a lower fraction of the transaction cost when the server count is smaller.

      Separating the functions also means that the machines can be specialized for their work - with, for instance, hardware acceleration for attack detection on the firewall - drastically cutting the box count. Putting all the eggs in a single basket means the accelerators get less usage, since they're used for only a fraction of the machines' load. Meanwhile you need more accelerators, to put one on each machine - or you're stuck with using a GP machine to do the work, at much lower efficiency and a much higher box count.

      Accelerators may only be available for appliance firewall solutions, not for upgrading a machine optimized for database handling or other server tasks.

      If you have a license fee for the server software, having more servers means more licenses to buy. Another cost savings from specialization - this time a big one. If both the server and firewall software is licensed you have to have licenses for BOTH on ALL machines, rather than one or the other on each machine.

      If you need content filtering against specific identified attacks, you need a service from a specialist organization, to track new attacks as they arise and upgrade the filtering functions. You don't want an outside house tweaking the machines which contain your own proprietary data.

      Separate machines also mean separate software. The firewall software can be written by people focusing JUST on secure and efficient firewalling, the server software by people focusing on efficient transaction service. Do a combined box and your firewalling functionality is just one of a bundle of functions being handled by a software team - in the server and/or the supporting system. (You only have to look at Microsoft to see the level of security produced by the latter approach.)

      I could go on. But any one of the above points, by itself, shows an advantage for the separate firewall/server approach in a commercial-scale, commercial-grade service. Combine them all (and others I haven't mentioned) and the argument is compelling.
  • Why not have both? (Score:3, Interesting)

    by ravenspear ( 756059 ) on Saturday July 09, 2005 @03:09PM (#13021889)
    I agree that firewalls should not be implemented as a crutch in lieu of a good security model for your servers, but why not have that and a firewall? TFA makes a good point, but most sysadmins who have any experience with good security already know it: only run the services needed on the servers dedicated to those services.

    But it seems to me that rejecting all other traffic with a firewall is a good added measure of security that can only improve the overall security of your setup. It also makes you less visible to attackers and wastes their time.
    • Your point is completely correct.

      When you design a system, you place security in at as many levels you can. This is called 'Defense in Depth' and has been practised for many hundreds of years (castles were built upon the same principle).

      If you can, strip your system to the bare minimum and run both a network firewall and a local firewall; protocol analysis, IDS, and host-level solutions such as PaX all add up.

      It's good to work on the idea that no system is undefeatable, so arranging a few different security systems...
    • by Master of Transhuman ( 597628 ) on Saturday July 09, 2005 @04:35PM (#13022321) Homepage

      The article makes the point that it costs money and time to "reject all other traffic" because the end users often need to access things outside the system, new applications such as Skype also need to have new ports opened, and outside visitors need to connect to the network internally which leads to security risks as firewalls are administered.

      By treating EVERYBODY outside the server ring as a potential risk, you eliminate these problems and take a more proactive, paranoid approach to the security of the internal network rather than relying on perimeter security which is hard and expensive to do. At the same time, you make the network outside the server ring more useful to end users.

      I can see the point - I'd just like to see it TESTED against a good-quality pen-test using compromised workstations against the server ring to see if Layer-Three switches with ACLs and PKI authentication and application firewalls are sufficient to protect the servers against island-hopping attacks by a good hacker.
  • PCI, CISP, ISO 17799, and SAS-70 are just 4 reasons I have to have not one, but two firewalls. I have to say, I think it is overkill. Yet, I sleep well at night knowing that it should be much easier for someone to move on to another company's servers than to get through the layers of security put in place.

    The premise is simple. Multiple layers. Sure, you could probably build a box that is very difficult to get into, but do you really think anything is 100% safe? If somebody wants in, I have a belief that...
  • It's a rather sensationalist headline. He's not really ditching his firewall, he's replacing the one border firewall with multiple firewalls in the internal network, and is keeping the production environment isolated from the non-production (Office & Development) networks.

    He removed the firewall between the Production Environment and the Internet, and is replacing it with several firewalls on the internal network. I count four firewalls: one between the Web servers and the application server; a second between the application server and the DB server; a third between the production environment and the non-production environments; and he discusses using ACLs to isolate subnets, which is conceptually the same thing as a firewall.

    But that's not a very new concept, and even with his plan, it still seems like you'd be more secure if you added an external firewall as well.

    What's the harm in adding one more firewall and only allowing traffic on the HTTP port, HTTPS port and possibly VPN? It's cheap insurance just in case someone made a mistake and left some services running on one of the machines.
    • He's not really ditching his firewall, he's replacing the one border firewall with multiple firewalls in the internal network,

      Security consists of layers of protection. By removing his perimeter firewall, he is removing one layer of protection. Now, he can provide all the arguments that he wants, trying to justify the removal of the perimeter firewall. But the fact remains: he has removed one layer of protection, and has made his internal protection requirements more complex because of it.

    • His "new way of thinking" isn't new at all. Many large corporate networks are set up the same way - you have clients on one segment/group, servers on another, and Internet-accessable on another. You filter between the networks.

      Not sure how he can say they "gave up" the firewalls - if it's a router doing filtering or a special "application firewall" (whatever the difference is) it's still doing *firewalling* and thus still needs to be managed.

      He never really mentioned that they removed any firewalls, really...
    • This is news how? People have been using tiered network zones for years now.
    • by Master of Transhuman ( 597628 ) on Saturday July 09, 2005 @04:54PM (#13022410) Homepage

      The "harm" is described in the article:

      "Perimeter security was originally intended to allow us to operate with the confidence that our information and content wouldn't be stolen or otherwise abused. Instead, the firewall has slowed down application deployment, limiting our choice of applications and increasing our stress.

      To make matters worse, we constantly heard that something was safe because it was inside our network. Who thinks that the bad guys are outside the firewall and the good guys are in? A myriad of applications, from Web-based mail to IM to VoIP, can now tunnel through or bypass the firewall. At the same time, new organizational models embrace a variety of visitors, including contractors and partners, into our networks. Nevertheless, the perimeter is still seen as a defense that keeps out bad behavior. Taking that crutch away has forced us to rethink our security model."

      I can see the point. However, as always, YMMV. If you can't devote the resources to doing decent monitoring of your applications and servers, and keeping the workstations patched, then you might need a perimeter firewall.

      The point of the article is that a perimeter firewall - a "moat mentality" - leads to lax security on the internal network. And it's NOT "cheap insurance" because it requires much more maintenance to secure an entire perimeter of thousands of workstations AND still provide Net access to those systems (and visitors) than it does to secure an inner ring of a few hundred servers and to treat EVERYBODY outside that ring as a threat - including your own users.
  • Firewalls are still important in the entire security model. I do a lot of work on shared servers that host websites and have found a firewall can stop a lot of headaches. When some user's script gets compromised and a script kiddie goes to send out a DoS of some sort, the firewall can block it. I have found that the firewall is more important for egress monitoring in this type of market, and it is very valuable.

    While it is true people have the wrong image of a firewall, they are still very useful when...
  • Summary (Score:5, Funny)

    by Zarhan ( 415465 ) on Saturday July 09, 2005 @03:16PM (#13021925)
    This article shows that the guy is now realizing that you also need network design besides just putting a firewall at the border and hoping it magically makes everything OK. He's quoting "innovative" networking designs, like:

    - Segmenting your network into:
      - Workstations
      - Internal servers
      - Internal databases, etc. (accessed by servers)
      - DMZ
    - Setting up stringent ACLs to only permit specific traffic between segments.

    C'mon, this is pretty much elementary stuff. Any network admin should know to design his network like this, even in small companies where you have 2 workstations and a single server.

    Then he makes the claim that you don't need a firewall because only things accessible to the Internet (workstations and stuff in the DMZ, like your public website) are running secure OSs patched constantly. I guess they are running OpenBSD with default config, then... except there are mentions of "Active Directory", so I guess not.

    Only real "innovation" comes at the end: The article states that they are running some sort of IDS/IDP system in their network, presumbaly monitoring for any wormlike packets. This is nothing too interesting, anybody can set up Snort and have it running at your switch's monitor port. Only thing is that if it is running only as a logger, it cannot really react fast enough if one of your boxes gets infected with the latest worm from the completely unsecured Internet connection.

    If it is running in some sort of transparent bridging mode, where it blocks those packets too on detection, it is pretty much like any...you guessed it...FIREWALL.

    He DOES have a point on the fact that numerous applications require intelligent firewalls, the most basic case of course being active FTP. However, almost any commercial firewall (and Linux kernel iptables) supports numerous protocols. The most recent additions include SIP. P2P protocols are prominently missing so far, but I'm guessing that at least BitTorrent will be added soon (at least to Cisco IOS/PIX and Checkpoint).
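    To illustrate the FTP case: with iptables of that vintage, loading the protocol helper lets connection tracking follow the separate data connection that active FTP opens (a sketch, not a complete ruleset):

      # Teach conntrack to follow FTP's out-of-band data connections
      modprobe ip_conntrack_ftp
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -p tcp --dport 21 -m state --state NEW -j ACCEPT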

    Still, I wouldn't give too much credit to this article until he provides us with a detailed network diagram and states more specifically what the exact benefits are.
    • He DOES have a point on the fact that numerous applications require intelligent firewalls, the most basic case of course being active FTP.

      I would say Passive FTP is more difficult to firewall on the server than active. Passive puts the responsibility of accepting an incoming connection on an arbitrary high port on the server. Active puts it on the client.

      Now some FTP servers let you specify a range of passive ports to announce to the client, but that can break compatibility with some clients.
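      With vsftpd, for instance, pinning the announced passive range looks like this, after which the firewall only needs that range open (the range itself is arbitrary):

        # /etc/vsftpd.conf (excerpt): advertise a fixed passive-mode data-port range
        pasv_min_port=50000
        pasv_max_port=50100

        # ...and permit just that range on the server's firewall
        iptables -A INPUT -p tcp --dport 50000:50100 -j ACCEPT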
      • I would say Passive FTP is more difficult to firewall on the server than active. Passive puts the responsibility of accepting an incoming connection on an arbitrary high port on the server. Active puts it on the client.

        I was naturally talking about outbound connections... the usual case is that your workstations need to access services on the Internet, but the Internet only needs to get through to your DMZ.

        If running a stateful firewall also for incoming connections (instead of just an ACL), the servers can...
  • ...when you pry it from my cold, dead fingers.

    i do concede, though, that my environments are such that the internal networks (and users) *are* trustworthy.
  • But I certainly think it's the best practice. The principle is simple: only allow access to things that people should have access to. That way, if something is accidentally set up that could be compromised, it's not a danger, since people just can't get to it.

    It's no magic bullet for sure, the services behind the open parts of the firewall have to be secure or it does no good, but it restricts the possible places an attack can occur.
  • Seems overkill... (Score:3, Insightful)

    by MrDomino ( 799876 ) <mrdominoNO@SPAMgmail.com> on Saturday July 09, 2005 @03:20PM (#13021940) Homepage

    The post proposes a pretty novel solution (maintain separate hosts for each server), but it seems really inefficient. I mean, Xen as I understand it will run a full operating system in each of its virtual domains, including a separate kernel and whatever else the system needs running.

    Why not just work with chroot jails? They accomplish the same thing (keeping things isolated from dangerous interaction with the rest of the system) but without the ridiculous performance overhead of running entire, discrete systems for each service provided.
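    For illustration, a bare-bones jail might be built like this (the paths and daemon are hypothetical; its shared libraries and config have to be copied in too):

      # Build a minimal root for the daemon, then start it confined there
      mkdir -p /jail/httpd/bin /jail/httpd/lib /jail/httpd/etc
      cp /usr/sbin/httpd /jail/httpd/bin/
      # ...copy the required shared libraries (see ldd) and config files...
      chroot /jail/httpd /bin/httpd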

  • Defense in depth. (Score:5, Insightful)

    by !ramirez ( 106823 ) on Saturday July 09, 2005 @03:20PM (#13021942)
    This concept can largely be summed up as 'defense in depth'. You use multiple layers to defend that which you value the most.

    Saying 'I have secured my OS, I no longer need a firewall' is like saying 'I have an airbag, thus I do not need this seatbelt'. One complements the other.
  • by Danathar ( 267989 ) on Saturday July 09, 2005 @03:22PM (#13021958) Journal
    As always, the amount of security you deploy depends on the risk level you are willing to take and the amount of work/money you are willing to spend.

    At the organization I run, we have NO firewall because it is designed as an environment for the deployment of services (videoconferencing, etc.) and for users who need unrestricted network access to the outside world. The security policy is written so that the user is completely in charge of their system. If it becomes compromised and we find out about it... it's disconnected.

    Networks rarely are compromised but the edge devices ARE. With the exception of some vulnerabilities in routers of late, networks do what they are supposed to do.

    It's NOT network security... it should NOT be the job of the network to protect hosts from themselves. It's HOST security, and the people in charge of the HOSTS are responsible. "Not my fault," you say? Windows is insecure? It's precisely this mindset, which has insulated MS for so long and pushed the responsibility back onto the network admins, that has kept Microsoft (and OS vendors in general) and application developers from being serious about securing their systems and applications.

  • by j_kenpo ( 571930 ) on Saturday July 09, 2005 @03:24PM (#13021970)

    "Your not Stuart Berman, your really social engineering expert Kevin Mitnick, and you almost tricked everyone into taking down their firewalls".

    "And I would have gotten away with it if it wasn't for you nosey Slashdotters!"
  • I only run essential services - ssh, http, https, and secure imap. That's it. If you don't have any other services on the inet interfaces you don't need a firewall at all.
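    Easy enough to verify, too; on Linux, something like this lists every TCP listener, which should match that short list exactly:

      # Show all listening TCP sockets, numeric, no DNS lookups
      netstat -tln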
    • That is a rather bold statement. Have any evidence to back it up?

      I can think of a few instances where you would still be vulnerable without a firewall, like if there was an exploit discovered in the network stack of the OS.
  • As others have already said, "why not do both"?

    Without a firewall to block incoming random-port traffic, client machines are still vulnerable to zero-day open-port vulnerabilities. Granted, a software firewall SHOULD prevent this, but a second, independent firewall helps.
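    A minimal default-deny sketch of that incoming-random-port blocking, assuming a Linux client or border box with iptables:

      # Drop unsolicited inbound traffic; allow loopback and replies to what we initiated
      iptables -P INPUT DROP
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT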

    What this guy is doing is A Very Good Thing, but there's no need to turn off those external firewalls completely.

    My rating of the original article:
    Informative, but overstated.
  • by lheal ( 86013 ) <lheal1999NO@SPAMyahoo.com> on Saturday July 09, 2005 @03:29PM (#13021996) Journal
    As a previous poster said, why not do both?

    They've taken a nugget of insight, that the reliance on a firewall can make you sloppy, and built a whole mountain of security policy on it. Trouble is, that's upside down architecture.

    Good security is about building up as many layers as you can that are easier on you than on your attacker. The goal isn't to be impenetrable, it's to look like too much work so the attacker goes away.

    We have a firewall so that we CAN be a little sloppy inside if needed. It's the balance between security and usability. It doesn't mean you rely solely on the firewall. It means that the "firewall", which you should treat more like a window screen, is just another layer of defense.

    And when everyone else has a firewall, your unfirewalled network stands out like a house with no window screens.

    There is another big picture here, too. If everyone has a firewall, having one doesn't make you look like you've got something to hide. If only 1% of networks were protected, then your firewall makes you look suspicious.

    So thanks, but quit telling people they shouldn't use a firewall. Some of them might take your advice.
  • This is better? (Score:4, Insightful)

    by Transcendent ( 204992 ) on Saturday July 09, 2005 @03:33PM (#13022016)
    Meanwhile, the clients sit in the clear. We protect them by boosting their immunity levels so that they can exist in harsher conditions. They run secure OSs, fully patched with current anti-virus protection. We assign each user a central identity, which is authenticated and validated before accessing the internal DMZ. We use central directories to manage identity privileges and PKI certificates. Existing systems, such as Active Directory, allow for low-cost private certificate authorities where PKI isn't well-established. We also log and monitor the activity and enforce acceptable application behavior.

    Sounds like a pain in the ass to me...

    Frankly, there's too many damn buzzwords.
  • Why Choose? (Score:3, Insightful)

    by Doc Ruby ( 173196 ) on Saturday July 09, 2005 @03:35PM (#13022024) Homepage Journal
    Do both. Eliminating their firewall was just the motivation to do more comprehensive security work. That motivation should come from IT management, and self-interest in preparing a manageable system, rather than fighting fires. Every insecure part of a system should be secured. A firewall has a unique role in providing a good amount of cover for an entire organization for its cost. Especially valuable when making changes to security configurations, which might temporarily expose resources in the transition.
  • Multiple layers (Score:3, Insightful)

    by Digital Pizza ( 855175 ) on Saturday July 09, 2005 @03:36PM (#13022026)
    OK, I haven't read the article (I'm on Slashdot, after all), so maybe I misunderstood the post (they are often misleading). What the hell is wrong with having multiple layers of security? That's what's been preached for years now, and it makes sense.

    Of course one should strive to have one's servers secure enough to stand on their own in case someone breaks through the firewall, and also because attacks can come from within. You don't need to remove your firewall to do that, however; use your imagination! What happens if there's a flaw in the server's built-in security? Bugs have been known to happen. Paranoia becomes a wonderful trait when you're dealing with network security.

    So a firewall is that much extra work; boo hoo!
  • Here's Abe Singer's ;login: article on Life Without Firewalls [usenix.org]... and how he learned he was Tempting Fate [usenix.org] by advertising the fact. Both are PDFs, but the second requires a USENIX membership until February '06. Essentially he says he was right to operate an enterprise without firewalls, even though he was compromised.

    Helevius

  • IP address wastage (Score:3, Insightful)

    by whoever57 ( 658626 ) on Saturday July 09, 2005 @03:42PM (#13022053) Journal
    Unless we all move to IPv6, his proposal cannot be widely implemented, since it appears to do away with NAT and hence all "clients" must have their own routable IP address.
  • I am starting a petition to tear down your firewalls all across the Internet. Please join us as we liberate these captive servers and spread security best practices all across the Internet.

    Post a child-post to this post listing your Slashdot user-id and the subnet that your firewall has been removed from so that we can validate that you have indeed joined the revolution.

    Ingolfke - 172.16.56.0/24

    --
    Bot-net for sale. Contact me.
  • That is blatantly an advertisement. Xen's PR person must be crying in happiness. Shame on you, Slashdot.
  • When I first heard about firewalls a decade ago I thought "heh, that's a cool name for a lazy hack". The need for firewalls comes from the crazy overdesign of operating systems. Seriously, how many people use the RPC or DCOM functions of Windows? Or use Linux RPC for much more than NFS?

    For me, a Gentoo box that hasn't been around or played with long enough to have servers I don't remember running on it is easily safe enough to put up naked on the net. True, I will echo ICMP and a few other in-kernel protocols...
  • From TFA:

    We can do that now, thanks to layer-3 data center switches that allow for the low-cost creation of subnets. By defining simple ACLs, we further isolate our backend servers.

    So, in reality, he has not given up on firewalls; he has simply transitioned to a different firewall structure based on primitive firewalling. "Simple ACLs" are neither simple nor effective.

    The other point is that yes, you can create all kinds of contrived security structures if that's how you want to spend all your time/resources...
  • assigning them to a three-tiered system of security levels

    I'm curious what the justification for a 3-tier system is. Why not 2 or 4? If it's arbitrary then it may be worse than what they're trying to fix.

    The cost of the added servers is greatly minimized by making them virtual servers on the same machine

    But then an attack on one virtual server for a particular functionality takes out all other virtual servers on that machine? How does this fix anything?

    With the new security-enhanced XenSE, this might become easier and more practical...

  • by wotevah ( 620758 ) on Saturday July 09, 2005 @03:58PM (#13022146) Journal

    Before everyone starts posting "I've been doing that for ten years" and "of course, firewalls are teh suk", let me say that while TFA does make some good points (about "perceived safety" of firewalls), I still do not see any way that its conclusion would be correct.

    First off, redundancy in security is good. You want multiple layers of security. It does not make sense to remove a layer just because you installed a different (non-overlapping) mechanism in place.

    Second, firewalls are a policy enforcement mechanism, and a single point of control. Under stress it is much easier to control access from a firewall than the eclectic mix of machines behind it. The point needs to be made that while securing each machine is a good idea, that should not be done to replace the firewall.

    Visible services can't be assumed to be bulletproof. Compromising the frontend machines can result in them becoming rogue agents (DDOS and whatnot). Firewalls attempt to mitigate this risk by blocking outgoing access and thus rendering the network less useful to the attacker. Without a firewall, well...
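      A sketch of that kind of egress clamp for a box that should answer requests but never dial out on its own (iptables; a real server would likely need exceptions for things like DNS):

        # Permit replies on connections that others opened to us...
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # ...but refuse any connection the server itself tries to initiate
        iptables -A OUTPUT -m state --state NEW -j DROP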

    The network of machines is secure today, after a lot of careful design work. Is it stable? Will it still be secure after the next site upgrade?

    While more complex systems can occasionally be more secure through their inherent obfuscation, verifying such systems from the inside is also difficult, though manageable given the manpower. When the security components are mutable, though (they are OS services and custom software, upgraded often), the complexity of the system works against us, making it that much harder to verify that all the combinations still result in a secure system. Not to mention that verifying the machines involves application-level checking, which is either laborious or impossible for the network admin to do.

    From TFA: Meanwhile, the clients sit in the clear. We protect them by boosting their immunity levels so that they can exist in harsher conditions. They run secure OSs, fully patched with current anti-virus protection.

    So our definition of a secure OS is Windows (what other OS needs to have "current anti-virus protection"?). That sure explains a lot. I suppose those machines wouldn't happen to have the firewall enabled, would they?

  • Think cake and you'll understand. A three-layer cake is always better than one.
    Never let oneself get tricked into thinking that one big layer of defense will keep them out. The French built the Maginot Line, and look where it got them.
    The best defense is not showing the world that you have the systems in the first place: hostmasking, IP shrouding, wrecking the IP tables to the point where a hacker only winds up either rerouted or in /dev/null.

    The 2nd layer is securing the LAN. Standard firewalls on every...
  • by the_quark ( 101253 ) on Saturday July 09, 2005 @04:06PM (#13022192) Homepage
    Two words: Regulatory Compliance. Thanks to standards like CISP (the Visa security standard), SAS-70 (the accounting standard), and HIPAA (the medical privacy standard), firewalls are mandated for many US businesses, even small ones.

    At my last company, we didn't have a firewall on the website, because my philosophy was "I'm running port scanning to make sure 22, 80 and 443 are the only ports listening on the boxes - why should I put a firewall in front of it to only let those ports through?"
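      That check is scriptable with something like nmap against the public address (the hostname is a placeholder):

        # Scan the full TCP range and confirm only 22, 80 and 443 answer
        nmap -p 1-65535 www.example.com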

    Unfortunately, now, if you don't have a firewall, you're not in compliance. It's simply a cost of doing business - the security concerns are completely irrelevant.

    Obviously, you should be building your networks so they would work without firewalls - that's a lot more secure. But, unfortunately, you can't just throw the firewalls out even if you don't need them.
    • ...my philosophy was "I'm running port scanning to make sure 22, 80 and 443 are the only ports listening on the boxes - why should I put a firewall in front of it to only let those ports through? ... But, unfortunately, you can't just throw the firewalls out even if you don't need them.

      But you do need them. You should assume that your servers will get rooted, in which case they may soon be listening on any other ports and initiating connections to anywhere, also on any port, or even DoS'ing the rest of your network...
  • with having to maintain a firewall? I don't really see why that's such a chore that it's worth the risk of eliminating. Sure, it's only perimeter defense, but depending upon an operating system to defend itself from outside attack just seems risky. The more layers a cracker has to peel back to get at the juicy insides of your server the better.
  • Change is Good... ... You go first.
  • He probably doesn't believe in parachutes, condoms, or car insurance, either.
  • So his network clients can be port scanned, but the servers are in a DMZ that must be authenticated into. The advantage is that a user can now run all kinds of specialized apps that need open ports, and the admin can avoid micromanaging regulations based on specific client needs, but it opens a whole other can of worms.

    Maintaining a 100% secure client OS and specialized applications aside, if a user were to download a malicious program or visit a malicious page with a new IE exploit, couldn't his authenticated...
  • by sabat ( 23293 ) on Saturday July 09, 2005 @05:23PM (#13022579) Journal
    I have heard this guy propose his nonsense in person. This is a classic case of throwing the baby out with the bathwater; his proposition summarizes as "firewalls aren't a silver bullet, so they're worthless."

    He proposes that we secure all individual boxes, which is umpteen times more difficult, more time-consuming, and less secure.

    He's not an innovator; he's a contrarian.
