Security IT

Internet Security Moving Toward 'White List'

ehud42 writes "According to Symantec, 'Internet security is headed toward a major reversal in philosophy, where a 'white list' which allows only benevolent programs to run on a computer will replace the current 'black list' system,' as described in an article on the CBC's site. The piece mentions some issues with fairness in deciding whose program is 'safe', including a comment that judges would need to be impartial toward open source programs, which can change quite rapidly. Would this work? The effort to maintain black lists is becoming so daunting that white lists may be an effective solution."
  • Works for me! (Score:3, Insightful)

    by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday September 19, 2007 @03:11AM (#20664613)
    I'm all for this idea. We're counting Flash and Javascript as external programs too, right?
    • Re: (Score:3, Interesting)

      by moranar ( 632206 )
      You can disable those in your browser, you know? You don't even have to install Flash.

      Or is this a *WOOSH* moment?
      • Re:Works for me! (Score:5, Insightful)

        by walt-sjc ( 145127 ) on Wednesday September 19, 2007 @05:53AM (#20665281)
        There is whitelisting, and there is disabling. Two different things. Noscript for Firefox is a whitelisting tool.

        Surf safe. Use Noscript.
        • by moranar ( 632206 )
          Not adding flash and javascript to the whitelists, as the OP suggested, is not exactly "whitelisting" sites.
          • Trying to parse your sentence here... "Syntax Error Line 1"...

            The OP WANTED to add flash and javascript apps to a whitelist system, which is the exact opposite of what you just said (or appeared to say.)

            But to clarify things, Noscript is a domain / host level tool, and doesn't have the ability to whitelist individual scripts. Given the dynamic nature of the internet and how many sites around the world change their scripts on a daily basis (including dynamically generated javascript), it wouldn't be feasible
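            Just to illustrate the distinction, here is a minimal Python sketch of that kind of domain-level whitelisting: scripts are allowed or blocked per host, never per individual script (the domains and function name are made up for illustration; NoScript itself obviously lives inside the browser).

              from urllib.parse import urlparse

              SCRIPT_WHITELIST = {"slashdot.org", "example.com"}  # maintained by the user

              def scripts_allowed(url):
                  # Allow scripts only if the page's host, or a parent domain, is whitelisted.
                  host = urlparse(url).hostname or ""
                  parts = host.split(".")
                  return any(".".join(parts[i:]) in SCRIPT_WHITELIST for i in range(len(parts)))

              print(scripts_allowed("https://www.example.com/page"))   # True
              print(scripts_allowed("https://evil-ads.example.net/"))  # False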
            • by moranar ( 632206 )
              Sorry, but that reading "wanted to add flash and js to a whitelist system" isn't evident at all, not to me at least, unless you mean "...to keep them blocked and blacklisted", which was what I understood, and the reason for my original response. Since useful sites employ javascript and flash, you can't ban the technologies altogether; though, if you wanted to, you can already do this, by not installing flash and by disabling js in the browser. Vice versa, since other sites abuse them, you can't fully whitelis
    • by Moraelin ( 679338 ) on Wednesday September 19, 2007 @06:06AM (#20665317) Journal
      Frankly, I'm not all for this idea. It creates a cumbersome and abusable solution to something that was solved better already.

      E.g., whatever happened to running something in a sandbox, ffs? You can go as far as running something untrusted (e.g., a plugin, ActiveX control, etc) in a virtual box, but even a chroot jail is a good start. It _is_ possible to isolate something to the point where it can't do any harm at all, and can't touch anything except itself. It's also possible to nice it to the point where it only runs when nothing else wants to, so it can't DOS your system that way.
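      To be clear about what I mean, here's a rough Python sketch of the chroot-plus-nice idea (it needs root, the jail path and command are placeholders, and a real sandbox would need far more hardening than this):

        import os, subprocess

        def run_untrusted(cmd, jail="/var/jail/untrusted"):
            def sandbox():
                os.chroot(jail)    # confine the filesystem view to the jail
                os.chdir("/")
                os.nice(19)        # lowest priority: it only runs when nothing else wants to
                os.setgid(65534)   # drop to nobody/nogroup before exec
                os.setuid(65534)
            return subprocess.run(cmd, preexec_fn=sandbox)

        # run_untrusted(["/opt/plugin-host", "--render", "applet.bin"])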

      So why doesn't anyone do just that already? E.g., MS could have fixed their own ActiveX crap that way ages ago. Instead we got this baroque, but fundamentally broken, model where you get to decide (or have decided for you based on zones) whether something can't run at all, or can run with full rights as an executable. Except if a malicious one slipped through the cracks, it's still a full executable running on your machine.

      Heck, even Java is essentially the wrong way about it as a browser plugin. It tried to implement, by itself, restrictions which belong in the OS or browser, and if the JVM itself is compromised (there _have_ been a couple of JVM vulnerabilities), it can do anything. Kudos to Sun for trying, but it's essentially a workaround. It shouldn't have been the JVM which does that, it should have been the OS and browser.

      Whitelisting is just an extra step in that wrong direction, essentially. Instead of making sure that a malicious thing in the browser can't touch anything else, we're one step further in the baroque, fragile and monumentally work-intensive direction of determining which of them should be allowed. Except again, if something slipped through the cracks, you'll still get screwed so hard you'll walk bow-legged for a week.

      Am I the only one who finds that dumb?
      • Re: (Score:3, Interesting)

        by Mike89 ( 1006497 )
        I remember reading on Slashdot in the past that when Anti-Vir was first around (I think the old DOS program Norton Navigator was referenced), we started with a white List. The same White List idea outlined here. Then for whatever stupid reason we moved to a blacklist. There's only a finite number of good programs, whereas bad ones spring up every 5 minutes.
        • by yuna49 ( 905461 ) on Wednesday September 19, 2007 @09:07AM (#20666321)
          There's only a finite number of good programs, whereas bad ones spring up every 5 minutes.

          And how many of those good programs are at Sourceforge? What happens when a program at version 2.5.11 goes to version 2.5.12? Will Symantec and company suddenly rush to create the hashes needed to keep up with open-source development?
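          To make the version-churn problem concrete, hash-based approval would look roughly like the Python sketch below (the digest and file names are invented placeholders); the point is that 2.5.12 hashes differently from 2.5.11 and is blocked until someone re-approves it:

            import hashlib

            APPROVED_HASHES = {
                "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # foo 2.5.11 (made-up digest)
            }

            def sha256_of(path):
                with open(path, "rb") as f:
                    return hashlib.sha256(f.read()).hexdigest()

            def may_run(path):
                return sha256_of(path) in APPROVED_HASHES

            # may_run("/usr/local/bin/foo")  -> False the moment 2.5.12 is installed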

          Implementing a policy like this can only benefit the large, established developers who'll be publishing software well-known to the whitelisters.

          What about programs that run on, say, Java? Will every version of Azureus need to be whitelisted, or just the JVM software that talks directly to the operating system? What about programs that update themselves online? Will the new version still be whitelisted, or will the program stop working until McAfee updates its hash database?

          I suppose you could let users add unknown programs to their whitelist, but given that we know many users will click OK in response to any dialog box, that seems to undermine the entire system. If someone's gone to a bogus website to download that "NFL Game Tracker" that was advertised in recent spams, do you think they'll then refuse to add it to their whitelist if given the chance? I think they'll click the OK button and install the Storm trojan.

          As other posters have said, there are other, better ways to solve these problems than whitelisting.

          • by dave562 ( 969951 ) on Wednesday September 19, 2007 @04:41PM (#20672433) Journal
            Like so many technologies that come out, this one is obviously aimed at the enterprise. A whitelist would just be a headache for a home user who wants to tinker with their box. On the other hand, the secretary in HR doesn't need to be running any program that isn't on the approved list of programs. She doesn't need to be visiting any websites that are running constantly changing code bases. She doesn't need to be downloading crap off of Sourceforge and checking it out. In that kind of environment, a white list is a great idea.
        • Re: (Score:3, Interesting)

          by slashname3 ( 739398 )
          The problem with implementing a white list approach is that this ultimately is going to be a real pain to maintain. Not only that, but it is going to require (as the article alludes to) cooperation between a lot of companies to get it implemented. Based on the article, they are going to have to set up an authority that will bless all the good programs.

          I wonder just how much it is going to cost you to get your program blessed? And how long will it take?

          From what I can tell they want a white list of app
      • Re: (Score:3, Interesting)

        by chocobot ( 715114 )
        Check out Usable Interaction Design [berkeley.edu]
        Also relevant: Capability security.
        E Language [erights.org]
        Capability Security [wikipedia.org]
      • by ivoras ( 455934 ) <ivoras@NospaM.fer.hr> on Wednesday September 19, 2007 @07:09AM (#20665531) Homepage

        The problem is that, like a computer with its Ethernet cable unplugged, an application completely isolated from everything else is useless. For example: at the very least you need to allow an embeddable object (like a Java applet, ActiveX, etc.) to draw itself on screen. To do that you need to enable it to do a large number of GUI-oriented calls. What happens if one of these calls is found to be exploitable by a malicious process? It would be like you did nothing at all for security.

        Today's software has *so many* interdependencies that it's practically impossible to segregate everything into neat little boxes whose security can be managed individually. For example, a modern Windows application can (and often does) interact with a large number of subsystems that have been, and still are, found fallible, which fall into these broad categories:

        • Win32 API, meaning KERNEL32, USER32, GDI32 and others
        • OLE2/ActiveX API, connecting its tendrils (i.e. users can embed their own executable code!) throughout the desktop environment (shell, Windows Explorer) and subsystems like database management, logging, etc.
        • .Net API, which uses the above two APIs

        The obvious "solution" is: blame Microsoft - it's bad design practice to enable so many possible interactions throughout the system. But this would mean that users won't be able to use such nifty things like "live" copy & paste throughout their applications (OLE), Explorer shell extensions (like WinZip), unified database drivers (ODBC, OLE, ADO), etc. -- and all of these things are selling points (AND, unsurprisingly, these are some of the more important things users miss when they try to use Linux). If you try to do it partially, for example disable OLE calls from ActiveX controls, business users will be angry because their embedded ActiveX applications will stop working.

        And if you DO try to lock everything down, you'll get hordes of angry users complaining about needing to click "Allow" every time they move the mouse pointer :)

        • Nope. Think of a hypervisor plus extremely restricted virtual machines... the hypervisor handles hardware access, and each VM runs one application with a subset of access to the system (e.g., keyboard and mouse input, some screen space, some partitioned file I/O if it needs it). And the plugin doesn't know it's running in a browser; it's got the whole screen to itself, but the hypervisor only gives it a small screen (think picture-in-picture) in the position where the plugin would be drawn...
      • Re: (Score:3, Insightful)

        by XenoPhage ( 242134 )

        E.g., whatever happened to running something in a sandbox, ffs? You can go as far as running something untrusted (e.g., a plugin, ActiveX control, etc) in a virtual box, but even a chroot jail is a good start. It _is_ possible to isolate something to the point where it can't do any harm at all, and can't touch anything except itself. It's also possible to nice it to the point where it only runs when nothing else wants to, so it can't DOS your system that way.

        It's always possible to "break" that, though, by compromising the container itself. While I agree that, in principle, this is a good idea, there's too much that can go wrong. Having a whitelist of some sort could possibly help a little here in that we could ensure that the container modules are safe.

        So why doesn't anyone do just that already? E.g., MS could have fixed their own ActiveX crap that way ages ago. Instead we got this baroque, but fundamentally broken, model where you get to decide (or have decided for you based on zones) whether something can't run at all, or can run with full rights as an executable. Except if a malicious one slipped through the cracks, it's still a full executable running on your machine.

        Because there will always be that one application that needs access to more than one zone. Take, for instance, a web-based virus scanner. Sure, you can isolate it within a container, but then how does it s

        • Re: (Score:3, Insightful)

          by pintpusher ( 854001 )

          ...application that needs access to more than one zone. Take, for instance, a web-based virus scanner...

          There have been several of these in the comments today in discussions about sandboxes or other methods of restricting apps for security reasons: "what about app foo that needs to do bar and baz? It can't work in this context." How many of these apps are conceived in a world where they're required? The web based virus scanner seems to be one of these. What exactly is the point of a web based virus scanner? It's relying on a potentially compromised machine to reveal things about itself. That's next to useles

      • Re: (Score:3, Insightful)

        by db32 ( 862117 )
        Explain to me what part of your idea actually makes sense outside of the geek community.

        First of all, your VM thing is a bit of a pipe dream. People are already upset about the cost of Windows. Do you think they are going to be happy about having to purchase multiple copies AND licenses for a VM? Tack on all the latest licensing issues and limited install issues and you have a recipe for great fun. Never mind that it's only been relatively recently that hardware has made this much of a feasible possibili
    • a) I see this as a great way of stifling innovation (while you may get a temporary reprieve from malware, until the malware begins breaking into your programs [e.g. via word-macros,... - or would we need to get macros added to the whitelist, too?])...

      b) I see that this may end up in taxing innovation as well (if the whitelist was free, it could be fairly easily knocked out by everyone who hates it writing some small 'hello world' program and requesting their program to be put on the whitelist. (if this s
  • by DragonTHC ( 208439 ) <Dragon AT gamerslastwill DOT com> on Wednesday September 19, 2007 @03:11AM (#20664617) Homepage Journal
    My Internet security philosophies have always been drop 'em all, let iptables sort 'em out!

  • Follow the money (Score:3, Interesting)

    by mdm42 ( 244204 ) on Wednesday September 19, 2007 @03:12AM (#20664621) Homepage Journal
    Sounds to me more like a scheme to squeeze money out of software producers: "Give us teh money if ya wants yer program whitelisted."
    • Re: (Score:3, Interesting)

      Comment removed based on user account deletion
    • Re:Follow the money (Score:4, Informative)

      by Crayon Kid ( 700279 ) on Wednesday September 19, 2007 @07:33AM (#20665655)
      Jesus, there's so much paranoia and resistance that apparently everybody forgets that black listing is one of the dumbest things you could do when it comes to security. It's not rocket science to see that if you're dealing with bots that attack blindly and dozens of new threats every day, there's no way you're going to be able to keep track of all of them.

      White listing is not about someone approving the list for you, it's just a generic mechanism that allows YOU to white list.

      More explanation from a security expert here: The Six Dumbest Ideas in Computer Security [ranum.com].
    • Re: (Score:2, Interesting)

      by vettemph ( 540399 )
      I think it is worse than that. Microsoft needs to stop FOSS from running on windows. Anyone who has used Firefox, OpenOffice, Gimp and many other applications may realize that no one needs windows anymore. If you don't need windows, you don't need AV software. If microsoft convinces AV providers to go "white list" on everything, Microsoft can disable/hobble the FOSS/Linux enabler and the AV firms get to live. They are scratching each others back as usual. Microsoft of course needs to stay in the background
  • Not going to happen (Score:5, Interesting)

    by MadMidnightBomber ( 894759 ) on Wednesday September 19, 2007 @03:12AM (#20664623)

    Can someone send me a list of all IPv4 hosts which are not malicious? k thanx bye.

    PS. please can you also send me an update whenever a new machine is compromised?

    • by Architect_sasyr ( 938685 ) on Wednesday September 19, 2007 @03:26AM (#20664717)
      127.0.0.1
    • by Burz ( 138833 ) on Wednesday September 19, 2007 @04:51AM (#20665043) Homepage Journal
      Indeed, the only possible "success" from the whitelist idea is that the Internet morphs into television (shudder).

      Q: Where has the Internet failed?

      A: Its main proponents and enthusiasts ignored Drivers' Ed for the info-superhighway. They didn't teach people how to use web browsers and email programs, didn't show how to read a URL and pay attention to the protocol and domain, nor instill the habit of mousing over links to see where they go beforehand. Teaching people about the padlock symbol should have also included how to deal with SSL certificate alerts.

      The result of this neglect is that people cannot recognize authenticity on the Internet, so the value of the Internet's "currency" is spoiling. Imagine if people weren't clued-in on how to authenticate a $20 bill: Over time only certain government and corporate entities would be trusted to handle currency to prevent spoiling by counterfeiters.

      Our job as Internet cognoscenti is to keep correcting the people around us on the right way to use the Web and email. Granted, this is not a cure-all given the other major factor here (Windows malware), but it's several steps in the right direction. This stuff is not hard.

      The alternative is an Internet-II re-worked around big corporations and government sites through a whitelist enforced by Trusted Computing remote attestation. Don't think they won't be opportunistic enough to scare the public into that corner.
      • Re: (Score:3, Insightful)

        by feepness ( 543479 )

        Imagine if people weren't clued-in on how to authenticate a $20 bill: Over time only certain government and corporate entities would be trusted to handle currency to prevent spoiling by counterfeiters.

        Recognizing counterfeit money is a specialization within the FBI. Also, there are few fake $20 bills; they're not worth the effort. They usually counterfeit $100s. And ever been in a casino where they authenticate with that special marker? This is because you can't tell unless you've got years of experience. We've all probably handled counterfeit money in our lifetimes without ever knowing.

        Our job as Internet cognoscenti is to keep correcting the people around you on the right way to use Web and email.

        That job isn't paying enough. Let me know when it gets past $50 bucks an hour. Until then I've got paying work and whe

        • Re: (Score:3, Insightful)

          by Burz ( 138833 )
          Average people check for counterfeits every minute of every hour at the cash register. It is not the ultimate in authentication, but then most web fraud is not the ultimate in user deception.

          That job isn't paying enough. Let me know when it gets past $50 bucks an hour. Until then I've got paying work and when I'm not doing that I'd like to spend time with the family.

          You are a Web Consumer, not a citizen then. You all want services in the form of shiny things you can click on and pay for to grease the way. Well the address and status bars are the most important factors in web security, and they aren't linked to paid consumer service industries or other notions of boutique consumeri

      • Addressing malware. (Score:5, Informative)

        by Burz ( 138833 ) on Wednesday September 19, 2007 @06:53AM (#20665463) Homepage Journal
        I'd like to expand on my first post by pointing out a few ways for fighting malware that are the most freedom-friendly, encouraging users to make responsible decisions. These depend on OS vendors employing sane UI policies:

        Do not engage in filename-mangling! If a file is named "apicture.jpg.exe" then it MUST be displayed that way and must not undergo any automatic alteration (falsification) that, for instance, makes an executable appear as data.

        Additionally, all executable files are shown with a red warning flag whenever that filename is displayed by the desktop, file manager or file dialog. This is important, as Windows will execute files ending in ".com" and this suffix is a part of most websites the user trusts; clicking on a "monster.com" file is natural so another indicator is necessary to cut down on trojans.

        Make web site scripting purely an opt-in affair by default. This goes for anything else the html engine is used for, like chat clients.

        No more "Open this file" option in download dialogs. Period. If the user cannot manage opening the file themselves from the regular UI, then hopefully they will get stuck and sign up for an introductory computer class.
        • by deblau ( 68023 )

          Do not engage in filename-mangling! If a file is named "apicture.jpg.exe" then it MUST be displayed that way and must not undergo any automatic alteration (falsification) that, for instance, makes an executable appear as data.
          Misses the point.

          Do not attach semantic meaning to filenames in the first place! Windows has been broken like this from day one.

      • by deniable ( 76198 ) on Wednesday September 19, 2007 @08:38AM (#20666069)
        It may not be hard to teach, but how many of them want to learn? "It's only a computer. Microsoft makes it user friendly, so why do I have to learn all this extra stuff? I just want to use 'The Internet.'"

        Once you lower the bar, there's no raising it back up again.
  • by Beryllium Sphere(tm) ( 193358 ) on Wednesday September 19, 2007 @03:13AM (#20664633) Journal
    A lot of the work my computer does for me happens via Google's Javascript. Will I have to whitelist it all over again every time the gmail implementation changes? If it's whitelisted by domain, then you still have to protect against cross-site scripting attacks somehow (all hail NoScript!)

    The whole idea of a program being a quasi-static executable installed locally is starting to seem quaint.
    • Re: (Score:2, Interesting)

      by darthflo ( 1095225 )

      protect against cross-site scripting attacks

      Your browser takes care of securing you against XSS, so you'd make sure it's not insecure [secunia.com] software [secunia.com] and use something reliable [secunia.com] instead [secunia.com]. HTTPS would protect against phishing and "real" man-in-the-middle attacks, and the mentioned whitelist would make sure nobody messes with yer browser. Problem solved :)

  • Is it me (Score:5, Interesting)

    by damburger ( 981828 ) on Wednesday September 19, 2007 @03:15AM (#20664647)

    Or is this going to really screw small-scale windows developers?

    Seems to me to be a blatant attempt by the big boys to lock users into their software (or software from companies they have an arrangement with). Since the majority of users probably won't know how to disable this 'feature', they will have less choice, and therefore higher costs.

    • Re:Is it me (Score:5, Interesting)

      by beakerMeep ( 716990 ) on Wednesday September 19, 2007 @03:47AM (#20664793)
      maybe, but coming from symantec this is just marketing tripe for their own services or future services. As an approach to security this already takes place. Think of firefox or a firewall asking you "are you sure you would like to run this program?"

      Though it does seem like they are positioning themselves to be the gatekeepers of all software, good or bad. Want to run a program? Don't ask the user, ask Symantec. People won't stand for that though. There is a certain level of control over a computer most users are willing to give up in certain circumstances to the OS or an outside party or the like, but this is total control. Even novice users would probably find some piece of software they wanted to run that wasn't in the system and get annoyed at Symantec for breaking their computer, while more technical users would likely never want to be early adopters of something like this.

      not only that, but I wonder.... wouldn't the list of "good" software be unimaginably larger than the list of malicious trojans and viruses?

      Think about that number for a second. The only way they would ever look good would be if every single one of the users only ever ran software on the list. So for each user that uses dozens of applications, if even just one of those dozens isn't on the list, they are going to blame symantec.

      sadly i don't think this will stop them from trying to pull this off anyways and at least getting a small userbase of complete novices and maybe corporate IT depts that want to lock down the drones.
      • Re: (Score:3, Insightful)

        I think the main point is that legitimate applications rarely if ever take active steps to hide themselves, whereas black hat applications often try ever so hard. So a whitelist is likely to be more reliable, at least in principle, than the blacklist. Of course the question is how things would get on to a whitelist in the first place - you don't want virus writers simply registering their malware before distribution; in principle distributed voting might work.
    • I wish I could see it that way myself, but I really think the state of things is so bad that, short of dumping Windows entirely, it's just too unsafe to run software under Microsoft Windows. The blame is pretty evenly spread, though, among the users, the criminals and Microsoft, but the history of what led us to this point is so wide and deep that no one could really be held seriously accountable.
    • It's interesting; I've heard Intel talking about this before (wish I remembered a particular link). Reportedly anyone willing to pay enough could buy a license to sign their software. Along with viral protection they mentioned enhanced DRM... meaning the ability to prevent "circumvention" tools from running.
  • Unlikely to work (Score:3, Interesting)

    by Dibblah ( 645750 ) on Wednesday September 19, 2007 @03:15AM (#20664651)
    Why? Because AV vendors want your money.

    With a whitelist, the user clicks 'Accept' for everything he runs. Then he's protected until he installs something else.

    Blacklists are great since they require yearly subscriptions.
    • by MoonFog ( 586818 )
      First McAfee's CEO claims that cybercrime is bigger than drug crime, and now Symantec says that we need white lists. Has there been so little noise around viruses and trojans lately that they need to do this to get attention?
    • by mrjb ( 547783 )
      Why? Because AV vendors want your money.

      I once released a commercial anti-virus product and got this type of comment all the time, and I got really tired of it. I understand your train of thought, but remember that the AV guys are supposed to be the good guys.
  • The flip side? (Score:2, Interesting)

    isn't the flip side of this that now you're only allowed to run approved programs on your computer? Only IE is approved for web browsing, only MSN Live is approved for instant messaging. I know that I, for one, welcome our corporate overlords.

    White lists have been proposed since the beginning of time - from web filtering to spam prevention, and now to malware prevention - and they all suffer from exactly the same problem, which is the fact that humans are not all identical clones of each other, and neithe
  • by Colin Smith ( 2679 ) on Wednesday September 19, 2007 @03:16AM (#20664659)
    This application has not been signed by Microsoft. Do you want to run this application? Yes/No

    Are you sure you want to run this application? Yes/No

    Are you really sure you want to run this application? Yes/No

    I mean, if it's not Microsoft, it's not really "official", so what makes you sure you should be running this application? You probably shouldn't. There's a nice Microsoft alternative which is "official". Wouldn't you like to download that instead? Yes/No

     
    • by Anonymous Brave Guy ( 457657 ) on Wednesday September 19, 2007 @04:35AM (#20664971)

      I mean, if it's not Microsoft, it's not really "official", so what makes you sure you should be running this application? You probably shouldn't. There's a nice Microsoft alternative which is "official". Wouldn't you like to download that instead? Yes/No

      You forgot option 3:

      [T]hanks, but I already did download an alternative to Microsoft.

      Seriously, though, how can anyone possibly believe this could ever work? The computing world is driven by countless specialist applications, many of them written in-house by small businesses, or just by individuals to solve a specific problem they have. It's pretty obvious that no organisation could possibly whitelist all of this stuff effectively, without having some sort of automated system that every malicious developer in the world could abuse just as easily.

    • by Terrasque ( 796014 ) on Wednesday September 19, 2007 @04:36AM (#20664977) Homepage Journal
      Microsoft has not authorized this. Continue? No / Cancel
    • by bentcd ( 690786 ) <bcd@pvv.org> on Wednesday September 19, 2007 @04:39AM (#20664993) Homepage
      Heh.

      "This software has been signed by Microsoft. Are you sure you want to install?"

      (yes)

      "This software has been signed by Microsoft. Are you sure you want to install?"

      (yes)

      "Proceeding will void your warranty. Are you sure?"

      (yes)

      "Well, it's your funeral. Please wait."
  • by Zouden ( 232738 ) on Wednesday September 19, 2007 @03:17AM (#20664665)
    anyone has ever suggested about computer security.
  • Again? (Score:5, Insightful)

    by suv4x4 ( 956391 ) on Wednesday September 19, 2007 @03:18AM (#20664669)
    Certificates were intended as a white list: you protect the submitted data and have a certificate from a central authority attesting that this is indeed the company the certificate says it is.

    We know how this ended (certificates given left and right without proper verification).

    Now they try again with new certificates, which are more expensive.

    So that's about that part.

    What about site filters? Whitelisting sites in security suites has got to be the dumbest idea I've heard in a long time. Last I checked there are billions of pages out there, some of which are safe and some not.

    So now that we find it impossible to cover the entire subset of malicious pages, what do we do? Yes, we try to cover the even greater subset of legitimate pages.

    This will either end with many small harmless sites filtered out, or sites having to pay ransom to all security suite vendors out there to get whitelisted or something of a similar nature.

    Not happening.
    • I don't have a problem with whitelists as such. The wonderful addon to Firefox called NoScript is whitelist based and seems to work fine. Everything is blocked until you choose to unblock it.
      • by suv4x4 ( 956391 )
        I don't have a problem with whitelists as such. The wonderful addon to Firefox called NoScript is whitelist based and seems to work fine. Everything is blocked until you choose to unblock it.

        The subtle difference is, the suite vendors get to make the list, not you. Imagine NoScript, but with a whitelist of sites you're allowed to *view*.

        We already have a taste of Allow/Deny whitelisting in Vista; I don't think it solved anything either. I believe revocable company certificates are the way.

        This way you g
  • by rucs_hack ( 784150 ) on Wednesday September 19, 2007 @03:19AM (#20664679)
    Take me for example. My open source software has a tiny number of users, being very specialised, and I'm not alone in having this class of software. We can't all be Apache developers. How will people like me get their program approved? Is it going to cost money? That's what I want to know.

    I'd be interested in knowing how they deal with the fast release cycle of open source software (excluding mine, oh for a 48 hour day...).

    I'm pretty keen on the whitelist idea though. If nothing else it'll make malware authors more inventive; they'll start imitating the fingerprints of validated software.
    • by jrumney ( 197329 )
      To prevent imitation of fingerprints by malware, the scheme should be based on digital signatures rather than a simple fingerprint. Users can either choose to trust the developer's signature, in which case they get upgrades without any problem, or they can sign the binaries themselves if they want to limit the approval to a particular version. To cater to both open source and commercial software, such a scheme would have to accept GPG signatures as well as signatures from Verisign issued keys.
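      As a sketch of how signature-based approval differs from fingerprinting, something like the Python below would do; it uses Ed25519 keys from the "cryptography" package purely as an example, whereas in practice, as noted, the scheme would also accept GPG signatures and Verisign-issued keys, with the key list reflecting the user's own trust decisions:

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

        def is_trusted(binary, signature, trusted_keys):
            # Any signature from a developer key the user already trusts is enough,
            # so upgrades signed with the same key keep working automatically.
            for key in trusted_keys:
                try:
                    key.verify(signature, binary)  # raises InvalidSignature on mismatch
                    return True
                except InvalidSignature:
                    continue
            return False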
  • This is not a new idea, and many have talked about it before [ranum.com]

    Really, black lists were a bad idea from the start. Usually, the programs people want to run on a computer will remain fairly static, with perhaps a few changes when they update or find something online that looks interesting.

    I'm sure there must be some security software that uses whitelists already. Does anyone know of any free ones?
    • Re: (Score:3, Interesting)

      by 1u3hr ( 530656 )
      I'm sure there must be some security software that uses whitelists already. Does anyone know of any free ones?

      Many firewalls use the whitelist principle. E.g., ZoneAlarm. When you install it, nothing is approved. As any program tries to access the network, you get a popup asking you to approve one-time-only, or to put the program on the trusted list. Seems to work quite well; in 5 years, none of the PCs I or my family use have had any security issues.
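      The flow is basically this (a bare-bones Python sketch; program names are placeholders, and a real firewall would hook the network stack rather than prompt on the console):

        trusted = set()   # programs the user has put on the permanent whitelist

        def allow_network(program):
            if program in trusted:
                return True
            answer = input(program + " wants network access: [o]nce, [t]rust always, [d]eny? ")
            if answer == "t":
                trusted.add(program)   # remembered for every future connection attempt
                return True
            return answer == "o"

        # allow_network("firefox.exe")  # prompts the first time, then obeys the saved choice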

      But it does require some judgement. The stereotypical

  • High time too (Score:5, Interesting)

    by jimicus ( 737525 ) on Wednesday September 19, 2007 @03:21AM (#20664689)
    The Internet in general terms started moving in this direction years ago when people started to configure their firewalls to block everything and allow only what you need through. Previously it was reasonably common practice not to have a firewall at all - or if you did, all it did was block against things which were known to be malicious.

    It is a lot of work to maintain any whitelist of any significant size. But the reason you do it is because it's a lot more work to maintain any blacklist of any significant size, and even more work still to clear up the mess after something slips the net.

    I think residential ISPs will be the first - I'd be surprised if it was even possible to connect outside your own ISP's network. Email through their SMTP server, web access through their proxy; sucks if you want any other service your ISP doesn't provide. Some of the more expensive ISPs may set up some sort of "sign a disclaimer and we'll let you do anything, but we reserve the right to pull the plug if we see so much as a single malicious packet" system.
    • by aj50 ( 789101 )

      I think residential ISPs will be the first - I'd be surprised if it was even possible to connect outside your own ISP's network.

      Wasn't that how AOL started?

    • Re: (Score:3, Insightful)

      by Kjella ( 173770 )
      What you're asking for is basically for AOL to go full circle and close up to their own AOLweb again. Not going to happen, ever. People use the Internet for all sorts of stuff, and no one is going to be able to put that cat back in the bag.
      • by jimicus ( 737525 )
        As far as I can gather, your argument against a whitelist-based service is "It's too hard".

        My argument is that a blacklist service is also too hard. Maybe a happy medium will be found - blocking things like SMTP outside the ISPs network, that kind of stuff.

        But I don't hold out much hope.
  • Once we whitelist all legit programs, we only have to blacklist the legit programs with injected code (via open source or assembler hacks) and we're done!

    Amazing!

    Or will security suites actually have to whitelist every single modification of the program? Will I be locked out of my HelloWorld.cpp program as soon as I compile it?
    • Well, yes, you would be. Unless they created some kind of sandbox for developing code. This would then become an attack vector for virus writers, who would inject code into this 'run anything' region. If you allow such a system onto your PC, you will certainly end up in confirmation box hell regardless of the method they use to cater for developers.

      What will most likely happen is that the firms offering whitelists will offer the software equivalent of a gated compound that people can choose to be inside,
  • My home pc's Symantec firewall already has a whitelist. The first time an application tries to use the internet, it gets in the way to check. If the program's size/date changes, it does it again.

    This makes the fix-compile-test-fix cycle on a simple net client application just a little harder, since each time I run a new build, the firewall comes up all over again. Not to mention that by the time I clean it out, the whitelist contains 30+ records of old builds, and the UI to that list sucks dead donkeys thr
  • I would like to see an OS that maintains several rings (concentric circles) into which programs can qualify through increasingly rigorous standards and testing as they get closer to the central core ring of software.

    So essentially this OS would have a core ring of whitelisted and essential programs. Just outside this would be a 2nd ring of whitelisted but optional programs.

    Then a ring of "grey listed" programs (reputationally vouched for, for both security and usefulness and quality).

    Followed by a "wild west" outer ring.

    T
    • The OS would be designed so that programs in a more outer (less trusted, and less essential) ring, could not have any access to the memory or disk areas of more inner programs, and could only ever use the services of inner programs through narrow public interfaces supervised by the OS.

      Dude.

      This is how all operating systems (even Windows, in theory if not in practice) work already. Except everything is in the outermost ring. Want to write to disk? Have to go through the system call. Not allowed to write to this file? Tough shit. Want to write to memory? Are you allowed to write here? No? Then die a gruesome death and end with a coredump.

      • by drsmithy ( 35869 )

        This is how all operating systems (even Windows, in theory, not in practice) works already.

        How does it not work in practice?

  • by Lonewolf666 ( 259450 ) on Wednesday September 19, 2007 @03:51AM (#20664807)
    For instance, users in a corporate environment where setups are exactly defined and IT can check out in advance what works.

    For a private user with a mostly static set of applications, it should still work, but expect the occasional blocked program.

    For developers and the rest of the /. crowd: forget it, the whitelist will annoy you more than it helps ;-)
  • by CaptainZapp ( 182233 ) * on Wednesday September 19, 2007 @03:54AM (#20664823) Homepage
    Remember the Sony rootkit fiasco [wikipedia.org]? Remember that F-Secure was the only security company detecting it and approaching Sony?

    This leads to the conclusion that all other "security" companies were either in bed with Sony, or that their "security" products are utterly useless. I'm not sure which is worse.

    So why again should I give a rat's ass about the opinion of those guys when it comes to security?

  • ...execute permissions and mandatory access control, yeah?

    Now where have I seen this before...

  • by A1kmm ( 218902 ) on Wednesday September 19, 2007 @04:14AM (#20664899)
    I think people should look at the big picture before taking this too seriously as a security measure: programs only run on a system if they are either started by the end-user, or started by some other code on the system which has explicitly allowed that program to run. Put another way, the current first line of defense is a 'white-list'-like approach where processes only run when they are allowed to run.

    The problem is that there are lots of people / large software monopolists in the world who don't know how to code well, and this creates security flaws which cause this authorised code to do things on behalf of other code, including possibly executing arbitrary code.

    This code is then theoretically built on top of a kernel which attempts to restrict what the code can do even if it is executed (of course, often there are flaws here too, and often the exploited code is run with more privileges than it should have, so the entire system can be compromised).

    Virus scanners and other security software of this kind are supposed to provide an extra, reactive layer of defense on top of the existing proactive measure for anything which slips through the cracks. Suggesting that they be turned into another white-list is therefore not a logical suggestion, and implies that they are not being entirely honest:
        * They might just want to create hype to utilise unsuspecting journalists to sell more of their products for them.
        * Perhaps this is part of another Digital Restrictions Management style plot to take the decisions of what runs on computers from computer owners and give it to some central pseudo-authority so they can (mis)use the power for their own purposes.
  • It won't just be "you're on the list, welcome to the party" but access to each resource will be given only if that particular access is whitelisted.

    You already see this in some security programs, where program A is white-listed for ports 80 and 443, program B is listed for ports 20 and 21, etc. etc. etc.

    Eventually, this will be locked down even more. Program A may be whitelisted for port 80, but only for the purposes of self-updating or reporting bugs to its manufacturer, and only to a short list of domain
  • Would this work? The effort to maintain black lists is becoming so daunting that white lists may be an effective solution.

    You see, a white list would be bigger than the black list. But how come, then, a black list is daunting to create and a white list isn't?

    Simple, they'll charge the legal software vendors to be white listed.

    It's funny, laugh.. Hmm, no one is laughing.
  • by Aceticon ( 140883 ) on Wednesday September 19, 2007 @04:43AM (#20665009)
    Being a gatekeeper in a whitelist scheme is a great business opportunity:

    After all, businesses would be willing to pay to get their products into said whitelist, while one hardly expects virus makers to pay for getting their creations into a blacklist.

    Of course, I'm sure the Symantec guys are naturally not at all thinking of all those extra $$$
  • I just released version 421 of a scientific simulation model. The model is mostly of interest to our own students and research partners, but occasionally an unrelated Ph.D. student might try it out. So we distribute it from our home page. If any single version is downloaded by five people, that is unusually popular.

    Should each version of this program be "judged" in order for others to run it?

    There are zillions of these kinds of highly specialized scientific programs, and other branches have their own ad-ho
    • No one will be forced to use this software.

      If you want to run a limited number of well known programs, install this. If you want to have a general purpose computer, stay away from it.
  • And always been a good idea, but whitelists should be personal, with distributed advice and combined with greylisting and blacklisting algorithms. That is to say, I want the OS, when it installs, to have a few things in userland whitelisted, but only when I install something can I add to the whitelist. You may throw in a bit of internet opinion, as in - 70% of users think that this program is OK and 0% of users think that this program is malware, or sandbox this greylisted program until I whitelist it in
    • by base3 ( 539820 )
      The problem with a distributed solution is that the bad guys have control of multi-thousand machine botnets who will all say $BADAPP is the bee's knees and safe to run.
  • According to Symantec, 'Internet security is headed toward a major reversal in philosophy, where a 'white list' which allows only benevolent programs to run on a computer

    According to Symantec, *Windows system* security is headed towards a major reversal in philosophy, where a "white list" managed by us, Symantec, will allow only benevolent programs that registered with us (for a small, very reasonable fee. No, really!) to run.

    They have to find a new way to make money now that Vista broke their existing busi

  • Yes, because when I think "desktop application", I think "the file format parsers in this application are totally not vulnerable to complete and utter compromise, the effect of which would be the evasion of software restriction policies."
  • by thsths ( 31372 ) on Wednesday September 19, 2007 @05:18AM (#20665143)
    There is only one problem with this approach: once you install a white list, you no longer have a general computing device (short: computer), but an embedded device. You are limited in what you can do by what is on the list.

    Developers will be the first to notice: you can still write and compile a program, but you cannot test it. But the typical user will also be affected: what about the useful firefox extension you like? Bummer, not on the list. Want to use facebook? Sorry, the javascript in the new version is not approved.

    The white list is pretty futile anyway, because you can program on several levels. Javascript is only an example: what if the browser is approved, but your javascript code does nasty things? Or what about a heap overflow in the browser? Suddenly you are running custom code, but how is the white list going to notice this?
  • Two questions... (Score:2, Insightful)

    by darthflo ( 1095225 )
    1: What kind of person even remotely interested in anything "Internet Security" would even consider dreaming about considering taking Symantec seriously?
    2: Didn't we have this discussion not too long ago except the "List" would've been administered by MSFT (&co), called TCPA (then Palladium then NGSCB then OMGWTFBBQ) and be a little bit more "hardware-assisted"? (For anti-microsoft-fanboy coverage, check out AgainstTCPA [againsttcpa.com], for msft coverage try Microsoft, Wikipedia [wikipedia.org] has some rather neutral insights)
  • ...That if people could start using more secure OS's, meaning more of the necessary apps get developed for said OS's, white, black, grey etc. listing wouldn't be needed. I think all PCs should have a sensor which senses if a certain user is going to do something stupid, then knocks said user out with a blunt (and semi-soft) instrument, picks itself up and runs away. The bane of PC security is users doing stupid things. (This is coming from a guy who just had to spend a day cleaning out RavMon from a
  • by bjornte ( 536493 ) on Wednesday September 19, 2007 @06:12AM (#20665339)
    It's already like this in the mobile environment, and it's a terrible pain for developers.

    When making apps in Java/J2ME or Symbian (e.g. for Nokia nSeries), you need to have the client signed by a third party in order to use native resources like memory efficiently. While the signing process is not technically the same as a white list, it has similar consequences: you are hindered in successfully demonstrating your software to potential customers until some unknown person has expressed his subjective opinion about it.

    I know because we're making such an application right now, and during development we're screwed, as we can't get around these limitations even on our development devices. It's no good.

    IF this idea catches on, real world developers need to test the god damn system before they enforce it on people.
  • As I've mentioned before, what would help would be sandbox templates.

    Basically a program requests the template sandbox it'd like to run in, and it runs in that sort of sandbox if the user has approved that before (or approves it now), or the program is signed by User Trusted Vendor X to run in that template.

    Then even if the program is inherently evil or is exploited by some "save game" or other stuff, the program still can't break out of its sandbox.
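    One way the template idea could look, as a hypothetical Python sketch (template names, policies and the approval store are all made up, and real enforcement would live in the kernel or hypervisor rather than in a script):

      TEMPLATES = {
          "game":    {"read": ["~/.saves"], "write": ["~/.saves"], "net": False},
          "browser": {"read": ["~/Downloads"], "write": ["~/Downloads"], "net": True},
      }
      approved = {("supertux", "game")}   # sandboxes the user has already okayed

      def launch(program, requested_template):
          if requested_template not in TEMPLATES:
              raise ValueError("unknown sandbox template: " + requested_template)
          if (program, requested_template) not in approved:
              raise PermissionError(program + " is not approved for the '" + requested_template + "' sandbox")
          policy = TEMPLATES[requested_template]
          print("running " + program + " confined to " + str(policy))

      # launch("supertux", "game")     # runs inside the 'game' sandbox
      # launch("supertux", "browser")  # refused: exceeds what the user approved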

    In contrast, the problem with plain whitelisting methods,
  • I propose a radical new approach!
    1. Let's invent a new operating system where processes in regular user space cannot alter resources belonging to other users (unless access is specifically granted.)
    2. Let's make this operating system so that the need for super-user access is limited.
    3. Let's have a generic toolset with this operating system by which the need to download trivial programs is minimized. (We must think of editors, file manipulation and systems management tools.)
    4. Let's invent a runtime envir
  • Remember SyGate and those other firewalls, where you would "whitelist" traffic.
    Every geek would encourage non-geeks to install a firewall (the non-geeks knew it would protect them, after banging it in, but couldn't grasp the concept.)
    However, the non-geek would "whitelist" everything because he got conditioned into thinking "I can't do what I try to do until I click 'yes - remember'" and didn't understand what an "incoming request on port 1234" meant anyway.
  • "According to Symantec, 'Internet security is headed toward a major reversal in philosophy, where a 'white list' which allows only benevolent programs to run on a computer"

    Well DOH, is this the best that the security 'innovators' have come up with in 2007? How about a module in embedded hardware that runs a checksum on every executable and disables it if it fails the check? It would have an install mode and a run mode. Only executables that are installed can be run. The original DOS executable had a file
