
The Next 50 Years of Computer Security

Posted by Zonk
from the i'll-be-grey-haired-and-senile dept.
wbglinks writes "An informative interview with Linux guru Alan Cox, with an emphasis on Linux and security. Alan will be the keynote speaker at EuroOSCON this October." From the article: "It is beginning to improve, but at the moment computer security is rather basic and mostly reactive. Systems fail absolutely rather than degrade. We are still in a world where an attack like the slammer worm combined with a PC BIOS eraser or disk locking tool could wipe out half the PCs exposed to the internet in a few hours. In a sense we are fortunate that most attackers want to control and use systems they attack rather than destroy them."

  • This reminds me of a conversation I had with my business partner regarding computer security:

    Imagine a hacker group that offered to protect your system against other hackers. In exchange for x% of your computer cycles, x% of your HDD space, a predetermined number of pop-up ads, etc., the group would guard your computer against others attempting to compromise it for its own use. The group would connect to your system from the internet, install their rootkits, and regularly scour your system looking for intruders, which they would zealously remove.
    • Sleeping....? (Score:4, Insightful)

      by Valiss (463641) on Tuesday September 13, 2005 @01:51PM (#13549593) Homepage
      Seems to be the classic 'sleep with the devil' scenario. The problem occurs when the hackers, over time, want more than you want to give/barter with.

      • Re:Sleeping....? (Score:1, Flamebait)

        by dotgain (630123)
        And once you let someone compromise your system, you'll never be able to fully trust it again. It's about the stupidest idea yet in computer security. The only reason it wasn't on that list of "top six stupid things" the other day is because it's not an adopted practice, and isn't taken seriously.

        And since TripMasterMonkey is an incessant troll, please, don't be gentle.

        • Re:Sleeping....? (Score:5, Insightful)

          by Tackhead (54550) on Tuesday September 13, 2005 @02:17PM (#13549818)
          > In exchange for x% of your computer cycles, x% of your HDD space, a predetermined number of pop-up ads, etc., the group would guard your computer against others attempting to compromise it for its own use. The group would connect to your system from the internet, install their rootkits, and regularly scour your system looking for intruders, which they would zealously remove
          >
          > And once you let someone compromise your system, you'll never be able to fully trust it again. It's about the stupidest idea yet in computer security. The only reason it wasn't on that list of "top six stupid things" the other day is because it's not an adopted practice, and isn't taken seriously.

          Is that not the functional specification for Windows Update? ( Ha ha, only serious. [catb.org])

          For that matter, is that not the functional spec for every automatically self-updating piece of software?

          Your machine is as trustworthy as those you permit to administer it. To the extent that you install auto-updating software, your machine is only as trustworthy as the authors of that software.

          I'm highly confident that when my cron job asks apt-get to phone home, the maintainers of $MY_PET_DISTRO won't take advantage of the opportunity to place anything nasty on my machine.
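(As a purely illustrative aside, the sort of job described might look like the following Debian-style crontab entry; the schedule and flags are assumptions, and a modern setup would more likely use the unattended-upgrades package rather than a raw cron line.)

```shell
# Hypothetical crontab entry: at 04:00 daily, refresh the package
# lists and apply updates from the distro's signed repositories --
# i.e. trust the maintainers to "phone home" on your behalf.
0 4 * * * apt-get update -qq && apt-get -y -qq upgrade
```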

          I'm somewhat confident that Microsoft isn't going to auto-disable even pirated Windows installations, nor install a RIAA/MPAA sniffing trojan as part of its updates - at least, not without providing a few weeks of warning.

          I had so little confidence (as a matter of personal opinion) in the auto-updating and installation of DRM/software subscription services from www.steampowered.com that I never purchased Valve's Half-Life 2. (If you trust Valve, hey, go for it -- but Steam is, IMO, fundamentally no different than having companies like EA and Adobe decide to outsource the management of "licencing component services" to organizations like Macrovision and the BSA. Would you like to get your "security components" from DRM providers?)

          And finally, I'd have no confidence whatsoever in any machine that was required, as part of the Homeland Cybersecurity Act of 2012, to download security updates from updatefarm.cybersec2012.gov.

          On that scale, I'd place the original "cracker group" (perhaps affiliated with the Russian mafia) installing its own rootkits as somewhere between "less trustworthy than Steam, but more trustworthy than bsa.org".

          But there's fundamentally no difference between any of these options.

        • You mean like hiring a sysadmin to remotely administer your machine, and paying him in CPU cycles?

          That actually sounds like a relatively sound method for wholesaling distributed CPU time.

          People give CPU time away to organizations like SETI. The only problem is the market value of CPU cycles vs. the cost of administering a whole pile of insecure Windows boxes.
      • That would be cavities and gingivitis in the original poster's dental analogy, eh?
    • by starfishsystems (834319) on Tuesday September 13, 2005 @01:53PM (#13549613) Homepage
      Isn't that rather like setting the fox to guard the henhouse?

      The controls that an organization would need to put in place to avoid being utterly exploited in such a scenario are pretty much the same controls needed to manage systems securely in the first place. So as a thought experiment, this is useful. As an actual practice, forget it.

    • by taustin (171655) on Tuesday September 13, 2005 @01:54PM (#13549623) Homepage Journal
      Sounds like a classic protection racket to me.
    • The reason your idea will never take off is that if this scheme turned out to be profitable to both the racketeers and the people paying for "protection", the government would step in and demand a monopoly in the "protection" racket. Now, you don't want the government installing their rootkits on your computer, do you?
    • Sounds like a good idea in principle, but who is to stop this group of hackers from using your resources for their own malicious intentions?
    • by Clovert Agent (87154) on Tuesday September 13, 2005 @02:08PM (#13549745)
      "Yes, Mr Sarbanes Oxley Auditor, I exposed my entire desktop computing infrastructure to a group of self-proclaimed hackers so they could uninstall spyware for me. Great idea, huh? Huh? Hey! Come back! I haven't told you about the foxes guarding the corporate henhouse yet."

      I have a better idea. Swap some other commodity (like, say, money) for the same service, and call it an MSSP.
    • Hey guys, I've got an idea, why don't we just get the barbarians to guard the gates of Rome?

      KFG
        Um, dude, about those gates. We had to remove them because they were interfering with us getting in and out of the city to rape and pillage in our 20% of Rome. Oh, and by the way, we decided that we would rather rape and pillage in the 20% of Rome that contains the forums. Raping and pillaging in the slums wasn't working.

        Yours truly,
        The Visigoths.
    • Problem (Score:5, Insightful)

      by mcc (14761) <amcclure@purdue.edu> on Tuesday September 13, 2005 @02:12PM (#13549784) Homepage
      There are a large number of problems with your suggestion. I will outline only one.

      One problem is that your suggestion is wholly founded on the assumption of computational resources being valuable. This is to an extent incisive, since you have realized that the reason why the formation of zombie networks has increasingly become the endgoal of worms and such is that there is commercial value in those networks' computational resources. But this breaks down when you start to think about what they use those computational resources for.

      Computational resources, by themselves, aren't particularly valuable or hard to obtain; even bandwidth resources are beginning to become expendable if you're smart about how you use them. Your average PC is absolutely awash in power it doesn't need. 20 years of "your computer is obsolete as soon as you buy it" has crashed out into "your five-year-old computer technically isn't obsolete yet". People who used to buy supercomputers often now just buy cheap PCs and leash them together. Anybody who just has a legitimate need for a lot of computation these days can most easily obtain this through totally legitimate channels.

      The reason why hackers, worm-builders, spyware people, etc. obtain their resources through illegitimate means (like worms) is because they have illegitimate intents for those resources. They don't so much want 20% of the resources of a PC, they want 20% of the resources of a PC that can't be traced back to them. This is because once they have these resources, they're going to be using them for things like warez. Sending spam without compliance with local laws. Hosting dubious and virus-like spyware. Extorting businesses for money in exchange for not launching DDOS attacks against them. If you willingly give these people 20% of your hard drive and CPU they aren't going to be using it for things like 3d rendering or protein folding; if that was all they wanted, they wouldn't need to be using hacker methods to get it in the first place.

      Instead, if we go by your scenario, you'll give them 20% of your hard drive, CPU and bandwidth; they will protect you from the other hacker groups; everyone will be happy; ... and then six months later your computer will be part of a gigantic DDOS or some other illegal act so large it will attract the FBI's attention. From here there are two possibilities. Possibility one is, the people you've been contracting with here are a legitimate business, in which case the FBI will get their contact information from you and have them arrested. Possibility two is, the people you've been contracting with here are not a legitimate business, in which case the FBI will arrest you for conspiring with an organized crime group. We can assume no group even remotely competent enough to even get into this hypothetical security "protection" business in the first place would be stupid enough to let possibility one happen. This leaves possibility two. See the problem?
      • Re:Problem (Score:3, Funny)

        by gatekeep (122108)
        We can assume no group even remotely competent enough to even get into this hypothetical security "protection" business in the first place would be stupid enough to let possibility one happen. This leaves possibility two. See the problem?

        While I largely agree with your point, the quoted line made me think of this;

        Man in black: [turning his back, and adding the poison to one of the goblets] Alright, where is the poison? The battle of wits has begun. It ends when you decide and we both drink - and find out w
      • Possibility two is, the people you've been contracting with here are not a legitimate business, in which case the FBI will arrest you for conspiring with an organized crime group.

        Plausible Deniability

        And don't forget... You can't arrest a corporation. Just the individuals that work for it. Thirdly, you can't go after the shareholders' assets unless they have been directly implicated in the crime.

        Lastly, the crime might have been intentional in order to get the FBI's attention. Of course you'd be dealing wit
      • Your average PC is absolutely awash in power it doesn't need. 20 years of "your computer is obsolete as soon as you buy it" has crashed out into "your five-year-old computer technically isn't obsolete yet".

        First of all, computers have always been on a fairly constant cycle of getting faster, but it's never been 'overnight' except for people who go out and buy the bargain PCs the day before they release the new 'latest greatest' models. So, 'your computer is obsolete as soon as you buy it' only applies to c
      • Computational resources, by themselves, aren't particularly valuable or hard to obtain; even bandwidth resources are beginning to become expendable if you're smart about how you use them.

        FWIW, it really depends on how much you need. If you need a lot of computing power, it tends to be very expensive, and the same is true of bandwidth. (Sometimes it is possible to do tricks like those done by SETI@home, but many problems just aren't decomposable that way.) But if you're willing to put up with just using reso
    • Let me be the first to offer you those services as it describes my company exactly. We exchange security for a small meagre portion of your vast unused computer cycles and HDD space.

      For everyone else, Do you need mass advertising? Do you need to get your message out in a cheap and effective manner? Contact me for mass electronic messaging promotions.

    • What we have now isn't that different. When we set up a box, we can choose from a set of operating systems and applications. With all those choices we implicitly trust their creators and maintainers to some extent.

      But it's not absolute trust; just as helpful bacteria in our mouths can get out of control, software may (will?) prove vulnerable. So we still have to monitor and maintain our systems, installing security patches and changing administration practices accordingly.

    • The problem is that the hacker would be using your computer resources for other illicit purposes, such as hacking computers belonging to other businesses. It would solve your problems at the expense of others. And imagine the liability of having their attacks traced back to your computers.

      It would be no different than giving guns to thugs to protect your business. When they do finally get busted, the FBI will find your fingerprints on the guns.
    • Sounds like how Ankh-Morpork runs - Vetinari legitimised crime by creating the guilds but made them responsible for keeping crime within agreed limits. Of course, he had leverage over the guild leaders to make them comply. Not sure what I'd have over some Russian kid who I've never met.
    • I've always assumed that was what Norton was doing when it randomly stole half my CPU to not scan anything. I mean, it makes a lot more sense for them to *steal* my processor cycles than just *waste* them, right?
    • Ah so you noticed! I was wondering what Microsoft was doing with my hardware all these years...
    • That sounds to me a lot like paying your neighborhood gang "protection money".
    • This is also when the cops come knocking at your door about all the illegal activity that's been coming from your IP :)
    • 2 questions:

      1) Why would I give my computer up to 'hackers', by which I assume you mean people who break into machines illegally or maliciously? There would be nothing to stop them from fully taking over the machine and doing whatever they want - i.e. under this arrangement I have no power or control over them to ensure they hold up their end of the bargain. Since what they are doing is probably illegal and they are more than likely in a far-off country, I have no legal hold over them either.

      2) What's the poin
    • I'd think of it a bit like buying "cleaning services" for 10 cents a year; sure, in aggregate that might be worthwhile for someone to do (if they can get 2 million victims, say) ... but if some super-new virus happens that takes out 1% of their userbase, they sure as hell aren't going to care.

      For that kind of "price" it's going to be all automated software, which a bunch of companies already do ... for not significantly more, per customer, and are much more likely to not want bad press with problems of e

  • 50 years, eh? (Score:5, Insightful)

    by grub (11606) <slashdot@grub.net> on Tuesday September 13, 2005 @01:49PM (#13549573) Homepage Journal

    [...] at the moment computer security is rather basic and mostly reactive.

    OpenBSD [openbsd.org] has been proactive since Day 1. And, really, can anyone speak authoritatively on computer issues 5 years in advance let alone 50?

    If I drank a strong tea brewed from Theo de Raadt's toenail clippings I could glean knowledge from perhaps a couple of days in the future, but beyond that you're getting into the realm of Xenu.

    • I'd hazard to guess that if anybody can talk about computer security in the next 50 years, it'd be Alan.

      I hate to compare him to Jesus, but he has the beard and sandals...
  • by JeanBaptiste (537955) on Tuesday September 13, 2005 @01:52PM (#13549600)
    I can't see how anyone can claim to know what is going to happen in the next 50.
    • RTFA. Even if the title of the interview says "The Next 50 Years of Computer Security", the content is not related to the next 50 years; there are just some superficial random thoughts of Alan Cox about CURRENT security.
    • Agreed; especially with quantum computers/cryptography, security in the computer industry will be very different than it is now.
    • While I'll agree that 50 years is a long time no matter how you slice it, all the more so for computers, I think it's pretty safe to say that since computers and all the related tech have moved beyond the "brand new" phases of development, it's possible to make some generalizations regarding their future.

      Unix itself has shown it can stand the test of time so far, and with the continuance of Microsoft's monopoly (and what amounts to the government's near approval of it) the shape of things to come is
    • by jabber01 (225154)
      Extrapolating recent trends, Pokemon will be President of the United Corporations of America. The United Middle East will be America's closest friend. Together, we will have obliterated the EU. No one will care about poverty and disease in Africa.

      Computers will be so small, they'll be ingestible, with music players and cell phones being implanted in teeth. But DRM will be so pervasive that the RIAA will be allowed to inspect your mouth with toothpicks. The weakest link in computer security will still be the
    • Easy...watch:


      In 50 years, we'll have flying cars, world hunger and poverty will be a distant memory, and we'll all have a small nuclear fusion reactor in our basement which will power everything from our maid-service robot to the 512-core 650GHz Pentium 17 computer in our home office.
      Bill Gates will disband Microsoft when he retires, and all his billions will be donated to help sick kids on Mars. (We'll have settlements there, after all, but the hospitals won't be quite up to snuff for a few more de
  • Fortunate? (Score:5, Insightful)

    by Krast0r (843081) on Tuesday September 13, 2005 @01:54PM (#13549620) Homepage Journal
    "In a sense we are fortunate that most attackers want to control and use systems they attack rather than destroy them." - however in a sense we are unfortunate that they generally take control of them to destroy someone elses computer, it just depends on how selfish you are.
    • True. Well, if a system is obliterated on infection, it can't spread... not really the behaviour of a virus. Still, by not doing something cruel like wiping the BIOS or trashing the filesystem, and instead just hijacking its internet connectivity to spread, a virus can get maximum exposure while still causing plenty of infuriating moments.

      My flatmate got a virus that lurked for a while and then deleted ntoskrnl.exe so Windows wouldn't boot anymore - that wasn't fun.

      • Probably a POC. One of these days, such an attack will take down many many doze boxen. It will be a real wakeup call for many people. Those that have been paying attention are already off or weaning themselves off of the MS addiction.

        Those that have not been paying attention or are buying the MS FUD are taking a huge risk.

    • Right now, the worst that happens is you have to reformat your hard drive when the pop-ups and re-directors stop you from doing anything online.

      If the systems were destroyed, you'd see a lot more effort put into protecting them.
      • There are currently 2 ways, AFAIK, to effectively destroy a system, i.e., beyond the ability of a reformat and reinstall to fix.

        1: Flash the BIOS to something useless that won't bring up the system.
        2: There are some not-to-be-used ATA commands that can turn a hard drive into scrap metal. A while back on lkml there was a bit of discussion on whether or not to filter them out at the driver level.

        I'm sure there are more, just waiting to be discovered. Time was, you could destroy older monitors by misprogramming th
  • by Ckwop (707653) * <Simon.Johnson@gmail.com> on Tuesday September 13, 2005 @01:55PM (#13549629) Homepage

    This last area is very important. We know the theory of writing secure computer programs. We are close to knowing how to create provably secure computer systems (some would argue we can--e.g. EROS). The big hurdles left are writing usable, managable, provably secure systems, and the user.

    It may be possible to establish "limited" proofs of security which are tightly defined in small areas, but a provably secure operating system is impossible. It's impossible on so many levels that I expect that Alan Cox doesn't understand the issues deeply enough.

    There are a number of problems with creating a secure operating system. One is the amount of code it takes. You can't create a security proof on huge volumes of code. Hundreds of lines? Probably. Thousands of lines? Maybe. Hundreds of thousands? No chance.

    The next problem is that we haven't figured out a way to make security modularise. You can't say "method 1 is secure, method 2 is secure, therefore using method 1 after method 2 is secure." It just doesn't work like this. You can put two secure pieces of code and get insecurity. This means you have to treat the whole operating system as one huge program, all of which needs to be proven secure.
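A toy illustration of that composition failure (hypothetical Python, not from any real codebase): each function below is arguably "secure" on its own terms, but composing them reintroduces the very flaw each one prevents.

```python
def escape(s):
    # "Secure" in isolation: doubles single quotes so the string
    # can be safely embedded in a SQL string literal.
    return s.replace("'", "''")

def truncate(s, n=10):
    # "Secure" in isolation: enforces a maximum field length.
    return s[:n]

# Composed, the truncation can split a doubled quote, leaving a
# lone quote that terminates the SQL literal early.
payload = "aaaaaaaaa'"     # nine a's and a quote (10 chars)
safe = escape(payload)     # "aaaaaaaaa''" (11 chars)
broken = truncate(safe)    # "aaaaaaaaa'" -- the lone quote is back
```

Neither function is buggy by its own specification; the insecurity lives entirely in the composition, which is why the proof obligation can't simply be split per module.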

    The third problem is that even if you establish a proof of security this still isn't enough. Your proof is based on some formalisation of the language, but the compiler itself might be buggy (either by accident or on purpose) and might compile in a way that breaks your proof. Ouch!

    Too often we strive for absolutes in security. Security is not binary. It is not a zero or one but a complex set of trade-offs and risk mitigation.

    Simon.

    • by querencia (625880) on Tuesday September 13, 2005 @02:25PM (#13549883)
      "I expect that Alan Cox doesn't understand the issues deeply enough."

      I hope someday I am cocky enough to make that statement.

      "You can put two secure pieces of code and get insecurity."

      Of course you can. But you can also put two secure pieces of code and prove that the combination is secure. The fact that the two pieces that you're combining are provably secure means that there is less work for you to do. Nobody is talking about writing the "Linux is secure" proof. If you start with the building blocks of secure systems and make them provably secure, you can absolutely combine them to come up with "provably secure systems."
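For what it's worth, the compositional direction can be made precise in a toy setting. A made-up invariant-preservation lemma (Lean, purely illustrative, nothing from a real verified OS):

```lean
-- If f and g each preserve an invariant P, so does their
-- composition. Real security properties are rarely this simple,
-- but this is the shape of a "combine and prove" argument.
theorem comp_preserves {α : Type} (P : α → Prop) (f g : α → α)
    (hf : ∀ x, P x → P (f x)) (hg : ∀ x, P x → P (g x)) :
    ∀ x, P x → P (g (f x)) :=
  fun x hx => hg (f x) (hf x hx)
```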

      "... a provably secure operating system is impossible."

      You are wrong. Perhaps a provably secure Linux is impossible. But Alan Cox didn't say "operating system." He said, "system." Always pause (at least briefly) before suggesting that you have a better understanding of operating systems than Alan Cox.
      • You are wrong. Perhaps a provably secure Linux is impossible. But Alan Cox didn't say "operating system." He said, "system." Always pause (at least briefly) before suggesting that you have a better understanding of operating systems than Alan Cox.

        These guys [coyotos.org] are working on just such a concept, attempting to write a microkernel OS in a language that supports formal semantics amenable to verification and correctness proofs. It seems they are still just getting underway, but it looks like an interesting project
      • by jd (1658) <{moc.oohay} {ta} {kapimi}> on Tuesday September 13, 2005 @04:05PM (#13550931) Homepage Journal
        Here are some steps to produce a provably secure version of the Linux Operating System. And, yes, I mean provably secure. It is not the only method, it is not necessarily the best method, it is merely a workable* method.

        *Workable means you can do this in finite time.

        1) For each function, determine the preconditions, postconditions and the formal description of that function.

        2) For each of the derived specifications, modify the specifications to be robust (ie: no invalid states are possible).

        3) For each subunit of code that is referenced outside of the unit it is within, add mandatory access controls with a default of "deny", except for the mandatory access control system's check access function which should have a default access of "accept", and the bootstrap code which should have no access controls as the MAC system won't be running at the time.

        4) MAC systems should be hierarchically defined in terms of linking a set of users to a set of rights those users can have. You then have as many mappings of this kind as you need. But because it is hierarchical, an application run by another application cannot assign rights it doesn't know about, nor can it assign rights to users it doesn't know about. An application accessed by paths with different rights must associate the rights to the path used to connect to it and define those as the superset of rights that path has when calling sub-components.

        Oh, and MAC system interaction should follow the paradigm laid out under the Byzantine Generals Problem - in other words, MAC systems should distrust each other enough that they can detect any MAC system that turns traitor.

        5) MAC should apply to EVERYTHING. The network, memory pools, swap space, shared memory, everything. No resource should permit access by default and no resource should allow unconstrained access granting. The resource should be able to control who can be granted access, so no one central system hands out access.

        6) Remote connections (via any kind of connection outside of the defined physical machine) should be secure channels (host authentication, user authentication and data validation) and should have access rights limited to the subset of rights allowed to both remote connections, the remote host and the user who is performing the access. This is in addition to any constraints imposed by the application being connected to or any access rights it inherits (and is therefore limited to).

        7) As part of 5, no "superuser" account should exist. Administrator accounts should only be permitted to administer, they should not be permitted to do anything else. There would be no "root" account, for example.

        8) Once the specification has been hardened as above, it then needs to be re-implemented as code and then the code must be formally verified against the specification for correctness.

        The first consequence of all of this is that paths would be very tightly constrained, making any kind of breaking out of the box about as close to impossible as you can get.

        The second consequence is that because all access control is independent (but hierarchical), breaking the security of one module won't affect the security of anything else and won't grant any rights in excess of the subset defined by the intersection of the rights allowed by the path of connection, the broken module, the module then accessed and the broken module's rights within the module then accessed.

        The third consequence is that, because the default is "deny", nothing can do anything not explicitly authorized by the entire chain of connections.

        Could this be done in Linux? Sure. If you add the kernel, X, KDE/QT, Gnome/Gtk, the GNU suite, etc, together, you're probably talking a billion lines of code. One million coders could probably do this entire eight-step lockdown over the whole of that codebase in a year, maybe two. There are more than a million coders o
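A minimal sketch of what steps 1 and 3 might look like in code (Python assertions standing in for a real formal specification; every name below is invented for illustration):

```python
# Step 1: preconditions and postconditions made explicit. A real
# effort would state these in a spec language, not runtime asserts.
def withdraw(balance, amount):
    assert 0 < amount <= balance      # precondition
    new_balance = balance - amount
    assert new_balance >= 0           # postcondition
    return new_balance

# Step 3: mandatory access control with a default of "deny".
# Only explicitly granted (subject, right, resource) triples pass;
# the grant table and the names in it are hypothetical.
GRANTS = {
    ("backup-daemon", "read", "/var/log"),
    ("mac-checker", "call", "check_access"),
}

def check_access(subject, right, resource):
    # Anything not listed falls through to "deny" by default.
    return (subject, right, resource) in GRANTS
```

The point of the default-deny shape is that forgetting a grant fails closed rather than open.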

        • Sounds like an excellent plan of attack. And it is even somewhat feasible on some level - doing the whole software stack is probably going to be far too much to bite off in one go, but the problem can be broken down and attacked in pieces. Just securing the kernel itself to this degree would be a significant benefit, and worth doing. Equally you can break that project down to some extent: just having a side project in the kernel performing steps 1 and 2, probably just in core functions at first, is quite fea
    • The main issue isn't complexity, at least not on an OS level. Systems like EROS and other Take-Grant type systems can be provably secure. The problem comes in the administration of multi-user systems.

      People have enough trouble managing simple systems like Unix-style permissions and Novell NDS permissions.

      Most multiuser systems I've come across in actual use have pretty glaring security problems, just because of the complex nature of the way people want to use them.

      At some point it becomes easier to just sa
    • There are A1 systems by the orange book criteria, all of which have small, provably secure security kernels. This amounts to an existence proof that the first point is in error.

      Alas, Ckwop is right in saying it's hard (:-)) You indeed need to limit the thing you propose to have secure.

      --dave

    • You are wrong. There are at least two "provably secure operating systems" EROS and SCOMP. These are Orange book proven systems which have NEVER been hacked and many groups have tried. The only problem is that they are so secure it is very hard to get any real work done with them. Ask the folks at Los Alamos and Mitre.
    • by gr8_phk (621180)
      "You can't create a security proof on huge volumes on code. "

      So we need to write smaller code. Perhaps the "kernel" of the OS should not be responsible for memory management and device drivers, but security of communication between all parts built on top of it (including APIs and hardware access). Perhaps the micro-kernel will have its day after all. How does the security model of the Hurd differ from that of Linux?

    • The third problem is that even if you establish a proof of security this still isn't enough. Your proof is based on some formalisation of the language but the compiler itself might be buggy (either by accident or on purpose) and might compile in a way that breaks your proof. Ouch!

      Don't omit the obvious: if I unplug the computer, encase it in cement, and bury it in my garden, it is secure.

      Functional? No. Secure? Yes.
    • by starfishsystems (834319) on Tuesday September 13, 2005 @08:40PM (#13553272) Homepage
      The next problem is that we haven't figured out a way to make security modularise.

      You raise several really interesting points.

      I think it would be more correct to say that we haven't found a way to reduce the general security problem by means of modularization. It's an open conjecture that we could do so, even in principle, since we don't actually know what the general security problem is.

      However, to the degree that we can isolate information processing into modular elements, we can individually reason about their security, and as far as I understand, those security properties are preserved under composition.

      There are two parts to this. The first is to show that the application of functions such as F(G(x)) or (F*G)(x) need not expose functions F and G to each other. That is, composition doesn't violate modularity in the ordinary sense. I take your point that a faulty compiler is in a position to violate modularity, but that's an implementation error, not a reason to discard the formalism.

      The second is that we have to formalize what composition means in terms of information exchange. Ordinarily, composition is assumed to be purely a matter of topology: as in circuit topology, the wires don't count. In the context of security, however, the interface explicitly exposes communication. Fortunately, communication security has been very well studied, and we should be able to apply those results here directly.
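      The composition point can be made concrete with a toy sketch (my own illustration, not from the thread; the function names and the sanitization policy are invented for the example). F and G only ever see each other's declared inputs and outputs, so the only information crossing the boundary in (F . G)(x) is G's return value, the "wire" in the circuit analogy:

```python
# Illustrative sketch: composing two "modules" through a narrow
# interface. Each module's security property can be argued in
# isolation; composition only exchanges G's declared output.

def g_sanitize(raw: str) -> str:
    """Module G: keeps only characters on a small allowlist."""
    return "".join(ch for ch in raw if ch.isalnum() or ch in " _-")

def f_render(safe: str) -> str:
    """Module F: embeds already-sanitized text in a template."""
    return f"<p>{safe}</p>"

def composed(raw: str) -> str:
    # (F . G)(x): F never sees the raw input, G never sees the
    # template -- modularity survives the composition.
    return f_render(g_sanitize(raw))

print(composed("hello <script>alert(1)</script>"))
```

      Of course, as the parent notes, a faulty compiler sits below this whole picture and can violate the boundary anyway.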

      Some details of my understanding may be wrong, and I'd be grateful for your thoughts on any of this.

  • Bull! (Score:4, Insightful)

    by cbiltcliffe (186293) on Tuesday September 13, 2005 @01:55PM (#13549630) Homepage Journal
    In a sense we are fortunate that most attackers want to control and use systems they attack rather than destroy them.
    Not a chance. Because with that, we've got millions of clueless users who think that because their computer turns on, it can't possibly have a virus/worm/spy trojan, so they do absolutely jack shit about it. Meanwhile, I'm still getting copies of Netsky.P emailed to me. It's almost a year and a half old, for Pete's sake!!!
  • 'tis a pity... (Score:5, Insightful)

    by advocate_one (662832) on Tuesday September 13, 2005 @01:55PM (#13549636)
    We are still in a world where an attack like the slammer worm combined with a PC BIOS eraser or disk locking tool could wipe out half the PCs exposed to the internet in a few hours. In a sense we are fortunate that most attackers want to control and use systems they attack rather than destroy them.

    cos if they actually destroyed them, then people would take proper care... apparently, it's quite normal for people to view their ms-windows boxes filling up with vermin etc. as just a fact of computer life... they only do something when they can't get online anymore... and then it now appears cheaper to buy a new box than get the damned thing fixed properly...

    • and then it now appears cheaper to buy a new box than get the damned thing fixed properly... ...or install Linux on it.
    • ... but it would be pretty interesting days to live in for a time. Just imagine the circus! =)

      Then again, it might just be good for those of us who don't run Windows. I mean, most important servers and the like aren't running Windows anyway, and those that do are probably pretty well firewalled. So we'd have the internet all to ourselves - probably the only thing I'd notice for quite some time is a shorter "Online Buddies" list. ;-)

      Now, if we had the games, imagine those ping times!
    • apparently, it's quite normal for people to view their ms-windows boxes filling up with vermin etc.

      Not just users....
      A laptop at work got a virus. I was asked to help clean it up. After 'cleaning' it, I suggested that we reboot and check again (actually, I suggested we just wipe the box and start again).
      Sure enough, the reboot-and-scan found a few more files.
      The local 'admin' just shrugged and said: "well, that's normal for windows, isn't it?"
  • by traveyes (262759) on Tuesday September 13, 2005 @01:56PM (#13549644)
    ...when a virus just wiped your harddrive....

    .
      Thank you CotDC for the Chernobyl virus.

      It erased the flash BIOS and wrote random data over the first MB of each hard drive (luckily it didn't go through the hardware directly and only hit drives Windows recognized, so my Linux partition was intact).

      That was the first time I used Linux exclusively (I had an old computer at the time that I could run BBox on, or the command line). I learned all about splitting windows in Emacs (to use an AIM client), and eventually I learned about ALT+Fkey so I didn't need to CTRL+Z emacs and then bg it so that I could run lynx a
    • by TheRaven64 (641858) on Tuesday September 13, 2005 @02:32PM (#13549973) Journal
      The most successful ones back then waited a few days / weeks and infected every floppy disk you inserted (executables and boot sector) so that they didn't die out immediately. Of course, the longer this period was, the more copies of the virus would exist and the more successful it was. Eventually, the period extended to infinity - the virus would infect the `host organism' and use it to create copies of itself until it was detected and killed. A virus with this strategy was far more successful - in fact the most successful virus would be one that didn't have any adverse effect on the computer at all.

      And that, my friends, is an example of both evolution and intelligent design in operation.

      • Non-computer diseases (particularly epidemics) work the same way. The most deadly ones are usually quite infamous when they attack and kill many people, but they usually die off into obscurity rather quickly.

        Take for example the common cold, which has stayed with us for many years, but is hardly as deadly as it used to be.
  • by bigtallmofo (695287) on Tuesday September 13, 2005 @01:56PM (#13549645)
    Professor Frink: Well, sure, the Frinkiac-7 looks impressive, don't touch it, but I predict that within 100 years, computers will be twice as powerful, 10,000 times larger, and so expensive that only the five richest kings of Europe will own them.

  • But there's really no way we can predict what computers are going to be like 15 yrs from now... much less computer security.

    In 50 yrs I'm going to assume that IPv6 (or v7,8,9) has taken over the world. Wouldn't that do a lot for basic internet security? No more scanning and rooting boxen.
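    The "no more scanning" intuition is just address-space arithmetic. A back-of-the-envelope sketch (my own numbers; the probe rate is an assumed figure, not from the comment):

```python
# Why brute-force address scanning stops working under IPv6:
# compare the whole IPv4 internet against the host portion of a
# single IPv6 /64 subnet, at an assumed 1M probes per second.

PROBES_PER_SECOND = 1_000_000  # assumption for illustration

ipv4_addresses = 2 ** 32   # entire IPv4 address space
ipv6_subnet = 2 ** 64      # hosts in just one /64

ipv4_hours = ipv4_addresses / PROBES_PER_SECOND / 3600
ipv6_years = ipv6_subnet / PROBES_PER_SECOND / (365 * 24 * 3600)

print(f"All of IPv4: about {ipv4_hours:.1f} hours")
print(f"One IPv6 /64: about {ipv6_years:,.0f} years")
```

    Roughly an hour for all of IPv4 versus hundreds of thousands of years for a single subnet - which is why IPv6 worms would have to find hosts some other way than random scanning.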

    As for stuff like BIOS erasers and disk locking tools, e-mail will no longer be a useful attack vector due to filtering. Then again, nothing can defeat stupidity.

    Disclaimer: IANAL

  • "In a sense we are fortunate that most attackers want to control and use systems they attack rather than destroy them."
    Of course, we will have to worry about the attackers that inadvertently destroy systems while trying to control them.
    I'm afraid I can't let you do that, Dave...this virus is too important for me to let you jeopardize it.
  • by markana (152984) on Tuesday September 13, 2005 @02:17PM (#13549822)
    "In a sense we are fortunate that most attackers want to control and use systems they attack rather than destroy them."

    This is not necessarily a good thing. I've read that Ebola and other very nasty diseases don't spread as far as they might, because they wipe out their carrier population too quickly. As opposed to HIV, which has time to slowly spread out. If an infected PC self-destructed after one round of outbound spreading, then it's not going to be continually spewing the junk like they do today.

    Such a virus would burn through the supply of unprotected PCs quickly, and then go away.
  • by sarlos (903082)
    If we could eliminate all users, the internet would be much safer! All joking aside, what it comes down to is this: as long as there is information people want to protect, there is going to be someone who wants to read it, distribute it, sell it (?). Let's play a mental game. Suppose we come up with a truly proactive system to protect a home PC (which is mainly targeted to become a zombie against riper targets). All a hacker needs to do is purchase a copy (or download it from IRC or some file-sharing service
  • The next 50 years of computing will see the introduction of AI to PC's in the form of an expert system designed to protect against intruders and malicious programs.
    • designed to protect against intruders

      Don't make me laugh!

      The next 50 years of computing will see the introduction of intruding AIs to PCs in order to control the integrity and lawfulness of the user.
    • The next 50 years of computing will see the introduction of AI to PC's in the form of an expert system designed to protect against intruders and malicious programs.

      So would a future version of Windows with this kind of AI uninstall itself the instant it's switched on?
  • by someone1234 (830754) on Tuesday September 13, 2005 @02:23PM (#13549869)
    A worm which would spread fast like slammer and destroy infected machines after a short time is actually benevolent. It will destroy only machines that would otherwise be used as spam zombies. The day after the outbreak the internet would be clean again!
  • Whitehat Extremists (Score:5, Interesting)

    by mrwiggly (34597) on Tuesday September 13, 2005 @02:28PM (#13549924)
    A group of whitehat extremists may become tired of lusers that don't patch their systems, and decide that they don't deserve to use the internet.

    They then launch their virus and destroy all non-patching infidels.

    What, it could happen.
  • by Anonymous Coward
    This is a vision of the future produced by someone stuck in the past. :)
    No offense, but a *lot* can happen in 50 years...
    • No offense, but a *lot* can happen in 50 years...

      Yep. 50 years ago, the computers we have now would have been inconceivable. But 50 years ago, the computers we had 30 years ago were also inconceivable.

      • You think? Alan Turing introduced the concept of the Universal Turing Machine in 1936 - almost 70 years ago. Everything we've done since then has just been making smaller and smaller implementations - actually, they're not even full implementations, since a UTM has infinite storage space.
  • Fortunate? (Score:2, Insightful)

    by Anonymous Coward
    From the summary...

    In a sense we are fortunate that most attackers want to control and use systems they attack rather than destroy them.

    Personally, I find it unfortunate. We would be more fortunate if the attackers did seek to destroy. I'd rather irresponsible people's computers were fried than to get tons of spam and viruses sent by them.
  • We are still in a world where an attack like the slammer worm combined with a PC BIOS eraser or disk locking tool could wipe out half the PCs exposed to the internet...

    Wouldn't a variant of this attack be great for hardware vendors? Read the BIOS and kill a certain percentage of the oldest computers per year. They're old, so folks probably wouldn't think twice about a hardware failure.

    Instant upgrade.

    Profit!
  • The vast majority would probably be much happier on another OS instead of Windows, because let's face it, all modern malware is for Windows. UNIX and Linux offer enough options to satisfy just about any group, and the user/company gets to keep closer control of their systems and doesn't have to pay licensing fees to MS, or buy antivirus programs and subscriptions on top of that, taking up computing cycles and HD space. UNIX and Linux users only have to keep their firewalls activated, or if you're an OSX user you can
  • "In a sense we are fortunate that most attackers want to control and use systems they attack rather than destroy them."

    Right, but I really don't see too much of a difference between a computer under the control of a hacker or hacker group and a destroyed computer, because either one makes a computer unusable for your average end user.

    It's an exhaustive effort to get rid of hackers once they're in since they install all kinds of nasty software, so for people who don't know much except their computer is doing
  • the shitty programming languages we use for building software? yes, I am talking about C and C++. Before I am modded as flamebait, I urge people to think twice about the programming languages we use.
  • Don't give them ideas!
  • Someone should release a destructive virus that is capable of spreading to most systems out there. This would clearly identify the idiots that run systems that are not secure enough to be allowed on the internet. Once those systems are destroyed those users should then be barred from owning a computer ever again.
  • Always endeavour to make computer security somebody else's problem, because it is without doubt the most mindless head f**k of the digital age, a route to instant, endless digital paranoia. There are just so many different ways to break in, it isn't funny. Ye gods: hardware, software, network infrastructure and incoming data. Governments, corporations and crackers all looking for ways to sneak in whenever they want to. FUD for the day: how do you know that you haven't already been hacked, and it's just that they

Chairman of the Bored.
