Security

The Future of Security 331

Kvorgette writes "Scott Berinato in The Future of Security presents a very dark future of security in the years around 2010. Several computer security experts expect that a major security-related problem (a 'digital Pearl Harbour') will change software development procedures and remove the freedom in computer use we are striving for. The worst part is, most experts apparently think removal of software tools and access to information from the majority of computer and Internet users would be a good thing."
  • I know, different Charles Baio.

    Still, unless you count Buddy, Charles provided a great role model and environment for the kids to grow up in. Security through education, not necessarily obscurity or technological whizbangitry.

    To reiterate: 1) Security can only be achieved through education. 2) I would have liked to fuck the older sister on that show.
  • by Anonymous Coward on Monday January 19, 2004 @07:11AM (#8019577)
    When you've got ONE company running the whole damn show, what will MAKE them focus on security? It's not like someone else will/can step in to take over.

    People can't see the forest for the trees...
    • by CdBee ( 742846 ) on Monday January 19, 2004 @07:37AM (#8019691)
      I could as easily argue that diversification of software and a multiplicity of non-binary-compatible platforms will lead to better security.

      Monopoly suppliers can produce good code, but this places an excess of trust in the end user - a group who historically have not been eager and diligent in software patching.

      Security loopholes become an issue when the software becomes omnipresent, as in Windows today.
      • I even wonder if M$ have deliberately incorporated security holes (otherwise how could their products be so bad?) as another part of their deceptive tactics, to further their monopoly. The average user does not even think of blaming M$ when he gets a virus, any more than he does when Word trashes the format of his document, or blows away two days' work. They have been conned into thinking such things are normal.

        The next phase of the deception will be (and IMHO it started about 2 years ago) to shift the emphasi

        • The same goes for device drivers. Requiring signed drivers has not improved quality noticeably, and it does raise further questions about potentially anti-competitive behaviour.
          • by *weasel ( 174362 ) on Monday January 19, 2004 @08:27AM (#8019955)
            c'mon - not everything is a malevolent plot coming out of Redmond.

            'Requiring' signed drivers is just a tech support cost cutting measure.

            Particularly with 3D video cards, MS was getting too many (difficult, time-consuming, deeply technical) tech support calls from people having problems with leaked/alpha/pre-release drivers. So they added driver signing to screen some junk out.

            and how else can Microsoft be sure that someone truly is running an 'official' driver than by requiring it to be signed?

            it's not as if you can't -install- an unsigned driver. It's just an extra 'ok' button to click.
    • by swordboy ( 472941 ) on Monday January 19, 2004 @08:30AM (#8019971) Journal
      This kind of attitude is one of the reasons that Microsoft is where it is today.

      There is currently a *large* market for someone that can create a simple solution to the security problem that exists with complex operating systems. For example: I work for a large financial company that does not allow any corporate access from non-corporate PCs because of obvious security reasons (i.e. - it would be easy to install a keystroke logger on just about any PC, Windows, Apple or otherwise). So everyone is stuck lugging their laptops around.

      It's not like someone else will/can step in to take over.

      This is very far from the truth.

      Using the previous example, if someone created a Knoppix-like bootable "secure" distro that allowed a user to bypass the existing OS on a given PC, a company could allow users to use most any PC for access. Install some VPN software, simple self-checking environment, and perhaps a user-specific token and things become very secure. There would even be a market for a network bootable version.

      But we are all going to sit on the sidelines while MS fixes the problem with trusted computing. All because of a lousy attitude problem.
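
      The "self-checking environment" idea above can be sketched roughly. This is a hypothetical illustration, not anything from the post: before the VPN comes up, the boot medium verifies its own files against a known-good SHA-256 manifest.

      ```python
      import hashlib

      def file_digest(path):
          """SHA-256 of a file, streamed in chunks so large files are fine."""
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(65536), b""):
                  h.update(chunk)
          return h.hexdigest()

      def environment_ok(manifest):
          """manifest maps file path -> expected hex digest; True only if all match."""
          return all(file_digest(p) == d for p, d in manifest.items())
      ```

      A real version would also need the manifest itself to be tamper-proof (e.g. burned onto the read-only CD alongside the files), otherwise the check proves nothing.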
      • For example: I work for a large financial company that does not allow any corporate access from non-corporate PCs because of obvious security reasons (i.e. - it would be easy to install a keystroke logger on just about any PC, Windows, Apple or otherwise). So everyone is stuck lugging their laptops around.

        First of all, it's not any harder to install software on company machines than personal machines unless the machines are locked down tight -- both physically and systematically. Second, that approach suc
      • a company could allow users to use most any PC for access.

        Which would cover the software sniffers but not hardware, which is pretty cheap and easy to get [thinkgeek.com].

      • if someone created a Knoppix-like bootable "secure" distro

        That's exactly what we are doing here! Askemos [askemos.org] is a (GPL'ed) P2P layer, distributed on a Knoppix-booted CD. It has a permission system as widely applicable as set theory can get you. And set theory is the means we use to prove that you can't abuse the administrative account.

  • FUD? (Score:4, Insightful)

    by Anonymous Coward on Monday January 19, 2004 @07:13AM (#8019585)
    Methinks this is another promotion of proprietary software. We Barbarians will find a way to protect ourselves despite what the Government and the Borg think is best for us.
  • by Jameth ( 664111 ) on Monday January 19, 2004 @07:15AM (#8019590)
    As is commonly the case in modern society, people focus on success at the expense of principle.

    Certainly, the average joe not having access to the internet would make the internet secure, so that would appear to be successful.

    The only issue is that this would be in violation of principles about freedom, principles which many people may not care about.

    It's the same reason that having corporate systems whose owners are removed from responsibility is problematic: only successfulness is considered, not right and wrong.
    • Certainly, the average joe not having access to the internet would make the internet secure, so that would appear to be successful.

      The only issue is that this would be in violation of principles about freedom, principles which many people may not care about.


      Absolutely right. And it was said a looong time ago. See Jung's "Present and Future". He warned against treating people as if they were all like the average.

      Unfortunately, even though there may be better solutions, people (specially politicians
  • FUD (Score:2, Insightful)

    by Anonymous Coward

    nothing like a clueless journalist to drive sales of security products up

    the sky is falling again oh no

    so anyone want to buy some insurance/security products/golem ?
    • Re:FUD (Score:3, Insightful)

      by tiger99 ( 725715 )
      What security products? None of them work properly, including Norton, McAfraud, and worst of the lot, Panda, which trashes everything in sight and still lets virii through.

      At home, my email etc. comes through a series of diverse operating systems, each doing at least some checking and filtering, none by M$ of course, before it arrives at the client program. I no longer ever use a M$ product on the internet. At work of course, I must use what is there, sadly a very dysfunctional browser (IE) and Lotus Notes.

  • I'm an Expert (Score:5, Insightful)

    by fuzzybunny ( 112938 ) on Monday January 19, 2004 @07:17AM (#8019601) Homepage Journal
    ...or at least my customers think so. I am a security consultant, and I certainly do not believe that you'll get anywhere through removal of users' freedom. Nor do most of my "expert" colleagues. In fact, that viewpoint is the one I've most frequently heard from fairly clueless middle management most concerned with immediate, band-aid fixes to deeper problems.

    Like it or not, that's what it comes down to--freedom and choice. Our job is not, like in other fields, to "get to the bottom of the problem", but to fix the symptoms. Because, frankly, the cure would be worse than the disease.

    Currently, you and I, as "clued" users, have access to the resources we need. We would be needlessly crippled by DRM, technical restrictions, whatnot. We all saw how effective US export controls on encryption technology were in the long run, and a lot of us have run into situations at work where we simply couldn't do the job with the given tools (all of which had to go through months of committees and acceptance testing, whatever.)

    I'll grant you that corporations have more leeway in this; a company environment is more likely (and legitimately so) to be less flexible regarding software tools available to employees. But for general use?

    I've been following loads of discussions among ISPs, for example, who see nothing fundamentally wrong with limiting traffic to ports 25, 110 and 143. Nice prospects, you say? Well take this a step further--when "someone" decides that the grannies of this world, whose PCs are currently spitting worms left and right, should be locked down, do you think that the type of legislation and technological restrictions necessary to do this will differentiate between the grannies and the "clued" users?

    I don't have the answers, but I strongly suspect they go in the direction of continuing education. A few years ago, most people couldn't spell "virus" (well, they probably still can't, but they at least know what it is.) Putting the spotlight on security holes and spam and and and for the average joe is what gets results, not locking shit down.

    Sorry for the ramble.
    • Re:I'm an Expert (Score:5, Insightful)

      by AllUsernamesAreGone ( 688381 ) on Monday January 19, 2004 @07:39AM (#8019708)
      A few years ago, most people couldn't spell "virus" .. and people still can't spell the plural of virus ;)

      Putting the spotlight on security holes and spam and and and for the average joe is what gets results, not locking shit down.

      In the long term, yes. But unfortunately locking shit down does get results in the short term, just not the ones we'd like. And that's where most companies and governments look.
      • Re:I'm an Expert (Score:5, Insightful)

        by fuzzybunny ( 112938 ) on Monday January 19, 2004 @07:48AM (#8019740) Homepage Journal
        You're completely, frighteningly correct. You wouldn't imagine how much time I've spent (often successfully) trying to convince customers that, if some dude's looking at net porn all day, their problem goes deeper than anything that could be solved by looking over his shoulder.

        Kind of goes along the same line as blaming parents for delinquent kids--it's fascinating, how few senior management types are willing to hold lower management accountable for what their people do all day, instead preferring quick-fix surveillance "solutions".
    • I've been following loads of discussions among ISPs, for example, who see nothing fundamentally wrong with limiting traffic to ports 25, 110 and 143


      Wow, no port 80 for us? Yay. And, of course, limiting traffic to port 110 is really more secure. Like I couldn't use some remote HTTP-RPC interface to telnet (or a very dumb VB virus that spreads over POP3 email). Or some port-80-downloaded spyware.
    • I don't have the answers, but I strongly suspect they go in the direction of continuing education. A few years ago, most people couldn't spell "virus" (well, they probably still can't, but they at least know what it is.) Putting the spotlight on security holes and spam and and and for the average joe is what gets results, not locking shit down.

      I agree that more computer users need to understand more about the powerful machines that they use. The current Internet's design makes it too easy for one person
    • A few years ago, most people couldn't spell "virus" (well, they probably still can't, but they at least know what it is.)

      And even if they can spell it, they most certainly can't spell its plural!

    • Security Consultant my arse - ISPs are NOT talking about limiting any ports.

      You've confused your bedroom with the real world of B2B, VPNs and everything else - Port Numbers don't cause insecurity either.
  • A suggestion (Score:5, Interesting)

    by Zog The Undeniable ( 632031 ) on Monday January 19, 2004 @07:17AM (#8019602)
    AV software is useless against new exploits unless heuristics are turned on. Few people will do this because of false positives.

    Relying on OS patches is useless because the true dark-side hackers won't publicise any holes they've found until they've used them.

    What could be useful is - dare I suggest it - holding essential OS kernel files in ROM. Slightly awkward if you want an upgrade, but not insurmountable with socketed chips. If you use UV-erasable ROM chips, you can still burn upgrades at home but remote hacking is impossible. And your PC would start up in the blink of an eye!

    • Re:A suggestion (Score:5, Insightful)

      by tal197 ( 144614 ) on Monday January 19, 2004 @07:27AM (#8019645) Homepage Journal
      What could be useful is - dare I suggest it - holding essential OS kernel files in ROM. Slightly awkward if you want an upgrade, but not insurmountable with socketed chips. If you use UV-erasable ROM chips, you can still burn upgrades at home but remote hacking is impossible.

      ...unless you have the ability to load extra stuff from disk at startup/login, at which point there is no advantage (your computer is only virus free for the first 2 seconds after power on).

      (if you can design your ROM code well enough that it won't allow a remote attack to take control from it, then it didn't need to be in ROM in the first place)

      OS in ROM is good for other things, though (speed, impossible-to-mess-up failsafe boot, etc).

    • What could be useful is - dare I suggest it - holding essential OS kernel files in ROM.

      Even easier is to have workstations without hard drives and boot them all from a central NFS server. Configure the export to be read-only and the NFS server so that it cannot be exploited (no route to net). As an added bonus you can turn off the workstations without shutting down (no fsck needed), no drives making noise / burning watts and less maintenance since individual workstations don't need to be installed.
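
      The read-only export the parent describes might look like this in /etc/exports (the server paths and subnet here are invented for illustration):

      ```
      # /etc/exports on the central boot server
      # read-only root filesystem shared by all diskless workstations
      /export/root    192.168.1.0/24(ro,sync,no_subtree_check)
      # optional writable scratch space, kept separate from the OS image
      /export/scratch 192.168.1.0/24(rw,sync,no_subtree_check)
      ```

      As the comment notes, the server itself would have no route to the outside network, so the read-only image can't be attacked remotely.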
    • This suggestion is only slightly convenient for loading (but is it the kernel that takes long to load? If not, wouldn't the rest of the OS still be slow to load? And there are other ways to say 'read only', such as booting off a CD-R).

      But what security point will it solve? Either you have a 'secure' OS that guarantees untrusted sources are kept off the privileged data, or you'll have software that is somewhat 'insecure' (like 100% of software today). And then it'll not be possible to patch the s
    • What could be useful is - dare I suggest it - holding essential OS kernel files in ROM.

      I believe the word you are looking for is "Knoppix".

  • by Secrity ( 742221 ) on Monday January 19, 2004 @07:18AM (#8019606)
    I may be getting my three-letter publisher names mixed up, but doesn't IDG do nice reviews for Microsoft? This whole scenario seems to be tailor-written as FUD promoting the Trusted Computing model and its successors. The winners of this fictitious version of Pearl Harbor are very easy to pick: Microsoft, the RIAA, the MPAA, and the studios.
  • by Anonymous Coward on Monday January 19, 2004 @07:19AM (#8019609)
    Hackers will find a root hole in Mac OS X, and use all the Macs in the world to commit terrorist acts.

    More GNOME developers will be assassinated by the Korporation. Three have been already.

    Linus Torvalds will be arrested and become a slave for Microsoft.

    The trolls on Slashdot will take over, and the GNAA members will kill Michael Sims and CowboyNeal.

    Microsoft will take Linux and KDE, use them for the version of Windows beyond Longhorn, and call it Windows Kinux.

    This post will be moderated -1, Insightful.
  • by ten000hzlegend ( 742909 ) <ten000hzlegend@hotmail.com> on Monday January 19, 2004 @07:19AM (#8019610) Journal
    The very fact that we can forecast and predict which supposedly invulnerable arms of the internet will fall first, according to this article, is disturbing enough. A digital Pearl Harbour, perhaps a lackey term, is inevitable but will come sooner than expected: think of how much PC hardware costs have fallen proportionally to consumer selling prices, broadband+ connection prices are down to an all-time low (the same as 56k five years ago), and the growth of the internet has not gone hand in hand with updates to its infrastructure. A policing system for the net can only be a good thing, not to check whether Joe Bloggs is downloading the 30th anniversary Metallica SACD, but to ensure that the near-fragmented "backbone" of the net is not exploited by next decade's bugs and programming errors, which the article preaches rather well.

    Remember, and this is just a term off the top of my head: an ant can support its body mass on tiny, tiny legs, but enlarge the ant to human size and its legs are no thicker than a pencil; it cannot support itself.

    The net has become an unchecked, unpoliced medium, growing every day. There will be more than half a billion new users by 2008; the digital Pearl Harbour may come sooner than we think.

    I use it for Slashdot, other than that... nada

    • Do you really think the internet is "unpoliced" ?

    • The internet is nothing more than a mesh of communication networks. The physical means of communication between the nodes on this network can be, and are, implemented in many different ways (modems, ADSL, fiber optics, floppy disks, drums, etc...).
      But at its most basic, the internet means the ability to pick out someone (a person or a machine) and talk to them. That is the reason why you don't want to restrict it. You want free flow of information.
      Restrictions only benefit the people in power becau
  • by katalyst ( 618126 ) on Monday January 19, 2004 @07:19AM (#8019611) Homepage
    The internet is still a relatively infantile concept; rules are not rigid, and everyone's feeling their way around, with standards being reviewed and re-written every day. The future may well be as the author claims it to be; the net surfers of today, the Slashdotters, will be looked upon in the future as we look at the hippies: they had their sex and drugs, we have/had any data/information we wanted. This DOES NOT mean that I disapprove of today's internet; after all, who has the right to decide on our behalf what we can know and what we cannot? But with mega-organizations like the RIAA pushing harder for stringent rules (yes, they can claim to have a valid concern), I won't be surprised if our grandkids point fingers at us and say "hey - in your days, couldn't you look up how to make bombs and hack and even look at naked women?"
  • He has some points (Score:3, Insightful)

    by drpickett ( 626096 ) on Monday January 19, 2004 @07:19AM (#8019612)
    The knee-jerk reaction of politicians on both the right and left is a matter of death and taxes inevitability - I think that it is a good thing for software to have lots of people pounding on it at the same time - I also think that cyber terrorism is a bad thing - Being a gun nut, however, I don't think that preemptively taking away software tools is the way to solve the problem

    If compilers are criminalized, then only criminals will have compilers

    Open source software tools don't kill networks, people do

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Monday January 19, 2004 @07:19AM (#8019613)
    Comment removed based on user account deletion
    • ceteris paribus: with all other factors or things remaining the same

      I agree the article seemed to leave in abeyance any positive developments and extrapolate the negatives we currently face. The existence of the article, and our awareness of the potential problems, speak to the potential to develop antidotes.

  • by qortra ( 591818 ) on Monday January 19, 2004 @07:20AM (#8019620)
    Yes, and mechanics expect broken cars, teachers expect ignorant people, and doctors expect injuries. Of course, just by explaining what they "expect," security experts create more business for themselves by instilling fear in the public. Whatever.
  • by Debian Troll's Best ( 678194 ) on Monday January 19, 2004 @07:21AM (#8019621) Journal
    The 'experts' in the article seem to think that restricting access to the internet and to software applications would be a good thing for security in the long run. I'm only a humble system administrator, so it isn't for me to decide on high level policy, only to implement it. But where I feel I can comment is on a technical level. Possibly the biggest threat the average user faces today is that of the 'trojan'. No, not the prophylactic device, but the type of insidious security threat that you invite into your virtual home, where it then uncloaks into something altogether nastier. Devising systems to combat the spread of trojans is something which I devote a lot of my spare time to. Linux users think they may be immune to trojans, but that isn't true. 95% of Linux users trust their binary package managers implicitly, yet this is where the biggest hole is. I propose a solution: Trusted apt-get.

    Trusted apt-get is a fully secured, digital rights managed version of the popular package management system for Debian. However, Trusted apt-get differs in many ways. In order to avoid the situation of people being tricked into installing trojan-containing .deb files, all Trusted apt-get packages come from secured, trusted servers. Many of these are hosted in former Russian military data centres, and are easily identified by their '.ru' domain names. This is a mark of trust. Secondly, the Trusted apt-get source code has undergone a line-by-line security audit by Theo from OpenBSD. A lot of people believe that Theo isn't all that keen on Linux, but it's mostly been due to the lack of security focus. Trusted apt-get changes that. The final component is a DRM layer in apt-get, which allows for trusted, copyrighted closed source packages to be easily installed on any Debian system. This DRM layer is implemented using standard UNIX crypt() calls, so it's really portable, yet really secure.

    We can all look forward to the day when downloading trusted, trojan free software is as simple as issuing a 'trusted-apt-get install gator' command (followed by a reboot. Rebooting flushes insecure code from the processor execution stack, and is the only NSA-approved way to install software safely on a UNIX/Linux system). I believe Trusted apt-get will be available as the standard package manager from Debian 4.0 onwards. Until then, apt-get play it safe.

  • That's stupid! (Score:5, Insightful)

    by ByteSlicer ( 735276 ) on Monday January 19, 2004 @07:21AM (#8019622)
    Preventing people from accessing security-related information will only make things worse. Hackers will create their own tools, and find security holes on their own. Yes, there will be fewer people who know about the holes. But they will be able to do more damage, since there are too few people who have the knowledge to stop them.
  • by zero-one ( 79216 ) <jonwpayne@gma[ ]com ['il.' in gap]> on Monday January 19, 2004 @07:21AM (#8019623) Homepage
    It should be simple to write secure software. Most current operating systems, in their default configuration, assume that applications run by the current user should have all the powers and privileges of the current user. This is obviously wrong.

    If I install a text editor, I probably don't want it to be able to access the Internet. It should be possible to say, "for this app here, don't let it do anything network related". That way, no matter how badly the text editor is written, it can't do any harm beyond the data it is allowed to work with. If I then want to use the text editor to print to a network printer, I should be able to tweak a few options to make that possible (without enabling anything else).

    Ideally, all of this would happen when an application is installed. If there were some UI that said, "This here program is asking for the following rights, is that OK?", I would immediately know what I was letting myself in for.

    I know there are various ways of doing this kind of thing at the moment (virtual machines, using permissions more effectively or using different accounts for software) but none of them are particularly easy to get going.

    With all of this implemented correctly, it should be possible to run any application (no matter where it came from) without risking all the data on a PC and connected resources, and to deal with security in a way that any normal user would understand.
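
    One rough, hypothetical sketch of the install-time rights model described above (all names are invented): the app declares the rights it wants, the user grants a subset once at install time, and every privileged call is checked against that set.

    ```python
    # Toy model of install-time permission grants: anything the user
    # didn't grant at install time is denied at call time.
    class RightNotGranted(Exception):
        pass

    class Sandbox:
        def __init__(self, granted):
            self.granted = set(granted)

        def check(self, right):
            # Called by the OS before any privileged operation,
            # including those made by child processes.
            if right not in self.granted:
                raise RightNotGranted(right + " not granted")

    # "Install time": the user approves only file access for the text editor.
    editor = Sandbox(granted={"fs.read", "fs.write"})

    editor.check("fs.write")           # allowed: the user granted it
    try:
        editor.check("net.connect")    # the editor never got network rights
    except RightNotGranted as exc:
        print("denied:", exc)          # prints "denied: net.connect not granted"
    ```

    The deny-by-default direction matters: the app is allowed to do nothing until told otherwise, which is the opposite of how the operating systems in the parent comment behave.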
    • If I install a text editor, I probably don't want it to be able to access the Internet. It should be possible to say, "for this app here, don't let it do anything network related".

      For Windows (sigh), you can use ZoneAlarm (free edition) to do exactly this. It would be nice to have something like that in the Linux kernel.

      • You can do something similar. You can decide what each application can do (the first time it tries to access the Internet) but I was thinking of a wider solution. The permissions should apply to everything that an application could do (disk access, printing, internet access, network access, etc) and it should assume that the app is not allowed to do anything until told otherwise.
    • No, it is not. (Score:5, Insightful)

      by lennart78 ( 515598 ) on Monday January 19, 2004 @07:32AM (#8019673)
      My father-in-law complained about his PC being slow, so I agreed to take a look at it, suspecting it was infested with spyware and such. I was right, and I wiped the machine clean as best as I could. I also installed a personal firewall, so spyware/adware should not be able to dial up to the internet at their own discretion.

      What happened next is that when somebody wanted to visit an Internet page, or collect or send some email, that firewall would first ask permission for the app to contact the Internet. The first question was whether the app was allowed to contact host X.X.X.X at UDP/53. This, of course, means bollocks to the average user.

      The moral of this story is that you need in-depth knowledge of computers, software and (TCP/IP) networks in order to tell your computer whether an action can be considered safe.

      You could pose that a text-editor does not need Internet connectivity. How many of you guys use freeware/shareware that is ad-supported? How many (even payware) apps 'phone home' nowadays before even displaying anything like a splash screen?

      Security of software and operating systems is primarily the responsibility of the writer thereof. You can NOT trust your average user to know what's safe and what's dangerous. You simply can't.

      Viewed in that light, locking down a users rights, even on his/her own box, seems like a decent idea. It would save a lot of spam and virus trouble, and spyware firms would be out of business before the week is over.

      I however think that I know what I'm doing, and I demand my rights. I'm willing to take a test of competence if needs be, but I will under no conditions give up the control of my system to anybody, especially to companies or governments.
      • Re:No, it is not. (Score:3, Informative)

        by gad_zuki! ( 70830 )
        That's the real problem with outbound filtering: you're relying on the end user to say yes/no. Ideally the firewall should contact its vendor's (or a public) database and tell the user whether the program is malicious or not. You could automate this and never bother the user with those outbound requests.
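
        A toy sketch of that automated lookup (the reputation table and verdicts here are entirely made up): hash the program's binary and consult a database instead of prompting the user.

        ```python
        import hashlib

        def verdict_for(binary_bytes, db):
            """Look the program's SHA-256 up in a reputation table."""
            digest = hashlib.sha256(binary_bytes).hexdigest()
            # "unknown" means the database has never seen this binary,
            # so the firewall would fall back to asking the user.
            return db.get(digest, "unknown")

        def allow_outbound(binary_bytes, db):
            return verdict_for(binary_bytes, db) == "trusted"

        # An invented reputation table keyed by binary hash.
        demo_db = {hashlib.sha256(b"known-good-binary").hexdigest(): "trusted"}
        ```

        Keying on the binary's hash rather than its filename matters: malware that renames itself to iexplore.exe still hashes to something the database has never blessed.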
      • The new version of Kerio, by default, just asks if a given application ought to be allowed to connect to the Internet (or get a connection from the Internet). It still requires a bit of technical understanding, but not so much that you couldn't educate the average user. You need no understanding of TCP, you just need to read the window and see if the application listed is one you want to do what it is asking to do.
    • "If I install a text editor, I probably don't want it to be able to access the Internet. It should be possible to say, "for this app here, don't let it do anything network related". That way, no matter how badly the text editor is written, it can't do any harm beyond the data it is allowed to work with. If I then want to use the text editor to print to a network print, I should be able to tweak a few options to make that possible (without enabling anything else)."

      It could still write a bat file to ftp off
        • No, because the process it is running in would not have the permissions needed to execute the FTP client (or it could execute the FTP client, but the FTP client wouldn't have the ability to access the Internet). Essentially you would be saying, "For this process, reject all calls to the following OS APIs...." (even if they are called by a child process). Easy and secure, as you are doing the tricky security stuff once at the OS level rather than in every application.
    • Perhaps you mean something like per process namespaces and device access through file interfaces controlled by normal permission checking.

      Nah, that's just crazy talk.

      oh, wait [bell-labs.com]

    • If I install a text editor, I probably don't want it to be able to access the Internet.

      Then let's all say bye-bye to emacs. (After all, vi is the One True Editor!)

    • Kerio Personal Firewall version 4. It doesn't do everything you talk about, but it has much of what you ask for. Of course, not being integrated in the OS makes it subject to some overrides, but it's pretty good security all in all. It does as you suggest on network access. If a program tries to access the network, or if a program is listening for network access and something tries to access it, Kerio pops up and asks if that is OK. You may permit it on a one-time basis or permanently. It also features some
  • Why is the author looking at this from the Windows point of view?
    In 6 years Windows will probably be vanishing, and there will be more Linux (or other OS) based desktops than Windows ones.

    Enforcing laws stopping users from using some services won't achieve anything. It's like using robots.txt to stop people from mass downloading: I can easily get the wget sources and modify them not to use the robots.txt file. In the open source world such restrictions do not apply.

    Regards

  • I don't get this.. (Score:5, Insightful)

    by -noefordeg- ( 697342 ) on Monday January 19, 2004 @07:25AM (#8019634)
    Diversity is what keeps the 'digital world' going. Standards specify how we communicate, but what we do with the information we process is up to the operating system/applications.

    What the article suggests is that we should have a 'standard' way of doing this, "standard software patches". Now what if someone breaks that standard and introduces a bug/backdoor into a standard patch which everyone will receive? We'll have a situation much worse than anything that can possibly happen today.

    "The federal government will mandate that users must authenticate their identity to access the Internet itself"
    -Wow! Only one place 'to hit' to deny access for everyone to the internet.
    What if I identify myself as someone else? Of course that will happen; then someone can wreak havoc and later an innocent neighbor will be arrested because:
    'It was him, without doubt, who did all this and that on the internet. Proof? We have logs which clearly show the perpetrator logging on to the net.'

    Standards and centralization are what will bring us a 'digital Pearl Harbor' (what a stupid name).
  • by Savant ( 85811 ) on Monday January 19, 2004 @07:26AM (#8019639)
    This reminds me rather of the anxiety over the Y2K bug. I think the rather doom-laden scenario being predicted here is frankly overblown.

    "Then the lights wink out. Everywhere.

    Then it begins to get cold."

    Naturally, it leads into a Big Brother state from that point on. The article's a troll; it engages in emotive button-pushing.
  • by tolan-b ( 230077 ) on Monday January 19, 2004 @07:27AM (#8019651)
    I'm sorry, I couldn't finish the article, it was just pissing me off too much.

    This guy is utterly clueless, I mean look at this:

    Five factors distinguish the digital Pearl Harbor from the virus attacks we've suffered to date.

    First, it disrupts backup systems. Fragile networks heretofore have been mitigated largely with backup. Disrupt that and badness follows.

    Second, it leads to cascading failures. All of those massively inconvenient attacks people previously referred to as Pearl Harbors pile up. Due to the loss of backup, corporate earnings data is irretrievably lost. This panics Wall Street and destabilizes the financial sector.


    OK, a couple of things. First, "it disrupts backup systems". Riiiight. So this Flaw in 'the internet infrastructure' can also get to tape backups in safes? OH NOS!!!1!

    Second, "it leads to cascading failures. All of those massively inconvenient attacks people previously referred to as Pearl Harbors pile up."
    "it attacks the Internet infrastructure--such as domain name servers and routers--and industrial systems connected to the Internet, like utility control systems.". I'm sorry but if someone connects utility control systems to the net then they are the ones who should be strung up.

    The point is that bugs aren't a risk to 'national security'. They are a big problem, and I'm sure they will be very costly to business, but an attack or accident that has a serious detrimental effect on people's lives, caused by security holes, just shouldn't be possible.

    This important infrastructure should not be connected to a fundamentally insecure network, and if you're looking for scapegoats, they should be those who allow that sort of level of insecurity. Look at that power station that got Blaster...
    • I'm sorry but if someone connects utility control systems to the net then they are the ones who should be strung up.

      I was dozing in a dull control systems keynote at a conference the other day (I'm a process systems engineer) when I was woken up by a slide titled "Process Control Web Interface" with a screenshot of a web page, complete with pretty coloured sensor output, valve status etc.

      The next slide had their network topography - with [Process Control], [Firewall] and [Internet] blocks.

      From what I u

  • YAWP (Score:4, Funny)

    by tomstdenis ( 446163 ) <{moc.liamg} {ta} {sinedtsmot}> on Monday January 19, 2004 @07:30AM (#8019664) Homepage
    Yet Another Weak Prediction.

    I predict in the next or previous six months you had a birthday.

    And also that it will rain on July 14th sometime in the next 50 years in Ottawa.

    Can I get a published article too now?

    Tom
  • by Debian Troll's Best ( 678194 ) on Monday January 19, 2004 @07:33AM (#8019677) Journal
    With so much of the web's infrastructure now running on Linux systems, the question needs to be asked: "How secure is the average Linux distribution?" If Linux is to continue its drive into the data center, with solid distributions like Debian and Mandrake at the spearhead, is it time for the Linux kernel to undergo the same type of rigorous, line-by-line security audit that OpenBSD has been built around? What is the opinion of Slashdot users out there who have had to implement a 'front line' Linux box, exposed to the day-to-day attacks that are part and parcel of an Internet-exposed server? Are you wanting more security, or is Linux solid enough? Is OpenBSD really necessary, or is it mostly just hype? And are our current packaging systems robust enough to prevent the kind of trojan episodes which seem to grip the Windows 2000 Server community on an almost weekly basis? Can apt-get take us up to 2010 in secure confidence? I'd love to hear your opinions.
  • by starseeker ( 141897 ) on Monday January 19, 2004 @07:37AM (#8019698) Homepage
    "Authentication doesn't scale. But surveillance does. "The costs to observe are virtually zero, so it's not a question of will it exist, but what will we do with it?" Geer asks."

    The AMOUNT of information you collect can scale, but the UNDERSTANDING of that information is limited by the processing capability of the organization collecting it. Not to mention its power and ethical use are in the hands of one organization.

    I'm hoping by 2010 we will have remembered not to trust the government too much. Power corrupts, and post-Sept. 11 is no different than pre as far as that goes. Nor is post digital Pearl Harbor different from pre.

    Bad things can happen - we have to accept that or do our society great damage. Any fixed target is a soft target, and computers and the internet are no different from anything else that way. The biggest liability right now on the net is unpatched Windows machines. Fixing the problems isn't enough - the fixes must be put into action. How do we solve that problem? Dunno, unless we do it right the first time (www.eros-os.org). But a free society has to be worth any price, or it will collapse. I won't accept government oversight as the price of keeping my computer safe - that price is too high. Particularly when it won't solve anything.
  • $1/day = $1,784 cash by 2008.12.07
    that and a 9mm
    oh, and a DVR loaded with stuff to catch up on.

    there, that's it.
  • pearl harbor? (Score:5, Insightful)

    by Anonymous Coward on Monday January 19, 2004 @07:48AM (#8019739)
    Politicians always think it's going to be an "electronic Pearl Harbor" but never imagine that it will actually be an electronic Exxon Valdez, or Bhopal, India.

    The entire assumption is that some rogue power will launch a surprise attack on mothership America, when really, a bit of crappy code created by a monolithic company will cause widespread harm to the network and the economy.

    It's already happened, look at Blaster/Nachi. The amount of background noise on the Internet caused by worm traffic in the core will only increase, and interestingly, probably to the point where it will make bandwidth expensive again.

    As a security professional, it is always embarrassing to hear colleagues talk like this. It's self serving, unsophisticated, and politically motivated.

    Get off it.
  • They want to take our development tools! I say we take a leaf out of Charlton Heston's book and start the National Compiler Association.

    You can prise gcc from my cold, dead hard drive!
  • by Anonymous Coward
    I am a Computer Engineering graduate from one of the best CE schools in Canada.

    At this time I am 2 years into a software developer's career. I work at banks (multiple). At every stage I realise how horribly lacking my education was in security. I realise that as a "professional" I cannot tell how secure a system is. I make fundamental security errors in my code.

    In Skule, the only course that mentioned security was a mostly theoretical Software Engineering course. The security it mentioned was a fault
  • In other words, today's sloppiness will become tomorrow's chaos.

    *sigh*

    Show of hands for all of you out there who are sick and tired of reading stuff like this combined with lack of action to deal with the matter.

  • Cost, skill, time (Score:5, Insightful)

    by PureFiction ( 10256 ) on Monday January 19, 2004 @07:51AM (#8019759)
    Secure programming requires additional skill and focus during design, development, testing and configuration. This drives up costs and extends the schedule for any project.

    Ultimately the market decides winners in the software space (usually), and everyone needs to see security as a feature worth paying more for, from the employees designing and building the systems, to QA testers performing thorough audits before deployment, to users comparing choices in the corporate or consumer software space.

    The author argues that it will take a digital Pearl Harbor to effect this change. I doubt it will be as drastic. We are already seeing consumers, users and businesses move towards more secure systems (and adding more diversity, breaking the monoculture).

    The pain is only going to increase as attacks grow more and more prevalent, and damage more and more severe. Instead of a single, high profile event, I think we are going to see the current trend continue and accelerate: more and more people spending more money on secure systems, and diversifying their environments.

    In the software market, consumers and producers are equally responsible for the state of security. It costs more time, money and skill to build secure systems: are people paying more for the secure alternatives on the market? Do people make a thorough effort to address security before purchase? Until the answer is yes, the current methods will remain the market leaders. Those that ignore security (to the extent they can) will come to market faster and cheaper than their more secure alternatives.

    Those that put a premium on secure systems will spend more for a solution that gives them the stability and features they require, and understand the tradeoff involved in terms of cost, time and skill.
  • by cardpuncher ( 713057 ) on Monday January 19, 2004 @07:55AM (#8019778)
    It's a populist piece of scaremongering, but it raises one valuable point: the fact that there are fewer and fewer baskets to contain the vital infrastructure eggs.

    If you have separate wires for power, telephone and internet and an entirely separate mobile phone network you have a fair chance that enough of them are going to stay working to allow you to repair the ones that aren't.

    If your voice communications are running over IP over your powerline and the phone companies throw out their phone switches and replace them with VoIP routers which are also switching internet traffic and, incidentally, providing virtual private networks which link the utility companies' control and monitoring systems, then the chances of everything going down together are significantly increased.

    The only way to stop this tendency is to change the definition of "bottom line" and that can only be done through our old friend regulation.
  • Today's software development processes put out systems with a high level of badness and ugliness.
    (I would also suspect there to be stupidity and obtuseness.)

    Microsoft has to sharpen up on security. They, and the rest of the IT industry, will sharpen up by innovating less. (Gawd. Is that, like, negative innovation?)

    Companies don't think enough about the common good.

    Hawaiians would be wise to spend the 7th of December 2008 offline.

    To be secure, we should hire 3rd world labor to read our keystrokes, or maybe
  • Ironic (Score:3, Interesting)

    by gmuslera ( 3436 ) * on Monday January 19, 2004 @08:15AM (#8019877) Homepage Journal

    ... what the article proposes is something near a monoculture of software... and that is exactly what can cause the problem... "ok, now everyone follow this one way of programming" is a good recipe for a future disaster. Heh, maybe a better solution is to close down Microsoft, or open-source Windows, or whatever neutralizes that single point of failure.

    With software diversity a unified attack will at least be harder, and freedom to discuss the problems (something that goes a bit against what is proposed in the article) certainly helps to avoid or minimize their effects.

    Those who sacrifice freedom for security deserve to lose both, and that could be particularly true in the digital world.

  • That's the thing I find funny about the whole dot-hack anime/game series. A major computer virus that attacks every operating system on the planet except one. So they standardize on it.

    Which is the reverse of how things work. As long as there isn't a monoculture, it's simply too much work to make a computer virus that attacks more than one or two types of systems. FWIW, the Morris Worm was designed for two, Sun 68K and VAX/BSD I think, but one could only spread via Sendmail debug mode. I'm pretty sure

  • The Apocalypse 2k4 (Score:5, Insightful)

    by Sklivvz ( 167003 ) * <marco.cecconiNO@SPAMgmail.com> on Monday January 19, 2004 @08:25AM (#8019943) Homepage Journal
    This article is both bogus and dangerous. It's just a 2004-revamped prophecy of the apocalypse:
    The apocalypse:
    1) Predict utter destruction for the whole mankind
    2) People freak out
    3) Enforce your own agenda ("Give me your lands and you will be saved when the world ends in year 1000")
    4) Profit! The church is the richest state in the world.

    This FUD:
    1) Predict utter destruction for the whole mankind
    2) People freak out
    3) Enforce your own agenda ("Give me your freedom and you will be saved when the time comes!")
    4) Profit! Corporations control mankind.

    It seems so obvious to me that it's scary! A few points worth considering - let's dispel the FUD:
    - The article says that every computer has 200,000 bugs in 2010. Omits to mention that in a multi-cultured internet (different computers, OSes, software) most computers would have a different set of bugs and therefore an attack couldn't possibly take down the whole, totally redundant infrastructure.
    - If the internet goes down, everything (economy, electricity...) falls with it. Omits to mention that such statements should be proved.
    - A more rigid security system would be more secure. False: people like Kevin Mitnick have gotten inside the world's most secure servers with very little trouble, by using social engineering. Now, unless you can actually program the way people's minds work, there's little you can do about it.
    - Look who's talking. Uhm, a security expert suggesting more security - more than a little conflict of interest there...

    I'm sure there are many more loopholes in this article, I leave to the reader the task of finding them :-)

    By the way, if someone told you "You're gonna die tomorrow! Do as I say and you will be spared!", how would you regard him/her?
  • I mean, even if I am a Linux zealot, it is widely known that monocultures are the most vulnerable to viruses. This scheme applies to software as well.

    With something open like Linux it would be much harder to get into that kind of trouble. And if not that, then Microsoft has to reform itself with Linux as a counterpart.

    Look, there still is a Catholic church, even now that Luther is a few hundred years dead. But still he made a difference. The Catholic pope had to make a change if he wanted his church to survive, and so
  • Wow! It pays to increase your word power!

    That's a word I haven't actually heard used since... um... since... um... Oscar Hammerstein II used it in the lyrics to a song in "South Pacific." ("I'm as trite and as gay as a daisy in May/A cliche comin' true!/I'm bromidic and bright/As a moon-happy night/Pourin' light on the dew!")

    Which makes about as much sense as the article.

    Bromo-Seltzer, anyone?
  • Instead of a big bang scenario I could imagine a change through software liability.

    Just imagine some slightly bigger-than-average small country (France? UK? Germany?) taking the lead and explicitly establishing product liability for software products. No more chickening out with boilerplate "click I AGREE" licenses.

    Software companies would either have to be good enough or be gone from that market. In this scenario, e.g., Microsoft might have a really hard time holding up in court. They might decide

  • by lone_marauder ( 642787 ) on Monday January 19, 2004 @08:35AM (#8020000)
    The problem with the idea of a "digital pearl harbor" is the question of whether anyone would notice it. The metaphor suggests a peaceful world where computers and computer users are free to play in the wild with no fear until black Sunday finally comes and takes away all our innocence. The problem is that we don't have that innocence.

    Try to bring up a Windows2000 workstation, freshly installed with no patches, and connect it to the Internet. In minutes it will be infected by a virus. Any one of the major security stories of the past five years would far exceed Pearl Harbor in terms of actual impact upon the information world. In fact, problems such as SQL slammer are more like the invasion of the Mongols, and the spam problem is global thermonuclear war.

  • This is the same (faulty) logic that says that restricting guns stops crime.

    Any criminal will, of course, simply ignore a law that prevents them from doing what they want to. That is, after all, the definition of a criminal: someone who commits a crime (breaks the law).

    The only thing that restricting access to any tool does, is stop those people you don't care about -- those that obey the law. Everyone really knows this, but this is really about control, not security or safety.
  • by sokk ( 691010 ) on Monday January 19, 2004 @08:43AM (#8020046)
    In the last quarter of 2003, I tried to explain to a co-worker of my father's how insecure the net really was. I told him that if a virus writer had wanted to, he could have pretty much brought the whole society to its knees (corporations and such; hopefully not infrastructure and critical services).

    Look at it this way: the viruses and worms that haunted the net at the time were more or less friendly, proof-of-concept viruses. It could have been much worse. What if the viruses that roamed the net would:

    Destroy your data / the operating system silently (shredding your files so that they can't be recovered).

    Mail your documents to everyone in your contacts list. (E.g. mailing corporate files to competitors.)

    Hopefully, the reason the viruses weren't more dangerous is this: if you have the skill to write such a virus, you can probably imagine the consequences.

    What are your thoughts on the subject?

  • In the majority of the jobs and software projects I've ever worked on, the concepts of security and integrity have never been much of a concern to management. More an afterthought. Now, to be clear, most of the projects I'm talking about here are embedded network components and servers.

    I've always seen it as my responsibility to try to write code that is secure. At the end of the day I'm trying to protect against such attacks. But even with all my diligence, there is going to be some sort of mistake that can be exp
  • Oh good grief. (Score:5, Insightful)

    by Flower ( 31351 ) on Monday January 19, 2004 @08:47AM (#8020061) Homepage
    Who the fuck is going to let utility control systems be directly connected to the Internet? What? Private networks are going to totally go the way of the dino? We're all going to smoke crack and forget how to implement redundancy and high availability? We won't be able to take the systems off the Internet, burn them to the ground and rebuild them incorporating the patch? Explain to me how all backups are going to be unrecoverable and, more importantly, how such an event is going to remain undetected? What? No one will be running a HIDS five years from now?

    What about advances in security technology? Targeted IDS is still in its infancy. What about CERT's research into survivable systems engineering? Patch management software is going to suddenly go the way of the Dodo?

    From my understanding, the general consensus is that SOX auditing will eventually include all systems which run the business, not just the ones involved in financial reporting. That auditing requires a verified disaster recovery procedure and security documentation.

    Am I saying there is absolutely no chance it could happen? No. But a lot of security people much better than me are going to have to be lobotomized before I think a digital "Pearl Harbor" is plausible.

  • misconceptions (Score:3, Interesting)

    by evil_one666 ( 664331 ) on Monday January 19, 2004 @09:02AM (#8020204)
    1)
    Based on conservative projections, we'll discover about 100,000 new software vulnerabilities in 2010 alone, or one new bug every five minutes of every hour of every day. The number of security incidents worldwide will swell to about 400,000 a year, or 8,000 per workweek.
    Finding software vulnerabilities is not a bad thing. What really matters is not how many vulnerabilities you find, but how many you actually have and how quickly you fix them. Ultimately, identifying vulnerabilities makes applications better.

    2)

    Windows will approach 100 million lines of code, and the average PC, while it may cost $99, will contain nearly 200 million lines of code. And within that code, 2 million bugs. By 2010, we'll have added another half-a-billion users to the Internet. A few of them will be bad guys, and they'll be able to pick and choose which of those 2 million bugs they feel like exploiting.
    In 2010 nobody will be using Windows.

    3)

    Five factors distinguish the digital Pearl Harbor from the virus attacks we've suffered to date.

    First, it disrupts backup systems. Fragile networks heretofore have been mitigated largely with backup. Disrupt that and badness follows. Second, it leads to cascading failures. All of those massively inconvenient attacks people previously referred to as Pearl Harbors pile up. Due to the loss of backup, corporate earnings data is irretrievably lost. This panics Wall Street and destabilizes the financial sector. People run to their banks, but the banks cannot disburse funds; their networks are down. As are the credit card networks and the ATMs.

    This just does not and cannot happen in a heterogeneous IT environment such as the one we have today, and the one that we will have to an even greater extent in 5-10 years. A virus that destroys a Win2000 installation is not going to have much effect on a Solaris system, or the other way round. Additionally, important backups are kept in a non-networked environment for this very reason. The only way these can (possibly) be taken out is to launch a gradual attack over a long period of time, but such an attack would not go unnoticed over the entire globe without the alarm being raised. Besides, the author talks specifically of an instantaneous attack.

    4)

    Fourth, after it's over, the attack's origin is pinpointed and the vulnerability it exploited is determined. That's another element that's been missing from most recent security events, especially virus outbreaks, and most notably in the August 2003 blackout. Blame has not been assigned; no heads have rolled. No one has even called for heads to roll. No heads can be found to roll.
    The authorities have proved startlingly ineffective when it comes to locating the point of origin of attacks in recent years. In the cases where a perpetrator has been (correctly) identified, this has generally been at the perp's own wishes (confession, inclusion of an email address, a registered server, an IP address, etc.).

    5)

    The first response is litigation. Lawyers will prosecute vendors, ISPs and others based on downstream liability; that is, they will follow the chain of negligence and hold people accountable all along it. Hackers, whether their intent was malicious or not, will be arrested and prosecuted. If the event's nexus is overseas, foreign governments will cooperate to bring the miscreants to justice.
    Again, recent history has shown a remarkable lack of international cooperation when it comes to identifying and extraditing "hackers" (let's not pick up on the misuse of this word here). Additionally, where are you going to apportion blame for flaws in the open source software that the backbone of the internet mostly runs on today, and will do so almost entirely in the future?

    6)

    So there will be a surge in the development of software that blocks access to applications such as chat rooms, the Web, databases, whatever. And even features within programs, like the ability to forward e-mail messages, will be shut off. Again, the thinking is that since openness got us into this mess, only a lockdown will get us out of it.
    There will be a surge in the corporate purchase of such software, but it will be extremely easy to circumvent.
  • Autonomous Systems (Score:3, Interesting)

    by Detritus ( 11846 ) on Monday January 19, 2004 @09:04AM (#8020223) Homepage
    One idea that's been bouncing around in my head for years is to make an autonomous computer. The idea is to reserve all low-level and security sensitive functions, root access if you will, to the system software. Security policies would be enforced by the system software. There would be no Administrator or root accounts for users. There would be no backdoors for maintenance.

    I remember reading about an old computer system, I believe it was a Burroughs computer, that used software to enforce security policy. Executable programs would only be loaded and run if they had a magic attribute set. Users could not set the attribute. Only a limited number of trusted programs, like the system's compiler, could set the attribute. The compiler contained and enforced security policy. It would not allow the user to compile a program that violated the system's security policy. This allowed the system to have enforceable security checks that were implemented in software instead of special purpose hardware.

    I believe that current popular operating systems are fatally flawed at the architectural level. Fixing the thousands of implementation bugs will not solve the architectural problems.
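    The Burroughs-style scheme recalled above can be modelled in a few lines. Everything here is invented for illustration; a real system would enforce the attribute in the loader and filesystem, not in application code:

```python
# Toy model of the scheme described above: files carry a "runnable"
# attribute that users cannot set themselves; only the trusted
# compiler grants it, after a policy check, and the loader refuses
# to execute anything without it. All names are invented.
class File:
    def __init__(self, content, runnable=False):
        self.content = content
        self.runnable = runnable    # the "magic attribute"

def trusted_compile(source):
    # Stand-in policy check; a real system would verify that the
    # program performs no privileged operations.
    if "raw_device_access" in source:
        raise PermissionError("program violates security policy")
    return File("code: " + source, runnable=True)

def load_and_run(f):
    # The loader enforces the attribute: no attribute, no execution.
    if not f.runnable:
        raise PermissionError("not a trusted executable")
    return "ran " + f.content

program = trusted_compile("print hello")
assert load_and_run(program) == "ran code: print hello"

forged = File("hand-crafted machine code")  # user-made: runnable stays False
# load_and_run(forged) raises PermissionError
```

    The point of the design is that policy enforcement moves from special-purpose hardware into two trusted programs (the compiler and the loader), so a hand-crafted binary can never acquire the attribute.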

  • by SJ ( 13711 ) on Monday January 19, 2004 @09:18AM (#8020354)
    I am not sure why they used that as an analogy, as Pearl Harbor was not a surprise attack. Pearl Harbor was deliberately allowed to happen so as to force the American people into WW2 and to make sure the Japanese didn't know the US had cracked their codes.

    The only way Pearl Harbor would be applicable is if you were using it in the context of Microsoft deliberately allowing crippling attacks on its software so as to push through a new system whereby it (MS) has ultimate control.
  • by karnat10 ( 607738 ) on Monday January 19, 2004 @09:27AM (#8020440)

    Tippett argues that if we simply extend the present situation into the future, the level of complexity and vulnerability we would create will make a digital Pearl Harbor inevitable--and before 2010.

    If we simply extend the present situation... but who is simple-minded enough to believe our world works like this?

    "That [scenario] is appealing because it's one of the simplest things you can do with computers: restrict their abilities," says Peter Tippett, CTO of security vendor TruSecure and noted security expert.

    Dear Peter, if you want to restrict all abilities of a computer which can possibly be used in a dangerous way, you'll have to pull the plug.

    Tom's Rules For Reasoning About Tool Security:
    1. It's not the tool that's dangerous, it's the person using it.
    2. Every tool can be used to harm another person.
    3. Making a tool illegal won't prevent a determined person from using it.
    Tom's First Conclusions From His Rules For Reasoning About Tool Security:
    1. Educate people about the responsibility they have for themselves and society.
    2. Educate people to distinguish between statements which contribute to solving a problem, and those which just propagate FUD.
    3. Educate people not to let authorities do the thinking for them.
    4. Educate people to recognize when a tool / person / development is bad for them or others, and to recognize it as a result of their own thinking and values, and not because authorities or the law told them.
  • by miu ( 626917 ) on Monday January 19, 2004 @09:45AM (#8020589) Homepage Journal
    This article looks like another bit of soft sell for intrusive surveillance by Berinato. If you have read his articles in the past you may recognize this regretful but "realistic" pose regarding government regulation.
    However, as Dan Geer, former CTO of @Stake, notes, authentication can't possibly keep up with the number of people who need it and the number of transactions we try to control with it. Authentication doesn't scale. But surveillance does.

    ...

    Geer is convinced we're heading toward a broadly surveilled police state. "I'm sad about this," he says, "but I'm trying to be realistic."

    So how would surveillance stop a bad guy from doing his bad deeds, especially surveillance that uses the user's own machine to spy on him? There is nothing "realistic" or useful about this scenario, and I think Berinato is being a bit disingenuous here by putting in his expert's mouth the suggestion that it would be useful.

    The twin notions, that 24/7 surveillance of every computer in the US is possible and that a national AAA system is not, are presented with no reason given; we are simply to accept these 'facts' because they appear in the article.

  • hmmm... (Score:4, Funny)

    by Tumbleweed ( 3706 ) on Monday January 19, 2004 @11:15AM (#8021427)
    Sounds more like a _Perl_ Harbour to me.
