The Lesson of Recent Hacktivism

itwbennett writes "LulzSec says they're retired, which may or may not be true. But one thing the world has learned from their 'frightening yet funny' escapades is that 'the state of online security stinks,' writes blogger Tom Henderson. LulzSec (and Anonymous) have 'demonstrated that an awful lot of people are either asleep at the switch or believed in arcane security methods like security through obscurity.'" A related story at the Guardian suggests that governmental attempts to control the internet are spurring these activities.
  • by Anonymous Coward on Wednesday June 29, 2011 @12:31AM (#36607558)

    They believed that money spent on security products == we are secure. They were not asleep. They did not believe in security through obscurity. They trusted the industry. They gave it money in return for products that were supposed to protect them. They lived in ignorant bliss. Unfortunately, the security industry (and the rhetoric it proclaims) is all about the industry making money. Companies are lured into a false sense of security based on what they are told and what they spend money on - and it seems totally reasonable from their perspective. Unfortunately, the public (and the victim companies) are not aware of one tenth of one percent of what is actually going on. Any company that has anything of significant financial value is either compromised or is a target with a big bullseye on its gold stash - guaranteed.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      The problem is the opposite. Actual security is ridiculously expensive, and there is no willingness to put up with that level of expense - especially since any security you have, no matter how well done, can still be breached by someone who is sufficiently determined. So when few are willing to pay for actual security, and put up with the inconvenience it requires, you get products that try to patch things up a little bit for a much reduced cost. The much reduced cost may still be significant.

      • by c0lo ( 1497653 ) on Wednesday June 29, 2011 @04:12AM (#36608374)

        Actual security is ridiculously expensive and there is not a willingness to put up with that level of expense

        The cost of risk prevention versus the cost of risk mitigation: if mitigation is cheaper (no matter if people get burnt), there you have it.
        Far easier for them to externalize the cost and lobby for the DMCA and anti-hacking laws - it's the populace that pays for the jail time.

        • by blueg3 ( 192743 )

          The DMCA is irrelevant here, and bringing up "anti-hacking laws" doesn't make any sense. Do you think anything that LulzSec was doing should actually be legal?

          Further, there are already anti-hacking laws. They don't really seem to prevent hacking. Apparently your idea of lobbying for anti-hacking laws to save money on security isn't really effective. I'd be surprised if any organization thought that was a viable alternative to actually having network security.

      • by Dunbal ( 464142 ) *

        Actual security is ridiculously expensive

        If you don't take those costs into account when drawing up your business plan and prefer the "let's cross our fingers and hope it doesn't happen" security method, perhaps you deserve to fail. Also, actual security is not _that_ expensive, you just need to hire the right people, or design the software properly from the start if you are going to code it yourself.

        Somehow I'm reminded of a paragraph somewhere in the instruction manual to the game "Pirates!" - the original version from the late 80's, not the remake...

      • by DRBivens ( 148931 ) on Wednesday June 29, 2011 @08:33AM (#36609600) Journal
        In my experience, the COST of security matters much less to people than does the INCONVENIENCE it entails. Many organizations are quite willing to spend money on security hardware, software, and services. Secure implementations can be defeated by authorized users who either perceive the security as inconvenient or unnecessarily harsh ("I'm not going to lock my screen before I get coffee; I'll only be gone for a couple of minutes.")

        One solution might consist of better user training coupled with better security design (protect truly secret data but don't worry about disclosure of information freely obtainable by outsiders via mechanisms like FOIA, stockholder inquiry, etc.)

        It's a challenge, regardless of what you have to protect--or how you choose to protect it.

    • by rtfa-troll ( 1340807 ) on Wednesday June 29, 2011 @01:56AM (#36607892)

      They believed that money spent on security products == we are secure. They were not asleep.

      Except that, according to the reports, Sony had development servers which were fully protected with firewalls etc. and which were not hacked/hackable (by LulzSec), and other servers for customer data where they hadn't made any investment. So they hadn't spent that money. You may be right that they weren't asleep: someone made a conscious choice that customer data was not important. But it's not that they invested and were let down - they never made the investment they should have.

      • by jhoegl ( 638955 ) on Wednesday June 29, 2011 @02:02AM (#36607918)
        Or it could be that the person in charge of Development was smart enough to invest in it because they knew better and the person in charge of Customer Data was not.
        We could come up with many scenarios, the only ones that know what happened internally are not going to speak out about it willingly.
        One thing is for sure, what I have seen in the small business world is a mirror to big business. It IS ignorance at some level in the corporate model.
        Ironically, this same model helps bring down corporations and small businesses alike. All it takes is one bad stone at the right point in the pyramid to make it all come crumbling down.
        • It could be. The development department manager will probably be more willing to order a firewall than the finance department manager.

          • by hitmark ( 640295 )

            Could be the age-old "skimp on safety because one lawsuit in x years is less expensive than the added costs for the same period"...

      • Actually, this is quite common. It's not that I agree with it, but here's how it happens. Development is less mission-critical, so it gets security updates first. Firewall rules can be written willy-nilly in development environments. Custom applications get security bugs worked out of them in development. The next place all of these changes go is a testing environment. The testing environment is far less wild-west as far as what can be done in an enterprise environment. It's still easier to get the necessary...
    • by phantomfive ( 622387 ) on Wednesday June 29, 2011 @02:01AM (#36607916) Journal
      It's been my experience that most companies aren't even spending money on security. If they are even thinking about security, they are ahead of most. Many companies are leaving wide-open, simple holes, like failing to escape their SQL or filter out JavaScript. That is the lowest-hanging fruit. Really, it wouldn't surprise me if you could use Metasploit and nothing else to break into 20% of the major websites in the world.

      If you're a web developer, let it be a lesson to you: download some basic hacking tools and try them out on your own website. You'll definitely learn something.
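A minimal sketch of the parent's point about escaping SQL (mine, not the poster's; using Python's bundled sqlite3 as a stand-in for a real production database): the string-built query falls to the classic injection, while the parameterized one does not.

```python
import sqlite3

# Toy user table, in-memory, for demonstration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_vulnerable(name, password):
    # String interpolation: attacker-controlled input becomes SQL.
    query = "SELECT * FROM users WHERE name = '%s' AND password = '%s'" % (name, password)
    return db.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: the driver quotes the values for you.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

# The classic injection string bypasses the vulnerable check...
assert login_vulnerable("alice", "' OR '1'='1") is True
# ...but not the parameterized one.
assert login_safe("alice", "' OR '1'='1") is False
```

Running exactly this kind of probe against your own site is the "download some basic hacking tools" lesson in miniature.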
      • Many companies are leaving wide-open, simple holes, like failing to escape their SQL or filter out JavaScript

        This is less the result of a conscious decision on their part to trade off security, and more often the consequence of hiring out any IT or development work to the cheapest possible bidder. Companies, like people, must sometimes learn the hard way that one gets what one pays for. For what it's worth, I've noticed that offshore outsourcing shops are especially negligent when it comes to SQL injection, script entered into forms, careless query-string handling and many other common attacks. It's too bad that LulzSec...

        • by Dunbal ( 464142 ) * on Wednesday June 29, 2011 @07:39AM (#36609246)

          I hate it when this excuse is used. And it's used often in business in many areas, not just security. It's the junior manager's way out - the way to duck and hide behind someone else. But while it's true a contractor, agency, or someone else will never do as good a job as you would if you did it yourself - at the end of the day it's the responsibility of the guy who approved and signed the cheque. If you don't even take the time to review the work you contracted, if you don't even bother to keep ONE person around who has any notion of how the work should be done and get him/her to go over it and approve it before it's accepted, then my friend, you deserve the good anal fucking that you are about to get.

          • At the very least you need a good legal department to put a damages clause into contracts. Then if you offshore and they've got a lousy security set up that causes you to lose money or business you can recover damages. The company shouldn't be left holding the ball if the contractor screws up.

            • by Dunbal ( 464142 ) *

              Then if you offshore and they've got a lousy security set up that causes you to lose money or business you can recover damages.

              Right. Done a lot of offshore dealings, have we? I recommend you investigate just how easy it is to enforce contracts, much less damage clauses in contracts, in other countries. Offshore = foreign courts, which tend to perceive the "poor oppressed small local firm" as a victim of the large global company. These courts have a hard enough time putting murderers in prison. Even with a favorable verdict, you are never going to see your money again. Better to spend the time making sure you have someone to go...

        • by jellomizer ( 103300 ) on Wednesday June 29, 2011 @08:29AM (#36609568)

          So why would you put less trust in a new hire than in a contractor? It isn't the contractor's fault; choosing a contractor can sometimes get you really good quality work for less cost than hiring (no matter what the union propaganda tells you). The problem falls back on management. If you hire a contractor, especially one you have never worked with before, you cannot fully trust their code. You will need to audit it and check it. Just because they do it for a living doesn't mean they are any good at it. If the company doesn't care about security, neither will the contractor; if the company cares about security, so will the contractor.

          A lot of these outsourced companies are tailored toward low cost, because that is what their clients wanted; if clients wanted higher quality, it would cost them more. There is a Venn diagram for this: Cheap, Fast, and Good - you can only pick two.

    • by CodeBuster ( 516420 ) on Wednesday June 29, 2011 @02:17AM (#36607960)

      They were not asleep. They did not believe in security through obscurity. They trusted the industry.

      It has often been said, by Bruce Schneier and others, that security is not a product that can be purchased, installed after the fact and forgotten, but rather an attitude and culture that must be cultivated and maintained. Knowledge and tools are important, but without the right attitudes and culture they will be of limited use. Remember that nobody cares more about your security than you do. If you don't care then nobody else will either, despite what they may tell you.

    • by hairyfeet ( 841228 ) on Wednesday June 29, 2011 @04:38AM (#36608488) Journal

      This reminds me of an old story I was told by a teacher: a friend of his goes in to do some hired-gun work for a company and gets told by the PHB that he is NOT allowed under ANY circumstances to touch the NT 3.whatever server box. It has run great for years, and the PHB doesn't care if it is out of date; it works, so just clean the fans and move on. Now, since he had worked with NT 3.whatever before, he didn't see how this machine had been doing its job all these years without a single failure. So he logs into it, and what does he find? It is actually some version of Fedora. Apparently the guy before him got tired of the BS and just swapped it out without telling the PHB.

      And it is THIS, this right here, that is often the problem. It isn't that the IT guys don't want to do a good job, it is that some PHB is cockblocking them at every turn. I myself ran into this doing some hired-gun work for a law firm. I told them I didn't have time to support the place, but I recommended a couple of different guys who could do the job well. They had experience, their prices were reasonable, so what happened?

      Somebody decided they cost too much, and "he knew a guy" who was "a whiz at computers" and could do it for half the price. I get called back a year later when they catch this clown running a gaming server and downloading porn on company time. First of all, he had taken ALL the nice neat Dell office boxes, which were standard MOR office machines, and chunked them because they were "too slow", and instead custom-built a bunch of gamer rigs from kits, so of course nothing matched. Then, since he didn't know shit about corporate networking, he bought a bunch of D-Link home routers - you know, the shitty blue ones? Oh, and that is not all: he had more than half a dozen ISPs, as his idea of "adding capacity" was just to add another ISP.

      So needless to say, fixing that clusterfuck wasn't cheap - neither my time nor all the hardware I had to buy to replace his gamer shit. And did the guy that caused this mess get punished? Nope, he had already been promoted a couple of times for all the money he saved them on "IT costs", was no longer in charge of anything IT, and therefore didn't get the blame... ARGH!

      So if you want to know why networks are a mess, it often ain't the IT guy (except for the gamer clown): it is the stupid-ass, dumb-shit, WTF-are-they-thinking, Dilbert bullshit that goes on every single damned day in this country. The PHBs get rewarded for saving money, even if that money was saved by sacking anyone who knew what the fuck was going on. They cause one clusterfuck after another, but ultimately they don't care, because they either fail up or use their "success story" to move to another company.

      This is why I had to get out of corporate and open my little shop, as the stress of absolutely insane stupidity was giving me chest pains. It was like a friend of mine who ended up being threatened with losing his job and got dragged before the regional head. The PHB above him wanted him fired because, and I quote, "You have NO RIGHT to tell me who I can and can't talk to! I demand you give me my emails from Melissa right now!". He got lucky that the regional head actually watched the news, so he went "He isn't talking about the virus, is he?" - and when he found out that yes, senior bigfool wanted Glenn to let Melissa loose on the network, the PHB got a dressing down and Glenn got an apology and a free steak dinner.

      But it is that kind of rampant herp derp that is the cause of this bullshit, and frankly I don't see how some script kiddies are gonna undo decades of upward failure and PHBs. Oh, and what you are talking about is what me and my friends called "black box thinking", which sadly I saw every time the salesmen came around. You wine and dine the PHB and give him the "This (insert device) will make you (insert hacker, virus, fool)-proof!" pitch, and sadly they'd bite 9 times out of 10. Needless to say the shit never worked like it was sold, but since the PHB never got dinged for it, who cares, right?

      • by Dunbal ( 464142 ) *
        That's ok, there's an entirely new generation coming that is going to fix everything. /BIG sarcasm
      • This is a large part of why I keep to computers as a hobby. I do a lot of the things that the professionals do, on a smaller scale and on hardware that I own, but I don't have to deal with the headaches of folks that are trying to save some bucks and are certain to blame me when their cut rate equipment goes tits up.

        Right now I couldn't hack it when it comes to hardware on that scale, but that comes largely from the decision not to waste my time or energy studying the things which are really enterprise only

  • Regarding Lulzsec (Score:5, Interesting)

    by Anonymous Coward on Wednesday June 29, 2011 @12:36AM (#36607574)

    LulzSec might have ended, but I can guarantee you the exact same stuff is happening underground, except this time you probably won't know all your information has been stolen. Other than exposing corrupt whitehats I don't really agree with their actions, but I'm not sure if the alternative of keeping it in the hands of underground blackhats and IRC script kiddies is any better (not that it wasn't going on during LulzSec as well, but still).

    Regardless, the AntiSec movement seems to be picking up some steam, at least within Brazil (protests are planned for July 2nd), and the first AntiSec release has just been posted to The Pirate Bay, with more promised tomorrow.

    Regardless of their "supposed" script kiddie status (they did break into a hacking contest website and turned down the 10k), I think it was smart for them to disband and take up a greater cause, and I guess time will tell whether they are successful or just run out of steam.

    • Regardless, the AntiSec movement seems to be picking up some steam

      WTF? A group actually opposed to computer security, and they are picking up steam? What ... is the rationale behind this?

      • AntiSec is against the security industry as it is now, not against security itself. It's sort of like the people under communism who were antigovernment: they weren't against government in general, they were against that particular government.

  • Arcane is not really the right word there. There's nothing "arcane" about security through obscurity. Perhaps they meant "archaic"?

  • Lulzsec's resounding accomplishment is that it will wake organizations up about the state of their security, and maybe even get us a few anti-negligence laws for the companies who think of security as an afterthought.
    • by schnell ( 163007 ) <me.schnell@net> on Wednesday June 29, 2011 @01:22AM (#36607772) Homepage

      Here's the thing: information security, just like any other type of security or insurance, is completely relative.

      My dinky little websites have adequate capacity to serve the few hundred people a day who visit them, but would not withstand a Slashdotting or a DDoS. My house is secure enough to resist a burglar, but not secure enough to resist a Navy SEAL strike team. Does this mean I'm negligent? No, it means that I could spend thousands of dollars on additional infrastructure for security or capacity, but I choose not to because it's highly unlikely I would need it.

      That's why the example of LulzSec is pathetic and not instructive. There are lots of "soft targets" on the Internet (in terms of security or capacity) that you could take down pretty easily if you wanted to, just because those sites can't justify full-time security teams or massively extensible infrastructures. I'm not talking about high-profile sites like Sony or the CIA, but stuff like EVE login servers or some county in Arizona. A bunch of douchebag script kiddies taking down some MMO server doesn't necessarily mean that anyone was truly "negligent," it just means that they picked easy targets. And there is not, nor will ever be, a shortage of easy targets on the Internet if you're willing to aim at those.

        • Oftentimes these sites are in fact "negligent" in how they operate. Many were using outdated software with known vulnerabilities, or were very poorly configured, etc. Your little site in your example almost certainly will not get hacked if you follow some very basic security guidelines. For example, a quick Google search turns up a page on Apache security. It took 5 seconds of searching, and would probably only take an hour or two to implement and test, and yet how many sites out there aren't following...
          • For example, a quick Google search turns up this page on Apache security.

            There isn't really much there that will significantly improve security, except the suggestions to keep Apache up-to-date and maybe to install mod_security. For instance, hiding the Apache version number might actually decrease security, since you yourself might then miss that you are out of date. It's not going to prevent any attack from happening.

      • by wvmarle ( 1070040 ) on Wednesday June 29, 2011 @02:50AM (#36608076)

        I don't agree with your analogy, as physical and digital security are too different. Not many houses can stand a SEAL attack, yet it is perfectly possible to connect a computer to the Internet with zero vulnerabilities (think OpenBSD).

        Secondly, after a few decades of research that is still ongoing, there are plenty of known practices that make it easy to quite thoroughly secure a server. These issues include (list from memory, mainly related to recent attacks where this was the exact vulnerability):

        • ssh set up to log in without a password (key-based authentication),
        • SQL injection prevention (just escaping the input prevents most if not all of them - many libraries do this out of the box for you),
        • set a session cookie after log-in, and use it,
        • not storing passwords as plaintext but as (salted) hash - a preventative measure for in case you do get hacked,
        • separate databases, and giving the web-facing script a separate user in the database with minimum permissions - so in case the server does get hacked the attacker still can not see much,
        • a port-forwarding firewall letting through only traffic to the ports you need.

        That's what I can think of, off the top of my head. All of them are easy to implement - and when implemented they will prevent most attacks from happening. Sure, you won't be immune to zero-day attacks on your web server software or other services, but it limits the attack vectors a lot already.

        Not following such "best practice" standards I would call negligence.

        Now I readily admit that my own server is also not configured perfectly; there is a bit of "security through obscurity" too, of course. Yet I have a software firewall blocking all but whitelisted ports, and my SQL queries are sent to the database through a library that does the escaping for me, preventing SQL injection attacks automatically. No one else has ssh access, so no way you can social-engineer the password from me. Oh yeah, and I don't need to store any personal details of visitors there; that also helps.

        Most of these attacks appear to be SQL-injection related. And that is easy to prevent: the MySQLdb module for Python does it for you already. That only leaves tests like type checking ("I expect an integer value - let's see if this string can be converted to an integer") and value checking ("this string should be no more than 20 characters", "this should be a positive integer, not larger than 100").
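To make the type-checking and value-checking idea concrete, here is a small illustrative sketch (the bounds - 1 to 100, 20 characters - just mirror the examples in the comment; the helper names are my own):

```python
def check_int(value, minimum=1, maximum=100):
    """Type check: the string must convert to int; value check: within bounds."""
    try:
        n = int(value)
    except (TypeError, ValueError):
        return None
    if minimum <= n <= maximum:
        return n
    return None

def check_str(value, max_len=20):
    """Value check: reject over-long strings before they reach the database."""
    if isinstance(value, str) and len(value) <= max_len:
        return value
    return None

assert check_int("42") == 42
assert check_int("0") is None          # below minimum
assert check_int("not a number") is None
assert check_str("short") == "short"
assert check_str("x" * 21) is None     # over 20 characters
```

Returning None on failure (rather than raising) lets the calling code reject the request in one place.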

        And indeed there will always be lots of soft targets - yet companies that hold users' personal details must not be soft targets. High-profile web sites should also know that they will be a target of hackers (the higher the profile, the bigger the lulz for a successful attack, after all), and as such they too have no excuse to be a soft target. Yet several of those have proven to be pretty soft targets.
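For the "set a session cookie after log-in, and use it" bullet above, a toy sketch (names are illustrative; a real deployment would add expiry and server-side persistence):

```python
import secrets

# In-memory session store: token -> username. A real app would persist
# this server-side with an expiry time.
sessions = {}

def create_session(username):
    # 32 random bytes -> an unguessable token; never derive it from user data.
    token = secrets.token_urlsafe(32)
    sessions[token] = username
    return token

def current_user(token):
    # Unknown or forged tokens simply map to no user.
    return sessions.get(token)

tok = create_session("alice")
assert current_user(tok) == "alice"
assert current_user("forged-token") is None
```

The point of the bullet is that every request after log-in is authorized by the token, not by re-sending credentials.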

        • by drsmithy ( 35869 )

          Not many houses can stand a SEAL attack, yet it is perfectly possible to connect a computer to the Internet with zero vulnerabilities (think OpenBSD).

          Not many houses are built as a small, mostly buried concrete cube with no doors or windows, which is basically what the building equivalent of OpenBSD is.

          As soon as you make that OpenBSD system usable by adding functionality, the attack vectors start to open up dramatically.

        • I don't agree with your analogy, as physical and digital security are too different. Not many houses can stand a SEAL attack, yet it is perfectly possible to connect a computer to the Internet with zero vulnerabilities...

          No such thing, good sir. OpenBSD may stop Blaster or some Windows virus attaching itself to your system, but it does zero against attacks on the software that actually makes it usable. Rarely are online attacks directed at the operating system hosting the front end. SQL injection attacks make a database accessible regardless of the system, vulnerabilities in your HTTP server can give access to the root of your system, and a myriad of poorly coded PHP or other server-side code could give access to a system.

          If you...

          • The difference with using an out-of-the-box secure system is that at least you know that only what you explicitly open is open. Nothing else. And the next step is of course to make sure that you do not open anything more than you intend to.

            • Ahhh, so I suppose Win2k7 server is a perfectly secure system then. I mean, out of the box it blocks all internet traffic except to the Windows Update site to get the current security fixes, and then prompts you to set up your firewall and network.

              Good to see we're on the same page now.

        • my SQL queries are sent to the database through a library that does the escaping

          Just a question in passing: why do you need to send SQL text to the database in the first place? Why not use stored procedures? It seems simpler to me, and also cleaner from an architecture perspective (i.e., separating the database model from application logic). It also prevents a whole class of attacks against the database, making them impossible even if you, for instance, forget to escape your strings somewhere.

          • Well, six or seven years ago when I built it, stored procedures didn't exist in MySQL. I believe they're possible now, but I'm not sure whether Debian stable includes that version. That's already a major reason.

            Secondly, most queries are done through a library call, not by sending the actual SQL command - like db.query(db, fields, where, options, ...). There is nothing fancier in it than reading information; no calculations or whatever - simply not needed. Really the most basic use of a db.

        • by timftbf ( 48204 )

          not storing passwords as plaintext but as (salted) hash - a preventative measure for in case you do get hacked

          This. How anyone is still writing code that stores passwords in plaintext baffles me beyond all belief. I despair every time I click a "forgot my password" link and get an email containing a copy of my plain-text password...
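A minimal sketch of the salted-hash approach using only Python's standard library (the iteration count and salt size are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    # A fresh random salt per user defeats precomputed rainbow tables.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, expected, rounds=100_000):
    _, digest = hash_password(password, salt, rounds)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("hunter2")
assert verify_password("hunter2", salt, stored)
assert not verify_password("wrong", salt, stored)

# Same password, different salt -> different digest, so leaked hashes
# can't be matched across users.
salt2, stored2 = hash_password("hunter2")
assert stored != stored2
```

With this scheme a "forgot my password" flow can only issue a reset link - the site never has the plaintext to email back, which is exactly the point.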

      • by Tom ( 822 )

        I claim that a good part of that is a myth.

        Securing your house the same as a bank vault is unreasonable, because the physical changes required are massive, and costly, and require infrastructure.

        Removing telnet and moving to SSH is not even in the same category.

        Many of the "soft targets" are not soft because someone decided that a lock and a deadbolt are enough for their threat scenario, and the windows don't need to be reinforced - they are soft because nobody thought about threat scenarios at all.

        Also, be

      • It's actually a very good point.

        LulzSec made no mention of how long, prior to launching the attacks, they had spent actually *SELECTING* their targets - it could have been something they'd planned for months in advance, making lists of potential targets and choosing those which would not only get maximum publicity for themselves, but were also the easiest on the list to attack.

        Not that I condone anything hackers do anyway, but LulzSec actually made some big mistakes in the targets they chose -

          • I'm guessing that they spent more time deciding who was the best source of lulz than who was the easiest target. There are just way too many sites that are basically completely unsecured.

      • DDoSing is very hard to counter, and small sites can be DDoSed by legitimate requests as well (see: Slashdotted). Also, you don't leak sensitive data while being down. However, SQL injection is just fucking pathetic. There's no excuse for that. That's developer negligence. I'm not excusing LulzSec for it - they committed a crime, etc. - but it's like leaving your front door open, being robbed, and then lamenting "what the world has come to".
        Also, shared PHP hosting sites are vulnerable to other malicious users...

    • Re: (Score:3, Funny)

      by Opportunist ( 166417 )

      Anti-negligence laws? I'd rather guess we'll be seeing some anti-hacker laws.

      Why legislate corporations when you can legislate people?

      • by mykos ( 1627575 )
        This is, sadly, the way it will likely go. Rather than a reasonable response, like "we already have laws against hackers, so how about you get better security", we'll probably get something closer to "PATRIOT ACT 2: WARRANTLESS WIRETAPPING BOOGALOO"
  • A related story at the Guardian suggests that governmental attempts to control the internet are spurring these activities.

    These hackers are to the internet as street thugs are to a dark alley! Catch 'em and Guantanamize 'em!

    These are acts of activism based on a desire for a better and free society, you say?

    Oh please! Next you'll be telling me that many of these hacking act thingies require education, intellect, and creativity beyond that of an average person...

  • by Anonymous Coward

    Given that these "rogues" or "hackers" are well skilled with network technology, what do these governments think they can do if they are capable of setting up their own internet? They know how to make the hardware; they know how to make the software; they know how to send communications over different parts of the spectrum. The governments would have to have complete control and the ability to scramble communications over all possible channels. And even then, a new communication channel can be found and used to transfer electronic...

    • 1. Build receiver.
      2. Track signal.
      3. Send in police.
      4. Pound-me-in-the-ass prison for unlicensed use of a radio transmitter.
      5. Publicize, to scare off any others who might try it.
      • Except that you'll never find a radio receiver without doing door to door searches, and the government for the last decade or so has been more concerned with keeping secrets.

          • I was telling it from the government's perspective. Obviously, the first step is to build a receiver - they'd need the receiver in order to track where the transmitters are.
  • by SuperKendall ( 25149 ) on Wednesday June 29, 2011 @01:02AM (#36607688)

    I don't think people are asleep at the switch, at all.

    I also don't think they are relying on security through obscurity.

    In large companies I have worked for, there are a lot of very competent people that care a lot about security. But the thing is, security is a minor consideration to spend time and money on compared to making working systems.

    Obviously it would be better if that would change, but I don't think honestly it can until someone has had the lesson REALLY driven home to them by a major security issue.

    I would bet that within five years Sony security is actually pretty good. It is a good wake-up call to the industry, but remember that generally the alarm clock is only really heard by the owner of the house it rings in...

  • by Opportunist ( 166417 ) on Wednesday June 29, 2011 @01:12AM (#36607736)

    Nobody wants security. Everyone wants compliance.

    From an auditor's point of view, it's very easy to explain the reason why the security in most companies is at a level that's not even laughable. No company is interested in it. What they want is certificates, they want their ISO27k and their PCI-DSS, but not because they want them to know for themselves that they're secure, they want them to display to others that they are, so they can get contracts or are compliant with legal requirements to be allowed to do something.

    Now, some might think security and compliance with security requirements are the same. Both mean that you "want" security. And that's the fallacy. Security is something you want yourself. You want security because you want to be secure. Security is in this case the primary interest and the focus by itself. Compliance is something that is forced onto you. You want security because someone else wants you to be secure. Security is in this case only the means to the goal, be it to conform with legal requirements to continue operations or be it to be allowed to process credit card payments.

    Within the last decade or so, the number of companies where I actually had the idea that they wanted security for themselves, even if only as a side effect to the compliance requirements, was very, very low. Most want to get done with it, preferably fast and without hassle. If the compliance requirement is that your door is locked and barred but doesn't say anything about your windows, they won't even listen to you if you tell them they have no windows but just big holes in the wall. Their door is sealed, that suffices to be compliant. The windows? Not part of the compliance requirement, we don't care.

    • I'm with you on the compliance vs. security angle. Recently I've started working with people who want me to certify I comply with HIPAA guidelines for touching private health care data. The emphasis on paperwork over real security practices there is mind-boggling. I've been put into an uncomfortable position because I can't morally agree to these policies unless I really mean it--which means I'm facing a huge security expense overhead added to my business--while my competitors do a shady job but mark all

      • by cusco ( 717999 )
        When HIPAA guidelines began to be enforced my company (access control, alarms, video, etc.) was going to hit up our healthcare customers to secure their paper records. Guess what? For all HIPAA cares they could store the paper patient charts in cardboard boxes in the middle of the parking garage. Physical records aren't covered at all, so most of them still lock up their paper records with brass keys, and every person who has ever worked there has a copy.
    • by Tom ( 822 ) on Wednesday June 29, 2011 @03:26AM (#36608216) Homepage Journal

      Disclaimer: I've worked in compliance until recently, but my background is security.

      The problem you outline is real, but you are missing a point: Compliance got traction because companies don't invest in security. The risk/reward just doesn't work out. A million credit cards lost? The PR to fix that is a lot cheaper than the security investment to prevent it. And the real damage isn't for you, it's for the credit card holders and their companies.

      That's why compliance became so big, because too many people realized that unless you force them, companies won't do security. The same way that airbags in cars didn't become standard issue until some laws were passed. Human beings are horrible at risk management for everything that falls outside our daily experience.

      The quality of your compliance managers determines if you're just following the book, or actually bringing an advantage to the company. I pride myself on IT management being happy they had me (I wasn't part of IT, to them I was an outsider from the finance department, the compliance hand of the CFO). You can do compliance in a way that IT doesn't hate and that gives you actual benefits.

      Unfortunately, too few compliance managers are IT people, much less IT security experts. Which leads to them doing things "by the book". Or, as it's called in other contexts: Work-to-rule. As we all know, that's not work, that's sabotage.

      • I think maybe if some kind of financial liability was introduced, companies would take notice. Say, $50 for lost personal details (name, address), $100 per lost cc number, $5000 per lost SSN.

        Smaller companies would have to use payment processor companies with better security. Larger companies and payment processors would have an incentive to not just follow best practices and minimum compliance, but actively conduct audits to reduce risk. Insurance companies would also insist on good security, in theory.


        • by Tom ( 822 )

          I think maybe if some kind of financial liability was introduced, companies would take notice.

          No, they wouldn't.

          That's what I was saying about humans being horrible at risk assessment.

          There's a reason we didn't make car manufacturers responsible for accident injuries, but instead forced them to build in airbags, no matter what their in-house crash statistics might have said.

          We need regulation like that for IT as well. Make it a (very costly) offense to store passwords unencrypted, no matter if there's a breach or not. Stuff like that. Go through the best practice lists - there are a number of well-d
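          To make the "store passwords unencrypted" rule concrete: below is a minimal sketch of the alternative, salted key-stretching hashes, using only Python's standard library. The iteration count and salt size here are illustrative assumptions, not a recommendation; a production system would follow current key-derivation guidance.

          ```python
          import hashlib
          import hmac
          import os

          ITERATIONS = 200_000  # assumed work factor for illustration

          def hash_password(password: str) -> tuple[bytes, bytes]:
              """Derive a salted PBKDF2 hash; store (salt, digest), never the password."""
              salt = os.urandom(16)
              digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
              return salt, digest

          def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
              """Recompute the hash and compare in constant time."""
              candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
              return hmac.compare_digest(candidate, digest)
          ```

          The point of the regulation argument is that nothing above is exotic or expensive - it is a few lines of standard-library code - yet breached databases keep turning up with plaintext password columns.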

      • Airbags, to pick up on your car analogy (he did it, not me! :), have gained traction, though. Try, just try, to sell a car without airbags today. Security in cars has become a selling point. After decades of it being a minor, if any, point in car design, mind you, but it finally is a topic for car designers and a selling point for salesmen. "This is a street cam filming someone driving our new model. As you can see, the driver fell asleep during the drive. Here you see him impact at 150mph. And here you

        • by Tom ( 822 )

          I guess it still needs a decade or two 'til people want the same in their computers, and done to their data in other people's computers.

          My point exactly.

          I think we need to force security on people. Once they have to do it, they will come around to appreciating the benefits of doing it right.

          Because once the option "ignore it" is taken away, doing it right is often the next-best one, and a more effective use of budget than doing a half-assed job that may blow up in your face.

    • It's very true.

      I actually work in security for a telecoms hardware vendor and many of my customers believe that if they state that they want PCI compliance, for example, then that is all they need to do and can hand off all the dirty work of achieving that compliance to the vendors.

      As a vendor, we provide servers in a "one size fits all" pre-hardened state because any additional hardening we can do usually depends on the customer's specific topology and environment - so the process we adopt is to let the cu

      • If someone complains about his OEM for failing a security audit, it's time to pack your stuff and go.

        I don't expect a lot from my customers. I hold their hand from the moment they start wanting compliance, all through the process development and design phase up to the moment they get audited (of course, then by someone else, since I'm technically not their auditor but their counsel in such a scenario). But if he starts "Uh... that's our OEM/ISP/whoeverelse to blame", pack your stuff and go. You could, techn

  • Don't you mean 'are known to'?
  • while i like the idea of security and keeping my stuff secure, i love the fact that this hacktivism shows one very good point. Corporations and the governments they've bought have all been chipping away at society in an attempt to go back to the good old days of serfdom, but when a few people in the masses who happen to know some shit get together, pissed off people get their message across.
  • If we make internets illegal, only criminals will have internets.
  • LulzSec (and Anonymous) have 'demonstrated that an awful lot of people are either asleep at the switch or believed in arcane security methods like security through obscurity.

    Wait what? Lulzsec showed that security through obscurity is bad? I thought the whole point to their "AntiSec" cause was to stop security companies publicly announcing vulnerabilities []. Isn't that the definition of security through obscurity?

    • Eh, this really ain't that hard. It is similar to how the Nazis showed us how hate is bad.

      This article ain't about the agenda of Lulzsec but on what the results of their actions have revealed about IT security.

      Yes, antisec is idiotic, it is however not relevant.

      The large number of successful hacks recently have shown IT security is in a bad state. The motivations for those hacks are not relevant nor even that a single group did it.

    • According to your Wikipedia link:

      Graffiti reading "Antisec" began appearing in San Diego, California in June 2011 and was incorrectly associated with the original Antisec movement. According to CBS8, a local TV affiliate, "People living in Mission Beach say the unusual graffiti first appeared last week on the boardwalk." They also reported "[it] was quickly painted over, but the stenciled words were back Monday morning." It was later realized to be related to the new Anti-Sec movement started by LulzSec and Anonymous, and some local news have since corrected this error.

      Same name, different movement, apparently.

  • by brit74 ( 831798 ) on Wednesday June 29, 2011 @03:23AM (#36608202)
    > "A related story at the Guardian suggests that governmental attempts to control the internet are spurring these activities."
    I have to admit, I read that sentence in the summary and I scoffed. Then I read the article, and I still scoffed.

    How about my interpretation of Loz Kaye's article: people who are deeply involved in some cause always find the reason "bad thing happened" to be because of "bad thing that they don't like and have been working against". It reminded me a lot of Pat Robertson's claim that 9/11 happened because of the gays and feminists and abortionists. Uh huh. Sure it did.
  • "Hacktivism." Ugh.

  • by Reed Solomon ( 897367 ) on Wednesday June 29, 2011 @03:45AM (#36608294) Homepage

    The governments of the western world seem to have it in mind that criminalizing everything will protect them from some sort of boogeyman/men. Hackers, and in general people who steal what's "theirs". People who just want to share their free thought. What the people in power want is for you to second guess everything you say or do, and to live in fear of the consequences. They want to create a cyber police and regulate every aspect of our lives. For what? For profit. To maintain control. No other reason. We've seen thanks to the actions of Anonymous and wikileaks and others how deep the corruption is. We've seen first hand what happens when some group destroys an entire ecosystem (the Gulf of Mexico) compared to when someone attacks the state. Now all the cards are on the table. They want to shut it all down. They want three strikes laws. They want search and seizure laws. They want to do things without due process or warrants. They want to impose their twisted morality on the populace. They want to frame Anonymous/Wikileaks and the like and make them out to be pedophiles or terrorists or pirates or rapists. It's rather disgusting how obvious it is. And the most shocking thing of all is that they are actually SURPRISED by the retaliation they are receiving, as minimal as it is! The actions of what appear to be just a few people have terrified the companies who thought they had carte blanche to do as they pleased. However it hasn't pressed them to change their ways, but to hide behind a veneer of superiority and attempt to stop those selfish Robin Hoods of the internets.

    • The governments of the western world seem to have it in mind that criminalizing everything will protect them from some sort of boogeyman/men.

      I think it's more that it's the easiest line to sell to the voters, at least that's what the politicians believe. Maybe if they had more respect for the intelligence of their citizens and less for Saturday night psychology the world might be a better place.

  • The first law of security is that if anyone can get in, anyone can get in. If you make sensitive data available via the web, it is accessible via the web. By anyone. You can make it hard to access, even extremely hard to access, but not impossible. So the very first step in security is the question why the hell you would want to hand over your responsibilities to some automaton that can be accessed by anyone.
  • by cheros ( 223479 ) on Wednesday June 29, 2011 @04:41AM (#36608498)

    [disclosure: I do this for a living]

    If you look over what happened over the last 5 years or so in security you'll see that nothing really new has happened. We get more sophisticated with defenses, stuff gets more expensive, but fundamentally it's deja vu all over again. 99% of what I come across suffers from a pure tactical focus - no long term thinking, no attempt at understanding the mindset of those seeking to cause harm or steal information, no strategy or root cause analysis of assaults.

    The result is that defense has simply turned into an arms race. Immensely profitable for providers, no added value for the customer.

    About 5 years ago we started to work on different approaches which normal risk assessment never touches. As a consequence of the insights gained we stamped out bank data theft for our clients without imposing new regimes or buying new equipment - all it took was a month worth of work. However, that requires people that can really think differently, whereas HR has moved towards cookie cutter tick box selections that seem to be aimed at filtering out exactly those people who can make a difference (the use of HR management seems to exacerbate this trend).

    Security management has become predictable, and with predictability comes failure. The message is clear: start thinking differently - or lose the battle.

  • by xnpu ( 963139 ) on Wednesday June 29, 2011 @05:30AM (#36608702)

    When doing consultancy a lot of people told me flat out they didn't care about security. Quotes like "Anyone can walk in here during lunch and steal whatever they like; why would I (as the IT director) spend $$$ on computer security when management doesn't even care to lock the door." were very common. While the logic is obviously flawed it does illustrate that it simply wasn't a priority - which is not the same as living in ignorant bliss.

  • Anyone who doesn't believe in Security through Obscurity should post their passwords and credit card numbers on /.

  • by petes_PoV ( 912422 ) on Wednesday June 29, 2011 @07:59AM (#36609380)
    The point of security is to increase the amount of time it will (would?) take a baddie to do bad things. We know that security can NEVER provide an absolute guarantee that the wrong people won't do the wrong thing - it can only reduce the possibility of that happening.

    So it is with obscurity. Provided it is not the ONLY security feature used, it has a place in reducing the visibility of a target - just as camouflage has been doing in the military for hundreds of years. It also adds to the overall difficulty of getting into a secure location (be that a website or building) and therefore has a deterrent effect: even if that's only to move the baddies along to try the next target on the list, rather than you.

    Where does that leave obscurity? Right where it needs to be: as a valuable tool in preventing and delaying security breaches. The key thing about it (as with all security features) is to know when it is no longer effective and then to either revamp it or replace it. However, it obviously is still effective for the vast majority of institutions and therefore should not be dismissed.

  • No-one accuses a store with a glass window of being "asleep at the wheel" with respect to security just because they don't have bars in the window. Cyber security's mentality that if you haven't implemented all security features you have somehow invited the attack is simply unfair and removes the mentality of malice from those who are breaking the law. Ultimately, a culture shift to seeing those breaking into websites as common criminals to be despised needs to happen. High-value targets will always need
