Is It Illegal To Disclose a Web Vulnerability?

Scott writes "I'm submitting my own story on an important topic: Is it illegal to discover a vulnerability on a Web site? No one knows yet, but Eric McCarty's pleading guilty to hacking USC's web site was 'terrible and detrimental,' according to tech lawyer Jennifer Granick. She believes the law needs at least to be clarified, and preferably changed to protect those who find flaws in production Web sites — as opposed to those who 'exploit' such flaws. Of course, the owners of sites often don't see the distinction between the two. Regardless of whether or not it's illegal to disclose Web vulnerabilities, it's certainly problematic, and perhaps a fool's errand. After all, have you seen how easy it is to find XSS flaws in Web sites? In fact, the Web is challenging the very definition of 'vulnerability,' and some researchers are scared. As one researcher in the story says: 'I'm intimidated by the possible consequences to my career, bank account, and sanity. I agree with [noted security researcher] H.D. Moore, as far as production websites are concerned: "There is no way to report a vulnerability safely."'"
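On the summary's point about how easy XSS flaws are to find: most of them come down to echoing user input into a page without escaping it. A minimal illustrative sketch in Python (real sites should rely on their template engine's auto-escaping rather than hand-rolled helpers like this):

```python
import html

def render_greeting(user_input):
    # Escaping turns characters like < and > into HTML entities, so
    # injected markup is displayed as text instead of being executed.
    return "<p>Hello, " + html.escape(user_input) + "</p>"

# A script-injection attempt is neutralized:
print(render_greeting("<script>alert(1)</script>"))
# <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Any page that skips this step for even one input field is the kind of trivially discoverable flaw the summary is talking about.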

  • paste up a poster in the town square, announcing that the lock is broken on the back of the hardware store?

    How is this different?

    • Paranoia and the DMCA.
    • Re: (Score:3, Informative)

      If the poster is not signed, who can be blamed?

      The problem is that there are many emperors that want to believe in security by obscurity, and when told they have no clothes, would rather shoot the messenger than face reality.

    • It's more like advertising that a given brand and implementation of a lock is faulty. It may or may not affect you, but in either case it's general enough to be of benefit to people besides you. Would you like to know that every model of the car you own happens to accidentally use the same key? I would.
    • by Kadin2048 ( 468275 ) <slashdot@kadin.xoxy@net> on Tuesday January 16, 2007 @04:41PM (#17636136) Homepage Journal
      It's not, except that what gets people in trouble is when they try to take credit for a vulnerability they've found in a production website.

      I doubt that you'd get in trouble -- and how could you? -- if you submitted the vulnerability, or even publicized it, anonymously. There are lots of ways to do this; Mixmaster comes to mind, and is practically invulnerable to tracing, particularly when your potential adversary isn't expecting an anonymous communication to come in.

      If you found a problem, realize that no good is ever going to come to you because of it, and don't expect to ever be rewarded or thanked. Once you've acknowledged those things, there's no reason to attach your name to it, when you let them know.

      It's when you try to have your cake and eat it too -- point out someone else's problem while getting rewarded for it -- that the problems really begin.
    • Re: (Score:3, Insightful)

      by kalirion ( 728907 )
      What if you want to let the store owners know that the lock is broken? When they ask "how do you know?" you reply "Well, I touched the lock, and it fell apart." So they turn you in for vandalism and breaking and entering.
    • by Lesrahpem ( 687242 ) <jason,thistlethwaite&gmail,com> on Tuesday January 16, 2007 @06:52PM (#17638418)
      I see a big difference.

      If the hardware store gets broken into, it mainly affects the owner(s) of the store and the people who work there, and not many other people. If a site like Yahoo (the mail side of it), a banking site, or PayPal is broken into and exploited, then it affects every single person who uses the site in a very negative way.

      I don't think publicly announcing a vulnerability in a specific public service or facility is very responsible. At the same time, many businesses won't do anything to fix the problem if only one person tells them about it. The public releases we commonly see are sometimes necessary, because without the pressure of the public eye the business won't correct the problems in its service.

      I've done things similar to this on a few occasions. I found a vulnerability in Surgemail, an all-in-one mail server for Linux, which allowed any remote user to read any mail sent to the root account and to send mail as root. I emailed them about it several times and received no reply for over six months. I finally released the info on it, and they fixed it two weeks later. I did something similar with an online service schools in my area offer, which allows anyone to see the grades and personal info (SSN, home address, etc.) of students in the school through a SQL injection. I contacted several schools about the issue, as well as the company they had contracted to write the software for them. It's been 2 years and they still haven't fixed it.
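The grade-lookup hole described above is textbook SQL injection, and the standard fix is parameterized queries. A minimal sketch using Python's sqlite3 module (the school system's actual stack is unknown; names and data here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

student_id = "1 OR 1=1"  # attacker-controlled input

# Vulnerable: string concatenation lets the input rewrite the query,
# so it returns every student's row instead of one.
leaked = conn.execute(
    "SELECT name FROM students WHERE id = " + student_id).fetchall()

# Safe: a parameterized query treats the whole input as a single
# value, so the malformed id simply matches nothing.
safe = conn.execute(
    "SELECT name FROM students WHERE id = ?", (student_id,)).fetchall()

print(leaked)  # [('Alice',), ('Bob',)]
print(safe)    # []
```

The same placeholder mechanism exists in essentially every database driver, which is what makes years-long refusal to fix such a bug so striking.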
  • And if I catch you, you are going to get seven shades kicked out of you. Pissing about with what's not yours always has repercussions.
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Colorful analogy, but most vulnerabilities are not specific to one person's machine. Would you go "kick someone's ass" for finding a flaw in their own house's security that just happened to affect you too?
    • Not really a good comparison since your house is private and websites are essentially open to all comers.

      It's more like checking the locks on the backside of a Walmart. Suspicious, but not illegal, and not nearly as unethical.

      Heck, you may actually have a legitimate reason to be back there - such as offloading goods from a truck.

      The same can be said for security vulnerabilities in websites. You can easily stumble across them when you're not even looking in places that you're supposed to be.
      • It's more like checking the locks on the backside of a Walmart.

        Even the backside might not be necessary. Who hasn't walked up to a storefront entrance with the intent of going in and been rebuffed by a locked door before seeing the store's hours?

        • Re: (Score:3, Interesting)

          by green1 ( 322787 )
          I actually did find a real world security vulnerability of that form... The elevator in the building I worked in was prone to malfunction. The bottom floor of the building was a pub that was not open at 8 am when I went to work. Normally visitors would be kept out of said pub by the fact that you would need a key for the elevator to go to that floor. One day I got on the elevator, pressed the button for the floor my office was on; when the doors opened I stepped out without paying much attention and found mysel
    • by russ1337 ( 938915 ) on Tuesday January 16, 2007 @04:20PM (#17635752)
      Would you say anything if you were in an airport and noticed a door unlocked and ajar leading from the public area to the tarmac around the aircraft?

      • No, I would not say anything, and I would just laugh if I saw you checking whether those airport doors were locked and several heavily armed men dragged you off for a little questioning and rectal-examination time.
        • >>>"No, I would not say anything, and I would just laugh if I saw you checking whether those airport doors were locked and several heavily armed men dragged you off for a little questioning and rectal-examination time."

          I wouldn't try the doors either. But if I saw one open, then I'd tell someone, just the same as when I've seen baggage left unattended for a suspiciously long time.

          But relating this to the article, and this is where the contention starts: The web doesn't easily discriminate between 'seeing
      • by Beryllium Sphere(tm) ( 193358 ) on Tuesday January 16, 2007 @06:04PM (#17637664) Journal
        Funny you should mention that. Just this year, a woman looking for her wallet pushed open a door to a parked airplane at Newark [nytimes.com]. An alarm went off. Nobody paid any attention. She was alone on the airplane for several minutes checking around the seat for her wallet.

      • How many times have you seen a car with their lights on in a parking lot with nobody in the car?

        In the old days, someone would check the doors to see if they were unlocked and turn off the lights for the person to keep their battery from running down.

        Would you touch someone else's car today if the lights were on?

  • Eric McCarty's pleading guilty to hacking USC's web site was 'terrible and detrimental,' according to tech lawyer Jennifer Granick.

    No good deed goes unpunished. The lesson here is, let the poor bastards find out about the problem after it's too late.
    • Re: (Score:3, Funny)

      by DrugCheese ( 266151 ) *
      That's where it's headed probably. White hats will be forced to keep their mouth shut and giggle to themselves.

      • Or just anonymously post their discoveries in a public forum. That's what I'd do at this point... being nice and telling the site admin directly is too risky, and there's no excuse to let security issues just sit unnoticed.

        • there's no excuse to let security issues just sit unnoticed.

          Sure there is: if they're going to treat you like scum when you try to help them, let them suffer the consequences. If they don't appreciate it when someone points out a problem, let them face the public and their customers or clients when a cracker or script kiddie exploits the vulnerability. You shouldn't have to hide just because you try to help someone!

          Falcon
    • In the interest of full disclosure, Clare Boothe Luce said that [brainyquote.com]. :)
  • by gstoddart ( 321705 ) on Tuesday January 16, 2007 @04:06PM (#17635444) Homepage
    Is this about discovering a vulnerability, or trying to discover a vulnerability?

    If I click a link, and something breaks, and I've 'discovered' a problem, I've probably not done anything. It just broke, and I was the one who was there.

    If I try to find a problem, and do (even if I don't exploit it), then I might have been doing something I shouldn't.

    A real world example would be: if you get caught outside a door trying to pick the lock, and then claim you were trying to ensure their locks were safe, you might get charged with attempted B&E. You don't get to do a security audit on people's front doors.

    As much as we like to separate people into black hats and white hats, if you were trying to jimmy the lock, for whatever reason, you were probably doing something you shouldn't have been.

    Just my 2 cents, anyway.
    • Re: (Score:2, Insightful)

      by haddieman ( 1033476 )
      I would have to agree with you on this. The problem is that, with the internet, it is a lot easier for people to do this and not "feel" like they are doing anything wrong. Sure, most people aren't going to risk being caught trying to pick someone's lock when it's on their back door, but when you are sitting in your room at your computer it is much easier to feel that you either won't get caught or that people will appreciate your "helpfulness" even though, in real life people will still feel like their priv
    • Exactly. This is the crux of the issue: intent. Almost all crimes must have an actus reus (a guilty act) and mens rea (a guilty mental state), depending on the law/state. If the mental state (including criminal negligence) doesn't fit with the crime, then there is no crime to prosecute (see your state's penal code for definitions of "culpable mental states"; in the Texas penal code it's Title 2, Chapter 6).

      This, however, is different in civil courts.
    • Re: (Score:3, Interesting)

      by ACMENEWSLLC ( 940904 )
      This is a gray area.

      One of my network magazines that I get at no charge by filling out survey information had expired. I got a phone call and the person on the line asked me to renew. She provided a generic website address, and then a unique ID.

      The problem was that the unique ID was not random. It was something like 123456. When I put this in, it wasn't just a questionnaire. It had my personal information. I could put in 123457 or 123455 and bring up the personal information of someone else.

      It is a we
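A common mitigation for guessable sequential IDs like the one described above is to issue an unguessable per-record token instead. A sketch, assuming a Python backend (names are hypothetical):

```python
import secrets

records = {}  # token -> subscriber record

def create_record(data):
    # A sequential ID like 123456 lets anyone enumerate the neighboring
    # records (123455, 123457, ...). A random token with ~128 bits of
    # entropy makes enumeration infeasible -- though the server should
    # still check authorization rather than rely on secrecy alone.
    token = secrets.token_urlsafe(16)
    records[token] = data
    return token

token = create_record({"name": "subscriber"})
print(token)  # e.g. 'q2m4XhZ0w1kQ9yR3tB7LfA' (22 URL-safe characters)
```

`secrets` (rather than `random`) is the right source here because its tokens are drawn from the OS cryptographic RNG.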
    • by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 16, 2007 @04:39PM (#17636112)

      A real world example would be: if you get caught outside a door trying to pick the lock, and then claim you were trying to ensure their locks were safe, you might get charged with attempted B&E. You don't get to do a security audit on people's front doors.

      I don't buy that analogy. Breaking and entering is a crime. Theft is a crime. Exploiting computer vulnerabilities is a crime. I'm not sure finding computer vulnerabilities is or should be a crime. I could just as easily use the analogy, "looking at the windows of houses to see if they are open or unlocked is not a crime, but climbing through a window is."

      I think laws that rely upon somehow knowing the intent of the person performing an act are pretty poor laws. If I go tell you your locks are really old and can be opened with a plastic fork because I noticed it while walking by, and you happen to run a store I do business with and hence have my CC# on file, that sure shouldn't be a crime. If I write a letter to the editor of the newspaper saying the same, it should not be a crime. If I notice on your Web site the same level of e-security, I don't see how it is qualitatively different.

      • Re: (Score:3, Insightful)

        by gstoddart ( 321705 )

        I think laws that rely upon somehow knowing the intent of the person performing an act are pretty poor laws. If I go tell you your locks are really old and can be opened with a plastic fork because I noticed it while walking by, and you happen to run a store I do business with and hence have my CC# on file, that sure shouldn't be a crime.

        I'm gonna divide that into two halves ... the one that makes sense, and the other.

        If you truly 'walked by' and noticed the windows, and told me about it, that's like notif

        • by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 16, 2007 @05:43PM (#17637302)

          If you then went to a known burglar with the information, well, you're no longer just doing something nice and innocent now, are you??

          Yes, but no one is claiming you should be able to find vulnerabilities and give or sell them to blackhats, merely make them public or inform the site operator without worrying about being sued.

          or the second half ... WTF does having, or not having, your credit card # on file apply to this?? It seems a bit spurious to the conversation at hand, and I'll treat it as such.

          No it isn't. If they have your credit card on file (as many e-businesses might) then you have a business relationship with them and a vested interest in their security. It is perfectly legal and sometimes industry practice to hire private investigators to look into the security of current or proposed business partners.

          I don't think you've idly done nothing.

          You've done something, but nothing illegal.

          You've made available to people the means to commit an illegal act. The fact that it was just there for anyone to see (or you spent three hours trying to find it) doesn't mean you had nothing to do with them getting robbed.

          So what if the local bank, where the whole town keeps their money, tends to leave the back door propped open and the safe unlocked? Should it be illegal for me to tell the paper or the paper to write an article letting everyone know they should take their money out? Should you have to be concerned about being sued if you write the bank manager and let him know what is going on?

          I realize people figure that white hats should scream really loud so everyone knows the vulnerability, because the black hats wouldn't. But once you've told the black hats how to do it, you no longer get to say you're better than they are. In fact, you're probably worse, because you were the one casing the joint, as it were.

          Not at all. Whitehats do not profit from illegal actions and are aiming to improve overall security. Full disclosure is not always the best way to go about improving security, but sometimes it is. Why you think only in terms of full disclosure, however, is a mystery to me. Even the summary specifically mentions people being sued for just telling the Web service provider that the service has vulnerabilities in it.

          You don't have an obligation to ensure that everyone in the world knows how to open every unsecured lock.

          No, but sometimes telling the public how to open a particular lock is the best way to improve security. If Diebold starts selling a new combination bike lock, and I discover 1.2.3.4 always opens it, and I know at least one gang of thieves is already looking for these locks and stealing bikes via this method... I should 100% have no fear that I will suffer legal repercussions if I tell the support guys at Diebold. If Diebold refuses to acknowledge the problem I should likewise have no fear that my exercising my freedom of expression and telling the local newspaper will result in my being prosecuted for some crime. The same goes for software and services on computers.

        • For the second half ... WTF does having, or not having, your credit card # on file apply to this?? It seems a bit spurious to the conversation at hand, and I'll treat it as such. :-P

          That one is easy: the person whose credit card number is on file is at risk of having the number stolen and then having the card maxed out. If it were me, I'd definitely want you to do something, or you'd lose me as a customer and maybe be slapped with a lawsuit as well.

          Falcon

    • Re: (Score:3, Interesting)

      by zero-one ( 79216 )
      A few years ago, I applied for a job at a well known company using their online application site. When I finished filling in the form, the site redirected to a page with a URL like https://www.example.com/viewapplication.asp?applicantid=12345 [example.com] that displayed all of my details.

      I wondered what would happen if I changed the number in the URL and found that the site would happily show me the details for all the other applicants (including quite sensitive information).

      Was changing the URL "trying to discover a v
      • A few years ago, I applied for a job at a well known company using their online application site. When I finished filling in the form, the site redirected to a page with a URL like https://www.example.com/viewapplication.asp?applicantid=12345 [example.com] that displayed all of my details.

        I wondered what would happen if I changed the number in the URL and found that the site would happily show me the details for all the other applicants (including quite sensitive information).

        Was changing the URL "trying to discover a vulnerability" or "discovering a vulnerability"?
        What if the values had been sent using a HTTP POST (so I couldn't see them or edit them by just changing a URL)? What if they had been lightly encrypted or included a check-digit?

        A truly devious mind would have entered https://www.example.com/viewapplication.asp?applicantid=12345%3B update applicants set photo_url='http://goat.ca/hello.jpg' %3B-- or something equally funny.
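To the question above about POST, light encryption, or check digits: none of those would actually close the hole, because the missing piece is a server-side ownership check. A sketch of that check (data and names are hypothetical, not the actual site's code):

```python
# The fix is not hiding the applicantid but verifying, on every
# request, that the logged-in user owns the record being requested.
applications = {
    12345: {"owner": "alice", "details": "Alice's application"},
    12346: {"owner": "bob", "details": "Bob's application"},
}

def view_application(session_user, applicant_id):
    record = applications.get(applicant_id)
    if record is None or record["owner"] != session_user:
        raise PermissionError("not your application")
    return record["details"]

print(view_application("alice", 12345))  # Alice can see her own record
# view_application("alice", 12346) raises PermissionError
```

With this in place it no longer matters whether the ID arrives in a URL, a POST body, or an "encrypted" blob: tampering with it just yields a refusal.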

    • People who actively go out searching & snooping are being vigilantes (rather than "concerned citizens" who just happen to notice something and report it).
    • by Jerf ( 17166 )

      A real world example would be

      No! No metaphors!

      Computer networks aren't neighborhoods, superhighways, or libraries. Trying to shoehorn the metaphors onto a reluctant reality just means people endlessly argue about the metaphor and not the question at hand.

      The question is, "is it illegal to disclose a web vulnerability?" You also ask "What are the boundaries of permitted probing?"

      I don't have an answer, but I'll give you one aspect that is not covered by any real-world metaphor, yet is very important: If I g

      • Computer networks aren't neighborhoods, superhighways, or libraries.

        They're a series of tubes, obviously.

        (-1, too easy)

        If I go to a website and give it my credit card number, I have no assurance that they aren't doing stupid things with it.
        ...
        That's an interesting question, but there's almost no real-world analog with modern credit card systems that don't have to record the full number. And please, don't try to shoehorn a metaphor onto this.

        I know you didn't want real-world analogy, but ... you hand your credit card to the waitress at a restaurant. She goes into the back to ring it up. You have no assurance that she isn't doing stupid things with it. She isn't supposed to make a carbon copy of your CC to sell to or be stolen by someone else, either. That's a pretty clear and not-at-all contrived analogy.

        • by Jerf ( 17166 )
          The scale of offense available on the Internet, the abstractness of the attack, the inability to track it down to one person, all of these things differ, plus more I'm just not bothering to type.

          What, you honestly think I've never seen the waitress metaphor? It hardly resembles the Internet at all.

          (In fact, I almost never see an analogy that comes even close to capturing the issues of scale presented by the Internet.)
    • Re: (Score:3, Interesting)

      As much as we like to separate people into black hats and white hats, if you were trying to jimmy the lock, for whatever reason, you were probably doing something you shouldn't have been.

      If I store my stuff in a storage locker and have to use a lock the storage company provides, can I test its security?

      If I live in an apartment building, can I check the lock on my door to make sure it's not easy to pick?

      In reality, all locks are pretty easy to pick. Locksmiths and law enforcement have tools that can
    • Is this about discovering a vulerability, or trying to discover a vulnerability?

      This seems to be the essence of the law. The federal law uses the word intentionally for a reason. Link to the text of 18 U.S.C. 1030 [justice.gov].

      For those who read the legal text remember "damage" could cover a lot of things including log files or time stamps.

    • by antic ( 29198 )
      Have a standardised signal in the footer of a website - coloured flags indicate the owner's approach to the issue - one flag might say that they appreciate being notified of any flaws. The absence of a flag could suggest that it's not worth the risk - they'll play a heavy hand out of embarrassment.
  • One problem is the lack of qualifications to call oneself a legitimate security researcher. Every two-bit script kiddy hacker in the world is a "security researcher" by the current definition. Unfortunately, many of the current actually-qualified security researchers have some sort of black-hatting in their background, which, to my mind, makes them suspect in the first place.

    It's an issue of trust. If you sit outside the system and make pronouncements, it's difficult to trust what you say. If you break into
    • by Intron ( 870560 )
      Which is why the legitimate security professional testing an active website has a letter signed by a company officer allowing them to do so.

      In fact, I plan to send a large number of emails to security professionals hiring them to hack my website and send me a report of what they find.

      -- sincerely,
          Charles Prince
          Chairman & CEO
          citigroup
    • Unfortunately, many of the current actually-qualified security researchers have some sort of black-hatting in their background, which, to my mind, makes them suspect in the first place.

      Would you automatically disqualify Kevin Mitnick [freekevin.org] just because he was a "blackhat hacker"?

      Falcon
  • Anonymizers? (Score:5, Insightful)

    by tfinniga ( 555989 ) on Tuesday January 16, 2007 @04:07PM (#17635480)
    So, this might not be relevant, but once I reported a cross-site scripting flaw to a website by using a web anonymizer to create a Hotmail account, sending exactly one message, and then never using the email account again.

    Anonymizer tools have improved since then, especially for combating censorship. Would you be able to use TOR or something similar to report vulnerabilities without exposing your identity?
    • The banner that appears when you start TOR says it's experimental software and that you shouldn't rely on it for strong anonymity.
  • Sooner or later, they will learn that they need to secure their site after they get hacked, used for a warez dump and find out that they have to pay (literally) for using 8x the bandwidth they paid in advance for.
    Expensive lesson usually means lesson learned.

    Why are we supposed to help the stupid? Let them continue doing stupid things until they get pwnt and it costs them their business.
    • Re: (Score:2, Insightful)

      by haddieman ( 1033476 )

      Why are we supposed to help the stupid? Let them continue doing stupid things until they get pwnt and it costs them their business.

      Making mistakes != being stupid. If someone found a vulnerability in your site wouldn't you want them to let you know about it? On the other hand, if you had already been warned about this vulnerability and done nothing about it then yes, that would be very stupid.

    • How about if this is a business that affects your life in some way? For instance, what if the New York Stock Exchange had a vulnerability it didn't know about, but you do (not gonna ask how you found it)? Now think about what could happen if the NYSE got hacked. Worst case scenario, the US economy collapses. Now how does this affect you? Well, your job could be in jeopardy, and hyperinflation could make the cost of living skyrocket. Happy times are not in the cards for you. This is a pretty extreme
    • Why are we supposed to help the stupid? Let them continue doing stupid things until they get pwnt and it costs them their business.

      1. They're on the same Internet we are, flooding the common bandwidth with worms and spam.
      2. We may have to actually do business with them (banks, government sites, etc.)
      3. It can be very interesting and rewarding to find vulnerabilities. It improves one's ability to create secure code.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Tuesday January 16, 2007 @04:11PM (#17635566)
    Comment removed based on user account deletion
  • What's the problem? (Score:4, Interesting)

    by gravesb ( 967413 ) on Tuesday January 16, 2007 @04:14PM (#17635612) Homepage
    What's the problem with sending info to a webmaster? And what's the point of doing anything else? If you post it publicly, you've created a race condition between script kiddies and the site admin, and should be punished. If you send it to the webmaster, you are doing a service, and shouldn't be punished. As long as you don't exploit it, you should be ok.
    • by fractalus ( 322043 ) on Tuesday January 16, 2007 @04:35PM (#17636044) Homepage
      Simple: sometimes such information gets lost, or doesn't get acted on, and the bug persists. That bug could be exposing thousands (or hundreds of thousands) of users of that site to risks they're not aware of. If one person found it, another surely can, so it's a reasonable assumption that someone else other than the site owner could know about the bug and be exploiting it for personal gain. At that point, being aware of the bug but not informing the users is allowing them to be exposed to unnecessary risk. Businesses are often reluctant or slow to fix problems because they assume nobody knows about them or they're costly to fix (just like auto companies hate to have to recall cars to fix problems). Sometimes, the only way to get the problem fixed is to announce it publicly and give the company a bit of a black eye.
    • by Jussi K. Kojootti ( 646145 ) on Tuesday January 16, 2007 @04:38PM (#17636106)
      That may be a race, but a race condition is something else...
    • by linuxmop ( 37039 )
      You assume too much. Consider:

      1. Script kiddies may already know about the vulnerability. There is no reason to believe that you are the first to discover the exploit.

      2. The webmaster might not fix the issue before harm is done to the users. If the script kiddies already know about the vulnerability, they will likely exploit it before the webmaster has time to react.

      As a user, I want to know immediately when a vulnerability is discovered. It gives me an opportunity to stop doing business with a website befo
  • by rabblerouzer ( 1052104 ) on Tuesday January 16, 2007 @04:14PM (#17635618)
    Some interesting comments from Bruce Schneier and Marcus Ranum (and Microsoft too) on the debate: http://www2.csoonline.com/exclusives/column.html?CID=28088 [csoonline.com]
  • If disclosure of vulnerabilities stops, exploits will still occur... only no one will know how they work or how to stop them. yeah, this is progress.
  • Armed with this fair and just legal precedent, we can finally put all those scheming hoodlums from Bugtraq in Federal PMITA Prison where they belong.
  • It's been ok for me (Score:5, Interesting)

    by nicpottier ( 29824 ) on Tuesday January 16, 2007 @04:36PM (#17636074)

    A few years ago I was renewing my car tabs on the WA state site, and they had a box for 'donations to DOT' or somesuch. For kicks I tried putting in a negative value, and sure enough it reflected the total for my tabs as less. I went ahead and submitted things with a dollar taken off the value, just to see if it would actually go through. Sure enough, a week later I received my tabs, with the mathematically correct but embarrassing negative donation on my receipt.

    I ended up calling them and letting them know about the bug. They were nice about it, and the next year at least it was fixed.

    -Nic
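The negative-donation bug above is a pure server-side validation failure: the client-submitted amount was trusted as-is. A sketch of the missing check (function and field names are hypothetical; amounts are in cents to avoid floating-point rounding):

```python
# Recompute and validate totals on the server; never trust amounts
# that arrive from the browser.
def total_due(tab_fee_cents, donation_cents):
    if donation_cents < 0:
        raise ValueError("donation cannot be negative")
    return tab_fee_cents + donation_cents

print(total_due(3000, 500))  # 3500 (a $30 fee plus a $5 donation)
# total_due(3000, -100) raises ValueError instead of discounting the fee
```

The client-side form can still reject negative numbers for friendliness, but only the server-side check actually prevents the discounted receipt Nic received.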
  • those that ask *best whiny voice* "Is it ok if I do this? Will I get arrested? Is it illegal to do this?"
    and those that proudly proclaim "I am doing this and no-one can stop me. If you think you can arrest me for this, YOU ARE WRONG."

    The first kind of people contribute nothing to our freedoms. They are crippled by uncertainty, and their annoying whining makes people think that, hey, maybe there is something to fear. The second kind of people challenge the norms and make that which was uncertain clearly not illegal.
    • >The first kind of people contribute nothing to our freedoms. They are crippled by uncertainty and their annoying whining makes people think that, hey, maybe there is something to fear. The second kind of people challenge the norms and make that which was uncertain clearly not illegal.

      You're advocating vigilantism. The history of vigilantism proves your narrow assumption about 'badasses' very wrong. [ncwc.edu]

      American vigilantism arose in the Deep South and Old West during the 1700s when, in the absence of a forma

      • by QuantumG ( 50515 ) *
        Right, yes, that's a logical conclusion, that one. People who are not cowed by uncertainty and instead stand up to be counted, those people are vigilantes. Black people sitting at the front of the bus? Damn vigilantes.
         
          • No, the problem is that you are advocating any rule-breaking as being a good thing. This is historically untrue. On occasion this kind of thing is helpful, as with civil disobedience for a moral cause, but it certainly is not true generally. Generally, the actions of 'we know better' hotheads are almost always morally wrong, and morality is not this simplistic and narrow 'badasses' vs 'sheep' dichotomy you describe.
          • by QuantumG ( 50515 ) *
            Dude, do you even *read* my comment before you reply? Jesus. We're talking about a situation where people don't know if the action is illegal or not. There's no rules being broken here. We're talking about a situation where people think it should be ok to report security vulnerabilities but they fear they may get in trouble for it. You're talking about something completely different. Try to keep up with the rest of us.
            • And you're saying they shouldn't bother finding out if it's right or wrong, they should do it anyway and see what happens. While that might be correct in the given context, it's asinine to suggest that it's valid to generalize it. What you're saying is that in any situation with any moral ambiguity, you shouldn't sit down and figure out what the morality actually is; that's for sheep. You should just go ahead and do it, and wait and see if anyone arrests you.
    • There's a third kind: The thoughtful person. They realize that there *IS* something to fear, that society is something you need to treat with respect and that you need to plan accordingly.

      Think about this: Would anybody care about Rosa Parks if she wasn't a little old lady? How many hundreds of black men tried the same thing and only ended up in prison?

      You need more than just a backbone.
      • by QuantumG ( 50515 ) *
        Yeah, the third kind is this freakin' "middle way" of wishy-washy compromise. I'm not a fan. If you think you have a right to do something, do it. If no-one cares, great, you set a precedent that others can follow. If someone makes a stink, fight. Don't ask permission, and don't go "testing the water" by half doing it. These middle way people, they only get half the job done and end up making it worse for everyone else because they go in timidly, and back off as soon as they hit resistance.
  • by Protonk ( 599901 ) on Tuesday January 16, 2007 @04:43PM (#17636176) Homepage
    This is an issue that simply must not be decided by the people to whom it has been entrusted. In this case, the vested interests that will lobby congress, pay for legal teams, and write friend of the court briefs are not the whistleblowers and the security researchers. There are HUGE industries where the economic incentive is to ignore problems, rely on obscurity for security, and prosecute those who would expose vulnerabilities.

    Each time an exploit comes out, the pattern is the same. The company doesn't announce it, anti-virus makers are either paid off (as in 'approved' spyware and/or rootkits) or not kept informed, and once the story breaks, the public relations machine starts. The researcher is vilified as a hacker, the problem is denied or minimized, and the prospect of a patch is left moot because this would require accepting that a huge problem exists. Most of us scream that this is ridiculous, companies should tell everyone when an exploit shows up, and patch it as soon as possible. More to the point, they should expose their source code to scrutiny in order to better provide services to their customers.

    Are you sitting down? Good. They won't and they don't care. The first rule in the PR handbook is to deny and put off realization. If the big front is that there isn't a problem, or that a crack of a voting machine can only be done in a lab, and months down the road, the company quietly sues the researcher or releases a patch, they win. People have a limited attention span and fatigue quickly in the face of fear and hysteria. As long as your company's admission of guilt comes well after the original problem, or not at all, people are happy.

    With this in mind, let's look at the law. Thankfully, whistleblowers have some protection, and some internal voices about code might not be silenced, especially if the review takes place within the judicial system, and not through a new law. Of course, corporate secrecy, as in the case of Apple and HP, is pretty extreme, and most employees wouldn't risk the civil consequences of voicing a problem that doesn't rise to the level of a public safety hazard.

    Outside researchers are in more and more trouble, and this really only leads to problems for the customer base as a whole. We rely on sites like MOAB [info-pull.com] to shame companies into action. We also rely on OSS competition in order to make products like IE better--Firefox gives an economic incentive to Microsoft to improve their product, otherwise, security development would have languished.

    Very few analogues exist in the places where this is critically important: commercial and banking software. CITIbank [boingboing.net] suffers a classbreak and doesn't bother informing their customers. Security-conscious customers can voice their discontent and move to another bank, but we have to trust that the new bank is as averse to security breaches as we are. For the rest of the millions of customers, security will not improve. Since identity theft costs are largely borne by the customers, the banks don't care. Because the banks don't care, it is much easier, and better in their eyes, to make publishing vulnerabilities like this one [eweek.com] illegal and trust that their customers will never be the wiser.

    Check out this article:
    [PDF] Why information security is hard [google.com]

  • by gillbates ( 106458 ) on Tuesday January 16, 2007 @04:54PM (#17636400) Homepage Journal

    But then, it's not your business, either.

    Should you discover a security vulnerability, the correct response is to forget it. Here's why:

    • No one likes the bearer of bad news - not the website owner, not the vendor who sold the software, not the consultant who coded the website. They have lawyers; their interest is in making money, not necessarily in creating secure software. Keep this in mind. If they can find a cause for libel, they will. If they can deflect blame (stupid hackers are at it again!), they will.
    • Why would you expose yourself to potential legal problems, especially considering that you aren't getting paid for your efforts?
    • If they were truly concerned about security, they would have hired an audit firm.
    • Getting hacked is perhaps the best teaching experience regarding security. Let another hacker expose their vulnerability in a way they can't deny. Then they will take security seriously.
    • Do the security industry a favor: why would anyone hire a security specialist when good samaritans on the internet (aka whitehats) will audit their website for free? Don't undermine your fellow workers.
    • No one has ever been brought to trial or sued for failure to disclose a security vulnerability. You stand to lose nothing by quietly taking your business elsewhere; let the company figure out that the public wants secure web sites.

    Naturally, we might feel a sense of duty to help someone out - if they have an exposed security flaw, we naturally want to help them. But first consider how it will be received. Most companies would rather produce software with publicly unknown flaws than produce perfect software, websites, etc... at a much higher cost.

    And, if you feel that the website owner would appreciate knowing, you might at least disclose it from an anonymous email address.

    • Re: (Score:3, Insightful)

      by Evardsson ( 959228 )
      Hmmm, to answer point by point:
      • No one likes the bearer of bad news - not the website owner, not the vendor who sold the software, not the consultant who coded the website. They have lawyers; their interest is in making money, not necessarily in creating secure software. Keep this in mind. If they can find a cause for libel, they will. If they can deflect blame (stupid hackers are at it again!), they will.
        As a website owner, and admin of several sites, yes I do want to know and while no one likes bad n
      • While it may be the right thing to do, and I certainly respect your wishes concerning your own website, the unfortunate reality is that business has created a climate of fear and uncertainty surrounding disclosure. Thus, your website may get hacked because some good samaritan is afraid of disclosing a vulnerability with your website.

        People are risk averse, and disclosing bugs carries a lot of risk with it. If you desire reporting of security vulnerabilities, you should state so, in unambiguous languag

  • ... I intend to smash a window in the back of my neighbour's house, then stick a post-it note on his front door letting him know that I have discovered a potential problem with his home security.
    • Re: (Score:2, Insightful)

      by sameeer ( 946332 )
      There is a difference between smashing the window and being smart enough to observe that he's left his window open, then leaving a post-it (not visible to the public) noting that the window is open and should be closed.

      Smashing the window means you've actually made the system more vulnerable than it was, which is not the case in this argument.
    • by CKW ( 409971 )
      It's more like you noticed that his side door is off the hinges, and you're going to tell him about it.

      But two years later you notice that it's still off the hinges, and your cousin rents the basement apartment and you're worried about her safety - so you post a message on the community bulletin board to embarrass him into fixing the door (the fucking cheapskate - putting your cousin at risk just because he's too fucking cheap to fix the broken door).
  • NOT exposing an insecurity in any application only helps the true criminals. Or does anyone here (or anywhere) doubt that this information is readily available to those that cause the real harm, those that hack for profit?

    An insecure webserver is becoming one of the cornerstones of phishing attacks. Today, ISPs routinely block access to those servers the attackers setup in some countries that have more pressing problems than finding criminals that do damage in other countries. We can't grab those servers, b
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Tuesday January 16, 2007 @06:07PM (#17637716)
    Comment removed based on user account deletion
  • Yesterday, I was on a site with URLs of the form:

    http://domain.com/showpage.cgi?/pages/index.html

    I wondered if the path was being untainted, so I tried the following:

    http://domain.com/showpage.cgi?../../etc/passwd
    http://domain.com/showpage.cgi?../../../etc/passwd
    http://domain.com/showpage.cgi?../../../../etc/passwd
    http://domain.com/showpage.cgi?../../../../../etc/passwd

    Bingo - I had their /etc/passwd file. And then from there, a quick look at their motd gave me the OS, and from there I got
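    For reference, the standard server-side fix for this class of bug is to resolve the requested path and refuse anything that lands outside the document root. A sketch in Python (the document root is a hypothetical location, not taken from the site above):

```python
import os

DOC_ROOT = os.path.realpath("/srv/app/pages")  # hypothetical document root

def safe_path(requested: str) -> str:
    """Resolve a user-supplied path and refuse anything that
    escapes the document root (e.g. '../../etc/passwd')."""
    resolved = os.path.realpath(os.path.join(DOC_ROOT, requested.lstrip("/")))
    if not resolved.startswith(DOC_ROOT + os.sep):
        raise PermissionError(f"path escapes document root: {requested!r}")
    return resolved
```

    With this in place, every one of the `../` URLs above would be rejected before any file is opened.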

    • by CKW ( 409971 )
      .

      YES, this EXACT example was prosecuted and convicted in England, and he didn't even get anywhere!!!

      It was for a telco tsunami donation website. All he wanted to know was whether or not the site was legit, so he tried a couple of ../'s, then he made his donation. They were running an IDS (intrusion detection system) and he was one of the only people whose identities they could identify (because he made a donation), and the prosecutor was zealous and looking to make a convict
  • Due to website/browser/plugin problems, it's often the case that a media file will
    not play in the window. Now it's usually not difficult to determine where on the
    website the media is located. If you browse that directory using automatic indexing,
    and download what is there, are you breaking the rules? What about parent and subdirectories?

    After all you have not guessed a password or anything, but is it considered "out of bounds"?

    On a related note, do web-spiders do this? Do they just follow the links or do
  • It's simple (Score:2, Interesting)

    by zialien ( 962681 )
    If you don't own the website or you don't have the owner's permission then it is illegal for you to attempt to access the web server except if you are "using it properly" (e.g. you actually surf the web site via the links). So if you have found the exploit without permission then you have already committed a crime. Then telling people about it is 1. stupid, 2. gives people evidence to have you charged. As to whether it is illegal to disclose the vulnerability is anybody's guess. I would think that it wouldn't
  • I once tried to leave a comment on an article on a local newspaper's website. My subject had the word "Don't" in it, and I got a SQL error back from PHP. I changed my post and added "This website is vulnerable to a SQL injection attack. Send data as parameters" at the end of the comment.

    I wonder how likely it is that the newspaper's website designer reads the comments generated by code he created. Or reads the error logs spewing SQL.
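    The fix the commenter points at - sending data as parameters - looks like this with query placeholders; the driver quotes the apostrophe in "Don't" as data instead of splicing it into the SQL (sqlite3 is a stand-in here for whatever database the newspaper actually ran):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (subject TEXT, body TEXT)")

def add_comment(subject: str, body: str) -> None:
    # The ? placeholders pass subject/body as bound data, not SQL,
    # so apostrophes (or '; DROP TABLE ...') are stored literally.
    conn.execute("INSERT INTO comments (subject, body) VALUES (?, ?)",
                 (subject, body))
    conn.commit()

add_comment("Don't", "this subject would have crashed the naive query")
```

    Concatenating the subject straight into the INSERT string is what turns a stray apostrophe into a SQL error - and into an injection vector.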
  • Bike U-locks had a defect and could be picked easily with a ballpoint pen. Informing people helps everyone. Informing no one helps bike thieves, because they are the kind of people who find out these things and inform each other about them.

    Why is this difficult to understand?

    As for all the "doing something you shouldn't" bullshit, it's innocent until proven guilty. When did people become so terrified of freedom?
  • Do not tell boss that we're storing credit card numbers, usernames and passwords in plain text on our database server. I might get arrested.

    (Posting anonymously so you don't know who I work for!)
  • Knowing Eric McCarty personally I have some level of insight into this case other than what's put out in the news media. For what it's worth here is my $.02.

    I think we should establish stricter minimum guidelines for information security and hold those we choose to share our personal information with to them. Anyone in IT in the medical industry knows about HIPAA... usually with a groan. HIPAA can levy fines, shut down operations, etc... if you're not taking "reasonable and appropriate measures" in s
  • I was interviewed for this article by Scott Berinato. I have added some thoughts on the topic to my blog. A rich and robust vulnerability research community needs legal access to the software we are researching. As more and more software becomes web 2.0 instead of running on our desktops we will have less and less independent vulnerability research.

    Vulnerability Disclosure in the new "Software in the Cloud" World
    http://www.veracode.com/blog/?p=11 [veracode.com]

    -Chris

"Your stupidity, Allen, is simply not up to par." -- Dave Mack (mack@inco.UUCP) "Yours is." -- Allen Gwinn (allen@sulaco.sigma.com), in alt.flame

Working...