Security

Ethical Lines of the Gray Hat 261

Facter writes "There is a great article on CNET about the ethical debate among white-, gray-, and black-hat hackers. It's interesting to note that it reports the 'fading away' of the 'gray' definition between white and black, due to the DMCA hindering anything in between."
  • by netphilter ( 549954 ) on Monday September 23, 2002 @12:10PM (#4312572) Homepage Journal
    IMO, there are hackers, and there are security professionals. If you were a hacker and are now a security professional...great. If you continue to break the law, you should go to jail. Pretty simple, and none of this hat confusion.
    • Most good security professionals used to be good hackers. I hate this white hat/black hat shit that people tout; it doesn't mean a damn thing to anybody except marketing people. You're either good at what you do, or you're not, legal or illegal.
      • I prefer my red hat... I don't have to shave, all the white hats respect and revere me, and I can get it on with Smurfette.

      • by neuroticia ( 557805 ) <neuroticia@ya[ ].com ['hoo' in gap]> on Monday September 23, 2002 @12:54PM (#4312894) Journal
        White hat and black hat are necessary distinctions. Either someone intends to cause harm, or does not. Those terms are an easy way of explaining to the average layperson that there are 'good' and 'bad' hackers; otherwise they'll lump us all together.

        The 'bull' is that there is no longer a 'gray hat' hacker. The elimination of the 'gray areas' is a legality, and a stupid one at that. It is not a reality. Hackers will still walk the line, and the things they do will still be thought of as "good", "bad", or "fuzzy line down the middle". The only difference is that the DMCA has moved the line of acceptable actions so far over that people can be White Hat hackers and still end up being prosecuted under the DMCA for doing something that even the majority of the population would consider "GOOD" as opposed to bad.

        This doesn't mean that the hackers are "black hat", and it's stupid to imply so.

        -Sara
        • You're right.

          The DMCA is criminalizing the White-ish hat, meaning that if you are not a 100% pure cotton white hat then you must be, by law, a rotten, credit-card-thieving, hard-drive-reformatting, website-defacing, hardcore-porn-trading, no-good, evil, and overall bad person.

          Of course, it's equivalent to saying that people who drive over the speed limit are killers.

          Just because you bend a little stupid and useless law does not mean you are a hard-core, purse-snatching, Nigerian-money-laundering uberhaxor whose handle rhymes with Phuckiaul.

          I say hacking is good: it's called creativity, perseverance, and curiosity. Take these things away from society and people become sullen, unimaginative, and short-attention-spanned. Which, come to think of it, is exactly what the entertainment industry wants people to be like.

          [voice of the irate teacher in Pink Floyd's The Wall]
          "You will sit on the couch and watch our programming! Any demonstration of self-awareness will be punished! How can you become a couch potato if you don't eat your meat?" Da-dum-dum da-dummmm.
          • Hm. No. I disagree with your analogy (people who drive over the speed limit are killers)--driving over the speed limit, while it doesn't always result in death, is a dangerous activity that could more easily be classified as killing than the majority of gray-hat hacking could be called thievery, or even illegal, if it weren't for the DMCA.

            A more appropriate analogy would be "It is illegal to research and document the progress of a disease", or "It is illegal to test the security of the locks that the locksmith installs on your door."

            Even 100% cotton white hats check the security of things, and attempt to make sure that they work on their systems--under the DMCA this could be considered attempts at hacking, and thus illegal.

            If the DMCA just made it possible to crack down on "law benders" or "law breakers", I'd be unhappy about the law-bending category, but hey--they're laws. However, the DMCA outlaws things that it should not touch: things that are beneficial for society, things that keep technology moving forward, and things that keep the country's data safe. Gray hat hackers are *NECESSARY*, if only because black hat hackers exist, and at least gray hats are less malignant.

            In a lot of ways, the DMCA is equivalent to the US Gov't outlawing a cure for AIDS because it caused people to have a cold for a week.

            It's overreaching, and goes beyond being restrictive--straight into being suffocating and damaging.

            -Sara
    • by JUSTONEMORELATTE ( 584508 ) on Monday September 23, 2002 @12:16PM (#4312636) Homepage
      The question "Do we really need a hat?" from someone whose blog is at whitehatorganization.com.
      Yes, apparently you really need a hat.
    • by (trb001) ( 224998 ) on Monday September 23, 2002 @12:27PM (#4312726) Homepage
      Where do you draw the line? Are the only sanctioned hackers the ones who work for a security company? Personally, if I'm using software, I want to find out about any vulnerability that exists. If I find one, I want to report it. I have no trouble reporting it only to the company that produced the software, but let's face it...they don't always respond with a patch or a fix. If you've taken the legit route and the company has done nothing, I don't see a problem reporting it publicly. I think this is a notable difference between the Hats.

      Not to sound like I'm getting up on my soapbox (I'm not), but it's one of the reasons I like Linux software. I know that if someone finds a problem with bind/apache/ftp, a fix is going to be published somewhere I'll read about it (FYI, I don't go surfing the Microsoft website for patches) and I can fix the hole. It's comforting, and that's the defense I give people when they ask why they should use OSS for secure systems.

      --trb
    • by Zathrus ( 232140 ) on Monday September 23, 2002 @12:47PM (#4312841) Homepage
      If you continue to break the law, you should go to jail

      Ok. So you realize that merely reporting a security hole in a protocol to a company, with working source code, is a violation of the DMCA?

      So, as a "security professional" you have now broken the law and should go to jail.

      If we want to be sane about the situation, then people trying to hold themselves up as being better than black hats need to get off their high horse. Realize that if you've found a security hole in a product, then you're probably not the only one. And yes, you should dutifully report it to the company with enough data/code for them to verify your claim, and give them time to address it (which is a key issue - how long is long enough?).

      But what happens when they don't fix it? Do you just decide that you've done your duty and ignore the fact that someone else out there either has or will discover the hole and exploit it? Or do you report it to a public independent organization like BugTraq? To whom do you owe loyalty? The company producing the product, or the customers who are being left hanging in the breeze by the company?

      I'll admit that I'm no hacker or security professional, but as a programmer I'd damn well want you to do the latter. It's called whistleblowing, and it's accepted as a viable method to right wrongs when other attempts to solve a problem have failed. This isn't a new concept, nor is it limited to the computer world. The only real difference is the speed at which companies are expected (and needed) to act.
    • Really? OK... so do you investigate hardware designs and modify equipment that YOU PURCHASED as a hobby? If so, then you are a black hat and need to go to jail, by your definition.

      The term "hacker" has nothing to do with security unless you are an uneducated manager. What you are referring to is a cracker, a completely different individual....

      Please, get a clue as to which term is which. I don't care what the illiterate media calls them or how they use the term... a HACKER is not a criminal but a software and hardware genius...

      A CRACKER tries to break into systems or bypass security. Why is this so hard for people to understand? The drivel that spews forth from the anchorwoman/man's mouth does NOT make it truth.
    • by Gerry Gleason ( 609985 ) <{gerry} {at} {geraldgleason.com}> on Monday September 23, 2002 @01:02PM (#4312961)
      We don't give the media permission to denigrate the basic goodness that is "the hacker ethic". In spite of all the crap the major media puts out about this, there is almost no connection between hacking and breaking the law. The real origin of the urge to hack is the same as the urge any artist feels to create.

      I fully support the use of the alternate term "cracker" to refer to people who use hacker-like skills (or often no skill, just downloaded cracker kits) to vandalise whatever system they can manage to crack. Yes, some hackers get sucked into these activities at some point in their development, but that doesn't mean it is condoned by the hacker ethic.

      How about some analogies? When you check the door of the business down the street and find it unlocked, is it legal to wander around inside and see what you find? No, but if you didn't do any damage, it shouldn't be more than a legal slap on the wrist. If, when you tried the door, you triggered the alarm, or some damage was done just by trying it, you can expect someone to be pissed off, and maybe prosecute you when you try it again on another business.

      If a responsible third party closely inspects and tests the security perimeter around your nuclear, chemical or biological plant, and finds vulnerabilities, what should be done? Right: first they tell you and the relevant government authorities, and if there is no real response for a reasonable period of time, they tell someone else (press, other trusted third party, etc.).

      What is going on now is a typical corporate response, and it is exactly the same as using SLAPP lawsuits to silence critics. It is evil and anyone getting hit by such tactics should get help from advocacy groups. Of course, staying away from controversy is one approach, but it doesn't give you good hacker-karma.

    • by ebyrob ( 165903 ) on Monday September 23, 2002 @01:41PM (#4313276)
      Hmm... this sounds like an obvious troll, but since you've been modded insightful, I'll byte.

      The term "hacker" has a lot of confusion tied to it. Where I come from it's a term of respect for someone's raw technical abilities. A hacker is someone who is so good at taking things apart and understanding them that they can make gadgets and software do things the original designers never dreamed of. If you think everyone fitting that description without "proper approval" belongs in jail you've got another think coming.

      Maybe when you say hacker you mean someone who breaks into systems belonging to someone else without permission. Yes, that is a minor criminal act, much like trespassing. And there is no excuse for responsible adults doing such things without very good reason, but kids will be kids (Sometimes a system is so insecure this can happen by accident [zdnet.com.au]. )

      The term hacker in general usage today usually covers both the system hacker, who gains access to systems not belonging to them, and the software hacker, who takes apart software they have rightfully purchased on their own system. Classically, system hacking has been seen as wrong or illegal, but software hacking has always been accepted, and only disclosure has ever been at issue. The DMCA attempts to deal with both in one fell swoop and does so very badly. I take your comment to mean we should just enforce the law to its fullest even while it is changing in subtle and terrible ways.

      White hats hide information. It seems they *never* disclose exploit code. Black hats hide information. They only use vulnerabilities for themselves. It would seem to be only grey hats who hold the advancement of security to be important, by sharing their code and knowledge fully. In fact, I'd say it is highly unethical for a white hat to get a vulnerability fixed without ever disclosing it. Perhaps we need criminal penalties for that as well? It also seems a tragedy that white hats will never be inclined to disclose their exploit code even after a fix has been made. They just don't seem to realize that information sharing really is a powerful positive good. (Wasn't that the hacker ethic? [tuxedo.org])

      Actually, there are a whole host of other things white hats can do, and do, that are wrong. Like implanting spyware in a product, or being negligent in protecting customer information. I don't see criminal penalties for those...
    • You like that nice cheap x86 hardware you're using? You ever wonder how IBM ended up with competition in the market for x86 machines that could run DOS?

      Oh, what's that? The BIOS got cracked. Oh no, you benefit from the fruits of a hacker, shame on you! You should go to jail.

      Some people...
  • Cracker (Score:3, Insightful)

    by SquadBoy ( 167263 ) on Monday September 23, 2002 @12:13PM (#4312609) Homepage Journal
    You mean Cracker. While some of these people might be hackers, I don't think too many of them are. Please, I know everyone else uses the term hacker in this way, but can't we use the real term?
    • When I first read the blurb without reading the whole story, I was thinking to myself, "So do kernel hackers fall under the grey hat?"

      These writers really need a geek consultant to get their terminology correct.
      • Re:Cracker (Score:3, Funny)

        by red_dragon ( 1761 )

        Well, there's The Jargon File [tuxedo.org], a.k.a. The New Hacker's Dictionary, which the writers could presumably consult whenever they write anything with a geek factor greater than 0. However, even The File's contents can be tough to grok by non-geeks, so I've decided to condense it into a form more easily digested by non-geeks:

        Hacker good, cracker bad!

        Hopefully they'll get that one.

        (I make no warranty of accuracy of my statements.)

    • The problem is the general populace doesn't know enough to make a distinction between hacker and cracker. They hear one news report that says hacker == bad, and it has a negative connotation forever with these people.

      There's also the problem of the use of cracker as a racial slur in the south.
    • Crackers are hackers (Score:2, Interesting)

      by ACNeal ( 595975 )
      There is no real distinction between hacker and cracker.

      The tools, tricks, and procedures used by one are used by the other. The original hackers were the original crackers. It was fun to break into things (be it your radio, your telephone, your telephone network, or someone's computer system). Well, what's the fun in just being there if no one knows you were there? This is where data stealing, or defacing, came in. All the way back to when the hack/crack was as simple as making a scoreboard say MIT, when they didn't have a sports team, let alone one involved in the specific contest.

      To you and me, it is obvious where a prank ends and malicious intent begins. To the person who has to clean up the prank, it is all malicious. So to you and me, there is a distinction between hacker and cracker, but to the layman, they are the same. Not because they don't know any better, but because to them the outcome is the same. And now, with the DMCA and the like, the line is clearer.

      And before someone says kernel hacker, the prankster hacker is where the term originated. So if anyone is using the term incorrectly, they are probably the ones that should get the chastising. Kernel hacking is such a small and specific subset of the word, it isn't what the term was created for, nor does it truly represent the standard.
    • I was about to mod this down, but I decided that it would be better to respond.

      The "real term" is hacker, not cracker. Why? Because that's what the majority of the english speaking population says it is. Get used to it, because unless you can convince Joe Sixpack and his favorite news anchor otherwise, that's the way it's going to stay.

      You'd need to find another term anyway; cracker already has a commonly accepted meaning when it's applied to a person, and it has nothing to do with computers.

  • Forget the DMCA... (Score:2, Insightful)

    by Spazholio ( 314843 )
    What about the new legislation (I forget the name) that makes 'hacking' a federal crime, and heavily punishable? I think I remember reading that you can get a life sentence for hacking? What the hell? And I can guarantee you that they're just WAITING for another Kevin to come around so they can make an example of him:

    "See? Look what he did! He 'hacked' into someone's computer, and now he's someone's bitch for life."

    "But he didn't do anything damaging."

    "He was HACKING. That's BAD. He's gone for LIFE. Let that be a lesson."

    The lesson is that curiosity is now punishable by life in prison. Great. Don't get me wrong, traipsing into someone's computer isn't exactly ethically RIGHT (I don't care HOW wide open they leave it), but it's certainly not criminally WRONG.
    • by netphilter ( 549954 ) on Monday September 23, 2002 @12:20PM (#4312676) Homepage Journal
      traipsing into someone's computer isn't exactly ethically RIGHT
      I was under the impression that right and wrong were mutually exclusive. If it's not right then it has to be wrong. If you "traipse" into my computer you will go to jail. Pretty simple. Should I be able to pop the hood on your car if it's in the parking lot of Wal-Mart because I'm curious as to how your car is different from mine? What about your house? I'm interested in the architectural differences between our houses, so I break into your house because of my "curiosity." Please try to refrain from ridiculousness in the future.
      • by GlassUser ( 190787 ) <{ten.resussalg} {ta} {todhsals}> on Monday September 23, 2002 @12:37PM (#4312782) Homepage Journal
        Well, if you leave your car's hood propped open, with a flashing blue light on top of it. Or if you prominently display your house with open doors (commonly known as an "open house"; at least in America, they're kept near the entrance to new neighborhoods specifically so people can come in and examine the workmanship and... architecture).
      • I was under the impression that right and wrong were mutually exclusive.
        Is abortion right, or wrong?
        Should I be able to pop the hood on your car if it's in the parking lot of Wal-Mart because I'm curious as to how your car is different from mine.
        If you don't fuck anything up, no harm was done, regardless of whether your actions were illegal.

        An example more salient to this discussion: if your hood was open, and your windows were down, and your doors were open, etc., would you seriously expect your car to be untouched after you got out of Wal-Mart?

        • by polin8 ( 170866 )
          Even more appropriate: if your hood was open, and your windows were down, and your doors were open, etc., would you not want someone (a white hat) to come into Wal-Mart and warn you before someone else (a black hat) took your car and ran over grandma?
      • Is it really that simple?
        What if I am unknowingly in your computer because someone else is routing through a hole in your system? Or is storing images on your system that are linked to from a different site? Is requesting something from your computer wrong?
        It's not wrong for me to go to your door and ask to borrow a cup of sugar, nor is it wrong for me to request a ride from you.
        This is why there is such confusion with computers: so many different analogies can be made to prove any side of any argument. Computers really need more concrete examples that belong to them.
        Today, computers are designed to share information; the Internet is designed to share information.
        Really, we need to accept that, and focus on good security methodologies and technology implementation in all products.
        Gone are the days when computers were isolated machines. It seems obvious, but people can't seem to get that through their heads.
      • If it's not right then it has to be wrong

        Yes, and if it's not in light then it must be in darkness, right?

        I won't even go into the myriad of ideas or situations that exist in the grey area between right and wrong.

        If you "traipse" into my computer you will go to jail. Pretty simple.

        Ok, so what if I find a backdoor onto my own computer? Should I report it to the company? If I do and they do nothing to fix it what then?

        This shouldn't be hard for you to answer. After all, by your own statement there's one right answer and everything else is wrong.
      • by Quixadhal ( 45024 ) on Monday September 23, 2002 @01:13PM (#4313041) Homepage Journal
        Right and Wrong are only mutually exclusive in today's simplistic binary computers, and the minds of some simplistic people.

        Should you be able to pop the hood on my car in the Wal-Mart parking lot to see how my car is different than yours? No.

        Should you be able to pop the hood on my car to extinguish a fire in the engine compartment and keep it from destroying the vehicle, anything in it, and probably the vehicles on either side? Yes, please do!

        But... you still "broke into" my car. Do you want to go to prison and enjoy the tender thrusts of Bubba for your good deed?

        If you have an ftp server running on your machine, and I happen to notice it, I feel perfectly justified in connecting to that server. If it allows anonymous logins, I feel fine looking around. If not, I won't sit there and try to guess passwords, as that *would* be wrong.

        Yet, if after logging in as an anonymous user, I manage to get access to your filesystem, I would feel obliged to leave you a note, telling you that maybe / isn't the best anonymous ftp root. Would you send me to prison for that? If so, I'd suggest you seek counseling, since you obviously have some personal insecurities and ego problems beyond your server.
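
        (Purely as an illustration of the sort of check described above, here is a minimal Python sketch using the standard ftplib module. The host name is a made-up placeholder, and it should only ever be pointed at a machine you own.)

        from ftplib import FTP, error_perm

        HOST = "ftp.example.com"  # hypothetical host; substitute a box you own

        def check_anonymous_root(host: str) -> None:
            ftp = FTP(host, timeout=10)
            try:
                ftp.login()  # anonymous login; raises error_perm if refused
            except error_perm:
                print("Anonymous logins refused -- nothing to leave a note about.")
                return
            # If the anonymous root is '/', top-level system directories show up.
            listing = ftp.nlst()
            exposed = {"etc", "bin", "usr", "var"} & {name.strip("/") for name in listing}
            if exposed:
                print("Anonymous root looks like '/':", sorted(exposed))
            else:
                print("Anonymous area looks confined; sample listing:", listing[:5])
            ftp.quit()

        if __name__ == "__main__":
            check_anonymous_root(HOST)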

        The DMCA is an abomination. It creates a situation where one can be punished without actually doing anything beyond research. How many people who just happen to own Sharpies bought them with the criminal intent of listening to protected music CD's? Most of my sharpies pre-date the DMCA, yet I am technically a criminal because they COULD be used to circumvent copy-protection??? All of you out there who have screwdrivers -- you can use those to unscrew poorly secured locks. There, now I'm in trouble for disseminating information about circumvention, and you're all screwed for having the tools. Go Law!

      • netphilter is right that open doors don't make B&E legal. If you leave your door hanging open, and a robber comes in in the middle of the night, "the door was open" does not work as a defense strategy.

        That being said, the important problem with the new federal hacking bill(s) is the harshness of the punishment. You can spend more time in jail for cracking someone's podunk little website than for rape.
      • by amitola ( 557122 )
        Should I be able to pop the hood on your car if it's in the parking lot of Wal-Mart because I'm curious as to how your car is different from mine.

        No. Should you get life in the big house if you do that?

        I'm interested in the architectural differences between our houses, so I break into your house because of my "curiosity."

        If you did that, but did not take or break anything, do you think you would get life in prison?

        I was under the impression that right and wrong were mutually exclusive. If it's not right then it has to be wrong.

        This Axis-of-Evil crap, which you are parroting here, is one of the worst abuses that two useless Bush administrations have come up with. Before, it was the War on Drugs; now it's the War on Terrorism. Hey, future presidents! Got some societal ill that's obviously far too complex and pervasive for you to begin to address? Declare war on it!

        The rhetoric has not changed: You are either for us or against us! God bless the USA! (insert patriotic theme a la Animal House.)

        The methodology has not changed: Caught with a couple grams of an herb considered harmful by some? Lose your house, lose your car, do prison time comparable to assault or manslaughter. Caught using or (God forbid) writing a sequence of computer code that an American media corporation finds inconvenient? Lose your house, lose your equipment, and off to the cooler where you can only hope that someone like EFF or the ACLU takes up your case.

        [Y]ou will go to jail. Pretty simple.

        Indeed! As in, simplistic, oversimplified, and simple-minded. Who did more damage to life, liberty and the American Way--Kevin Mitnick or Kenneth Lay?

      • I was under the impression that right and wrong were mutually exclusive.

        So is posting to slashdot on company time "RIGHT"?
  • gray/grey hats (Score:4, Insightful)

    by cetan ( 61150 ) on Monday September 23, 2002 @12:16PM (#4312643) Journal
    It's a bit ironic that the c|net article tried to put such a boundary around so-called "gray-hat" hacking. I'm sure there are a number of "gray hats" who don't release the info about a security problem until after a suitable time period has passed and the company has either not responded or is not being speedy enough in issuing a patch.


    It seems to me that giving companies time to fix their holes is always a Good Thing (tm), but that a lack of public disclosure by a third party will only help obscure legitimate problems. People with attitudes similar to that of Peter Lindstrom* demonstrate, to me at least, a lack of care towards users and their potentially open/vulnerable systems. One of the easiest ways to get a slow company to fix something seems to be to talk about it in the press.


    * quote: ("If you are gray, you are black," Lindstrom said. "It's not that I don't understand what they are trying to do, but it comes down to what you are actually doing.")

    • I'm sure there's a number of "gray hats" that don't release the info about a security problem until after a suitable time period has passed and the company has either not responded or is not being speedy enough in issuing a patch.

      Actually, they consider them white hats (as do I). The sidebar for white hats reads:

      Information handling: Works with software companies to resolve vulnerabilities; won't announce vulnerabilities until company is ready or found to be unresponsive.

      Typos are mine. The source is a gif.

  • by raehl ( 609729 ) <(moc.oohay) (ta) (113lhear)> on Monday September 23, 2002 @12:17PM (#4312652) Homepage
    Suits are scared of the public knowing about holes in their product, because that could erode trust in the product. That's the short-term vision that motivates suit fear, and causes them to lash out with threats of lawsuits.

    Unfortunately, this fear overwhelms the suit's intelligence, which would tell the suit that in the long term, a climate where disclosing holes is discouraged merely limits access to the information to the so-called "black hats".

    Obviously, an environment where most of the flaws and holes are known only to the less scrupulous, because you've lawsuit-threatened the scrupulous out of finding the holes and telling you about them, just makes it that much easier for your programs to be hacked and your customers' data to be stolen - and then they definitely won't trust your product.

    • For me to poop on. (Score:3, Interesting)

      by FallLine ( 12211 )
      Suits are scared of the public knowing about holes in their product, because that could erode trust in the product. That's the short term vision that motivates suit fear, and causes them to lash out with threats of lawsuits.

      Unfortunately, this fear overwhelms the suit's intelligence, which would tell the suit that in the long term
      I'm not a suit, I'm well aware of the arguments on all sides and I was once involved in the hacking community, but I don't agree that the instant disclosure of new vulnerabilities (and especially the all too common practice of releasing corresponding exploit code with it) is good policy. Regardless of the speed of the vendor or development team to release an appropriate patch, the person who publishes a new vulnerability gives those who wish to hack (yes, I know and I don't care) into systems a huge advantage over the administrators of the world. With the publication of a new exploit to bugtraq or what have you, you instantly arm thousands of script kiddies with an attack that cannot be defended against (in the majority of cases anyway). Even in the best of situations, there is going to be some delay in the development team's response. Even in the best of situations, the sysadmin can only patch so many systems so quickly. Even in the best of situations, only so many admins are going to be available to update their systems in the first place. This is simply a totally unnecessary situation in the vast majority of cases. If the so-called hacker were a little more reasonable and a little less self-centered, then they would give the vendor at least a day or two to come out with a patch before announcing it to the world.

      The argument that you need to publish to the whole world instantly is absurd. Sure, a couple of vendors may not be responsive, but most are. Even in the cases where the vendor's response is not entirely adequate, the "harm" posed by waiting is negligible, because it's rather unlikely that some unknown hacker will discover the same bug and start exploiting it before then. Few would argue that the developers of Linux and a couple of other leading open source packages are slow to respond, yet we see this same instant disclosure of code, often without a patch (even in the cases where a patch is provided, it's not necessarily one that is suitable).

      The reason for this publication in the majority of cases is pretty simple: the publisher wants some recognition for his discovery. While this is understandable, there are other ways to gain recognition. For instance, he could disclose the fundamental details of the exploit to the public and/or a trusted third party on discovery, and maybe attach a checksum or PGP signature of the official advisory that he sent to the vendor (in case someone else tries to take credit for the particulars, the corresponding document could be revealed and proven to have been known to the discoverer at least when the first advisory was sent out). It may not bring him quite the same fame, but it would be something.
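
      (As a rough sketch of that checksum idea, here is a minimal Python example; the advisory file name is hypothetical, and a detached PGP signature published the same way would serve the same purpose.)

      import hashlib

      def advisory_digest(path: str) -> str:
          # Hash the advisory exactly as it was sent to the vendor.
          with open(path, "rb") as f:
              return hashlib.sha256(f.read()).hexdigest()

      if __name__ == "__main__":
          # Post this digest publicly when notifying the vendor; when the full
          # advisory is later released, anyone can check that it hashes to the
          # same value, proving the details were in hand at notification time.
          print(advisory_digest("vendor-advisory.txt"))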

      a climate where disclosing holes is discouraged merely limits access to the information to the so-called "black hats".
      Even if the so-called "white" or "grey" hats cease to disclose these vulnerabilities to anyone, it would be virtually impossible for a large number of black hats to keep an exploit to themselves without it getting back to the security community. It's human nature to brag and to leak. What's more, I would argue that very few blackhats have the sophistication to come up with original exploits themselves. They pretty much depend upon the more knowledgeable people who disclose the vulnerabilities to the public. In other words, the community of people holding exploits for vulnerable machines would be far smaller.
      • Even if the so-called "white" or "grey" hats cease to disclose these vulnerabilities to anyone, it would be virtually impossible for a large number of black hats to keep the exploit to themselves without it getting back to the security community.
        It's human nature to brag and to leak.
        There are several real-life examples of remote root exploits being held by a (relatively large) group of "black hat" hackers for several years before leaking out to the community at large. For example, there was a Solaris statd exploit that circulated for, IIRC, three years before it "leaked", resulting in a functional patch from Sun.

        What's more, I would argue that very few blackhats have the sophistication to come up with original exploits themselves.
        It only takes one.
        There are some very intelligent people coding for black hats. Many of the brightest people on the legitimate side of network security honed their skills as a black hat, then had a change of heart in the past few years as the threat of criminal charges grew larger, or after suddenly realizing that having a house, a wife, and kids changes your priorities.

        They pretty much depend upon the more knowledgable people that disclose the vulnerabilities to the public. In other words, the community of people having exploits over vulnerable machines would be far smaller.
        However, the pool of exploitable machines would be much much larger.

        Restricting public exposure of holes has been tried, and found wanting. Limited distribution of the details of holes was the unwritten law in the 1980's and early 1990s (anybody remember the 'core' list?). This is why the creation of Bugtraq in 1993 was such a big deal. Prior to that, vulnerability information was carefully controlled, distributed to a limited pool of "trusted" admins... including the "daytime personas" of a number of black hats.

        This approach did little to keep the black hats from learning about new vulnerabilities and writing exploits, and put little pressure on vendors to patch their software or pro-actively work to limit security holes.

        Full-disclosure may not be ideal, but it is better than the alternatives.

        • There are several real-life examples of remote root exploits being held by a (relatively large) group of "black hat" hackers for several years before leaking out to the community at large. For example, there was a Solaris statd exploit that circulated for, IIRC, three years before it "leaked", resulting in a functional patch from Sun.
          Sorry, but I think you're wrong. I knew of statd exploits many years ago. The statd exploits were publicly known vulnerabilities for a long time; most admins were just too lazy to patch their systems. Please be more specific if you wish to use this example. Which group? How many people? When did they have it? Which versions of Solaris were affected? When was the vulnerability (not necessarily the exploit) made public?

          It only takes one.
          Sure, and it only takes one person to leak. You can hardly have a group of 30 people or more and not have a leak after a week or two. So the question is something like this: would you rather have 30 hackers attacking the same number of vulnerable targets for a slightly longer period of time, or 20,000 script kiddies (plus assorted people with more skills) for slightly less? You do the math. I'd certainly take the 30, and that's assuming that the vendors are significantly less responsive (a premise with which I disagree) merely because you give them, say, a two-day lead time.
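
          (To make the "do the math" concrete, here is a toy Python calculation; every number in it is invented purely for illustration and not taken from any real incident.)

          # Crude comparison of total attacker-days of exposure under two policies.
          attackers_private = 30      # assumed to hold the bug before any disclosure
          attackers_public = 20000    # script kiddies et al. after instant full disclosure
          extra_days_private = 9      # assumed extra exposure while the vendor patches quietly
          days_public = 7             # assumed exposure window even with instant disclosure

          print("attacker-days, vendor gets a head start:", attackers_private * extra_days_private)
          print("attacker-days, instant full disclosure:", attackers_public * days_public)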

          There are some very intelligent people coding for black hats. Many of the brightest people on the legitimate side of network security honed their skills as a black hat, then had a change of heart in the past few years as the threat of criminal charges grew larger, or after suddenly realizing that having a house, a wife, and kids changes your priorities.
          Well, this can quickly unravel into a semantic argument, but I disagree. Very few people who are not disclosing to the public or to the vendors have the ability to write their own exploits. Whatever hat you wish to put on them is an entirely different argument that I'm not interested in. I won't debate that many sophisticated people had their start in hacking, but the more sophisticated people quickly outgrow hacking into other people's servers for the sake of it as their skills develop. What fun is it to hack a bunch of servers with already-known exploits (even if you created them) when you can do something that is actually intellectually challenging (e.g., discovering your own) and do it mostly above board while you're at it, not to mention profit from your legitimate fame? (Sure, someone on the fringes may engage in the occasional hack, but not en masse.) Yes, there is some undeniably blackhat code, but it's generally lacking in originality.

          Restricting public exposure of holes has been tried, and found wanting. Limited distribution of the details of holes was the unwritten law in the 1980's and early 1990s (anybody remember the 'core' list?). This is why the creation of Bugtraq in 1993 was such a big deal. Prior to that, vulnerability information was carefully controlled, distributed to a limited pool of "trusted" admins... including the "daytime personas" of a number of black hats.

          This approach did little to keep the black hats from learning about new vulnerabilities and writing exploits, and put little pressure on vendors to patch their software or pro-actively work to limit security holes.
          In much the same way (as you and others argue this point), "democracy" was tried by, and subsequently failed for, the Greeks (and others), so it could have been (and was) argued that it was the wrong path and should have been avoided in favor of monarchy, dictatorship, or the other extremes. Of course, we all know the United States and other democracies have since succeeded magnificently. The reason? Subtle and important differences in the governance, and a different situation (class, geography, economics, etc). You can't neglect these important differences:

          Firstly, what I'm asking for is not the same as the policy with CERT and other bodies. These people pretty much gave CERT the information and then walked away from it. Instead, I'm giving the vendor a reasonable period of time to respond. If they fail to respond in that allotted time, then the hacker always has the option of making the same disclosure that they do today, only a day or two later. The vendor has every incentive to respond before the hacker does this.

          Secondly, you can hardly compare the situation today, with the growth of the internet (and lists devoted to distributing this sort of information to the public) and the increased interest in security, with that of 10+ years ago. It's an apples and oranges comparison.

          Thirdly, I've yet to see any objective evidence that full disclosure has been any more effective in practice (and yes, I was around and quite aware then). Maybe you can argue that the sysadmins and/or users are a little better armed today with knowledge, but the script kiddies are also armed in that same stroke... The difference is that the script kiddies are armed first, with real weapons (well, code at least), while the users only have knowledge that's of questionable value (even with this full disclosure, and even if the vendor tries as hard as it can, it may take more than a day to come out with a patch or an acceptable workaround).
  • by GuyMannDude ( 574364 ) on Monday September 23, 2002 @12:24PM (#4312708) Journal

    Facter writes "There is a great article at CNet..." but I wasn't so impressed. This example of Kevin Finisterre isn't really that amazing. Finisterre's employee publicly disclosed the vulnerability. You gotta expect to piss off HP when you do something like that. Look, I'm a fan of open-source software and I understand that publicly disclosing software bugs is one way of motivating a lazy company to plug those holes, but I'm not sure you can really defend this ethically. If you find a bug in Company A's software, then let A know about it. If A decides not to do anything about it (or if they are taking longer to plug the hole than you thought), I don't see how you are morally justified in leaking that info to the world.

    Finisterre, who was not hired by HP, now says he'll think twice before voluntarily informing another company of any security holes he finds.

    This is just silly. If he had just informed HP, there wouldn't have been a problem. However, his employee decided to inform the entire world, and that's what triggered HP's retaliation. If Finisterre and his employees restrict themselves to informing the company, they should be okay.

    The rest of the CNET article is okay. But starting off with such a stupid example really weakens the story. They could have started off this story with the Sklyarov example. That would make a stronger case for the idiocy of the DMCA.

    GMD

    • So companies have the right to prevent my freedom of speech?
      If I find a hole, I should be able to tell anybody I want about it, because it is speech.

      If I found a hole in a major software product that could be damaging, would I tell the company first? Yes, because I believe that would be the moral thing to do, but freedom of speech is not about morals; it's about being able to say/write what I want to, even if it is not what society, or an individual, or a corporation, thinks is moral or right.
    • If A decides not to do anything about it (or if they are taking longer to plug the hole than you thought) I don't see how you are morally justified in leaking that info to the world.

      Just because you found a hole, it doesn't mean that you are the ONLY one to find the hole. It's possible that any hole you find is an actively exploited hole.

      While I'm not familiar with Kevin's case, I've been in a similar situation before. Bank A would not patch the holes in their banking websites. I notified them again and again. After months of waiting, I went public. The problem was solved the NEXT DAY! It was simply a matter of getting the right people to make it a priority. I feel that this is completely morally justified; I don't think that the bug was exploited, and I don't think that USERS were harmed just because it was public. It may, however, have hurt Bank A's reputation.

        • While I'm not familiar with Kevin's case, I've been in a similar situation before. Bank A would not patch the holes in their banking websites. I notified them again and again. After months of waiting, I went public. The problem was solved the NEXT DAY! It was simply a matter of getting the right people to make it a priority. I feel that this is completely morally justified; I don't think that the bug was exploited, and I don't think that USERS were harmed just because it was public.

        Congrats on getting the bank to do something. And your sentence makes it clear that you feel that you deserve the credit for getting the bank to fix this.

        Now I am wondering: what if the bank did not fix this problem the next day? And what if some cracker/con-artist used your publicly disclosed exploit to cause significant damage to the accounts of one or more of the bank's customers? Would you be willing to take the blame for this? Yes, the bank should have fixed the problem, and you gave them ample opportunity to solve the problem themselves. But I would argue that, yes, you do bear some responsibility in this case. But that's just my opinion. I am curious what yours is.

        You are very eager to take the credit for a case where a public exploit resulted in something beneficial. Would you also be willing to take the blame if your actions had had disastrous consequences? If so, then I salute you as a fair man/woman/slashkitty. If not, I wish I could smack you upside the head.

        GMD

        • Now I am wondering: what if the bank did not fix this problem the next day? And what if some cracker/con-artist used your publicly disclosed exploit to cause significant damage to the accounts of one or more of the bank's customers? Would you be willing to take the blame for this?

          The fact that an attack is performed shortly after the weakness is disclosed does not mean that (a) the attack would not have been performed had the weakness not been disclosed or (b) that the disclosure had any relationship whatsoever with the attack.

          What's very clear, however, is that the correction of the defect has a direct, causal relationship with the public disclosure.

          Certainly, public disclosure increases the odds of an attack, but it does not increase them from zero, and disclosure which results in the correction of the defect reduces them from the previously unknown value to zero.

          In most cases, the bank's customers are better served by public disclosure. For one thing, it lets them know that their bank behaves irresponsibly with their money, and gives them a good hint that they should take their business elsewhere.

          I would agree that it's irresponsible to publish software that automates an exploit, and that doing so might place the author at fault, to some degree. Publishing the vulnerability on a secret crackers-only forum would be thoroughly reprehensible. And it's both polite and good for the bank's customers to give the bank a chance to fix the problem themselves before going public. But if the bank isn't willing to protect its customers unless its nose is publically rubbed in the problem, then the responsible thing to do is to go public.

          You are very eager to take the credit for a case where a public exploit resulted in something beneficial. Would you also be willing to take the blame if your actions had had disastrous consequences?

          You have it backwards. The poster would be at fault if he had continued to keep it quiet until the customers' accounts had been emptied. The only difference is that there would be no one trying to apportion blame to him, so that is an /easier/ approach. But a much less moral one.

        • I do see your point; however, I will throw it back to you like this: in the same situation, where you knew of a hole and did not disclose it to the public, would you feel guilty if it was found by someone else and exploited anyway? Your LACK OF ACTION can have consequences as well. Would you take the blame for not informing the public?

          It's unfortunate that the legal system tends to look more at actions instead of inactions. Did you ever see the final episode of "Seinfeld"?

          I feel that there is less RISK to users if they know which company / product / website is more risky to use, and know which companies keep up to date on fixing things.

          In the end, in my case, the type of bug in the bank's site had been listed in CERT for 2 years, along with how to fix it. I think that it's clearly the company's fault for not building a safe website.

        • Now I am wondering: what if the bank did not fix this problem the next day? And what if some cracker/con-artist used your publicly disclosed exploit to cause significant damage to the accounts of one or more of the bank's customers?

          If I went to my bank and noticed the door to the vault was open, I would tell the manager about it.

          If I came back the next day and it was still open, I would close my account. I would also feel ethically obliged to tell all the other customers at that bank that their money isn't secure.

          A: Do you agree with that, in the terms of the analogy? (physical bank; physical door)
          B: Does the analogy become any different when a computer is involved?

          One person, and one person only, is responsible for a malicious exploit: the person who performed the exploit.

          Networking protocols were designed for sharing information. There are (relatively) easy ways to ensure that only authorized recipients get information through these protocols. If a security system allows me access to parts of an internetwork, I have no reason to think I'm an unauthorized recipient of the information on that network.

    • If you find a bug in Company A's software, then let A know about it. If A decides not to do anything about it (or if they are taking longer to plug the hole than you thought) I don't see how you are morally justified in leaking that info to the world.

      Wrong!! Read the above statement again. Still wrong.

      Bugs and exploits make us (as users of the software) vulnerable - and because the software in question (HP Unix) is closed source, we are dependent on the software maker to fix these exploits. If they choose not to do so, or take their time, then we are obligated to ourselves and other users of the software to push the issue.

      Any EULA or law that prevents this is flawed and needs to die (die! die!).

      Now, the Finisterre story may still not have been the best argument - the article does say that HP was creating a patch - but there's no mention of how long it took them.

      just my opinionated 2 cents...
    • What if this was ANYTHING other than the software industry?
      What if HP made the car you drive your family around in?

      Of course we should TRUST the corporations to fix all the problems with their products. Why wouldn't they? And of course, don't let the public know that the new SuperFastExpensive SUV can explode if hit in the right spot; why should they know about that???
  • Regardless of the arguments over the ethics of open-disclosure handling of vulnerabilities, the DMCA does not apply in this case, and was just a paper tiger. The DMCA forbids reverse engineering, for example...but ONLY with regard to circumventing copyright-protection mechanisms. While not the most polite thing to do to a Tru64 box, rooting it does not constitute a copyright violation.

    I once sat down with a Talmudic scholar (I'm Jewish by choice, and find their ethical constructs best for tackling ethical questions) to discuss the ethics of hacking. The farther we got into it, the more I realized that "hacking," as used to define the uninvited attack of another person's system or systems, is fundamentally unethical. You want to make the world a better place? Stand up your own system and practice on it to find problems to fix.

    I also think that public release should be delayed until the vendor addresses the issue. But if the vendor is unresponsive...I think releasing to the public is critical. I've seen situations where I've found a vulnerability and was prohibited from disclosing by an NDA with the client. Every time, the vendor failed to address it within 12 months. The times when it could be released, they were all over it like white on rice.
  • We need the grays (Score:4, Interesting)

    by spiro_killglance ( 121572 ) on Monday September 23, 2002 @12:47PM (#4312840) Homepage
    As a sysadmin, I say we need greys to give independent reports on the software and systems we run.

    In-house security research is typically poor, and a lot of the time (cough, Microsoft, cough) companies refuse to make any information about flaws in their software public. Which means that without the greys, the blackhats will be exploiting flaws, and we poor sysadmins will have no idea how they're doing it, or how to keep them out. Without the greys, where would CERT advisories come from?

    Secondly, in the case of open source software, the public and the developers/owners of the code are the same group of people (in theory, if not in practice), so it's impossible to make any distinction between grey and white hats in this case.

  • by Your_Mom ( 94238 ) <slashdot@NOSpaM.innismir.net> on Monday September 23, 2002 @12:50PM (#4312861) Homepage
    Prolly too late to get modded up, but I have plenty of Karma anyway:

    Whitehat: Finds a hole on your box, breaks in, writes a nice note to the admin about patching it.
    Blackhat: Finds a hole on your box, defaces your homepage.
    Script Kiddie: Hears about a hole on your box via AOL Instant Messenger, becomes utterly perplexed as to why his IIS rootkit won't work.

    WH: Sees your /etc/passwd or /etc/shadow posted someplace, sends you a copy and tells you why it's bad.
    BH: Sees the same, breaks in, rm -rf /
    SK: Sees the same, pings your box, brags about it on IRC.

    WH: Sees a probe coming from your machine, finds out it's hacked, drops you a note.
    BH: Sees the same, roots your box, roots the original attacker's box, kills him, kills his family.
    SK: Gets rooted.

    DISCLAIMER: This is humor, thank you
  • by Suppafly ( 179830 )
    The whole conversation makes a lot more sense if you drop the hat references... sure, it's easy to lump people into categories of white, black, gray, etc. hat. But in reality there are crooks, good guys, and crooks who play good guys. It used to just be a hax0r description to use the hat verbiage... it's unfortunate that it's passed into mainstream security usage... I personally have a hard time taking anyone seriously who describes themselves by the figurative color of their hat...
  • security VS fame (Score:3, Insightful)

    by phorm ( 591458 ) on Monday September 23, 2002 @12:56PM (#4312908) Journal
    I think one of the big questions when accusing somebody of "hacking" should be intent. While this is of course one of the hardest things to judge, hackers tend to fall along the lines of:
    • Fame: Doing something for popularity or fame
    • Profit: Doing something for profit
    • Personal gain: Doing something to gain personally or to lessen a personal expense, either by not paying for software/services or otherwise.
    • Entertainment: Simply because the hacker has nothing better to do with his/her time
    • Security: Doing something for the purpose of furthering security, etc.
    • Revenge/attack: Self-explanatory
    • Script kiddies or typical hack-it-cause-I-can types would tend to fall into "Fame" or "Entertainment".

    • If you have somebody who's informed a company of their problem, waited for them to do something, and then finally anonymously or semi-anonymously posted the problem, then we have the "security" types that are looking out for all of us. Somebody who posts it as "hey look at me, I hacked XXX/YYY and somebody should fix it" is just looking for fame or possibly profit.

      I think that if you can hack a system and then offer a viable fix/solution without the indicated repercussion of telling everyone in the world what the problem is, then you shouldn't be blacklisted as a "black hacker".

      However, if you go off and tell everyone that so-and-so's software/network is insecure because they didn't pay you, then you're no better than an extortionist or a crook.

      If you've bypassed security on a product that was hindering legitimate users, we have another really hard area to define. Anything that gets done to a company's product generally should be done with the grace of the producing company.

      Perhaps one of the biggest problems is those who just jump out and post something on the internet without thinking of the ramifications for the owner/users of the product. If you post a security vulnerability and fix, you may be allowing a certain number of people to fix the problem, but you're also letting all the hackers out there know where there's easy prey among those that don't see the fix soon enough.

      By the same token, if companies legally lambaste anyone who hacks and then offers a solution to their woes, it only makes things worse.

      Corporations with insecure products/networks need to recognise that running for the lawyers isn't always the best solution, while those doing the hacking need to recognise that extortionist/fame mongering/otherwise damaging tactics aren't helping either.

      If more companies can work with legitimate hackers in a productive way (as stated in the article, many have internal hackers), without inviting dozens of script-kiddies to poke at their servers, then perhaps one day the important people (we, the end-users) will be able to legitimately use the products we pay for, in a meaningful manner, and without security woes.

      It's not what you can do, it's how you do it that counts - phorm
  • by wytcld ( 179112 ) on Monday September 23, 2002 @12:57PM (#4312913) Homepage
    Let's say you notice that my Ford Pinto is likely to explode. But there's a law in place that says that Ford can sue you if you tell me, because that violates their crash security, which consists in not letting people who might be malicious know that the rear end of a Pinto could be a tempting target.

    Now let's say you notice that my HP server is likely to be compromised. But there's a law in place that says HP can sue you if you tell me, because that violates their cracker security, which consists in not letting people who might be malicious know that the rear door of an HP could be a tempting target.

    Exactly why should HP deserve a legal protection that no sane person would give to Ford, when in both cases the customers are far better off with the knowledge?

    • I think the better analogy would be knowledge that if you use ABC tool, the Ford Pinto's door comes right open and thus it's easy to get inside. The law is kinda iffy about this sort of information; in general the prosecution needs to prove intent to harm.

      The problem with /. is people confusing getting sued with losing a suit, and getting prosecuted with getting convicted.

  • I would have thought most of the white hats would give up, seeing how most people seem to wear dark black sunglasses when determining how white/gray/black a hat is....

    Kjella
    Almost all major players in the security field nowadays sell early access to information on unpublished vulnerabilities (or let others sell it). Therefore, "responsible disclosure" is important: not only do vendors get a comfortable time frame for dealing with problems, but the information is also more valuable if its distribution is limited for a longer period of time.

    Of course, this no longer has much to do with security; it's all about making a profit and a feeling of security. After all, when you learn about a new, critical defect in Windows or some component of the GNU/Linux system, there's already a patch (at least in most cases, and the other ones are so obscure that you don't understand what's going on, so you really can't be bothered by them). So it's not that bad if you run software which is poorly designed and sluggishly implemented, is it? The whitehats will keep everything under control, and thanks to the new DMCA law, we can safely tell them from the blackhats!

    sigh

    (And BTW, the "responsible disclosure" document is referenced quite a lot for a withdrawn Internet Draft.)
  • by werdna ( 39029 ) on Monday September 23, 2002 @01:03PM (#4312966) Journal
    While the ethics of cracking have always been interesting, the legality has never been an issue. It is, and for years has been, a crime, essentially, merely to knowingly obtain unauthorized access or to exceed authorized access to a computer owned by another. [Alas, many companies have injudiciously asserted these criminal charges against former consultants, merely to beat a bill with a nasty counterclaim.]

    However popular it is to join the bandwagon railing against the DMCA anti-circumvention provisions (people seem to forget that the DMCA is itself an omnibus of technical and non-technical issues, good, bad and indifferent, and ranging from boat-hull designs to ISP immunities), the article's focus on DMCA is misplaced -- almost irresponsibly so.

    The big guns against cracking conduct have been in place for years, and well before DMCA: The Computer Fraud and Abuse Act, the ECPA and countless state computer crime and regular theft statutes. All of these tend to be much broader in scope and reach, and far easier to prove and enforce. After the enhancements (from a prosecutor's point of view) made in the USA-PATRIOT Act, CFAA has become an even more powerful tool. The FBI didn't need a DMCA to get Kevin.

    At the end of the day, the HP nonsense was just that: nonsense. The reason the HP DMCA threat was never pressed was simple -- it was a no-play claim, and everybody knew it. However, there are and have for years been a kazillion laws to beat up on anybody who engages in unauthorized access or exceeding authorized access of any kind, and regardless whether the conduct amounts to any circumvention of an effective copyright protection scheme.

    I'm not arguing cracker ethics, or defending DMCA. I'm simply saying that the focus of the article is wildly misplaced. DMCA is just barely an interesting curiosity in the enforcement quiver -- so far as real cracking goes, it isn't even a fourth-string weapon except in the oddest cases.
    • The reason the DMCA is particularly pernicious, however, is that it criminalizes the dissemination of "hacking tools", not just the act of hacking itself.

      True, the DMCA is narrower than some of the other laws you cite because it is specific to security systems designed to protect copyright, and not security systems in general.

      The article unfortunately confuses two gray hat actions: breaking into a system to report to the owner about its vulnerabilities without permission (which should be illegal in my opinion), and releasing exploit scripts to the public when vulnerabilities are found in commonly used operating systems or servers. I think the latter should definitely NOT be illegal, for First Amendment reasons if no other.

      The DMCA stands apart from the other laws you cite, in that it criminalizes the latter activity (if the security system is primarily used to protect copyright.) The other laws only criminalize the former activity.

      • The reason the DMCA is particularly pernicious, however, is that it criminalizes the dissemination of "hacking tools", not just the act of hacking itself.

        You will search in vain to find "hacking tools" among the proscribed devices set forth in DMCA. Only particularized devices are involved there, and very few of them have ANYTHING to do with cracking.

        I don't disagree that the DMCA is pernicious, only that the conflation of it with these practices is bad karma for those who would like to criticize DMCA -- it's technically weak as an argument, and it generally associates violators of the DMCA with an image not favorably received by the public at large. If you want to beat down the DMCA, don't blame everything on it, like some technological "El Niño."

        There is simply no reason to think that releasing an exploit script directed to a technical vulnerability would be a DMCA violation -- and the HP backtracking that immediately followed their ludicrous overreaching is more evidence that DMCA is not implicated than that it is.
        • You will search in vain to find "hacking tools" among the proscribed devices set forth in DMCA.

          Explain that to Dmitri Sklyarov, who spent more than a month in jail for releasing a hacking tool, which unlocks Adobe e-books.

          • Explain that to Dmitri Sklyarov, who spent more than a month in jail for releasing a hacking tool, which unlocks Adobe e-books.

            That's just silly. This is some new use of the word "hacking tools." Certainly, Elcomsoft [elcomsoft.com] doesn't think so -- the words "hacking tools" do not appear on their web site.

            Sure, you can try to define yourself out of this argument by treating the word "hacking" to mean whatever you like. But that's the same logical error -- you are still conflating the same concepts. If you define "hacking" to include the activity of trafficking in software for "unlocking Adobe e-books," congratulations! You won the argument. But so what? My point is that DMCA is not directed toward the conduct traditionally known as hacking by most of us (clever manipulation of technical systems) nor the conduct currently known as hacking (cracking). The DMCA anti-circumvention proscriptions may overlap with some cracking conduct, just as any number of other laws do -- that doesn't make it anti-cracking legislation, for the reasons stated earlier.
      • Or in fewer words:

        The DMCA criminalizes free speech and thereby nullifies the First Amendment.

        And there were already laws aplenty against nefarious hacking; for what did we need another one??

  • Ethics (Score:2, Insightful)

    by mrcparker ( 469158 )
    Since when is giving out information unethical? If I find a flaw in something -- anything -- and somebody asks me about it, I am going to tell that person what the flaw is. If my wife buys a car that she will be travelling around in with my little girl, and she asks if there are any problems with the car, the salesman has to tell her about any flaws. If I find a problem with the tires that causes the car to flip, I am going to tell people about it. This is the nature of information.
    • "Since when is giving out information unethical?"

      Ever since you subscribed to a utilitarian view of ethics and there was a better option, I should think.

      "If I find a problem with the tires that causes the car to flip I anm going to tell people about it."

      Before or after it's flipped? And who exactly are you going to tell?
  • by xeno ( 2667 ) on Monday September 23, 2002 @01:30PM (#4313183)
    Bull. There's plenty of room in the grey-hat region, and plenty of population in it. The wiggle room for those who crack systems/software and then publicly announce the results is getting tighter. However, there are an awful lot of people whose main concern is simply sharing results of bug/flaw discovery or other necessary activities that aren't good for vendor business models. The fact that the DMCA seeks to redefine discovery and community notification as reverse-engineering and criminal collusion doesn't do a thing to shrink the number of people (admins, architects, programmers, dbas, etc) who simply need to do these things to do their jobs. The grey hat is still a thinking person's hat -- one abides by the letter of the law as best one can, and finds ways around the obtuse or wrong-headed sections to accomplish primary goals of systems operation, data protection, and other work processes. Some prefer to skirt the line with black-hat-dom, while others simply protest bad law. Ain't nobody a white hat unless they utter phrases like "He was arrested so he must be guilty" or "The law is always right."

    Not too long ago, I sent a note to several of my friends about a conflict [theregister.co.uk] I saw between the DMCA-esque proposed Microsoft security certification -- requiring software bug hiding and notification of the software vendor before notification of the affected client -- and the codes of ethics binding those with CISA and CISSP certifications -- both of which require protection or notification of the potential target/victim. (My personal favorite part of the ISC2/CISSP code is "Tell the truth," which is anathema to the DMCA/bug-hiding camp.)

    Of course, since DMCA enforcement tends towards the corporate view of things (property, ownership, patents, royalties) rather than the societal view (ethics, trust, truth, community), if I follow the vendor-independent (societal) path, I get labelled as a grey-hat or a black-hat right out of the starting gate. Have I personally cracked and distributed software? No. But do I swear to uphold the right of the consumer to know of flaws in their software or implementation? Of course I do -- it's the core of my job as a consultant. But doing so may label me as a criminal, and not doing so is unethical and unprofessional. As the article points out, all you can do is try to do the right thing. Currently that may be illegal.

    Maybe some of us will go to jail for it, but that's what it'll take to change or repeal ill-formed laws such as the DMCA. Nothing induces judicial scrutiny like a situation where a judge is embarrassed to enforce a bad law against a just person. But for anyone contemplating the notion of a "test case", keep in mind that the ACLU only picks up your legal fees if you keep your nose clean while you're doing the (illegal) right thing.

    J

  • "Narrow them down to a simple choice. Make them think it's their own." - Luke, on salesmanship
  • If I bought a truck, and the seatbelt linkage into the truck's frame was faulty and likely to fail in a crash, then I suspect I'd write a letter to Consumer Reports reporting it. I'd probably also write a letter to the company. The fact that I would have had to take apart a portion of the truck to find the fault would make NO difference. No one would say it was illegal, no one would complain that I was 'gray hat' or 'black hat'. I bought the truck, the truck had a problem, I told people. Big deal.

    If I took apart someone else's truck without asking for permission, I suspect I'd just get my ass kicked. But, charges could of course be filed by the owner of the truck as well.

    Why is it different with computers? Why are there people here saying that someone who looks at something they've legally purchased and finds flaws with it is ethically in the wrong? And why should they not be able to speak up about it? The article is about a guy who reverse-engineered something on his own system. He didn't hack anyone else's system. What is wrong with that? I'm seeing tons of posts saying that all gray hats are black hats, or that ethically gray-hat hacking is wrong although they do it anyway, and lots of garbage like that. What is gray at all about experimenting on your own machine when you've purchased the software?!? The whole gray/black/white hat stuff to me only applies (in any way, even if it is all b.s.) when you're poking into *other* people's computers.

    Yes, if you find a hole, it's polite to everyone to give the company a chance to fix it before going public. But - that's a polite social thing to do. I see nothing wrong with telling an emperor or anyone else that they are butt naked. And if I feel like it, I should be able to tell everyone that the emperor is butt naked without asking his permission. That's called freedom of speech.
  • WHITE
    Hacks systems at the request of the system owner to find vulnerabilities. Helps system administrator eliminate obvious holes first. Gets a paycheck and free lunches from the IT manager.

    GRAY
    Inconsiderately hacks systems without the knowledge of the system owner, blinded by his good intentions. Notifies system administrator about holes in the system. Receives suspicion and a subpoena, gets free representation.

    BLACK
    Cracks systems in search of personal booty and root exploits. His back-door scripts leave no traces. Notifies the world by rerouting all requests for the public site to goatse.cx. Never gets caught, gets all the chicks.
  • ... and so should yours, if you're worried about this stuff. Go here [eff.org] and send them a hundred dollars. You'll be glad you did.
  • If the community keeps all the hacks secret, all software will be secure. No one will need to patch their systems. Personal firewalls will no longer be needed. Anti-virus will be a thing of the past. I think this is what the White House and the rest of the insecurity crowd are really trying to tell all of you: don't share and don't hack. That way no one knows about a hole -- i.e., China will be the only place that can hack into your system. Well, that and the government, the MPAA, and the RIAA. Remember, if you don't know they are doing it, it's not illegal. So if you are smart enough to find a hole, don't tell, and OWN THE SYSTEM. At this rate it won't be patched and they most likely won't even know you're there. This is how our government is going to protect us.
  • I'll probably be modded down for this, but that's okay.
    I agree that if you're gray then you're black. You might be black with good intentions... but you're still black.

    It's like breaking into a store simply to warn the store owner that you could break into the store -- no different. Or, to use a popular theme in other postings, regarding a house with an "Open" sign on it: NO! It's more like going up to a house and trying all the doors and windows till you find one that is open.

    Unless you are specifically asked by a company owner or software maker to exploit security holes, you shouldn't be doing it. If you're concerned about the security of the source, then choose an open-source alternative or write your own. If you're using a COTS product, then ask the publisher for permission to test the software for security holes; most will allow you to as long as you're a paying customer. If they don't, you probably don't want to be using that software vendor's application anyways.

    It's all about property, people, and respecting people's privacy. Yes, it would be a utopian society if everybody could be online without fear of their network being compromised, but that's not reality, obviously. But we don't need vigilantes running around exploiting everybody's software or network just because they can. It's not research, it's criminal; you've breached somebody's privacy even if you didn't do damage. If you want to practice, set up your own private network with software that allows you to do so. And no, I don't agree at all with the penalties associated with violations of the DMCA. They are outrageous and should be removed, and educated individuals should establish new ones.
    • thedarkstorm writes:
      I agree that if you're gray then you're black. You might be black with good intentions... but you're still black.
      I strongly disagree. The law may define more and more actions as being unlawful (see the DMCA), yet those actions may still be ethically/morally right, and socially acceptable. The US has many such rules, where the law says one thing and society at large says another.

      Unless you are specifically asked by a company owner or software maker to exploit security holes, you shouldn't be doing it.
      I'm not exactly a "white hat" by most definitions.
      My job (and my hobbies) involves legally acquiring software and hardware and testing it, tearing it apart, looking for weak spots.

      That includes purchasing items like a Cisco PIX or a software firewall, testing for security holes, and often extends to writing and executing working exploits for these holes, on legally acquired copies running in my test lab.

      These actions may violate the vendor's EULA. But they do not ever involve penetration of the network, host, or data belonging to an innocent third party. Do these acts make me a black hat?
      If my customer agrees, I report issues to the vendor. If they do not respond, and if my customer agrees, I will post some or all of the information to a full-disclosure list. What color is my hat now?
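
      To make the private-testbed point concrete: the kind of lab work described above can start as small as a TCP connect-and-banner check, run only against equipment one owns. A minimal sketch, assuming a hypothetical TEST-NET address and an arbitrary port list (nothing vendor-specific, and not anyone's production network):

      import socket

      # Hypothetical lab-only target: a TEST-NET-1 address standing in for a
      # firewall or server on a private testbed that the tester legally owns.
      LAB_HOST = "192.0.2.10"
      PORTS_TO_CHECK = [22, 23, 80, 443, 8080]

      def grab_banner(host, port, timeout=2.0):
          """Try a TCP connect and read whatever greeting the service sends."""
          try:
              with socket.create_connection((host, port), timeout=timeout) as s:
                  try:
                      banner = s.recv(128)
                  except socket.timeout:
                      banner = b""
              return True, banner.decode(errors="replace").strip()
          except OSError:
              return False, ""

      if __name__ == "__main__":
          for port in PORTS_TO_CHECK:
              is_open, banner = grab_banner(LAB_HOST, port)
              status = "open" if is_open else "closed/filtered"
              print(f"{LAB_HOST}:{port} {status} {banner!r}")

      Aim those same few lines at somebody else's network and it stops being research -- which is exactly the distinction the hat colors are supposed to capture.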

      If you're concerned about the security of the source, then choose an open-source alternative or write your own. If you're using a COTS product, then ask the publisher for permission to test the software for security holes; most will allow you to as long as you're a paying customer. If they don't, you probably don't want to be using that software vendor's application anyways.
      Neither I personally nor my employers trust the publisher to do their own testing and report honestly on the results.

      While it may be in violation of the law or a civil transgression to "test" software after purchasing a legally licensed copy, I do not agree that such testing turns a grey hat to black.

      But we don't need vigilantes running around exploiting everybody's software or network just because they can. It's not research, it's criminal; you've breached somebody's privacy even if you didn't do damage.

      I've breached whose privacy? That of the vendor who wrote the software or designed the hardware?
      If I legally acquire software and hardware, install it on my private testbed, then exploit the software (locally, in my "sandbox"), it most certainly is research. It may also be criminal. If I take the results of my tests and publish them, that too is research, and under the DMCA or certain EULAs, may be unlawful.

      Regardless of how the laws are contorted to depict my actions, I will not accept the label of "black hat" on this basis.

  • by alexjohns ( 53323 ) <almuric AT gmail DOT com> on Monday September 23, 2002 @02:23PM (#4313569) Journal
    You can call me white, gray, black, puce, ochre, whatever. I already break the law, every day. I speed; roll through stop signs; jaywalk; litter; drive after having a beer or two with dinner; try to get every conceivable deduction on my taxes; copy software and music CDs. In the past, I experimented with illegal drugs; shared prescription drugs; bought alcohol for minors; participated in sodomy in at least one state that outlaws it. Shit, the list's just too freakin' long.

    I'm already a criminal. I imagine most people on here are. Who the hell hasn't broken a law today? We're in a drought here in Maryland. Water a plant today, did ya? Broke the law. Have you let a teenager bum a cigarette? Criminal.

    Why should anyone care what color hat they supposedly wear? It's an arbitrary label. I call myself a hacker. I don't break things. I don't steal things. I try not to hurt people I like. In my opinion, that makes me an OK guy. Of course, opinions vary.

    Oh, and you... yeah you. Stop looking over your shoulder. I'm running crack against your password file right now. Might want to go change a few of 'em. Especially root. You know, the one that's your girlfriend's name. (And we both know she's not really your girlfriend. All you really have to do is ask her out, but you're scared. Pussy.) I'm only telling you all this because I like you. Now go ask her out, wimp.

  • Can we at least get away from the terrible analogy of:
    "Ok, say you someone breaks into your house/car/business but doesn't steal anything" to mirror the actions of "hacking"?

    Yes, it really sounds like it might be a good analogy, but computers are absolutely none of the above.
    There is no such thing as a nice citizen who comes around to your house and checks to make sure your door is locked and your jewelry is secured in your house. There never has been, there never will be, and there never will need to be, because the Internet is a way different medium than the real world.
    Analogies are great for helping geeks explain computer terms to non-computer people, but no matter how you slice it an apple will never be an orange.

    A prime example of how it doesn't work is in software "hacking". If a major gaping security hole in someone's software exists, it is something that desperately needs to be fixed immediately and brought to people's attention.

    Imagine something simple like an IIS bug (no way!) that allows people to download the source code for some script on your server that includes things like database and system passwords. Some well meaning (gray) hacker tells Microsoft about this, and gets tossed in jail. Meanwhile the same exploit is found at the same time by a malicious (black) cracker, who tells all his l337 script kiddie friends and before you know it some poor startup companies have just given out credit card numbers and secure corporate information to exactly the wrong kind of people.
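
    The reason that scenario stings is the part that usually goes unstated: scripts sitting in the web root with database passwords hard-coded into them, so a source-disclosure bug hands over the keys along with the code. As a rough, hypothetical sketch of how an administrator might audit their own server for that exposure (the web-root path, the extension list, and the pattern below are assumptions, not a complete check):

    import re
    from pathlib import Path

    # Hypothetical web root; adjust to the server being audited.
    WEB_ROOT = Path("/var/www/example-site")

    # Rough heuristic for hard-coded secrets; it will miss plenty and flag some noise.
    CRED_PATTERN = re.compile(
        r'(password|passwd|pwd|connectionstring)\s*[=:]\s*["\'][^"\']+["\']',
        re.IGNORECASE,
    )

    SCRIPT_SUFFIXES = {".asp", ".php", ".inc", ".cfg", ".py"}

    def find_embedded_credentials(root):
        """Yield (path, line number, line) for lines that look like hard-coded secrets."""
        for path in root.rglob("*"):
            if path.suffix.lower() not in SCRIPT_SUFFIXES:
                continue
            try:
                text = path.read_text(errors="replace")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                if CRED_PATTERN.search(line):
                    yield path, lineno, line.strip()

    if __name__ == "__main__":
        for path, lineno, line in find_embedded_credentials(WEB_ROOT):
            print(f"{path}:{lineno}: {line}")

    Moving those credentials out of the web tree doesn't fix the underlying server bug, but it limits what a disclosure hole like that can hand to the script kiddies.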

    Where is the white hat in all this?
    Oh, he thought about the exploit, but didn't look into it because that sort of thing is naughty and he might get his pretty little white hat dirty.

    Testing security measures and breaking software is absolutely necessary if we want to keep robust efficient systems across the country.
    Do you really think other countries prosecute their L337 cR4X0rs when they break into our untested unsecured networks?

    There have been hackers ever since there have been computers, and it needs to stay that way or we will all find ourselves up that silicon creek without a paddle.

  • The only effective way to get many companies to fix problems is blackmail, which is technically illegal just about everywhere. There is something wrong when you have to break the law to get your vendor to fix something.

    The page says a black hat will not disclose their hacks and will use them for their own gain. That sounds like me. I run Unix boxes and I think Windows in most cases is trash. When a client says their Windows boxes are just as secure, I've been known to show them why they aren't. I've had one client get all upset since I wouldn't explain to MS how I took down their secure box. MS isn't paying me, and they have done enough boneheaded things to make my life hell at times. I'm not going to do anything else that helps Gates and his evil minions make my job harder.
