Gag Order Fuels Responsible Disclosure Debate

jvatcw writes "The Boston subway hack case has exposed a familiar rift in the security industry over responsible disclosure standards. Many see the temporary restraining order preventing three MIT undergrads from publicly discussing vulnerabilities they discovered in Boston's mass transit system as a violation of their First Amendment rights. Others, though, see the entire episode as yet another example of irresponsible, publicity-hungry security researchers trying to grab a few headlines." We discussed the temporary restraining order last weekend, and later the EFF's plans to fight it. CNet reports that another judge has reviewed the order and left it intact. Reader canuck57 contributes a related story about recent comments by Linus Torvalds concerning his frustration over the issue of security disclosure.
  • by zappepcs ( 820751 ) on Saturday August 16, 2008 @02:12AM (#24624577) Journal

    Linus is dead on right. If you find it, tell the author(s). If they don't respond? Tell the world. Software makers should credit those that find the bugs as well. This will eventually lead to credit where credit is due, and subsequent reputation building in a reasonable manner.

    Gag orders just make things worse. This is where I believe the law should take a stand. If someone exercises reasonable due diligence in reporting the vulnerability to the author(s) and nothing happens in response to the report, then the authors should have no recourse over what happens when it is made public. This is in line with the intent of our legal framework now, and would not IMO violate legal values.

    "Unsafe at any speed" was not exactly something the auto industry wanted to deal with, but they had to. Those lessons are very applicable here. Those who don't play nice and disclose to the public too soon should be penalized if actual damages can be shown. Restraint and respect. These two things have no dependency on reciprocal action.

    I read Linus' rant and he's absolutely correct. The bigger the flame war over vulnerabilities, the more security companies make off of unwarranted fears, etc. It's just a game, and where the law is concerned, we have prior examples to look at... and goddamnit, they are about cars! No analogy needed here.

    • Re: (Score:1, Insightful)

      by iminplaya ( 723125 )

      If you find it, tell the author(s). If they don't respond? Tell the world.

      But it still MUST be done anonymously to keep anybody from suppressing it, as is being done here.

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        >But it still MUST be done anonymously to keep anybody from suppressing it, as is being done here.

        Often when we see information being suppressed, it is as much due to the self-aggrandizing nature of the person trying to disseminate the information as to the information itself.

        This information could have been silently released to the public through any number of anonymous channels.
        But the people being suppressed here, are themselves attempting to limit availability of the information for
        their own se

        • Heh, you're right. We are dealing with some fat egos with these guys. They want their names in big bright lights. Gives them a feeling of power.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Linus manages to be right, arrogant and stupid in the same statement.

      He seems to have now discovered that in order to improve security you have to try to fix all bugs. This is right. A bug is a place where the software doesn't do what the "educated" user expects. That can almost certainly lead to a security situation.

      He's completely stupid, however, to compare a random bug with a demonstration of exploitability. When someone has an exploit, that's something they can sell for money to cause harm to your

      • I had not thought of looking at Linus' signal to noise level quite that way, but you're right: arrogant, stupid, but right. His notable quotes are both sad and hilarious at the same time.

        It is those that choose to sell the vulnerability to bad guys that I believe should be considered criminals. The vulnerability finders that just want some credit... well, they should get it.

        I can see how Linus' perspective is a bit skewed on this subject. When he started out with the kernel there was hardly the pressure or c

    • As much as I love Linux, I don't think that "Linus is dead on right". Coming from anybody else that interview would be labelled as a rant or a troll. Coming from Linus, I still regard it as a rant. He said *NOTHING* in the interview, but just went on and on (between the lines) about how great he is. Sorry. I admire the man. I respect him. But, seriously, he needs to grow up.
    • by Z00L00K ( 682162 ) on Saturday August 16, 2008 @03:52AM (#24624821) Homepage Journal

      And gag orders are today's version of "shoot the messenger".

      The problem is there even if you don't tell the world.

      Anyway - being a security person is more than revealing or hiding facts. It's also about having the insight to realize that there are always security failures in a system. The point isn't to track down each individual security failure but to create a layered solution that can change a security problem from being critical to moderate.

      It's impossible to catch all security problems in a system, and sometimes a security weakness is in place because it isn't possible to make the system more secure without causing it to be impossibly hard to use.

      But there are of course stupid security failures too. Autorun in Windows is one... Very effective when you want to spread viruses and other malware. And now and then we hear about USB devices that are infected.

    • "I think the OpenBSD crowd is a bunch of masturbating monkeys, in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them."

      I think we can also safely ascertain that he has cracked Theo's webcam. :)
    • Temporary restraining orders of all different kinds are often issued at the beginning of a legal case. The idea is that a party might be doing another party harm, and you shouldn't have to wait for the conclusion of a court case (which can take years in some cases) to get the harm to stop. The other party can, of course, argue that the restraining order would cause them harm and thus shouldn't be granted.

      Take, for example, a case of someone slandering you. They make knowingly false statements about you with

      • by mysidia ( 191772 ) on Saturday August 16, 2008 @12:19PM (#24626913)

        You can argue the city deserves it, but all the same. However it won't really cause the students any harm to have to keep quiet about it until the case is settled.

        This is clearly not true. Being unable to reveal the information probably discredits some of their valuable work.

        It will affect their reputation, and possibly what job opportunities they might have, if they don't get to reveal their discovery.

        More importantly, it is denying them their constitutional rights as citizens of a free country by placing a prior restraint on their free speech, interfering with their liberty and their right to pursue happiness.

      • Could the respondents (the students) cause the plaintiff (the city) harm through their actions? Would it cause the respondents harm to have to cease their action? Well yes, it would cause the city harm if the students revealed their information.

        You appear to be overlooking the critical point that the students' planned presentation did not Reveal All -- critical information needed to actually exploit the flaw was left out. MBTA was told this and sued anyway. The only "harm" the city would have suffered is

      • by HiThere ( 15173 )

        If you are arguing that the restraining order was legal...well, two judges have agreed with you.

        It's stupid, harmful, obnoxious, and of doubtful constitutionality, but it appears to be legal as defined by the courts. (What, you say "doubtful constitutionality" should mean it's not legal? I agree. But the legal system doesn't.)

        • A gag order may have reasonable fear behind it, but no evidence of damage. A gag order is a precognitive assumption of un-realized facts. A judge is only for judgement, not law. You too are a judge of the law. The constitution is open to interpretation by all. And even if a judge deems something lawful, you have recourse to pursue an alternate interpretation. In some cases, you can vote the judge off the bench and replace him with someone with better judgement. Unfortunately we have had the imm
    • by WK2 ( 1072560 ) on Saturday August 16, 2008 @04:27AM (#24624919) Homepage

      Linus is dead on right. If you find it, tell the author(s). If they don't respond? Tell the world.

      Once upon a time, I would have agreed with you. But nowadays, when someone finds a vulnerability and tells the vendor, the vendor goes and gets a gag order to prevent the public from being able to protect themselves. Or the security researcher gets arrested. It might be safer to just tell everybody, anonymously, through one of the many full disclosure lists.

      This is where I believe the law should take a stand.

      There is no reason to get the law involved with this one. In fact, the courts seem to be the problem in this case.

      • It's really a question of trust.

        As a security researcher/bug finder, do you trust the author to act correctly upon hearing the information? If yes, then you should tell the author first, ie give him the benefit of the doubt and only tell the world after a small delay (but always tell the world, because you're helping people). If you don't trust the author to act correctly, then you should tell the world immediately.

        The thing is, in the open source world authors have a strong incentive to act correctly w

      • What happens if you tell someone else (e.g. an attorney) first? Will they be bound by the gag order? Can they disclose the information later? What happens if you tell someone in a different legal jurisdiction before you tell the manufacturer? For example, if these MIT guys had told someone in the UK first, then the gag order in the USA wouldn't have had any legal impact on the people in the UK and they would then have been free to legally disclose the information to the public. As long as you don't tel
      • by jc42 ( 318812 ) on Saturday August 16, 2008 @09:15AM (#24625757) Homepage Journal

        But nowadays, when someone finds a vulnerability and tells the vendor, the vendor goes and gets a gag order to prevent the public from being able to protect themselves. Or the security researcher gets arrested. It might be safer to just tell everybody, anonymously, through one of the many full disclosure lists.

        Indeed. In a recent discussion on this topic, someone pointed out that there's a legal name for the strategy of "tell the vendor, and if they don't fix it, tell everyone". The name for this is "blackmail", and you are in danger of prosecution.

        We might add that if you tell the vendor, and offer to work for them to fix the problem, there's also a legal term that applies: "extortion".

        The only real way to protect yourself from the danger of prosecution is to not tell the vendor anything. You should simply make the information public. That way, it's clear that you're not threatening the vendor with release of the information and you're not trying to get them to pay you to fix it.

        This also prevents them from asking the courts to impose gag orders. It doesn't do much to prevent the media from labelling you a "hacker", which to the general public is a kind of criminal. But if you are knowledgeable enough to find and fix security problems, there's probably no way to prevent the media or the political system from labelling you as some sort of criminal. People in positions of authority have always wanted to silence messengers with inconvenient messages, and there's probably no way to fix this bug in the human psyche.

        • by mysidia ( 191772 )

          What you do is you give the information to the vendor, and tell them the information will be published within 6 months.

          If they fix the issue, no problem, the publication will address a fixed issue.

          The publication is already outsourced to a trusted third party, and they will have an irrevocable order and agreement to place the information into a public place on the agreed upon date, subject to a non-disclosure agreement concerning the exact time, venue, etc.
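
          A minimal sketch of a related low-tech variant (Python, hypothetical advisory text; a common complement to the escrow arrangement above, not a claim about what the parent uses): publish only a cryptographic hash of the advisory immediately, which proves what you knew and when while revealing nothing, then publish the advisory itself on the agreed date so anyone can re-hash and verify.

              import hashlib

              # Hypothetical advisory text; in practice, the full write-up.
              advisory = b"Full technical details of the vulnerability ..."

              # Publish this digest immediately: it proves prior knowledge
              # without disclosing any details.
              digest = hashlib.sha256(advisory).hexdigest()
              print(digest)

              # On the agreed date, publish `advisory`; anyone can verify it
              # matches the digest released months earlier.
              assert hashlib.sha256(advisory).hexdigest() == digest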

  • by Anonymous Coward on Saturday August 16, 2008 @02:17AM (#24624597)

    However...

    "...yet another example of irresponsible, publicity-hungry security researchers trying to grab a few headlines" <-- this does not invalidate this --> "First Amendment rights" ...no matter what the neo-cons or lobbyists might say.

    • In the land of "free speech zones" and sedition acts, the first amendment is toothless. The info must be released from a place that's out of reach.

  • What part of the First Amendment don't you understand?
    • Re: (Score:2, Flamebait)

      What part of the First Amendment don't you understand?

      I'm guessing that would be "all of it".

    • by HTH NE1 ( 675604 )

      You have the obligation to remain silent. Anything you say can and will be used to find you in contempt of court. Findings of contempt are final and are subject to appeal only at the judge's discretion. Your term of incarceration also will be at the judge's discretion.

  • by bogaboga ( 793279 ) on Saturday August 16, 2008 @02:37AM (#24624647)

    ...How? You may ask.

    By letting Russian hackers release the info. The problem for the authorities is to prove that those under the gag order had a hand in this. The Russians get the information using no traceable medium. That includes the internet, post, fax, etc.

    Proving that the students had a hand in this would be hard if not impossible. After all, the system was open to use by everyone as long as they paid up -- including the Russians we are talking about.

  • by Animats ( 122034 ) on Saturday August 16, 2008 @02:43AM (#24624661) Homepage

    The MBTA is trying to cover up the fact that their system design is very weak. The value of the card is actually stored on the card, and there's no central validation. That's embarrassing, considering that the MBTA implemented fare cards quite late, long after other cities.

    The NYC MetroCard system [sephail.net], in comparison, is totally paranoid. Cards have unique serial numbers and are validated by the entry gate, the station computer, and central servers at MetroCard HQ. Creating new cards with new IDs won't work. Duplicating cards is possible, but is detected the second time the card is used. NYC is so paranoid that equipment maintenance is performed by an outside company, but NYC employees handle the money and blank cards, so that no single party has full access. The New York City subway system was losing about $20 million a year to token fraud, and when the new system went in, they were determined that would stop. They had some fraud back in 1995, when someone stole a supply of blank cards and was able to encode them, but it turned out to be a rip-off for buyers - the cards only worked once, then were invalidated.

    The first fare card system, San Francisco's BART, isn't that secure, but has a big advantage - BART has exit gates. So, while it doesn't have real-time validation against a central database, gate info is being transmitted in the background to a central system, and if centralized analysis indicates something funny going on, central control can flag the card, trap the user at the exit gate, and alert station security to check the card.
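
    A minimal sketch (Python, all names hypothetical) of the central-validation idea described above: the back end, not the card, is the authority on the balance, so a duplicated card reports a stale value and is caught the second time it is used.

        # Hypothetical sketch of server-side fare validation.
        class FareBackend:
            def __init__(self):
                self.balances = {}    # card serial -> cents; the authoritative record
                self.flagged = set()  # serials to trap at the gate

            def issue(self, serial, cents):
                self.balances[serial] = cents

            def validate(self, serial, card_cents, fare):
                # A cloned card eventually reports a balance that disagrees
                # with the central record.
                if serial in self.flagged or self.balances.get(serial) != card_cents:
                    self.flagged.add(serial)
                    return False      # flag the card, alert station security
                if card_cents < fare:
                    return False
                self.balances[serial] -= fare
                return True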

    • You can store the value on the card. You just have to combine it with salt and encrypt it against a big enough private key. Shouldn't be hard in this day and age.

      I don't really see why they are so worried about this attack. Most people would be deterred from using it because falsifying tickets is against the law. They couldn't possibly lose much money and the hole has to be fixed in the long run anyway.
      • by 0123456 ( 636235 ) on Saturday August 16, 2008 @03:08AM (#24624733)

        "You can store the value on the card. You just have to combine it with salt and encrypt it against a big enough private key. Shouldn't be hard in this day and age."

        How does that help? If you can copy the data to another card or prevent the reader from updating the value, then you have infinite amounts of money available.

        We used to have stored value cards at university back in the 80s, and it wasn't long before someone discovered how to prevent the automated readers from writing the value back to the card after they subtracted money from it so it never went down. There was also a bug where in some cases the reader would add $100 to the card rather than deducting $0.25...

        • Re: (Score:3, Insightful)

          by jd ( 1658 )

          Not if all transactions are validated. If you're using PKI, then the holder of the card cannot determine in advance what the new value on the card is supposed to be, so all the software has to do is ensure that the decrypted value when re-encrypted is equal to the value that should have been written to the card, and that the digital signature placed on the card matches up with that machine's "personal" public key. Then you know that the value that you think has been written to the card is indeed the value w
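
          A minimal sketch of that sign-and-verify step, assuming Python's `cryptography` package (key handling greatly simplified): the signature proves a reader wrote the record and stops forged balances, but by itself it says nothing about replaying an older, validly signed record, which is exactly the hole the reply below describes.

              from cryptography.exceptions import InvalidSignature
              from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

              reader_key = Ed25519PrivateKey.generate()  # the machine's "personal" key pair
              verify_key = reader_key.public_key()       # distributed to every reader

              def write_card(balance_cents):
                  record = str(balance_cents).encode()
                  return record, reader_key.sign(record)  # card stores record + signature

              def check_card(record, signature):
                  try:
                      verify_key.verify(signature, record)  # forged balances fail here
                      return True
                  except InvalidSignature:
                      return False

              record, sig = write_card(500)
              assert check_card(record, sig)         # genuine record verifies
              assert not check_card(b"999999", sig)  # forged balance is rejected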

          • This is what your system sounds like: I have a card with memory contents A. The reader then reads A, writes B to the card, reads B from the card and is happy. I push a button on my modified card, and it now stores A.

            Your fancy hashes, signing, whatever are all stored in the memory contents on the card. What you need is a way to know that A is invalid data and should never be read from the card ever again (putting a timestamp in the data will ensure going back to a previous balance won't write the same data to

            • by jd ( 1658 )

              Congratulations, you're the first person to have actually given a sensible counter-example. I'm genuinely impressed. You are correct, and what you are saying reduces to the Byzantine Generals Problem. If there are only two members of a group, it is not possible to establish that one member is not a traitor to that group. In order to establish that all members of a group are trustworthy, more than two-thirds of that group MUST be trustworthy. This does not require any central member of that group to do the assignin

          • by 0123456 ( 636235 )

            "As for writing the correct value - well, not my fault if coders are so incompetent they can't be bothered doing basic top-down design and bottom-up testing."

            You just don't get it, do you? This is the way that these systems are broken, not by breaking encryption keys.

            There is NO WAY to prevent these kinds of attacks without some central validation; you can make them moderately difficult with complex hardware, but the hackers are generally smarter than the people building the system... if they have to dismant

            • by jd ( 1658 )
              Repeating a claim is not proof of a claim. Security kernels can be proven mathematically to be correct, ergo distributed security kernels that include a card as part of the distribution can also be mathematically proven correct. Lousy coders and inferior management are not a problem with the fundamental concept; they are a problem of delinquent implementors. Reverse-engineer away all you like on a solid PKI system, see if I care. If the system is designed correctly, it should be 100% impervious to such attacks.
              • by Animats ( 122034 )

                Reverse-engineer away all you like on a solid PKI system, see if I care. If the system is designed correctly, it should be 100% impervious to such attacks.

                A PKI system alone won't help. If you have a valid, unused stored value card, and can store that data and copy it to another card exactly, you can "recharge" cards by rewriting them with the valid card data. It doesn't matter what the recorded contents are. Without validation against a database, if you can duplicate a card exactly, that's sufficien

        • by jpatters ( 883 )

          Suppose that both a value and a unique transaction ID are stored on the card. Then, the system could count on a particular value/transaction ID pair only being used once. When the card is used, the system encodes the updated value with a new transaction ID and writes that to the card. If that is prevented, the card is left with the already used value/transaction ID pair, and the card is rejected if it is attempted to be used again. If the card is duplicated, only the first use would work. Of course, you wou
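
          A minimal sketch (Python, hypothetical names) of the single-use value/transaction-ID pair described above; for duplicates to be caught at every gate, the set of spent IDs has to be shared among readers, which is the central tracking the scheme quietly assumes.

              import secrets

              class Gate:
                  def __init__(self):
                      self.spent_ids = set()  # transaction IDs already accepted

                  def swipe(self, card, fare):
                      value, txn_id = card
                      if txn_id in self.spent_ids or value < fare:
                          return None         # replayed, cloned, or empty card: reject
                      self.spent_ids.add(txn_id)
                      # Re-encode the remaining value under a fresh transaction ID.
                      return (value - fare, secrets.token_hex(8))

              gate = Gate()
              card = (500, secrets.token_hex(8))  # a $5.00 card
              clone = card                        # an exact duplicate
              card = gate.swipe(card, 200)        # first use: accepted
              print(gate.swipe(clone, 200))       # None - the duplicate is rejected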

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      That is such a primitive way of looking at things. Call-home systems suffer from all kinds of issues that a PKI system with localized validation could laugh at with both PCI busses tied behind its back. For crying out loud, this is NOT the 1970s! You could store tens of millions of private keys on each and every single card reader and not even notice the change in costs. A smart card that stores just one side of the asymmetric key pair and a value (ie: a card that is read-only EXCEPT to an authorized machin

    • by HTH NE1 ( 675604 )

      The first fare card system, San Francisco's BART, isn't that secure, but has a big advantage - BART has exit gates. So, while it doesn't have real-time validation against a central database, gate info is being transmitted in the background to a central system, and if centralized analysis indicates something funny going on, central control can flag the card, trap the user at the exit gate, and alert station security to check the card.

      As a teenager visiting San Francisco, one day I accidentally left my BART card on the BART, and found myself in a station where not only did I need to use my card to leave the station, but all card vending machines were beyond the gates I couldn't pass without a card! I was at a loss for what to do. The card had left the station and was irretrievable. I could get on another train and try station after station looking for one that had vending machines inside the station or a free exit, being unsure whether su

    • The first fare card system, San Francisco's BART

      I understand that they were originally going to call it the "Frisco Area Rapid Transit".
  • I am glad (Score:4, Insightful)

    by definate ( 876684 ) on Saturday August 16, 2008 @02:52AM (#24624693)

    I am glad this judge has put a gag order on the MIT students, because now there is no exploit, and we are all safe from the terrorists/etc.

    As we all know, if we all don't talk about it, it doesn't exist... right?

    Okay, so sarcasm aside, this is the most ridiculous idea I have ever heard. Attempting to fix a problem by stopping people from hearing about the problem?

    I know I am oversimplifying the matter to get my point across, but I'm doing this to point out how ridiculous it is.

    Additionally, saying "He added that in such cases, the goal of security researchers often seems to be to further their own agendas instead of helping others fix problems." shows a complete lack of understanding of market forces. Yes he is furthering his own agenda, and in the process, he benefits us. It's the market, you commie bastard; it isn't evil, we all win, get over it.

    • Re:I am glad (Score:4, Insightful)

      by wellingj ( 1030460 ) on Saturday August 16, 2008 @03:35AM (#24624793)

      Okay, so sarcasm aside, this is the most ridiculous idea I have ever heard. Attempting to fix a problem by stopping people from hearing about the problem?

      Welcome to modern life in the USA.

      • by wisty ( 1335733 )
        Gagging has been tried before.

        Let's say you are a large military power, with a naval base in Hawaii. Let's say another large power in the Pacific is causing a bit of trouble.

        Let's say that a guy (let's just call him Billy Mitchell) writes a 324-page report predicting a war in the Pacific, and pointing out that if it happens, the ships in your Hawaiian base are sitting ducks to an air attack.

        Do you fix your defenses? If you don't immediately fix your defenses, do you try to gag the guy who is point

    • I am not (Score:4, Insightful)

      by TubeSteak ( 669689 ) on Saturday August 16, 2008 @03:37AM (#24624795) Journal

      Yes he is furthering his own agenda, and in the process, he benefits us. It's the market, you commie bastard; it isn't evil, we all win, get over it.

      The market is neither evil, nor good, it merely is.
      But, as we've seen time and time again, without regulation, markets tend towards imperfect competition [wikipedia.org].

      That said, what you and many other people generally fail to point out is exactly how security researchers contribute towards the free market. Their contribution is information. Complete information (in this case) is when everyone has knowledge that an exploit exists. Perfect information is when everyone has knowledge of how the exploit works.

      But economics and markets are never that simple and it isn't very hard to argue that the net harm from releasing the information is greater than the net good.

      • It is funny: I hear this time and time again, that "free markets without regulation tend towards imperfect competition", yet I have never seen any credible theory which suggests that.

        Free markets ALLOW competition; regulation reduces competition/possibilities. By its very definition it forces markets towards imperfect competition. (Yeah, there are pro-competition laws (monopoly laws), however they are hardly used, especially when compared to how often the anti-competition laws (trademarks, copyrights,

        • You must have been reading a somewhat strange selection of books on economics if you didn't find a theoretical explanation of why free markets are not always great for competition.

          It may all be fine and dandy with software or some manufacturing business, but you might have noticed that, for instance, the telecom market has this barrier-to-entry thing that prevents any meaningful competition *unless* regulated.
          • Re: (Score:3, Insightful)

            by definate ( 876684 )

            Barriers to entry can be overcome as long as those barriers are not enforced by government. This is the primary reason telcos have problems competing.

            If we are talking about infrastructure that the company has created being a barrier, you are mistaken, since any opportunity to a company is weighed according to its profitability.

            If another telco wants to use their infrastructure then they pay for it, where it is priced against their own internal services.

            Under a free market companies will make stupid deci

    • While I agree with you, it needs to be said:

      Additionally, saying "He added that in such cases, the goal of those issuing gag orders often seems to be to further their own agendas
      instead of helping others fix problems." shows a complete lack of understanding of market forces. Yes, they are furthering their own agendas, and
      in the process, they benefit us. It's the market, you commie bastard; it isn't evil, we all win, get over it.

      Acting in one's own interest doesn't always benefit the common good.

      • Acting in one's own interest while operating in a free market always benefits the common good, if you are successful and prosperous.

        In this case, Judges don't act in a free market, they act in a democratic market.

        If we switch Judge with CEO (kind of like a judge who acts in a free market), we would see that if the CEO's best interests weren't in line with our best interests then he would not be able to pursue his best interests, or at best would have limited time to do it, as it became infinitely expensive.

        (

  • by inKubus ( 199753 ) on Saturday August 16, 2008 @03:10AM (#24624735) Homepage Journal

    "If anyone else knows, you must disclose."

  • by bm_luethke ( 253362 ) <`luethkeb' `at' `comcast.net'> on Saturday August 16, 2008 @03:30AM (#24624783)

    "Many see the temporary restraining order preventing three MIT undergrads from publicly discussing vulnerabilities they discovered in Boston's mass transit system as a violation of their First Amendment rights. Others, though, see the entire episode as yet another example of irresponsible, publicity-hungry security researchers trying to grab a few headlines."

    Well, how about both? It can be a restriction of their First Amendment rights *and* a publicity-hungry "researcher" trying to grab headlines. The two things are not mutually exclusive.

    Doing the Right Thing has not been in vogue for many years now; it is all about making some form of a statement.

    It would be interesting to see the fingers being pointed if said system were attacked by terrorists and the only people killed were the families of the two sides. My guess is that the other side's point of view would become immediately obvious, and they would both then point fingers at each other in an attempt to make themselves feel better.

    However, in this particular case I can see why the courts would give a gag order until the case is heard - that is not a violation of your First Amendment rights. It has generally been established that while things are being litigated, the more restrictive position is enforced until the case is decided. That only makes sense - otherwise why even have the courts decide the case, when one side would already be the de facto winner?

    Ah well, what do I know? It's worth our deaths to tell everything, yet if we kept all flaws secret then all would be well. We can't do something reasonable like, say, not tell people bent on killing us how to do it, and when we are informed of a problem, fix said thing. Nope, too hard to do and it may show that we aren't the Saviors of the World we think we are. Heck, we may even have to look at the other side as Not Crazy and wanting to live free and with little threat of death - how bad would that be?

    • Why indeed have the court make a decision in this matter?

      Prior restraint isn't their business ... there was no decision to be made by them; the fact that they thought there was something to decide in the first place is the problem.

  • by Anonymous Coward

    Post it to Wikileaks:

    http://wikileaks.org/

    Then, if some moron complains, point him/her to this article. No good deed goes unpunished, so to hell with them.

  • by Anonymous Coward on Saturday August 16, 2008 @04:14AM (#24624879)

    The Tech leaked these slides days ago.

    http://www-tech.mit.edu/V128/N30/subway/Defcon_Presentation.pdf

    It really covers absolutely everything you care about. If you're willing to, you can do all of this from the comfort of your bedroom.

    Now, I'm not in Boston, but next time I am...

  • Comments (Score:5, Insightful)

    by RAMMS+EIN ( 578166 ) on Saturday August 16, 2008 @04:18AM (#24624891) Homepage Journal

    My thoughts:

    First Amendment rights are a red herring. The fact that you have a right to say something doesn't make it a good idea to say it.

    Publicity-hungry researchers trying to grab a few headlines also aren't the issue here.

    The issue here is security. And that raises the question of who we are trying to protect. As far as I am concerned, we _should_ be trying to maximize overall security. I think the best way to do that is to protect the users of products. So, the question then becomes: What kind of disclosure yields the best security for users?

    Unfortunately, the answer to that question depends on a variety of factors. I think the three most important ones are:

    1. How will the vendor react to being informed of the vulnerability?
    2. How will the users react to being informed of the vulnerability?
    3. How will the black hats (bad guys) react to being informed of the vulnerability?

    None of these questions can be answered generally. In particular, in general, you cannot know how the black hats will react, because you cannot know if the black hats were already aware of the vulnerability. If they weren't, you have just given them a new attack vector. This is a Bad Thing, and one of the most common arguments against full disclosure. On the other hand, if they were already aware of the vulnerability, you have just told them nothing they didn't already know. Since you can't know, in general, if the black hats already know of a vulnerability, it seems that full disclosure is a bad idea, overall. But that's if you only consider point 3.

    Once you factor in points 1 and 2, the picture changes. The fact that you found a vulnerability is always interesting news to the vendor and the users. If they didn't know about it already, the vendor now knows that they have a problem that affects their users and that they need to fix, and the users know they have a problem that the vendor hasn't fixed yet, and that they should protect themselves against. If the vendor or the users did know about the vulnerability, they now know that _another_ person has found it, and that, perhaps, more priority should be given to fixing it and protecting against it. In case of full disclosure, everybody now knows for sure that the black hats know about the vulnerability, that they _will_ use it to attack systems, and that it _must_ be protected against and fixed as soon as possible.

    Now, I am going to say a couple of things that aren't really factual, but that seem reasonable to me.

    First of all, protecting yourself from vulnerabilities and getting them fixed is _always_ the right way to deal with vulnerabilities. Doing so as soon as possible minimizes the time you are vulnerable, and thus is a Good Thing. Not everyone realizes the importance of this. But, once a vulnerability has been announced publicly, you _know_ that the black hats know about it, so it is clearly risky to not protect yourself against it.

    Secondly, in general, you will never make all users aware of a vulnerability. It may seem that a vendor could inform the users of their product of a vulnerability. However, vendors are notoriously reluctant to provide their users with information about vulnerabilities. If they provide information at all, it is usually not detailed enough to allow users to take protective measures, or comes long after the black hats have already started exploiting the vulnerability. Moreover, even the vendor will not know everyone who uses a product. And nobody can exclude the possibility that some of these users may be black hats, or that the information may leak to the black hats. Public disclosure at least gives every user of the product the possibility to inform themselves of a vulnerability.

    Thirdly, historically, vendors have been reluctant to fix vulnerabilities unless they were publicly known. This is a Bad Thing, because the fact that a vulnerability is not publicly known does not mean it is not being exploited. Now, of course, vendors could change. And some of them have changed. But, hi

    • Your post is not, primarily, facts. It's primarily reasoning: that's trickier to correct. Next time, please split your work into multiple messages; it's easier to follow.

      For example, you wrote: "..., vendors have been reluctant to fix vulnerabilities unless they were publicly known. This is a Bad Thing, ... "

      There's no trivial way to fix this: fixing the flaws often requires a complete redesign of a system. In this case, it means using a better RFID system with proper encryption support. But that's actually cha

    • There have been plenty of stories about disclosure responsible or otherwise, that isn't what makes this one special. The fact that multiple courts decided that prior restraint was fine and dandy is what stands out here ... so no, it's not a red herring.

    • Curious people may want to read the list of papers and articles about security bug disclosure policy [lib.fl.us] (no longer maintained but full of interesting stuff).

  • by Jane Q. Public ( 1010737 ) on Saturday August 16, 2008 @04:54AM (#24624985)
    And for good reason!!!

    They have a RIGHT to speak. They can exercise discretion and do people a favor, or they can exercise a different kind of discretion and do a different group of people a favor, or they can lack discretion and get themselves arrested for illegal speech, which does happen sometimes... but only AFTER they say it! There is no such law as "conspiracy to say something harmful or offensive"!

    Regardless of whether it is right or responsible or moral for them to do what they want to do, they have a RIGHT to speak. And you can't mess with that right without messing up a hell of a lot more than just the "security" of one sorry municipality or corporation.

    Prior restraint amounts to a legal attempt to read someone's mind. Sorry, but "thought crimes" STILL do not exist in this country. Because prior restraint would open up a whole nightmarish can of worms and effectively legitimize the concept of "thought crime", it should never be tolerated even a little bit, EVER.
    • [headline:] Prior Restraint is UNCONSTITUTIONAL!!!

      Usually, but not always. Gag orders are prior restraint, for example, and are not always unconstitutional. There are also clear exceptions for what the USSC refers to as "exceptional circumstances," and clearly defining that is up to the courts themselves. You are correct that there is a heavy bias against it in US jurisprudence, but it is important to note that no decision has yet been rendered. It MAY be prior restraint days from now, but now it is sim

      • Quote: "It MAY be prior restraint days from now, but now it is simply "keep your pants on a minute."

        Not so. It is prior restraint NOW. As you point out, it could theoretically turn out to be one of those rare exceptions of legal prior restraint when it gets to court, but the chances of that happening in this particular case are about the same as a snowball's chance in Hell. This is CLEARLY a case of ILLEGAL prior restraint.

        By the way, I do not recall any acceptable "exceptions" to prior restraint prohibition w
    • by dissy ( 172727 )

      Regardless of whether it is right or responsible or moral for them to do what they want to do, they have a RIGHT to speak. And you can't mess with that right without messing up a hell of a lot more than just the "security" of one sorry municipality or corporation.

      Prior restraint amounts to a legal attempt to read someone's mind. Sorry, but "thought crimes" STILL do not exist in this country. Because prior restraint would open up a whole nightmarish can of worms and effectively legitimize the concept of "thought crime", it should never be tolerated even a little bit, EVER.

      I wonder how private a company the transit system actually is. I was under the impression it was run by the city, thus the government.
      Assuming of course that is true, then this just opened Boston up to a massive entrapment lawsuit!

      It is entrapment when government attempts to entice you to commit a crime or crimes that you would not commit otherwise.
      Which is exactly what this statement is doing:

      "When you discover major flaws in a system that society relies on, you go to the people who own the system and work with them," Jordan said "You don't stand up on a podium and say, 'Look how clever I am.'"

      Jordan is attempting to goad the blackhat security researchers into committing both blackmail and extortion

  • ...remains intact if the theatre is actually on fire and the manager refuses to pull the alarm.
  • ...about 15 minutes after word of the gag-order hit the streets.

  • Others, though, see the entire episode as yet another example of irresponsible, publicity-hungry security researchers trying to grab a few headlines."

    Let me add another, somewhat more cynical voice to the debate...

    Why should security researchers disclose their discoveries to the original author first? That would only make sense if we assume all security researchers do what they do for the sake of improving software they have no financial incentive to improve, out of pure benevolence. While such pe
    • I found a security breach once, tracked it, reported it to the authorities in charge of the company, and got fired! They were the ones doing the dirty deed! It was a fraud scheme used to make money under the table and launder money from other nefarious ventures. So even if you do figure it out, who are you gonna call? In my case I had to tell the owners of the company. But in the case of a public entity, it's the public that should be told. If the truth is inconvenient, so be it.
      • But in the case of a public entity, it's the public that should be told. If the truth is inconvenient, so be it.

        Indeed. And if that were the accepted practice (as it should be) I bet there'd be more QA positions opening up at some of these vendors.

        So even if you do figure it out, Who are you gonna call?

        That's an excellent point

        I found a screwup once too ... it was a major national newspaper (which I won't name) who left a folder full of word docs of upcoming stories wide open off their main Web site. Plus all kinds of other official-looking documents.

        Now, I could have called or emailed or otherwise contacted them about their gaffe, and in fact I was tempted to do

  • by speedtux ( 1307149 ) on Saturday August 16, 2008 @07:05AM (#24625331)

    The situations of the Linux kernel and the Boston subway are completely different. In the case of the Linux kernel, people need to know because it's their security that's at stake. In the case of the Boston subway, it all comes down to the economics of fare evasion and doesn't affect anybody's security (and you can be certain that the Boston subway knew about this and accepted it when they bought the system).

    Now, I think the MIT students have a first amendment right to disclose this. However, I also think that these kinds of antics deserve reproach: people should point out that this was a stupid thing to do.

    • This is really CYA on behalf of the incompetent people running the Boston system.

      They made the cheap choice ( unvalidated stored value cards w/ crappy encryption of the data ) and it bit them on the ass.

      So now, someone else discovers the OBVIOUS FLAWS, and publicises the incompetence of the administration responsible.

      Here's a little secret: The researchers are surely not the FIRST people to discover this. They're just pointing it out. I'm sure others are already exploiting the flaws even before the anno

      • This is really CYA on behalf of the incompetent people running the Boston system.

        They didn't have a secure system before (tokens), why should they have one now?

        They're just pointing it out. I'm sure others are already exploiting the flaws even before the announcement.

        Of course, people are exploiting it, just like they were exploiting tokens before. It's factored into the overall cost.

        They made the cheap choice ( unvalidated stored value cards w/ crappy encryption of the data ) and it bit them on the ass.

        Ec

        • Explain how New York City managed to stop $20 million a year in subway fraud using a secure, centralized system then?

          I guess NYC > Bos.

    • (I'm reposting because I noticed my post went up as "Anonymous Coward"! Not me!) I don't agree entirely. When the problem is with a public entity, especially government, the truth should be shouted from the rooftops. Loudly! The founding fathers put the rights to free speech and a free press first in the list, with the right to bear arms right behind them, for the reason that they are our only protection from government corruption. That's right people, corruption. That should be our first thought when such a dismal system is implemented.
      • When the problem is with a public entity, especially government, the truth should be shouted from the rooftops.

        Nobody has shown there to be a problem.

        They are hiding the fact that it is flawed badly and quite possibly BY DESIGN.

        Of course it is "flawed" by design, except that it is not a flaw. Subway tokens were never unforgeable before, so why should that all of a sudden be a requirement?

        So they made FREE SPEECH a TOP priority

        I think the MIT students should be allowed to publish their findings, just like I

        • But people should also recognize that what the MIT students did is stupid. People should also recognize that arguments like yours are stupid.

          What is stupid about it? What is stupid about knowing the truth? What a pointless statement.

          • What is stupid about it?

            What is "stupid" about is their and your claim that this is a flaw.

            What is stupid about knowing the truth?

            Nothing. But there is something stupid about publishing the truth in some cases. This is one.

            • Like what? They (the MIT students) didn't publish the findings publicly. The lawyers for the MBTA did. That was definitely stupid, unless they used the filing of the restraining order to get the info into the public eye on purpose, to expose the MBTA director's mishandling of the security. Just saying something or someone is stupid is not an argument. Back up what you say with some logical reasoning.
  • by Anonymous Coward

    The judge upheld the gag order because he realized that riding the subway for free is a bigger threat to civilization than blowing up the world. That's why the MBTA was entitled to prior restraint against the subway hackers, when the US government was not able to restrain The Progressive magazine from publishing the secret of the H-bomb in 1979.

    For more info, google "morland progressive" or see the first hit:

    http://www.fas.org/sgp/eprint/cardozo.html

  • Others, though, see the entire episode as yet another example of irresponsible, publicity-hungry security researchers trying to grab a few headlines.

    See, this is exactly why one should always announce security problems anonymously, via one of the security lists that supply anonymity. That way, you don't get labelled publicly with such epithets. Then, when the fuss has died down and it's only the security geeks talking, you can let them know that you were the source of that "leak". That way, you get the c

  • by arthurpaliden ( 939626 ) on Saturday August 16, 2008 @10:04AM (#24626009)
    In a related story, it appears the judge's home was broken into and ransacked, and several irreplaceable articles were stolen and destroyed without anyone knowing, even though it had an activated alarm and security locking system. It appears that there was a flaw in the system that enabled the perpetrators to bypass it. This flaw was known to security researchers, but they were under a gag order and were not permitted to release this information to the general public. The gag order was applied for by the company because "if the general public knew about the flaw it would impact our revenue stream".
  • The information was presented to the MBTA in the manner that was available to the students. Being a Boston resident myself, I know that I can't just walk into North Station (essentially the hub of the T) and speak to their security techs. Even then, I doubt the MBTA is receiving very much mail about their security issues (or at least they weren't before this). Their failure to act on information that was in all essence handed to them is their own fault.

    On another note, these security researchers seeking
  • http://www.wiretrip.net/rfp/policy.html [wiretrip.net] - common sense disclosure policy from August 2000.

    Nuff said.

  • There are always ten thousand arguments for restraining free speech and they supposedly are all backed by dire need.
    At the bottom of it all, we settled this over two centuries ago. Free speech is not up for debate. Whether it harms individuals, groups or the whole world is simply not an issue. What is sick is allowing endless court cases over restraint of free speech. These cases should be dismissed without even being looked at.
