Gag Order Fuels Responsible Disclosure Debate
jvatcw writes "The Boston subway hack case has exposed a familiar rift in the security industry over responsible disclosure standards. Many see the temporary restraining order preventing three MIT undergrads from publicly discussing vulnerabilities they discovered in Boston's mass transit system as a violation of their First Amendment rights. Others, though, see the entire episode as yet another example of irresponsible, publicity-hungry security researchers trying to grab a few headlines."
We discussed the temporary restraining order last weekend, and later the EFF's plans to fight it. CNet reports that another judge has reviewed the order and left it intact. Reader canuck57 contributes a related story about recent comments by Linus Torvalds concerning his frustration over the issue of security disclosure.
IMO, this is really a simple issue (Score:5, Insightful)
Linus is dead-on right. If you find it, tell the author(s). If they don't respond? Tell the world. Software makers should credit those who find the bugs as well. This will eventually lead to credit where credit is due, and to reputations being built in a reasonable manner.
Gag orders just make things worse. This is where I believe the law should take a stand. If someone exercises reasonable due diligence in reporting the vulnerability to the author(s) and nothing happens in response, then the authors should have no recourse over what happens when it is made public. This is in line with the intent of our current legal framework, and would not, IMO, violate legal values.
"Unsafe at any speed" was not exactly something the auto industry wanted to deal with, but they had to. Those lessons are very applicable here. Those who don't play nice and disclose to the public too soon should be penalized if actual damages can be shown. Restraint and respect. These two things have no dependency on reciprocal action.
I read Linus' rant and he's absolutely correct. The bigger the flame war over vulnerabilities, the more money security companies make off unwarranted fears. It's just a game, and where the law is concerned, we have prior examples to look at... and goddamnit, they're about cars! No analogy needed here.
They're a bunch of attention whores (Score:5, Insightful)
However...
"...yet another example of irresponsible, publicity-hungry security researchers trying to grab a few headlines" <-- this does not invalidate this --> "First Amendment rights" ...no matter what the neo-cons or lobbyists might say.
Re:IMO, this is really a simple issue (Score:1, Insightful)
If you find it, tell the author(s). If they don't respond? Tell the world.
But it still MUST be done anonymously to keep anybody from suppressing it, as is being done here.
I am glad (Score:4, Insightful)
I am glad this judge has put a gag order on the MIT students, because now there is no exploit, and we are all safe from the terrorists/etc.
As we all know, if we all don't talk about it, it doesn't exist... right?
Okay, so sarcasm aside, this is the most ridiculous idea I have ever heard. Attempting to fix a problem by stopping people from hearing about the problem?
I know I am oversimplifying the matter to get my point across, but I'm doing this to point out how ridiculous it is.
Additionally, saying "He added that in such cases, the goal of security researchers often seems to be to further their own agendas instead of helping others fix problems" shows a complete lack of understanding of market forces. Yes, he is furthering his own agenda, and in the process he benefits us. It's the market, you commie bastard; it isn't evil, we all win, get over it.
The problem with a binary world... (Score:3, Insightful)
"Many see the temporary restraining order preventing three MIT undergrads from publicly discussing vulnerabilities they discovered in Boston's mass transit system as a violation of their First Amendment rights. Others, though, see the entire episode as yet another example of irresponsible, publicity-hungry security researchers trying to grab a few headlines."
Well, how about both? It can be a restriction of their First Amendment rights *and* a publicity-hungry "researcher" trying to grab headlines. The two things are not mutually exclusive.
Doing the Right Thing has not been in vogue for many years now, it is all about making some form of a statement.
It would be interesting to see the fingers being pointed if said system were attacked by terrorists and the only people killed were the families of the two sides. My guess is that the other side's point of view would become immediately obvious, and they would both then point fingers at each other in an attempt to make themselves feel better.
However, in this particular case I can see why the courts would issue a gag order until the case is heard; that is not a violation of your First Amendment rights. It is generally established that while a matter is being litigated, the more restrictive position is enforced until the case is decided. That only makes sense: otherwise, why even have the courts decide the case, when one side would already be the de facto winner?
Ah well, what do I know? It's worth our deaths to tell everything, yet if we kept all flaws secret then all would be well. We can't do something reasonable like, say, not telling people bent on killing us how to do it, and fixing a problem when we are informed of it. Nope, too hard to do, and it might show that we aren't the Saviors of the World we think we are. Heck, we might even have to see the other side as Not Crazy, wanting to live free and with little threat of death; how bad would that be?
Re:I am glad (Score:4, Insightful)
Welcome to modern life in the USA.
I am not (Score:4, Insightful)
Yes he is furthering his own agenda, and in the process, he benefits us. It's the market you commie bastard, it isn't evil, we all win, get over it.
The market is neither evil, nor good, it merely is.
But, as we've seen time and time again, without regulation, markets tend towards imperfect competition [wikipedia.org].
That said, what you and many other people generally fail to point out is exactly how security researchers contribute to the free market. Their contribution is information. Complete information (in this case) is when everyone knows that an exploit exists; perfect information is when everyone knows how the exploit works.
But economics and markets are never that simple and it isn't very hard to argue that the net harm from releasing the information is greater than the net good.
Re:IMO, this is really a simple issue (Score:5, Insightful)
And gag orders are today's version of "shoot the messenger".
The problem is there even if you don't tell the world.
Anyway - being a security person is more than revealing or hiding facts. It's also about having the insight to realize that there are always security failures in a system. The point isn't to track down each individual security failure but to create a layered solution that can change a security problem from being critical to moderate.
It's impossible to catch all security problems in a system, and sometimes a security weakness is in place because it isn't possible to make the system more secure without causing it to be impossibly hard to use.
But there are, of course, stupid security failures too. Autorun in Windows is one: very effective when you want to spread viruses and other malware. And now and then we hear about USB devices that ship infected.
Comments (Score:5, Insightful)
My thoughts:
First Amendment rights are a red herring. The fact that you have a right to say something doesn't make it a good idea to say it.
Publicity-hungry researchers trying to grab a few headlines also aren't the issue here.
The issue here is security. And that raises the question of who we are trying to protect. As far as I am concerned, we _should_ be trying to maximize overall security. I think the best way to do that is to protect the users of products. So, the question then becomes: What kind of disclosure yields the best security for users?
Unfortunately, the answer to that question depends on a variety of factors. I think the three most important ones are:
1. How will the vendor react to being informed of the vulnerability?
2. How will the users react to being informed of the vulnerability?
3. How will the black hats (bad guys) react to being informed of the vulnerability?
None of these questions can be answered generally. In particular, you cannot know in general how the black hats will react, because you cannot know whether the black hats were already aware of the vulnerability. If they weren't, you have just given them a new attack vector. This is a Bad Thing, and one of the most common arguments against full disclosure. On the other hand, if they were already aware of the vulnerability, you have told them nothing they didn't already know. Since you can't know, in general, whether the black hats already know of a vulnerability, it seems that full disclosure is a bad idea overall. But that's if you only consider point 3.
Once you factor in points 1 and 2, the picture changes. The fact that you found a vulnerability is always interesting news to the vendor and the users. If they didn't know about it already, the vendor now knows that they have a problem that affects their users and that they need to fix, and the users know they have a problem that the vendor hasn't fixed yet, and that they should protect themselves against. If the vendor or the users did know about the vulnerability, they now know that _another_ person has found it, and that, perhaps, more priority should be given to fixing it and protecting against it. In case of full disclosure, everybody now knows for sure that the black hats know about the vulnerability, that they _will_ use it to attack systems, and that it _must_ be protected against and fixed as soon as possible.
Now, I am going to say a couple of things that aren't really factual, but that seem reasonable to me.
First of all, protecting yourself from vulnerabilities and getting them fixed is _always_ the right way to deal with vulnerabilities. Doing so as soon as possible minimizes the time you are vulnerable, and thus is a Good Thing. Not everyone realizes the importance of this. But, once a vulnerability has been announced publicly, you _know_ that the black hats know about it, so it is clearly risky to not protect yourself against it.
Secondly, in general, you will never make all users aware of a vulnerability. It may seem that a vendor could inform the users of their product of a vulnerability. However, vendors are notoriously reluctant to provide their users with information about vulnerabilities. If they provide information at all, it is usually not detailed enough to allow users to take protective measures, or comes long after the black hats have already started exploiting the vulnerability. Moreover, even the vendor will not know everyone who uses a product. And nobody can exclude the possibility that some of these users may be black hats, or that the information may leak to the black hats. Public disclosure at least gives every user of the product the possibility to inform themselves of a vulnerability.
Thirdly, historically, vendors have been reluctant to fix vulnerabilities unless they were publicly known. This is a Bad Thing, because the fact that a vulnerability is not publicly known does not mean it is not being exploited. Now, of course, vendors could change. And some of them have changed. But, hi
Re:IMO, this is really a simple issue (Score:4, Insightful)
Linus is dead on right. If you find it, tell the author(s). If they don't respond? Tell the world.
Once upon a time, I would have agreed with you. But nowadays, when someone finds a vulnerability and tells the vendor, the vendor goes and gets a gag order to prevent the public from being able to protect themselves. Or the security researcher gets arrested. It might be safer to just tell everybody, anonymously, through one of the many full disclosure lists.
This is where I believe the law should take a stand.
There is no reason to get the law involved with this one. In fact, the courts seem to be the problem in this case.
Re:The Boston system is really dumb (Score:3, Insightful)
Not if all transactions are validated. If you're using PKI, the holder of the card cannot determine in advance what the new value on the card is supposed to be, so all the software has to do is check that the value read back from the card equals the value that should have been written, and that the digital signature placed on the card verifies against that machine's "personal" public key. Then you know that the value you think was written to the card is indeed the value on the card.
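A minimal sketch of that read-back check (all names hypothetical; the comment describes asymmetric signatures, while this sketch uses Python's stdlib HMAC as a symmetric stand-in, since the validation logic has the same shape):

```python
import hashlib
import hmac

# Hypothetical sketch; not the actual fare-card scheme. A real PKI
# design would sign with the machine's private key and verify with
# its public key; the keyed-tag check below has the same shape.

MACHINE_KEY = b"key-held-only-by-fare-machines"  # assumed secret

def sign_card_value(value_cents: int) -> bytes:
    """The machine writes the value plus a keyed tag to the card."""
    return hmac.new(MACHINE_KEY, str(value_cents).encode(),
                    hashlib.sha256).digest()

def validate_card(value_cents: int, tag: bytes) -> bool:
    """On the next read, recompute the tag: a holder who edits the
    stored value cannot forge a matching tag without the key."""
    expected = sign_card_value(value_cents)
    return hmac.compare_digest(expected, tag)

tag = sign_card_value(2000)            # card loaded with $20.00
assert validate_card(2000, tag)        # legitimate card passes
assert not validate_card(9999, tag)    # tampered value is rejected
```

With real public-key signatures, any reader could perform the check without holding the signing key, but the point stands either way: the card holder cannot mint a new value the system will accept.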
As for writing the correct value - well, not my fault if coders are so incompetent they can't be bothered doing basic top-down design and bottom-up testing. Nor is it my fault they're so frail and scared when anyone suggests formal methods or even something as puny as checking invariants and QAing the corner cases. Frankly, it's pathetic. If you want good software, you've got to put in effort, same as if you want good anything. A top-notch Olympic-quality athlete is going to cost more to train and prepare than a third-grader for their school sports day. It is also going to take a lot more time to get them up to that standard. Software is no different.
If you want software that's 99.999% bug-free, you can do it, but it's not going to be pretty and it's not going to be cheap. If you want cast-iron guarantees that the remaining 0.001% cannot have bugs that significantly impact operations in terms of money, quality of service or reliability, you can do it, but don't expect it to appear effortlessly.
NB: When I say it's not "cheap", I mean exactly what I say. It's going to cost developers in blood, sweat and tears. It's going to cost them in time, it's going to cost them in pain, it's going to cost them in stress. It might well cost them their sanity. If it's in the corporate sector, it's certainly going to cost someone a great deal of money. But don't tell me it can't be done. Given enough time and some suitable rope to hang themselves with afterwards, any programmer can do it. It's that they or their paymasters aren't interested. Lack of interest is a WHOLE 'nother game. Nothing to do with impossibilities, it's all psychological bullshit.
(Linux is a good example. It has a low percentage of bugginess because the coders think they can do it. Windows does not, because their coders don't think they can. OpenBSD has superb security because their coders think that that's what they're great at. There are carrier-grade and DO-178B Level A Linux variants because those coders thought that was possible too. I've mentioned before general-purpose OSes that use security kernels to achieve mathematically provable Orange Book A1 security, yet I still hear people insist it cannot be done. Too bad, it already has. Get used to it.)
If people wanted a high-reliability IT system that was secure, bullet-proof, and made life easier (really wanted it) then that is what they would have. IT disaster stories happen not because IT is difficult, but because attitudes are.
Re:I am not (Score:3, Insightful)
Barriers to entry can be overcome as long as those barriers are not enforced by government. This is the primary reason telcos have trouble competing.
If we are talking about infrastructure the company has created being a barrier, you are mistaken, since any opportunity open to a company is weighed according to its profitability.
If another telco wants to use their infrastructure then they pay for it, where it is priced against their own internal services.
Under a free market companies will make stupid decisions, however in the long run they will be forced to make better ones, unless you regulate the industry.
There are a few examples of how free markets are not completely efficient (eg, total surplus excluding government surplus is not maximized), however all of the solutions for these problems are often criticized as introducing more problems than they solve. Especially since most of them presume that the government has perfect information, which it never does.
Additionally, companies develop barriers to entry to push the price away from perfect competition towards monopoly; however, they are stopped from pushing it too far by competing companies. If a company is pushing it too far without any competition, there are two possible reasons:
1) The actual margins of the company are not particularly attractive, or there is an immensely unattractive payoff period/NPV/etc. If so, perhaps this industry isn't that attractive, and forcing the introduction of another competitor, by whatever means, would not benefit the industry or the consumer.
2) The company has developed a competitive advantage and so they are capitalizing on their innovation and hard work. If so, why would you want to punish a company for doing well and creating so much value for people?
There are theories about why free markets are not always good, and there are "problems" with a free market, however there is no other reasonable alternative.
why disclose at all? (Score:5, Insightful)
The situations of the Linux kernel and the Boston subway are completely different. In the case of the Linux kernel, people need to know because it's their security that's at stake. In the case of the Boston subway, it all comes down to the economics of fare evasion and doesn't affect anybody's security (and you can be certain that the Boston subway knew about this and accepted it when they bought the system).
Now, I think the MIT students have a first amendment right to disclose this. However, I also think that these kinds of antics deserve reproach: people should point out that this was a stupid thing to do.
Buying an insecure system was stupider. (Score:3, Insightful)
This is really CYA on behalf of the incompetent people running the Boston system.
They made the cheap choice (unvalidated stored value cards with crappy encryption of the data) and it bit them on the ass.
So now, someone else discovers the OBVIOUS FLAWS, and publicises the incompetence of the administration responsible.
Here's a little secret: The researchers are surely not the FIRST people to discover this. They're just pointing it out. I'm sure others are already exploiting the flaws even before the announcement.
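To make "unvalidated stored value cards" concrete, here is a toy illustration (hypothetical, not the actual fare-card format): if the card is the only record of its balance, a rider can dump the card image before a ride and write it back afterward, and on-card encryption alone does nothing to stop a byte-for-byte replay.

```python
import copy

# Toy model of a card-only stored-value system (hypothetical format).
# The balance lives solely on the card; nothing is checked against a
# central ledger. In reality the image would be encrypted, but that
# does not prevent replaying the whole image verbatim.

class Card:
    def __init__(self, balance_cents: int):
        self.image = {"balance": balance_cents}

def ride(card: Card, fare_cents: int) -> None:
    """A turnstile that trusts whatever the card says."""
    if card.image["balance"] < fare_cents:
        raise ValueError("insufficient fare")
    card.image["balance"] -= fare_cents

card = Card(200)                        # $2.00 loaded
backup = copy.deepcopy(card.image)      # rider dumps the raw image
ride(card, 200)                         # balance drained to zero
card.image = copy.deepcopy(backup)      # dump written back
ride(card, 200)                         # the same $2.00 spent again
assert card.image["balance"] == 0
```

Server-side validation of transactions against a ledger of card IDs catches exactly this: the replayed balance no longer matches the central record.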
Re:IMO, this is really a simple issue (Score:1, Insightful)
>But it still MUST be done anonymously to keep anybody from suppressing it, as is being done here.
Often when we see information being suppressed, it is as much due to the self-aggrandizing nature of the person trying to disseminate the information as to the information itself. This information could have been silently released to the public through any number of anonymous channels. But the people being suppressed here are themselves limiting the availability of the information for their own selfish reasons. One may assume that they *want* to be suppressed by government and/or corporate forces, that this is their goal. If it were only about getting the information to the public, the individuals in the story would not need to be personally associated with it, and would not be under attack, or even under discussion.
Re:IMO, this is really a simple issue (Score:5, Insightful)
But nowadays, when someone finds a vulnerability and tells the vendor, the vendor goes and gets a gag order to prevent the public from being able to protect themselves. Or the security researcher gets arrested. It might be safer to just tell everybody, anonymously, through one of the many full disclosure lists.
Indeed. In a recent discussion on this topic, someone pointed out that there's a legal name for the strategy of "tell the vendor, and if they don't fix it, tell everyone". The name for this is "blackmail", and you are in danger of prosecution.
We might add that if you tell the vendor, and offer to work for them to fix the problem, there's also a legal term that applies: "extortion".
The only real way to protect yourself from the danger of prosecution is to not tell the vendor anything. You should simply make the information public. That way, it's clear that you're not threatening the vendor with release of the information and you're not trying to get them to pay you to fix it.
This also prevents them from asking the courts to impose gag orders. It doesn't do much to prevent the media from labelling you a "hacker", which to the general public is a kind of criminal. But if you are knowledgeable enough to find and fix security problems, there's probably no way to prevent the media or the political system from labelling you as some sort of criminal. People in positions of authority have always wanted to silence messengers with inconvenient messages, and there's probably no way to fix this bug in the human psyche.
Re:The gag order may be appropriate (Score:4, Insightful)
You can argue the city deserves it, but all the same. However it won't really cause the students any harm to have to keep quiet about it until the case is settled.
This is clearly not true. Being unable to reveal the information probably discredits some of their valuable work.
It will affect their professional reputation, and possibly their job opportunities, if they don't get to reveal their discovery.
More importantly, it denies them their constitutionally guaranteed rights as citizens of a free country by placing a prior restraint on their free speech, interfering with their liberty and their right to pursue happiness.