Chinese Prof Cracks SHA-1 Data Encryption Scheme

Hades1010 writes to mention an article in the Epoch Times (a Chinese newspaper) about a brilliant Chinese professor who has cracked her fifth encryption scheme in ten years. This one's a doozy, too: she and her team have taken out the SHA-1 scheme, which includes the (highly thought of) MD5 algorithm. As a result, the U.S. government and major corporations will cease using the scheme within the next few years. From the article: " These two main algorithms are currently the crucial technology that electronic signatures and many other password securities use throughout the international community. They are widely used in banking, securities, and e-commerce. SHA-1 has been recognized as the cornerstone for modern Internet security. According to the article, in the early stages of Wang's research, there were other data encryption researchers who tried to crack it. However, none of them succeeded. This is why in 15 years Hash research had become the domain of hopeless research in many scientists' minds. "
  • by GigsVT ( 208848 ) on Saturday January 20, 2007 @04:43PM (#17696680) Journal
    This is total crap. I can't believe anyone would give any second thought to Chinese propaganda.

    MD5 and RC4 were not "cracked" and I highly doubt SHA-1 was "cracked" either. Some weaknesses were found in MD5 that do not affect the majority of uses of it. I suspect the situation is the same here.
  • by fyngyrz ( 762201 ) * on Saturday January 20, 2007 @04:44PM (#17696684) Homepage Journal

    We gain the obvious: The more we know, the better off we are. All science contributes to rolling back the veil of the unknown, and (eventually) almost all science benefits us. Encryption research is no exception. Suppressing research in favor of the dogma of the day is old-school religious thinking. Not a good way to go.

    Besides; my suspicion is that if she's gone and cracked it, the odds are at least reasonable that the NSA and crew already had, anyway — it's not like they would tell us if they had. Time to move on.

  • News for nerds? (Score:5, Insightful)

    by Toveling ( 834894 ) * on Saturday January 20, 2007 @04:45PM (#17696692)
    This article is completely devoid of any real content. It just says she "cracked it" over and over, not explaining whether a crack is a collision, preimage, or other attack. It also seems technically inaccurate, saying that SHA-1 'includes' MD5? I know that no one RTFA, but c'mon, at least cover for a crappy article by having a good summary: this story has neither.
  • by Instine ( 963303 ) on Saturday January 20, 2007 @04:53PM (#17696754)
    Like most things there, I'm guessing (though this could well be very prejudiced) that the Government pays... But she has done anyone who banks online a favour, by showing the flaw in the system. It would be naive to think that only she would ever crack it. What is interesting is that she has made it public knowledge that she has cracked it. This is probably China flexing its IT knowhow muscles a little. Not in such a threatening way, but in a "look at the level at which we can play" kind of way. And no! This is not an act of war, nor would the US Gov be wise to call it one. But hey, they're not so wise....
  • by Anonymous Coward on Saturday January 20, 2007 @04:54PM (#17696764)
    Besides; my suspicion is that if she's gone and cracked it, the odds are at least reasonable that the NSA and crew already had

    Not necessarily. There are often times when major leaps like this are made because of the efforts of one exceptionally brilliant person. It doesn't matter if you have whole teams of really smart people working on a problem, because this one person will come along and break the field open in a new way. That seems to be what's happened here.
  • by Aim Here ( 765712 ) on Saturday January 20, 2007 @04:58PM (#17696794)
    "Well said. I'm pretty sure that this is just the English translation of a Chinese state-run newspaper. (The "read original Chinese" link at the bottom gives this away.)"

    Errr, you are aware that the Epoch Times is a virulently anti-Communist newspaper, aren't you? They're famous for doing some sort of 10-part history of Chinese Communism (which read like a lurid and hysterical diatribe. I picked up a copy once; I don't know much about the history of China, but they had a summary of the Paris Commune of 1871 which was an utterly atrocious travesty of history). If anything, the Epoch Times is far more likely to distort the facts in a manner that defames the Chinese government, hard as that may be to believe.

    Not everything written in the Chinese language is censored by the Chinese government.

    "Do the editors read ANYTHING before posting!?"

    I find the irony of THIS statement quite remarkable, given the above.
  • by iion_tichy ( 643234 ) on Saturday January 20, 2007 @05:08PM (#17696872)
    "Repeat after me: A hash algorithm is NOT encryption."

    Not entirely correct, though. The thing is that many cryptographic "processes" rely on fingerprints of documents (one signs the fingerprint rather than the whole document, and so on). So I think many current protocols would be affected. It's perhaps not encryption in a mathematical sense, but it is in a practical sense.

    Nevertheless, the article is crap; it doesn't even say in what way SHA-1 was broken (making it impossible to judge the severity).
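
    To make the "sign the fingerprint" point concrete, here is a rough sketch in Python's hashlib (the document and the forged message are made-up examples, not anything from the article): the signature scheme signs the SHA-1 digest, so an attacker who can produce a second document with the same digest gets a valid-looking signature for free.

        import hashlib

        document = b"Pay Alice $100"
        fingerprint = hashlib.sha1(document).hexdigest()

        # The signature scheme (RSA, DSA, ...) signs `fingerprint`, not `document`.
        # If an attacker can craft a second document with the same SHA-1 fingerprint,
        # the signature on the original verifies against the forgery as well.
        forged = b"Pay Mallory $1,000,000"  # a real attack would need this to collide
        print(fingerprint)
        print(fingerprint == hashlib.sha1(forged).hexdigest())  # False here; a collision attack tries to make this True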
  • Re:Old (Score:5, Insightful)

    by Schraegstrichpunkt ( 931443 ) on Saturday January 20, 2007 @05:08PM (#17696884) Homepage
    Honestly, using SHA-512 is probably more secure than using a bunch of hashes concatenated together.
  • by malakai ( 136531 ) on Saturday January 20, 2007 @05:12PM (#17696920) Journal
    Fbzrgvzrf vg'f orfg gb uvqr va gur bcra.
  • A few facts (Score:5, Insightful)

    by Jerry Coffin ( 824726 ) on Saturday January 20, 2007 @05:21PM (#17696976)
    For those who care, Bruce Schneier gave some real facts [schneier.com] about the attack on his site a couple of years ago. As he pointed out:

    For the average Internet user, this news is not a cause for panic. No one is going to be breaking digital signatures or reading encrypted messages anytime soon. The electronic world is no less secure after these announcements than it was before.

    A short note [mit.edu] about the attack has been available for a couple of years as well. The note shows collisions for two different reduced versions of SHA-1.

    Though it's not absolutely certain, my guess is that the reality behind the new announcement is that they've actually found a collision for the full version of SHA-1, and possibly for MD5 as well. OTOH, maybe the mention of MD5 is just a journalist's hashed (no pun intended) version of the fact that SHA-1 is based closely enough on MD5 that an algorithm that's successful against SHA-1 will probably be effective with respect to MD5 as well.

  • by wfberg ( 24378 ) on Saturday January 20, 2007 @05:22PM (#17696978)

    It's only a matter of time before other hashes "fall" really - you're taking a large vector space, and mapping to a smaller one. You're in a "state of mathematical sin" relying on that for validation :-)


    Sure, hashes will always have collisions if (and only if) the input space is larger than the output space.

    Nevertheless, if a hash were perfect, there would be no more efficient way to find a collision than brute force.

    When people are designing cryptographic protocols, they always assume a perfect cipher, a perfect hash, etc.

    Typically, what these attacks mean is that someone found a shortcut, so that actually forging a signature or deciphering text takes less than brute force. How big a deal this is depends on how large the difference is, and also on whether it exposes any other weaknesses (e.g. 'if your input starts with 123, you'll always get the same hash, whatever comes next').
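
    For a sense of what the brute-force baseline looks like, here is a toy birthday search in Python; it truncates SHA-1 to 32 bits purely so the search finishes quickly (the full 160-bit version would take around 2^80 work), and it illustrates the generic attack, not the shortcut being reported.

        import hashlib
        import itertools

        def truncated_sha1(data: bytes, bits: int = 32) -> int:
            """First `bits` bits of SHA-1, as an integer (toy-sized so collisions are findable)."""
            digest = hashlib.sha1(data).digest()
            return int.from_bytes(digest, "big") >> (160 - bits)

        def birthday_collision(bits: int = 32):
            """Generic brute-force (birthday) search: about 2**(bits/2) trials on average."""
            seen = {}
            for i in itertools.count():
                msg = f"message-{i}".encode()
                h = truncated_sha1(msg, bits)
                if h in seen:
                    return seen[h], msg, i
                seen[h] = msg

        m1, m2, trials = birthday_collision()
        print(f"collision after {trials} trials: {m1!r} and {m2!r}")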
  • by Kadin2048 ( 468275 ) <.ten.yxox. .ta. .nidak.todhsals.> on Saturday January 20, 2007 @05:33PM (#17697030) Homepage Journal
    Here's what you really need to look out for: what's the NSA's reaction?

    In the past, it was widely understood that the NSA was well ahead of the private sector in terms of both encryption and decryption. During the 70s and 80s, the private sector basically closed the "encryption gap" and produced some ciphers that (at least most people suspect) are as secure as those used by the NSA.

    What's still an open question is how far ahead of the private/corporate sector the NSA is in terms of breaking other people's ciphers.

    Depending on the NSA's reaction, it might be possible to know whether or not this break was anticipated. If they're using SHA-1 internally, one can assume they didn't know about this discovery already, and they've fallen behind the position many folks assumed they had. If they just shrug and smile, then they may have already known about this (and possibly been using it) for some time now.
  • Re:Old (Score:5, Insightful)

    by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Saturday January 20, 2007 @05:37PM (#17697048) Journal

    Honestly, using SHA-512 is probably more secure than using a bunch of hashes concatenated together.
    Probably? I'll grant you that the output of SHA-512 is going to be longer than combining several small hashes, but I don't intuitively see that it's necessarily more secure. If there aren't any weaknesses in SHA-512, then it would have more security, but if there are weaknesses that could be exploited to find identical hashes, is that more or less difficult than exploiting weaknesses in multiple smaller hash functions?
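
    For what it's worth, here's what the two constructions being compared look like in Python's hashlib (just an illustration of the outputs, not a security argument either way); the concatenation gives 288 bits of output, but for iterated hashes, Joux-style multicollision results suggest the combination isn't much stronger than its strongest component.

        import hashlib

        data = b"the quick brown fox"

        # One modern wide hash:
        single = hashlib.sha512(data).hexdigest()

        # The home-made construction under discussion: MD5 (128 bits) || SHA-1 (160 bits).
        concat = hashlib.md5(data).hexdigest() + hashlib.sha1(data).hexdigest()

        print(len(single) * 4, "bits:", single)   # hexdigest gives 4 bits per character
        print(len(concat) * 4, "bits:", concat)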
     
  • Re:Snuffle (Score:4, Insightful)

    by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Saturday January 20, 2007 @05:42PM (#17697076) Journal
    While that's definitely interesting, it's still not the case that SHA-1 is an encryption scheme. I mean, if you encrypt all your data with SHA-1 then I suppose you ought to be really happy that researchers have found a way to potentially reduce the monumental decryption effort.
     
  • by Raffaello ( 230287 ) on Saturday January 20, 2007 @05:53PM (#17697140)
    There is no other way to protect unpopular views. The whole purpose of tenure is to allow scientists with new or minority ideas that are outside of the scientific/political/economic orthodoxy to continue to do research in spite of the fact that their work can't get wide publication. We make them prove that they are competent by meeting the extremely high standards of the tenure review process - getting tenure is no cake walk - then we give them the freedom to follow research avenues without regard to how popular that area of research is, and without fear that unconventional avenues or conclusions will cost them their job.

    Part of the price we pay for this is that some people will be lazy. Academia as a whole feels that this is worth the risk because:
    1. The tenure review process will screen out the overwhelming majority of the lazy people - you simply can't get tenure if you're lazy - it's too damn hard.
    2. Carrying a few lazy professors is more than worth the benefit of having a faculty that is unafraid to voice the truth as they see it without fear of reprisal from administration, established researchers in their field, powerful alumni, government, etc.
    3. Knowing what work will lead to something "useful" is tantamount to being able to predict the future. The idea that one can tell in advance where important breakthroughs will come from or where they will lead is a bean counter's fantasy. Therefore we have to trust that extremely competent scientists, when allowed to follow their own chosen research paths without coercion, will come up with important results. It's worked for us so far.
  • by RAMMS+EIN ( 578166 ) on Saturday January 20, 2007 @06:02PM (#17697200) Homepage Journal
    And here I was, thinking that Zonk had finally posted something great. I even jumped through hoops to get at the story, which I normally wouldn't have seen, because Zonk is on my block list. I guess I'll keep him there.
  • by Nemetroid ( 883968 ) on Saturday January 20, 2007 @06:12PM (#17697248)
    Slashdot is truly the only place where "Fbzrgvzrf vg'f orfg gb uvqr va gur bcra." can be modded "Insightful".
  • by Anonymous Coward on Saturday January 20, 2007 @06:25PM (#17697308)
    Unfortunately, this is not really true. There are already things that can be done to dramatically reduce the likelihood of buffer overflows as well as things like numeric (math) overflows and underflows. It is just that it is more work (and time) for the developers to do this.

    In our present business climate, it is better to ship a product without security and then do monthly patches than it is to design the product from the beginning so that it requires fewer patches. After all, the customer will not buy what is not on the market, and any unpatched holes only harm the customer, not the vendor.
  • by Anonymous Coward on Saturday January 20, 2007 @06:44PM (#17697404)
    Even if you have tenure there are still techniques to drop the dead wood. You will never get another raise and any means to make you miserable will be used if you fail to do good research.
  • by vakuona ( 788200 ) on Saturday January 20, 2007 @06:53PM (#17697446)
    I still think the fact that a hash algorithm is broken can be relatively unimportant. I mean, for your average Linux distribution, if you want to trick someone into using your 'fake' iso, you will have to change the bits you want to change to make certain software vulnerable, or malignant, and then you will have to make sure it gives the exact same checksum. You are not just looking for some collisions. The collisions have to be useful to you as well.

    My question is, how trivial is it to create, say, a binary that features the command "take over user's computer" whilst keeping the same hash as the original?

    The question I would ask myself is, what is easier: cracking the website where the program is stored and replacing the hashes with the hashes of my binary, or trying to come up with a working binary that has my misfeatures in it? I still think that if you can make things difficult enough, then you have achieved the objective. Isn't this the idea behind crypto/hashes anyway? They are not 100% foolproof, but the required effort is so great as to not be worth it.
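
    For reference, the checksum check being attacked here is second-preimage style, i.e. matching a digest that is already published; the collision results being reported only cover the case where the attacker chooses both files. A minimal sketch of the verification step (the file name and published digest are placeholders, not real values):

        import hashlib

        def sha1_file(path: str, chunk_size: int = 1 << 20) -> str:
            """Stream a (possibly large) file through SHA-1 and return the hex digest."""
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        # The digest would come from the distro's website or a signed release file.
        published_digest = "0123456789abcdef0123456789abcdef01234567"  # placeholder
        if sha1_file("distro.iso") == published_digest:
            print("checksum matches")
        else:
            print("checksum mismatch - do not install")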
  • by Anonymous Coward on Saturday January 20, 2007 @07:03PM (#17697502)
    The whole purpose of tenure is to allow scientists with new or minority ideas that are outside of the scientific/political/economic orthodoxy to continue to do research in spite of the fact that their work can't get wide publication.
    You do realize that the way you get tenure is by churning out incremental improvements to the existing orthodoxy. Take a chance on anything new and you risk not getting published and thus not getting tenure. By the time you make associate professor, most traces of innovation have been stomped out.
    -A Graduate Student
  • by Aim Here ( 765712 ) on Saturday January 20, 2007 @07:34PM (#17697690)
    Now you're confusing me. I think you're trying hard to say SOMETHING as a retort, because I pointed out how you made an ass of yourself in your previous post, but what you actually mean by this latest post I can't decipher.

    Wang Xiaoyun lives and researches in Beijing. Whether she's a communist or an anti-communist, I don't know, but the fact that both the Chinese government and its US-based enemies have published relatively uncritical articles on this research does tend to give it a bit of credibility; you desperately want to dismiss this as some sinister Chinese propaganda, but when the propagandists on both sides of the fence say the same thing, then it gets a bit confusing as to what sort of propaganda we're talking about here. Maybe there's no propaganda angle here at all; maybe this is (shock) news!

    Now the article is pretty badly written, but the news in it seems perfectly plausible; the same researcher was, after all, one of the authors of the peer-reviewed attack in a European journal that discovered ways of constructing collisions in MD5, and has appeared at a crypto conference with collisions on the MD4 scheme. Why don't you think she's able to crack SHA-1? Because she's Chinese? Because she's in a country with communists in it? Because some anti-communists wrote a newspaper article about her? Because SHA-1 is sooper-seekrit NSA stuff that is uncrackable?

    Give up now, please. You're flailing.
  • SHA-2 is a new family of hash algorithms. But that's kind of like saying that Twofish is a new cipher algorithm that isn't Blowfish. Realistically, if someone finds a major flaw in Blowfish that wasn't anticipated in the design of Twofish, it's quite possible that Twofish has the same flaw because they're built along the same lines, despite being different algorithms.

    The SHA-2 family is designed by the same people who designed the SHA-1 algorithm, and they were designed before the flaws in SHA-1 were discovered. And from what I understand, the internal structure of SHA-1 and algorithms in the SHA-2 family are very similar.
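
    To be clear about names: what's usually meant by the SHA-2 family is the set of fixed-output-length functions SHA-224/256/384/512, all available side by side in something like Python's hashlib (just an illustration of the family members and their output sizes, not of their internals):

        import hashlib

        data = b"hello, world"
        for name in ("sha1", "sha224", "sha256", "sha384", "sha512"):
            digest = hashlib.new(name, data).hexdigest()
            print(f"{name:>6}: {len(digest) * 4:3d} bits  {digest}")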

  • by fyngyrz ( 762201 ) * on Saturday January 20, 2007 @08:47PM (#17698162) Homepage Journal
    Is [goatse.cx] that [tubgirl.com] so [lemonparty.org]?

    Absolutely. I'm not in the least offended by what other people choose to do to themselves and with intelligently consenting partners. Amused sometimes, but not offended. I'm only offended by what people do to non-consenting partners or partners who cannot consent in a reasonably intelligent fashion. And in such cases, it is useful to know what is going on.

    And technology does do bad things, for one we're helluva lot better at polluting the planet than we were without technology

    You said yourself: "we're helluva lot better at polluting the planet"... the culprit isn't technology. The culprit is people. Technology can clean up pollution, even eliminate it at its source in some cases. You're blaming the gun for the thoughts and actions of the person who decided to fire it, which is wrong. Guns and technology have no way to say "No, wait, don't do that!" It's not the same as when Bush orders a cop to pick someone up without a warrant; the action is evil, and the cop is evil for obeying because that cop could (and should) have said "no, this is wrong" and aborted the process. The lesson is: You can't blame intermediaries in any human action unless those intermediaries are also human.

    Or another totalitarian regime backed up by massive databases, computer checks and surveillance cameras. KGB or Stasi would just drool over the possibilities they'd have today.

    Well, we call that the Government of the United States of America; they used to be controlled by a document we call the constitution, which laid a very nice groundwork for a government, but that era appears to be completely over.

    Witness Commerce clause absurdities, 2nd amendment erosion, ex post facto law and punishment, phone tapping, mail opening, "free speech zones", theft of land for tax revenue, government backing of religion in multiple venues, loss of habeas corpus, torture... and all these changes made in how we operate without the (supposedly) required constitutional hoop-jumping. The only question that remains is, what new way will they find to foul our nest?

    How close are we, really, to becoming something that in no serious way resembles what the founders put in place? As this happens, from where does the government derive its authority? If it won't obey the constitution (and that seems very clear indeed), then how is the government going to justify any action it takes? I really don't understand how a government official can look a run of the mill citizen in the eye today. But again, we're talking about the actions of human beings, not the capabilities of a government. Just because you have databases doesn't mean you have to make no-fly lists; you could have a list of people who need cancer surgery, instead.

    Technology, inanimate objects, ideas - even horrifying ideas - these aren't the enemy. People whose ethics don't take other people's rights into account, or whose canned ethics are based on apocalyptic religious bullshit, like G. W. Bush's, those people are the problem.

  • Re:Old (Score:5, Insightful)

    by CryBaby ( 679336 ) on Saturday January 20, 2007 @10:28PM (#17698686)
    I'll grant you that the output of SHA-512 is going to be longer than combining several small hashes, but I don't intuitively see that it's necessarily more secure.
    Intuition doesn't have anything to do with it. SHA-512 has not been cracked and so it meets the definition of a "secure" hash function. Concocting your own recipes, especially based on hash functions currently known to be insecure, is a classic mistake made by non-cryptographers.

    WEP is a good example of what happens when non-cryptographers decide to make up a cryptographic function.
  • by diablomonic ( 754193 ) on Saturday January 20, 2007 @11:37PM (#17699082)
    there is no anti-Bush propaganda machine, only truth...

    (actually I don't completely believe that. Almost EVERYTHING on mainstream news seems to be propaganda from one group or another to me. It's just that where Bush is concerned, they don't really have to try very hard)

  • by Metasquares ( 555685 ) <slashdot.metasquared@com> on Sunday January 21, 2007 @12:40AM (#17699454) Homepage
    You're making some assumptions: first, that teaching is not worth compensation and job security; second, that the value of research will be immediately recognized by the scientific community; and third, that the research process is instantaneous and requires little effort.

    In actuality, great ideas sometimes fail to gain recognition by the community for years and the research itself can take months to years to perform before any worthwhile results are available. I am of the opinion that it is impossible to objectively evaluate the worth of an idea in the first place, but this philosophy notwithstanding, the "worth" of an idea, which I will define for simplicity's sake as its usability, seldom remains constant over time. How would you propose to compensate someone for doing research of still-indeterminate impact?

    You also fail to consider the career from a professor's perspective or you would not dare call academics lazy, but I address that in the longer response to your parent post, as it is not an effective rebuttal to your argument so much as an apology for the academic profession and way of life.
  • by Metasquares ( 555685 ) <slashdot.metasquared@com> on Sunday January 21, 2007 @01:00AM (#17699538) Homepage

    Here's that longer response/apology I promised below:

    The argument I hear implicit in your words, that professors should be compensated for their research activities, is one I support. However, as I mentioned below, this is often not feasible because the "worth" of one's research is not always immediately apparent. Additionally, you are referring to tenured academics as lazy, which I simply cannot countenance. You glorify something that you do not understand. Therefore, though I am only a Ph. D. student at the moment, I wish to share my view (doubtless with its misconceptions) of the career as an aspiring academic:

    Becoming a professor is not a career decision to be taken lightly and it is not for the lazy; it truly is something that must be born of a devotion to the pursuit of knowledge to the exclusion of almost everything else. The training process required to get a Ph. D. is lengthy, difficult, and generally unrewarding. True, we are generally funded while graduate students, but the funding is paltry, requires a TA or RA position at the institution unless you are fortunate enough to obtain a fellowship, and carries an expectation to devote every moment of our time to our studies and research. Even fellowships contain clauses prohibiting us from working without permission of the dean. Following a successful defense, most professors must undergo a more difficult and only slightly more rewarding postdoctoral position. These do not necessarily lead to tenure-track positions; approximately 10% will be offered assistant professorships, which carry an average salary of $44,939. In other words, after I complete my Ph. D. and a postdoc, I can look forward to starting at about $10,000 less per year than I would with most jobs I could attain right now with only a bachelor's degree in CS if I happen to be in this fortunate 10%. This is despite all of the work I have published without demanding anything in return (indeed, such work is expected). If I please my superiors and bring lots of grant money in for my institution (which involves writing a lot of proposals I'd rather not be bothered with, as they interfere with my research and other duties), I may eventually be granted tenure and perhaps rise in academic rank.

    We are not compensated for publishing our research, so unless we choose to patent our innovations, our salary is our sole source of income.

    A lazy person would not get this far. Anyone capable of enduring that much to reach this point is dedicated enough to the pursuit of knowledge to continue of his own accord because it is truly what he wishes to do.

  • by fyngyrz ( 762201 ) * on Sunday January 21, 2007 @02:25AM (#17699948) Homepage Journal
    What you're saying is if the bullets reach the right people for the right reason then guns can be good, but if the slugs hit the wrong person or for the wrong reason then they're bad.

    No. I'm not saying that at all.

    I'm saying that people are good or bad, people's actions are good or bad, and it hasn't got a single thing to do with cars, bullets, or highways. That's just evasive nonsense, mumbo jumbo from addled thinkers (or those seeking to escape responsibility.) We're human. We can choose. Choose well, and bear responsibility for good; choose poorly, and bear responsibility for bad. Technology isn't the culprit here. It's you. It's me. It's people.

    People make choices. They're responsible for those choices. Highways, guns and communications are not. Any philosophical mumbo jumbo that says the more choices are available, the more blame the choices carry is completely and utterly worthless. Likewise, when technology can amplify a choice we make, we carry additional responsibility; the technology carries none at all. This has been true since the first rock was used with intent to kill.

    Responsibility is the lost idea in modern civilization. People do anything to avoid it, to slough it off onto someone else. Well, I'm here to tell you straight out that the existence of a gun makes you no less culpable when you kill someone because it is physically easier to do, and no more respectable when you refrain in the face of whatever tempts you. It is no more or less about you and me than it was a thousand years ago. Science and technology are neutral. We have the power to turn them in either direction. We always have. There's no one here but us, and objects don't make choices. As the power is ours, so is the responsibility. 100%.

    Also: If you let media change your mind, that's your responsibility. Media can only be "active" through your actions. In other words, you can always choose. Some choices are more difficult than others, certainly, but who ever promised you an easy ride? If anyone did, they were lying and you were a fool to believe them. Just about every choice you make carries responsibility with it. There's no way out. You can't blame the Internet, highways or weapons for your problems. Your problems come from human sources, at least those that aren't sourced by the ongoing processes of nature. Technology, science... these are the last places to look to place blame.

  • Re:NSA (Score:3, Insightful)

    by ultranova ( 717540 ) on Sunday January 21, 2007 @06:29AM (#17700826)

    * Doofuses? Just look how well that has worked out for their fellow Muslims... their 70 virgins are probably going to turn out to be 70 desperate truckers with a taste for the dark meat...

    You are making an incorrect assumption here: that Osama's purpose was to benefit his fellow Muslims. It was not. It was to destroy the "infidels" (meaning every non-Muslim, but especially the USA). The way to do that (in his mind) is to start a jihad, a holy war. Now which one is more likely to throw their life away in a suicide attack: someone whose kids have just been killed by US occupation forces, or someone who's busy bringing them up?

    Osama bin Laden is an evil man, a monster who is perfectly willing to inflict suffering and death on his fellow Muslims to serve his ends. He is not, however, stupid. He set a trap, and Bush walked right into it. Bush is the doofus here. Or maybe I'm underestimating him, and he's just playing the same game as Osama...

  • by Courageous ( 228506 ) on Sunday January 21, 2007 @12:52PM (#17702734)
    Not to mention federal drug laws. It required an Amendment to make alcohol illegal in the states. Where's the Amendment authorizing federal drug laws??? There is none.

    Conclusion: we barely have a Constitution any more. It's hanging on by a mere thread.

    C//
  • by fyngyrz ( 762201 ) * on Sunday January 21, 2007 @05:18PM (#17704792) Homepage Journal

    But, if you could do something so that people were not able to make the bad choice at all, would you do it?

    As a direct answer, probably not. I'm not sure that you can prevent choice in any case, or execution of choice (action.) If you try, they'll probably fight you on principle and do it anyway, find a way around the "safeguards", etc. You can react when people make a choice and take action on it; and in many cases, you should. In my view of the optimum world, my rights end where yours begin, and if I step over that line, society has a good case to get rid of me.

    Suppressing choice, either by law or by technology, has a way of going afoul of many things, not the least of which are personal liberty and people's safety.

    In the extreme case, a guy with a gun is robbing a bank and has hostages. Now, he can choose to shoot the hostages, or he can choose not to shoot the hostages. If you had the opportunity to shoot the robber dead so he can't choose to shoot the hostages, would you?

    I would even shoot through the hostages to take him out. Any time hostages are used successfully as a line of defense, more hostages will be taken as part of the lesson learned in that event. The robber is outside the pale; he has violated the rights of others by extending his actions where they must not go. He's a valid target now. What happens to the hostages is a consequence of his choice to take them, and of the fact that if they are treated as an impediment to his apprehension or elimination, hostages will be used to hurt others in the future. In other words, if taking hostages never works, and further, makes it even rougher on the hostage-takers, very few people will take hostages.

    People can choose to drink and drive or not drink or not drive. If there was an inexpensive, perfect piece of technology that was convenient and stopped some people from driving drunk and never stopped sober people from driving, would you require people to install it in their cars?

    No. There may be valid reasons why a person may need to drive drunk to save lives, move their vehicle around on their own land, etc. My take is that driving needs to be an action (like 99% of all actions) where a person's responsibility is to avoid trampling on the rights of others, knowing that society has severe consequences prepared if that line is crossed. Drinking isn't a problem. Driving isn't a problem. The combination isn't a problem. The problem is when other people's rights are trampled upon. So trying to use technology to eliminate drinking and driving is the wrong path. In my opinion.

    Yes, people have choice. But some people will choose to do bad things. Saying that the murderer is responsible for killing the victim doesn't stop people from killing victims.

    No, it doesn't. Neither do laws, neither will any technology I am aware of. However, eliminating the criminal will stop them from doing it again, and as far as I am concerned, that is the right choice as soon as we can be sure we have the right "criminal." At this time, I do not support the death penalty because we make so many mistakes in identifying the perpetrator. Life imprisonment unless they can prove they didn't do it, instead. The very day we can know they did it, we kill them.

    When thinking of a (presently imaginary) technology used to "stop killing", it is also important to realize that there are many valid scenarios that involve killing. If you enter my home in the dead of night, you've violated my rights and I can kill you. If you attack my family on the street, you've violated their and my rights and I can kill you. If you've taken hostages, you've violated their rights and I can kill you. If you are about to poison a water source, you're going to be violating many people's rights, and I can kill you to stop you. If you attempt to hijack an aircraft, you've violated the other passengers' rights and I can kill you.

  • Re:Old (Score:5, Insightful)

    by CryBaby ( 679336 ) on Sunday January 21, 2007 @05:23PM (#17704828)
    I can't tell you if SHA-512 is stronger than some combination of hashing functions you might come up with. The reason I can't tell you is because I'm not a cryptographer, which is my point -- neither are you.

    What I can tell you is that actual cryptographers are researching SHA-512 and, so far, it's held up pretty well. No one is researching your custom hashing recipe. It might be fantastically strong, but, if history is any indication, it's more likely to be highly vulnerable to an attack that you didn't think about.
