Poor, Homegrown Encryption Threatens Open Smart Grid Protocol
An anonymous reader writes: Millions of smart meters, solar panels, and other grid-based devices rely on the Open Smart Grid Protocol (OSGP) for communication and control; it plays a role for those devices similar to the one SCADA plays for industrial systems. But new research shows that its creators made the common mistake of rolling their own encryption, and did a poor job of it. The researchers believe this threatens the entire system. They say, "This function has been found to be extremely weak, and cannot be assumed to provide any authenticity guarantee whatsoever." Security analyst Adam Crain added, "Protocol designers should stick to known good algorithms or even the 'NIST-approved' short list. In this instance, the researchers analyzed the OMA digest function and found weaknesses in it. The weaknesses in it can be used to determine the private key in a very small number of trials."
Homegrown (Score:4, Funny)
Damn those eco-friendly nuts always trying to grow things at home!
Re: (Score:3)
In fact, we've often seen these "experts" exhibit hostility toward anyone who dares to question them and their systems, even when the questioner has found massive and easily exploitable security flaws.
[Citation needed]
If you know what you are doing and are competent to design crypto primitives, you've probably had a lot of practice, you probably want to meet others with similar interests, and so you've probably been to the odd IACR conference (CRYPTO, EUROCRYPT, ASIACRYPT, etc.) to meet those people. You would know where to publish your algorithms to receive serious cryptographic review from your peers.
Anyone can be a dick, but being a competent crypto primitive designer entitles you to behaviour that from the outside looks dickish, but is really just a case of telling the truth to people who don't know what they don't know.
Re: (Score:2)
> Anyone can be a dick, but being a competent _____________ entitles you to behaviour that from the outside looks dickish, but is really just a case of telling the truth to people who don't know what they don't know.
I think that's true of a great many specialties, even when you're trying hard not to look dickish (as I think we generally should).
Re: Homegrown (Score:5, Insightful)
There's an implicit "unless you *really* know what you're doing" in the sentence, which just tends not to be the case for most people, mainly because most people aren't crypto nerds, and the consequences of failing at crypto are typically serious. Much more serious than doing your own science at home (provided you aren't going nuclear; "don't do your own nuclear science at home" doesn't sound so absurd, does it?) and ending up with wrong results, or composing music and ending up with horrible garbage.
Re: Homegrown (Score:5, Insightful)
There's an implicit "unless you *really* know what you're doing" in the sentence, which just tends not to be the case for most people
It's not the case for any people. You won't see professional cryptographers rolling their own crypto and using it, either.
I'm not a cryptographer myself, but I am a very experienced cryptographic security systems engineer, and I work with a bunch of serious cryptographers, who are well-published and extremely well-respected in academic circles -- exactly the sort of people who you'd expect to be most capable of designing and building custom systems. And you know what? They don't.
And I'm not just talking about creating new ciphers. Even when I go to them with novel requirements that seem to demand some sort of new construction using existing algorithms and techniques, the very first thing they do is go to the literature to see what has been done, how long it's been in use, how widely it's been reviewed and analyzed, etc. The less knowledgeable (like me, frankly, though I'm getting better) tend to start by cooking up some new scheme. Real experts avoid that if at all possible, and if they have to do something new they look really hard at how they can prove its security by reducing it to known constructions.
Even the guys who do create new ciphers do it with great care, often spending years designing and attacking and tweaking, and then their next step is to publish it so others can attack it. Only after it has survived lots of other review does anyone, especially the author, begin to trust it for real use. But the most common outcome, when something new is designed, even by serious experts, is that it gets broken shortly after publication. It's quite common for new algorithms and constructions to be broken at the same conference they're initially presented.
I reiterate: No one who knows what they're doing creates new crypto for production work.
Moreover, people who know what they're doing even approach implementation of known and trusted algorithms with trepidation! There are so many very subtle things you can get wrong. Heh, just last week someone pointed out that my implementation of a constant-time memcmp had a subtle bug that caused it to be not quite constant-time on some architectures. Novices have no idea why it even matters in crypto that memcmp always run in the same amount of time for a given buffer size, irrespective of the contents of the buffers, and assume that the C library's memcmp is fine. More knowledgeable engineers know why it matters, but really deep expertise is required to get it right. That's just one tiny example. My primary mistake wasn't the bug in my implementation, it was trying to write memcmp at all. I should have found a well-vetted implementation and used that.
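To make the timing point above concrete, here is a minimal sketch (an editorial illustration, not the poster's code) of the difference between a naive early-exit comparison and simply reusing the standard library's vetted constant-time comparison. Only the Python standard library is assumed; the function names are made up for the example.

```python
# Editorial sketch: the timing side channel in a naive byte comparison,
# and the well-vetted alternative. Standard library only.
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns as soon as a byte differs, so the running time leaks how
    # long the matching prefix is -- the classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # The "find a well-vetted implementation" route: the standard library
    # already ships a comparison designed to resist timing analysis.
    return hmac.compare_digest(a, b)

assert constant_time_equal(b"expected-mac", b"expected-mac")
assert not constant_time_equal(b"expected-mac", b"forged-mac!!")
```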
Doing your own crypto is nothing like doing your own science or doing your own music. The thing about security is that it's only as strong as the weakest link; the tiniest crevice can give the attacker a wedge to bust your system wide open. Other fields are forgiving of minor flaws, you can do useful and interesting work even if it has some defects. In security, and crypto is often at the heart of security solutions, one tiny mistake can render the totality of what you did not only useless, but actively dangerous to your users.
If writing a good secure memcmp is too hard for an engineer with 25 years' experience, including 20 years doing cryptographic security, what does that say about trying to write something that doesn't appear to be trivial? Crypto is hard. Really, really hard. The more you learn about it the harder it gets, because you understand more about what can go wrong.
Re: (Score:2)
I wish I had mod points to give to you.
This is the truest and best-written comment I have seen posted to Slashdot in a very long time.
Thanks.
Re: (Score:2)
QFT ... that's all.
GSM Rolling their own - Malice or Incompetence? (Score:2)
GSM rolled their own crypto. They depended on obscurity to protect their algorithm. Somebody handed a copy to Ian Goldberg, then a grad student at Berkeley, and the reason it took him three whole hours to break it was that the Chinese restaurant near campus was having the good lunch special that day.
It was a weak enough algorithm (designed in electrical-engineer-math style, which is fine if you want checksums for reliability) that I'll give them credit for incompetence, though the fact that 10 bits of the key were always set to zero suggests a bit of malice as well.
Re: (Score:3)
No one who knows what they're doing creates new crypto for production work
In the general case, I fully agree with you. However, the successful suites of cryptography software were written by someone, presumably someone who knew what they were doing, so I'd wager the statement is a bit overbroad. Might I suggest: "Only a tiny fraction of people who know what they are doing even manage to do it successfully."
No.
The statement is precisely accurate, not overbroad at all. Yes, the suites were created... but not for production use. All of the bits and pieces were created first, then analyzed and attacked for years, and only then put into production.
And as the raft of SSL implementation and protocol bugs over the last year demonstrates quite conclusively, many of them are still put into production too soon.
Re: (Score:2)
Unless there's some interaction between the two, such that one reverses some property of the other, the combination should be at least as secure as either. I don't expect that would be the case with any of the important algorithms.
However, you're not going to gain any security by doing that, either, unless you're hedging against the possibility that one of the two gets completely broken. I don't think that's very likely with something like AES. If you really want to make sure you're achieving good security...
Re: Homegrown (Score:5, Insightful)
When an algorithm has been in use for years, especially by companies that value the security of their data and where there is a huge incentive for third parties to break that security (say, financial institutions, insurers, journalists, governments), you may rest assured that they have all hired various, very different people to check whether that algorithm is actually as secure as it is claimed to be. Not to mention a lot of other experts who do it on their own time for the sake of being the one who found a critical flaw in it (which is usually a HUGE boost in credibility and peer esteem, which can easily be converted to more income and better jobs).
There are a LOT of people with nothing better to do than testing those cryptosystems. You may rest assured that thousands of hours have been thrown against common encryption standards. And while many organizations (governments especially) would of course keep such findings secret, to exploit them while their opponents keep using the broken algorithm and consider themselves secure, most others would publish them (especially pretty much every security company out there), just to be the one that broke it. As I said, if you could show, for example, that you broke AES, your company would become the de facto security industry leader.
The hostility you may observe is less about people who "dare" to muscle in on our territory. We're in general quite open to criticism, and we do want to hear about new algorithms; they drive the industry. When PFS became a reality, we were more than happy to embrace it because it offers a decisive increase in security: it took away a single point of failure and meant a lot more effort on the side of an attacker, which boosted security considerably.
What irks us is the snakeoil peddlers that litter the industry. Idiots who make impossible claims, knowing that most of this "security stuff" is not really easily understood by someone who isn't privy to the inner workings of encryption. So it's easy for some smooth talking con artist to sell them anything as long as it's sprinkled with buzzwords and makes outlandish claims about the key size. And here the old meme actually is true: It's not the size that matters.
And this is why we're usually wary of anything "homegrown". No "self-made" security system so far has come close to the security of the tried and tested systems. And here, again, first and foremost because of the old axiom that nobody has ever managed to crack his own security. By definition: if he could, he would have built it differently. So even if the "homegrower" were a top-level security expert... think Rivest, Shamir, Schneier and Adleman rolled into a single brain... he himself could not sensibly test his own security system, because he will never, ever find the flaws in it.
And the even bigger problem: few of those who build such a "homegrown" system come close to even one of those four, let alone all of them. And I'm dead sure those four would reach for one of the old, tried algorithms, not least because most of them were actually developed by one of those people (or, as with RSA, three of them).
Re: (Score:2)
On the flip side, there may be something to be said for a certain amount of security through obscurity, provided it doesn't interfere with battle-hardened security; after all, some of the security holes found by those many eyes will be kept secret or sold on the black market by nefarious actors. A narrow-purpose algorithm is going to have far fewer eyes, good and bad, looking for security holes.
BUT, for Gandalf's sake, use your custom algorithm as a second layer of defense behind a well-tested one, not instead of it.
Re: (Score:2)
A little extra obscurity (almost) never hurt anybody... as long as it's backed up by real security TOO. It's really "obscurity instead of security" that is the demon to vanquish here.
Re: (Score:2)
One-Time-Pad is 100% pure security-through-obscurity, and nothing beats it if you have leakproof key management. But that last is a "little detail" that can be pretty hard to achieve.
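For reference, a minimal one-time-pad sketch (an editorial illustration, not from the comment): the mechanics are a single XOR, and all of the real difficulty lives in that "little detail" of generating, distributing, and never reusing the pad. Standard library only; the function names are made up.

```python
# Editorial one-time-pad sketch: the cipher is a single XOR; the hard part
# is generating, distributing, and never reusing the pad.
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    pad = secrets.token_bytes(len(plaintext))   # truly random, as long as the message
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad                      # the pad must be used exactly once

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

ct, pad = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, pad) == b"attack at dawn"
```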
Re: (Score:2)
Quite the opposite.
Part of my job is doing just that. Performing security audits to test the stability and robustness of security algorithms and implementations thereof. We are some of the few good eyes that look for security holes.
And therein lies the problem: not only are we "good" guys probably fewer in number, we also have less time than the "bad side". Less time simply because you cannot sensibly afford to employ us for the dozens of days it takes for a sensible black box test. A black box test tests
Re: (Score:2)
Spoken out of true ignorance.
Obscurity doesn't work for *any* form of security; someone will figure it out and then it will be broken.
Good security can be published and peer reviewed and is *still* secure.
The only thing that should be obscure is your encryption key.
Re: (Score:2)
Nonsense. It may not add *much* security, but that depends entirely on the resources of your attacker. Have you never hidden a house key somewhere outside in case you lock yourself out? Classic case of security through obscurity (though in fairness, that's probably one of the stronger links in your home security). Ditto secret passages across the ages. Your security *will* be broken; if it's important, then you need to implement multiple independent layers and pray that the weaknesses in one will be found...
Re: (Score:2)
Hmm, sounds plausible. Knew there was a reason I stay away from security beyond avoiding buffer overrun potential, etc. (I mean, aside from it encouraging unhealthy paranoid tendencies)
But... it seems like your raw data would be protected from such side-channel attacks if your home-grown encryption were the *first* line of defense instead: any vulnerabilities would then only expose the battle-tested encrypted stream, would they not? That might even improve the penetration resistance of your own algorithm
Re: (Score:2)
But... it seems like your raw data would be protected from such side-channel attacks if your home-grown encryption were the *first* line of defense instead: any vulnerabilities would then only expose the battle-tested encrypted stream, would they not?
I'm not sure what you're thinking of as "first" (which direction), but if you did something like this you'd want to make sure that AES, or similar, is what operates directly on your actual data. "Encrypting" the already AES-encrypted ciphertext with your homegrown thing can't reduce the security of AES, and can't expose anything via side channels because all it would expose is the AES-encrypted data.
OTOH, you're really, really unlikely to gain anything at all by doing that. If you want to defend yourself
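As a rough sketch of the ordering being described (an editorial illustration, not code from the thread): the vetted cipher operates directly on the plaintext, and the homegrown layer only ever sees ciphertext. It assumes the third-party Python `cryptography` package, and the deliberately worthless XOR below stands in for the "homegrown thing"; all names are made up.

```python
# Editorial layering sketch: the vetted AEAD (AES-GCM) touches the
# plaintext; the "homegrown" layer (a deliberately worthless XOR here)
# only ever sees AES-GCM ciphertext. Assumes the third-party
# 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def toy_outer_layer(data: bytes, key: bytes) -> bytes:
    # Stand-in for a homegrown layer; it adds no real security and is its
    # own inverse. It exists only to show the ordering.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt(plaintext: bytes, aes_key: bytes, outer_key: bytes) -> bytes:
    nonce = os.urandom(12)                              # unique per message
    inner = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    return toy_outer_layer(nonce + inner, outer_key)    # outer layer wraps ciphertext only

def decrypt(blob: bytes, aes_key: bytes, outer_key: bytes) -> bytes:
    unwrapped = toy_outer_layer(blob, outer_key)
    return AESGCM(aes_key).decrypt(unwrapped[:12], unwrapped[12:], None)

k_aes, k_outer = AESGCM.generate_key(bit_length=256), os.urandom(16)
assert decrypt(encrypt(b"raw data", k_aes, k_outer), k_aes, k_outer) == b"raw data"
```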
Re: (Score:2)
Yes, that is the order I meant - I didn't think there was any ambiguity in my phrasing. The first line of defense is the first defense an attacker encounters. Just as the last line of defense is your last chance to repel attackers before being overrun.
> if your system is insecure I guarantee AES won't be the reason
In the sense that an easily-picked lock is typically the most secure element of home security, I'm inclined to agree.
Re: (Score:3)
Troll? What else would make a public utility monopoly change its ways?
Head/desk... (Score:5, Interesting)
The least you can do is implement a real algorithm but screw it up somehow (key handling is always a good place for that); but just making one up? How did they sneak this past a standards body?
Re:Head/desk... (Score:5, Insightful)
The least you can do is implement a real algorithm but screw it up somehow
That's why the best recommendation is to use not only the approved algorithm, but also the standard implementation. Don't get cute, don't try to optimize it, just use it as is. AES was required to run on embedded systems 13+ years ago; any modern chip should have zero problem running the standard C implementation today.
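In that spirit, a minimal "just use it as is" sketch (editorial, with assumed names): reach for a library's high-level recipe instead of wiring primitives together yourself. It assumes the third-party Python `cryptography` package, whose Fernet recipe bundles the cipher, MAC, and IV handling behind one reviewed interface.

```python
# Editorial "use it as is" sketch: call a widely reviewed high-level
# recipe instead of wiring primitives together yourself. Assumes the
# third-party 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                     # url-safe base64-encoded key material
f = Fernet(key)

token = f.encrypt(b"meter reading: 42 kWh")
assert f.decrypt(token) == b"meter reading: 42 kWh"   # raises InvalidToken if tampered with
```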
Re: (Score:3)
There are many MCUs that have hardware acceleration for AES-256 and SHA-256, so there's no real performance-based excuse not to use them.
Re: (Score:1)
Ah; but defense-in-depth says "Assume you're screwed no matter what you do. Now that you're in the right mind space, which implementation is easiest to mitigate when things go wrong?"
I can tell you right now: the answer isn't roll-your-own, and it's not "optimize the reference implementation" either.
Re: Screwed either way these days. (Score:1)
There have never been any problems with the algorithms in OpenSSL. It's only the SSL part that has shown weaknesses.
Re: (Score:2)
There have never been any problems with the algorithms in OpenSSL. It's only the SSL part that has shown weaknesses.
That's because the algorithms are the easiest parts to get right (they are mostly just math and have been successfully ported and rewritten into many different programming languages many times). How you use the algorithms is the biggest part of the problem (e.g., actually getting random numbers to use as keys, or not having buffer overrun issues).
One of the issues with OSGP is that they used RC4 incorrectly (well, to be fair, they probably weren't up on the weaknesses of RC4 [wikipedia.org] discovered in 2001), in addition
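On the "actually getting random numbers to use as keys" point, a quick sketch of the distinction, using only the Python standard library (variable names are made up for the example):

```python
# Editorial sketch: keys must come from a cryptographically secure source,
# not a general-purpose PRNG. Standard library only.
import random
import secrets

# Wrong: the Mersenne Twister is predictable once enough output is observed.
weak_key = bytes(random.getrandbits(8) for _ in range(32))

# Right: the OS CSPRNG, via the module added for exactly this purpose.
strong_key = secrets.token_bytes(32)
```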
Dear Open Smart Grid Ppl (Score:2)
Read this - pay attention to the interpretive dance requirement:
http://www.moserware.com/2009/... [moserware.com]
K/Tnx/Bai,
Min
Re: (Score:2)
Even if you use existing crypto algorithms, such as with Java, verifying that a crypto key has not been tampered with is non-trivial. I've been working on such code over the past few months, and what I found is that you need to deliver a set of approved root certificates with your application so that you can verify the certificate chain. That makes delivering an application an on-going maintenance headache compared to just using the crypto and hoping you don't get hit by a man-in-the-middle attack.
Give
Re:Head/desk... (Score:5, Insightful)
Every crypto is roll your own at some point in its life
At that point in its life, it has no place in new protocol specifications or production systems; not until the new crypto has been published and disseminated to the community, and found to have no flaws that any researchers can find.
Even for something as simple as AES it's a chore to find an open implementation that's actively being maintained
No... there are many implementations that are actively maintained; your implementation doesn't have to be open source, so long as it implements the specified ciphers and hashes correctly....
It is a small price to pay to use standardized ciphers or standardized implementations known to be secure, instead of risking everything on a homegrown cipher.
Re: (Score:1)
Even before Linux was around, there were encryption algorithms with reference implementations as part of UNIX directly, or built as source code:
One of the first was crypt(1). It isn't secure by today's standards, and actually there was a tool called Crypt Breaker's Workbench... but one file encrypted on Solaris would work on IRIX.
Another was des(1). This came around in the early 1990s and had two implementations, one in ECB mode. This eventually became part of base UNIX distributions, but was superseded by
Re:Head/desk... (Score:4)
WEP used a standard algorithm - RC4. They just accidentally screwed it up because of the way RC4 works (related to key handling and IVs).
A homegrown algorithm for WiFi is TKIP, created because RC4 had hardware acceleration while AES didn't at the time; it leverages the existing hardware crypto, alongside several protections meant to mitigate the shortcomings that had been found.
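The WEP failure mode generalizes to any stream cipher: reuse the key and IV (nonce) and the XOR of two ciphertexts equals the XOR of the two plaintexts, with no key recovery needed. A short editorial sketch, with AES-CTR standing in for RC4 since it fails the same way under nonce reuse; it assumes the third-party Python `cryptography` package.

```python
# Editorial sketch of keystream reuse: encrypt two messages under the same
# key and nonce with a stream cipher (RC4 in WEP; AES-CTR here) and
# c1 XOR c2 == p1 XOR p2, leaking plaintext structure without any key
# recovery. Assumes the third-party 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)     # the bug: the nonce is reused below

def ctr_encrypt(msg: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(msg) + enc.finalize()

p1, p2 = b"ATTACK AT DAWN!!", b"RETREAT AT DUSK!"
c1, c2 = ctr_encrypt(p1), ctr_encrypt(p2)

xor_of_ciphertexts = bytes(a ^ b for a, b in zip(c1, c2))
assert xor_of_ciphertexts == bytes(a ^ b for a, b in zip(p1, p2))   # keystream cancels out
```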
AES is fixed by standard. There is no need to "maintain" it - as long as the code compiles properly you're done.
And for AES, because it's an official encryption algorithm, NIST has the official specification document [nist.gov], and the original author has the reference code [efgh.com].
Of course, the vast majority of people will just use OpenSSL or LibreSSL, being BSD licensed and all that. Even on embedded systems there is often a reference AES implementation.
That alone should be disincentive to roll your own algorithm - the fact that the standard ones are available everywhere for practically no cost and very little effort. Why write your own algorithm when you can copy and paste an existing one in? Even the lazy should see the benefits.
Re: (Score:2)
>AES is fixed by standard. There is no need to "maintain" it
Provided the implementation is flawless of course. After all, a lot of maintenance is bug-fixing rather than updates.
Re: (Score:2)
AES is fixed by standard. There is no need to "maintain" it - as long as the code compiles properly you're done.
Not if you care about security. What about side channel attacks? What about leaking sensitive data into the heap? What about performance? On various architectures? I could go on.
Merely compiling and generating correct test vector outputs is far from all an implementation has to do to be good. Luckily, there are good implementations out there, easily obtained and under generous licenses.
Re: (Score:2)
I'm assuming the MPAA spent good money creating their stupid protocols too, from the Content Scramble System to HDCP.
Re: (Score:3)
Only one elliptic curve method, using a set of parameters that may have been chosen by the NSA, is at risk. Elliptic curve cryptography in itself is still secure, as far as I know.
Re: (Score:2)
How dare you bother this AC on his crusade against the evil encryption lobby with facts?
Re:don't fall for it. (Score:4)
The "pre-broken" Eliptic curve technology is a backdoored random number generator, not a cipher.
It is not ECC itself that is broken or backdoored, only the Dual_EC_DRBG random number generator with specific input parameters.
How many times do we have to say it? (Score:1)
Come on, it's 2015; there are plenty of widely available, not publicly broken primitives.
DO NOT ROLL YOUR OWN GOD DAMNED CRYPTO. Unless you're a cryptographer planning to have it reviewed and attacked long before even considering using it.
Let me quote Schneier's law again: "any person can invent a security system so clever that she or he can't think of how to break it."
This is a blatant example of Dunning-Kruger. Again.
Re:How many times do we have to say it? (Score:5, Informative)
Homegrown crypto has been a constant menace since the 1990s, when people sold numerous encryption programs, usually sporting either their own encryption algorithm or DES.
A few I've seen were running 1-2 rounds of DES at most (FWB Hammer's hard disk drivers for Macs did this, but it was the best encryption one could get at the time, given relatively slow CPUs like the 68000). Others were seeding random() with a CRC of the password and XORing the output with the plaintext.
However, back then there were no government entities standardizing functions, as is done now with AES, RSA, and other algorithms, so people had to write their own, and if it jumbled and unjumbled stuff, it was good enough, since not much in the way of ciphertext was really being attacked.
Times are different now.
These days, with most ARM, AMD, SPARC, POWER, and Intel CPUs having hardware AES acceleration, why would one want to roll their own algorithm?
If one thinks AES is backdoored, cascade it with another known good algorithm like SERPENT, Threefish, heck, maybe even an older one like IDEA, 3DES, or even 3-Skipjack. There are other less known algorithms which have withstood testing as well. Cascading isn't intended to expand the bit width, but to have protection should one algorithm get broken. TrueCrypt offers/offered this functionality.
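As a rough sketch of the cascade idea (editorial, not from the comment): two independently keyed, independently vetted ciphers, TrueCrypt-style. ChaCha20-Poly1305 stands in for the second algorithm because the assumed third-party Python `cryptography` package ships it; Serpent or Threefish would need other libraries. Names are made up.

```python
# Editorial cascade sketch: two independently keyed, independently vetted
# AEADs, so a total break of one still leaves the other intact
# (TrueCrypt-style stacking). Assumes the third-party 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

def cascade_encrypt(plaintext: bytes, aes_key: bytes, chacha_key: bytes) -> bytes:
    n1, n2 = os.urandom(12), os.urandom(12)              # independent nonces
    inner = n1 + AESGCM(aes_key).encrypt(n1, plaintext, None)
    return n2 + ChaCha20Poly1305(chacha_key).encrypt(n2, inner, None)

def cascade_decrypt(blob: bytes, aes_key: bytes, chacha_key: bytes) -> bytes:
    inner = ChaCha20Poly1305(chacha_key).decrypt(blob[:12], blob[12:], None)
    return AESGCM(aes_key).decrypt(inner[:12], inner[12:], None)

k1, k2 = AESGCM.generate_key(bit_length=256), ChaCha20Poly1305.generate_key()
sealed = cascade_encrypt(b"grid telemetry", k1, k2)
assert cascade_decrypt(sealed, k1, k2) == b"grid telemetry"
```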
Same with public key algorithms. Worried about RSA? Have two signatures, one RSA, and one with ECC or a lattice based algorithm that is resistant to TWIRL and quantum factoring, and validate both sigs.
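And a rough sketch of the dual-signature idea (editorial, with assumed names): sign with two unrelated schemes and accept only if both verify. Ed25519 stands in for the second algorithm here, since the assumed third-party `cryptography` package does not ship a lattice-based scheme.

```python
# Editorial dual-signature sketch: a message is accepted only if both an
# RSA-PSS signature and an Ed25519 signature verify, so forgery requires
# breaking both schemes. Assumes the third-party 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519, padding, rsa

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
ed_key = ed25519.Ed25519PrivateKey.generate()

def dual_sign(message: bytes) -> tuple[bytes, bytes]:
    return rsa_key.sign(message, pss, hashes.SHA256()), ed_key.sign(message)

def dual_verify(message: bytes, sig_rsa: bytes, sig_ed: bytes) -> bool:
    try:
        rsa_key.public_key().verify(sig_rsa, message, pss, hashes.SHA256())
        ed_key.public_key().verify(sig_ed, message)
        return True                                      # both signatures checked out
    except InvalidSignature:
        return False

sigs = dual_sign(b"firmware image v1.2")
assert dual_verify(b"firmware image v1.2", *sigs)
```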
As for crypto implementations, if a user needs to encrypt a file, OpenPGP is a known standard. For communicating across the wire, SSH and SSL are known standards that are decently robust. For encrypting stuff in RAM, almost all modern operating systems have a facility like KeyChain to keep sensitive data from being swapped out, or if it is, have it encrypted.
With almost every programming language offering hooks for AES and RSA, there isn't a need to roll crypto, even for obfuscation reasons. If one just needs obfuscation, use an AES() function with all zeroes as the key.
Re: (Score:2)
In fact, you can use AES but not the recommended default values. Satoshi did this with Bitcoin, just in case the NSA was recommending those values for a reason.
No, Satoshi did not change any values in AES. Satoshi chose a less-common curve for ECC, nothing to do with AES.
Re: (Score:2)
And if you absolutely, positively MUST roll your own encryption (because, reasons!), then cascade THAT behind something battle-tested.
Embedded is different (Score:2)
I was an embedded developer for many years. It is just a totally different way of thinking. Embedded guys are always writing the same thing again from scratch. They also are obsessed with knowing exactly what is going on. Before I got into high level software development, the idea of dumping someone else's library into my carefully constructed embedded code made me cringe.
The other issue is that embedded guys can think they know it all, because they normally understand how a line of code they write affects
Re: (Score:2)
"Suit wearing chatter monkey" describes so many of those out there out there, especially "security consultants" which are sprouting up left and right. I have cleaned up the messes that those types leave behind, especially after they "do" a job for six months, and the fundamental issues are still present. Usually they may be familiar with one tool, and because they have that hammer, everything is a nail.
I can see the NIH mentality of embedded programmers, since those are the types who usually are proud of
Re: (Score:2)
To be (slightly) fair, the 1.1.1 standard was published in 2012, and presumably the first versions were a year or several before that. So most likely this is circa 2008-2010 protocol standard writing.
That doesn't really excuse them. But it wasn't 2015, and it wasn't quite as obvious then.
Re: (Score:2)
Right. At that point, only pretty much every homegrown encryption algorithm on the planet had been trivially cracked. Not like today, when pretty much every homegrown encryption algorithm on the planet has been trivially cracked.
Re: (Score:2)
SSL's flaws are due to poor protocol design by cowboy coders at Netscape. They were hardly "experts". It was the "experts" who had to band-aid the system to make it even barely safe to use.
Re: (Score:2)
And so it makes more sense that some dufus sits down and tries to roll his own security suite? Seriously?
To stress the ever-popular car analogy: time and again cars get recalled because of some flaw, so Mr. Mechanic from the shop 'round the corner should design my next one, not the "self-proclaimed experts" at GM.
Hipster crypto (Score:4, Funny)
Re: (Score:2)
But are your Strowger switches vintage?
Obligatory XKCD (Score:2, Funny)
If you are going to implement your own crypto, you will screw it up, so at least make it entertaining:
https://xkcd.com/153/
Re: (Score:2)
One key sign of being a noob is to think that everyone else is one.
Quite seriously: if there is a library that has been written by people who have spent not just years but DECADES studying and perfecting a topic, where would I get the hubris to think that I can do it better?
Re: (Score:2)
Dude, have you *seen* me code? 96 hours with no sleep, just Cheetos and Jolt cola, and I can write a much improved Office from scratch. Okay, it can't actually save, or format text, and the spreadsheet formulas can only use addition in Reverse Polish notation, and never get quite the right result. But damn it, the cursor blinks in a Fibonacci sequence mod 5!
Re: (Score:2)
isn't it more work to make your own shitty encryption? are these people retarded?
Foolish, yes, but not completely retarded. If you look at the details, it's pretty clear that what they were trying to do was to build something really, really fast, so they could process huge volumes of packets on lower-spec hardware, with less generated heat, etc. There were valid reasons for putting in the extra work... but obviously the approach they chose was wrong.
Quick tell the FBI! (Score:2)
We have found the first FBI-compliant system!!!
Re: (Score:2, Insightful)
Obama's legacy: (N)othing (S)ecure (A)nywhere and (T)error (S)imulation (A)dministration
Holy fuck you're dumb as bricks. How exactly could the TSA be "Obama's legacy" when it was created during George W's presidency and before Obama was even a US Senator?
unshakable physical laws security (Score:2)
But there are impregnable physical laws, which all people understand. Still, this type of security is neglected by hardware producers. For example, why is there no light plastic lid that can be used to physically cover the web camera on an ultrabook? Why are the web camera lenses always exposed? Why should people
Hooray Smartmeters (Score:2)
People called me a Luddite when I refused a smartmeter. Then they started overcharging people, actually over-reporting power consumption, whereas when the old mechanical meters fail, they under-report it. Next, the power meters literally started exploding; yes, exploding, not just burning. Now it turns out anyone good at crypto, or any script kiddie, will be able to read your smartmeter in the near future.
I'm not worried about RF interfering with my brain; even if I thought that was a concern, at these frequencies they only
OSGP? Never heard of it (Score:2)
I work for a smart grid consulting company...before that, a major (nearly a century old, and widely-recognized) civil engineering firm, again in the power industry. Before that I was the official smart grid security spokesman for a large IT company, and briefed Gartner, Ponemon, Forrester, etc. I've been deep into the guts of generation, T&D, energy marketing, and smart metering infrastructure at dozens of power companies over the past decade.
I've never seen OSGP in the field, not once. The OP talks