How To Prevent the Next Heartbleed
dwheeler (321049) writes "Heartbleed was bad vulnerability in OpenSSL. My article How to Prevent the next Heartbleed explains why so many tools missed it... and what could be done to prevent the next one. Are there other ways to detect these vulnerabilities ahead-of-time? What did I miss?"
Static analysis (Score:3, Insightful)
It could have been discovered with static analysis if anyone had thought to implement a check for this specific case ahead of time (though it's not clear anyone would have, before Heartbleed was discovered):
http://blog.trailofbits.com/2014/04/27/using-static-analysis-and-clang-to-find-heartbleed/
Re:Static analysis (Score:5, Informative)
Coverity has a blog post [coverity.com] describing the problem and why their static analysis methods currently can't detect it.
Re:Static analysis (Score:4, Interesting)
OpenSSL was statically analyzed with Coverity. However, Coverity did not discover this, as it is a parametric bug, which depends on variable content.
The reaction from Coverity was to issue a patch to find this kind of problem, but in my opinion, the "fix" throws the baby out with the bath water. The fix causes all byte swaps to mark the content as tainted. That surely would have detected this bug, but it also leads to an enormous amount of false positives for development where swaps are common, like cross- or multi-platform development.
And while it finds "defects" this way, they are not the real problem.
So in my opinion, 0 out of 10 points to Coverity for this knee-jerk reaction.
In my opinion, what's wrong here is someone with a high level language background submitting patches in a lower level language than what he's used to. The problems that causes are never going to be 100% (or even 50%) caught by static analysis. Lower level languages do give you enough rope to hang yourself with. It's the nature of the beast. In return, you have more control over the details of what you do. That you're allowed to do something stupid also means you are allowed to do something brilliant.
But it requires far more discipline - you cannot make assumptions, but have to actually understand what is done to variables at a low level.
Unit tests and fuzzing help. But even they are no substitute for thinking low level when using a low level language.
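For readers who haven't seen the pattern that heuristic targets, here is a rough sketch of it in C (an illustration only, not Coverity's actual rule, and the names are invented): a 16-bit length is byte-swapped out of a network record and then used as a copy size. Marking the output of every byte swap as tainted flags the memcpy below - but it flags the same shape in perfectly harmless endian-conversion code too, which is the false-positive complaint.

#include <stdint.h>
#include <string.h>

/* Hypothetical example: read a big-endian 16-bit length out of a received
 * record, then use it as a copy size.  A rule that treats every byte-swapped
 * value as tainted will flag the memcpy(), whether or not len was ever
 * validated against reclen. */
static void echo_record(unsigned char *out, const unsigned char *rec, size_t reclen)
{
    uint16_t len = (uint16_t)((rec[0] << 8) | rec[1]);  /* n2s-style byte swap */
    (void)reclen;                    /* the missing validation would use this */
    memcpy(out, rec + 2, len);       /* unchecked: len may exceed reclen - 2 */
}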
Re: (Score:2)
There are also other static analysis tools like splint [splint.org]. The catch is that it produces a large volume of data which is tedious to sift through, but once that's done you will have found the majority of the bugs in your code.
However, the root cause is that the language itself permits illegal and bad constructs. It's of course a performance trade-off, but coding most of the code in a high level language and leaving the performance critical parts to a low level one may lower the exposure and force focus on the problems t
Re: (Score:3)
Like I fixed my fusebox that always blew by putting a nail in across the contacts. It never blows a fuse anymore.
(disclaimer: I didn't really. don't do this)
When it comes to highbrow bugs like this, everyone jumps up and down and demands to know what you're doing to stop the next one - i.e. stopping this bug from ever occurring again. What they really need to worry about is the next unknown bug that we will find. They are out there, we will find one in production one day, it will bite us, and no I don't think
Re: (Score:2)
Yes, that solution is complete and utter crap. Claiming that marking all byte swaps as tainted will help you find thi
Re: (Score:2)
Even full retards don't implement their own memory allocator.
You just called kernel and base library developers full retards. Which goes to show that a little knowledge is dangerous.
When you write low-level code, yes, you often do. You may have to be frugal with both memory and cycles. Or you may require guarantees that an allocation request will succeed no matter what. Or you may need to take alignment and endianness into account. On NUMA systems, you may try to ensure that memory is assigned from a bank reachable by another CPU without copying/invalidatin
need to get over the "cult of macho programming" (Score:3, Insightful)
Every industry goes through this. At one point it was aviation, and the "hot shot pilot" was the Real Deal. But then they figured out that even the Hottest Shot pilots are human and sometimes forget something critical and people die, so now, pilots use checklists all the time for safety. No matter how awesome they might be, they can have a bad day, etc. And this is also why we have two pilots in commercial aviation, to cross check each other.
In programming something as critical as SSL it's long past time for "macho programming culture" to die. First off, it needs many eyes checking. Second, there needs to be an emphasis on using languages that are not susceptible to buffer overrunning. This isn't 1975 any more. No matter how macho the programmer thinks s/he is, s/he is only human and WILL make mistakes like this. We need better tools and technologies to move the industry forward.
Last, in other engineering professions there is licensing and engineers are held accountable for mistakes they make. Maybe we don't need that for some $2 phone app, but for critical infrastructure it is also past time, and programmers need to start being held accountable for the quality of their work.
It's things the "brogrammer" culture will complain BITTERLY about, their precious playground being held to professional standards. But it's the only way forward. It isn't the wild west any more. The world depends on technology and we need to improve the quality and the processes behind it.
Yes, I'm prepared to be modded down by those cowboy programmers who don't want to be accountable for the results of their poor techniques... But that is exactly the way of thinking that our industry needs to shed.
Re:need to get over the "cult of macho programming (Score:5, Insightful)
The problem has more to do with the "hey, this is free so let's just take it" attitude of the downstream consumers not willing to pay for anyone to look at the code or pay anyone to write it.
Why would you want the OpenSSL people to be held accountable for something they basically just wrote on their own time since nobody else bothered?
Striking out to solve a problem should NOT be punished (that culture of legal punishment for being useful is part of why knowledge industries are leaving North America).
This problem was caused by a simple missed parameter check, nothing more. Stop acting like the cultural problem is with the developers when it is with the leeches who consume their work.
Re: (Score:3)
I actually agree with both of you. The OpenSSL guys gave out their work for free for anybody to use. Anybody should be free to do that without repercussions. Code is a kind of literature and thus should be protected by free speech laws.
However, if you pay peanuts (or nothing at all) then likewise you shouldn't expect anything other than monkeys. The real fault here is big business using unverified (in the sense of correctness!) source for security critical components of their system.
If regulation is nee
Re: (Score:2)
"businesses with a turn over $x million dollars should be required to use software developed only by the approved organisations."
That would just lead to regulatory capture. The approved organisations would use their connections and influence to make it very hard for any other organisations to become approved - and once this small cabal have thus become the only option, they can charge as much as they like.
Re: (Score:3, Informative)
This problem was caused by a simple missed parameter check, nothing more. Stop acting like the cultural problem is with the developers when it is with the leeches who consume their work.
I do not believe you. If this were an isolated case, then you'd be right. But no, this kind of "oops, well now it is fixed" thing happens all the time, over and over again. The programming culture never improves as a result of the error - no matter how simple, no matter that it should have been noticed earlier, no matter what.
I am willing to bet that after the next hole the excuses will be the same: "it was simple, now it is fixed, shut up" and "why don't you make it better, shut up" or just "you don't understand, sh
Re: (Score:2)
If you are worried about security don't use software written by people who can't be bothered to check parameters.
Re: (Score:2)
Don't use software at all, then.
Re: (Score:2)
Shit happens to the best programmers. The only thing that prevents such things is to check the code. Therefore, you need another person trying to test the code, and you need a specification for the code so you can really check the code against another artifact. But obviously nobody bothered. That's why in housing the architect plans the building and at least two structural designers check the design (at least in Germany, that is).
Re: (Score:2)
You forgot NIH. OpenSSL used its own allocator, the most positive thing I can say about that is "totally idiotic". AFAIK nobody is removing it ...
Furthermore, C is an insufficient language for security software (C++, when properly used, is barely acceptable; managed languages are much better).
Re: (Score:2)
OpenSSL used its own allocator, the most positive thing I can say about that is "totally idiotic".
That's deeply unfair. The most positive thing I can say about it is that it was 100% necessary a long time in the past when OpenSSL ran on weird and not so wonderful systems.
AFAIK nobody is removing it ...
Except in LibreSSL, you mean?
Furthermore, C is an insufficient language for security software (C++, when properly used, is barely acceptable; managed languages are much better).
Depends on the amount of auditing. C has huge problems, but OpenBSD shows it can be safe.
Re: (Score:2)
Depends on the amount of auditing. C has huge problems, but OpenBSD shows it can be safe.
How so? OpenBSD says they audit their operating system (which includes code that they did not write). OpenBSD was affected [openbsd.org] by Heartbleed, which means OpenBSD's audit did not catch this bug, and they were affected just like everybody else.
Also, most of the bugs on their advisory page are for typical C memory problems, such as use after free and buffer overruns.
Re: (Score:2)
> programmers need to start being held accountable for the quality of their work.
They are.
But I guess you mean that people who aren't paying for your work, and companies which aren't paying for the processes and professional services necessary for some level of quality, should hold programmers who don't have any kind of engineering or financial relationship with them accountable.
Re: (Score:2)
In programming something as critical as SSL it's long past time for "macho programming culture" to die.
Yeah, but it's kind of going the other way, with more and more companies going to continuous deployment. Facebook is just a pit of bugs.
programmers need to start being held accountable for the quality of their work.
OK, I'm with you that quality needs to improve, but if I have a choice between working where I get punished for my bugs and where I don't, I'm working for the place where I don't get punished for my bugs. I test my code carefully, but sometimes they slip through anyway.
LICENSE (Score:2, Informative)
Excerpt...
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GO
Not a buffer overrun (Score:2)
Heartbleed was not really a buffer overrun problem.
Re: (Score:2)
It did not read more memory than allocated.
Re: (Score:2)
Yes. It's a buffer overread. But it did not go beyond the memory allocated by malloc.
Re: (Score:2)
unsigned char *pl = &s->s3->rrec.data[0];
n2s(pl, payload);
Get a pointer to the heartbeat data inside an SSL record and copy the first two bytes to a 16 bit value payload. pl will point to data on the heap, but it might only be one byte long.
memcpy(bp, pl, payload);
Copy payload bytes from pl to bp. This will read pl, plus a bunch of stuff that is after pl on the heap. In that sense, "it did not go beyond memory allo
Re: (Score:2)
You have missed the malloc call. See what is being passed as size to the malloc call. That will show you that it does not cross the size allocated by the malloc call (the malloc for this call - not everything allocated by malloc).
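Either way, the eventual fix boils down to one more check between the n2s() and the memcpy() quoted above - roughly this shape (a sketch of the idea, not the literal upstream patch):

/* After reading the claimed payload length, make sure the whole heartbeat
 * message (1 byte type + 2 byte length + payload + 16 bytes of padding)
 * actually fits inside the record that was received. */
if (1 + 2 + payload + 16 > s->s3->rrec.length)
    return 0;   /* silently discard the malformed request instead of echoing */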
Re: (Score:2)
No, it wasn't. Sorry.
Re: (Score:2)
It was reverse psychology.
Re: (Score:2)
But they do give the programmer control of where the checking happens.
If you have a function CalculatePasswordHash(char *pass, int len) that in turn calls functions sha1, memcpy, rotatebit and xor fifty times each, passing that len parameter, then you can check that it is <= the space allocated for *pass just once, rather than doing it for every function and thus needing two hundred and one checks minimum.
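A sketch of what that looks like (CalculatePasswordHash and the helpers are the hypothetical names from the comment above; the extra buffer-size parameter is added for the example):

#include <stddef.h>

/* Stand-ins for the fifty-times-called helpers; they trust len because the
 * caller has already validated it once. */
static void sha1_update(const char *data, size_t len)  { (void)data; (void)len; }
static void rotate_and_xor(char *data, size_t len)     { (void)data; (void)len; }

/* The single check, at the public entry point, against the size the caller
 * actually allocated for *pass. */
int CalculatePasswordHash(char *pass, size_t len, size_t pass_alloc_size)
{
    if (len > pass_alloc_size)
        return -1;                       /* reject once, here */

    for (int i = 0; i < 50; i++) {
        sha1_update(pass, len);          /* no per-call re-checking needed */
        rotate_and_xor(pass, len);
    }
    return 0;
}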
Re: (Score:2)
So there's nothing inherently unsafe about C. Its just that most implementations haven't bothered to deal with the problem.
C is inherently unsafe because the default mode is unsafe. History has shown that expecting implementations to add security after the fact does not lead to secure programs. C's builtin strings, which are null-terminated and prone to security flaws, are a perfect example of C's insecure defaults.
Also, the Heartbleed over-read could have happened in Java. Plenty of high-performance Java projects use buffer pools that look identical to what OpenSSL was doing. They do it to cut down on garbage churn.
Could have, yes, but you have to go out of your way in Java to fall to this kind of bug. There's a huge difference.
Re: (Score:2)
Speaking of weird remnants of the past, I've seen claims about the kernel needing to be uber-efficient before, but does that really make any sense? How much time does the average machine spend executing kernel code, besides the idle loop? If the kernel were 10 times as slow, would it still be a significant amount?
No we can't
Re: (Score:2)
A lot of kernel stuff is very time-sensitive. Got to get the next block of sound to the audio device before the ring buffer catches up, got to get the display memory updated before the screen refresh kicks in, got to calculate the next LBA address to read before the disc spins around to wherever it may lie.
Re: (Score:2)
This. It is high time that C compilers did buffer overrun checks by default.
It has been claimed that due to OpenSSL's own memory management, this wasn't actually a buffer overrun. If you allocate 10 bytes for X, 5,000 bytes for Y, and 50,000 bytes for Z, but your proprietary allocator puts all these items into a 1MB malloc block, then copying 50,000 bytes from X isn't a buffer overrun to the compiler.
The real problem was that the code tried to respond to requests that it shouldn't have responded to. In this particular case, trying to respond could have triggered a buffer overflo
Re: (Score:2)
Point taken, I've heard the same thing. This is also a problem with ancient languages: they have really primitive malloc routines that call the kernel every time there is a malloc. The consequence is that people roll their own memory management routines.
Don't get me wrong, I used C heavily and really liked it, back in the late 70s and early 80s. Thirty years later it is long in the tooth, with very little progress in between. The original version was released in 1973, the first revision took place 16 years later.
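To make the "own allocator hides the overrun" point concrete, here is a toy version of the pattern (nothing like OpenSSL's real freelist code, just the shape of the problem): sub-buffers are carved out of one big malloc() block, so an overread of one sub-buffer stays inside memory the process legitimately owns, and malloc-aware tools such as valgrind see nothing wrong.

#include <stdlib.h>
#include <string.h>

/* Toy bump allocator: hands out slices of a single malloc()ed arena. */
static unsigned char *arena;
static size_t arena_used;
static const size_t arena_size = 1 << 20;          /* 1 MB */

static void *pool_alloc(size_t n)
{
    if (!arena)
        arena = malloc(arena_size);
    if (!arena || arena_used + n > arena_size)
        return NULL;
    void *p = arena + arena_used;
    arena_used += n;
    return p;
}

int main(void)
{
    char *x = pool_alloc(10);       /* "10 bytes for X"            */
    char *y = pool_alloc(5000);     /* Y and Z sit right behind it */
    char *z = pool_alloc(50000);
    static char out[50000];
    if (!x || !y || !z)
        return 1;

    /* Overread: copies 50,000 bytes "from X", i.e. X plus whatever parts of
     * Y and Z follow it in the arena.  To the allocator this is a read inside
     * one valid 1 MB allocation, so nothing is reported. */
    memcpy(out, x, 50000);
    return (int)(unsigned char)out[0];   /* use the result so it isn't optimized away */
}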
Re: (Score:2)
Then we have to go one step further and store the size of the buffer.
In general, a "hardened C" programming language would be an excellent idea in my opinion.
Re: (Score:2)
DOES NOT COMPUTE
You must be one of those high-level language compilers to have caught that error.
Re: (Score:2)
like lack of QA / testing
This was not a bug that would have been found in testing. It doesn't _attack_ the software. The software was totally unaffected. You could have a very specific test for this problem, but if you thought of that test, you might as well have looked at the code and immediately spotted the problem.
Not really (Score:4, Insightful)
BTW the last famous one with TIFF files was pretty recent:
http://threatpost.com/microsoft-to-patch-tiff-zero-day-wait-til-next-year-for-xp-zero-day-fix/103117
Re: (Score:2)
An automated tool probing the binary on a live system was what discovered Heartbleed.
Re: (Score:2)
Considering how many times you need to do this (read the length of a block of data, then the data) it's strange that we haven't implemented a standard variable length encoding like UTF-8 has. Example (a small decoding sketch follows the list):
00000000 - 01111111: (7/8 effective bits)
10000000 00000000 - 10111111 11111111: (14/16 effective bits)
11000000 2x00000000 - 11011111 2x11111111: (21/24 effective bits)
11100000 3x00000000 - 11101111 3x11111111 (28/32 effective bits)
11110000 4x00000000 - 11110111 4x11111111 (35/40 effective bits)
11111000 5x00000000 - 11111011 5x11111111 (42/48 effective bits)
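Here is a small decoder for exactly that scheme (a sketch written for this thread; note the table above is a length-prefix scheme in the spirit of UTF-8, not UTF-8 itself). The number of leading 1 bits in the first byte says how many continuation bytes follow, and the decoder refuses to read past the end of the supplied buffer:

#include <stddef.h>
#include <stdint.h>

/* Decode one value in the scheme from the table above.  Returns the number of
 * bytes consumed, or 0 if the buffer is too short or the prefix is invalid. */
static size_t decode_varlen(const uint8_t *buf, size_t buflen, uint64_t *out)
{
    if (buflen == 0)
        return 0;

    uint8_t first = buf[0];
    size_t extra = 0;
    uint8_t mask = 0x80;

    /* Count leading 1 bits: 0xxxxxxx -> 0 extra bytes, 10xxxxxx -> 1, ... */
    while (extra < 5 && (first & mask)) {
        extra++;
        mask >>= 1;
    }
    if (extra == 5 && (first & mask))
        return 0;                        /* 111111xx: not defined by the table */
    if (buflen < 1 + extra)
        return 0;                        /* truncated: never read past the buffer */

    uint64_t value = first & (uint8_t)(mask - 1);    /* payload bits of byte 0 */
    for (size_t i = 0; i < extra; i++)
        value = (value << 8) | buf[1 + i];

    *out = value;
    return 1 + extra;
}

The part relevant to Heartbleed is the buflen check: the decoder itself can never be talked into reading more bytes than the caller actually has.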
Easy (Score:3, Informative)
What did I miss?
An article before the word "bad."
Thanks for this article (Score:3)
Hi dwheeler,
This is a great article. It covers many common software development and testing techniques. But also some "on live system" techniques. It was a pleasure to read, I'll recommend it to various places.
Buffer overruns can be prevented at compile time w (Score:2)
Buffer overruns can be statically prevented at compile time without any runtime penalty.
All that is required is that the type system of the target programming language enforces a special type for array indexes and that any integer can be statically promoted to such an array index type by a runtime check that happens outside of an array access loop.
Array indexes are essentially pointer types that happen to be applicable to a specific memory range we call an array. Memory itself is just an array, but for tha
Sure, but not in C (Score:2)
Agreed, but not in C. You need to change C (and modify the code to use the functionality) or change programming language. The article does discuss switching languages.
Priorities? (Score:2)
Rigorous coding should be held to approximately the same standard as engineering and math. Code should be both proven correct and tested for valid and invalid inputs. It has not happened yet because in many cases code is seen as less critical (patching is cheap, people usually don't die from software bugs, etc.). As soon as bugs start costing serious money, the culture will change.
Anyway, I'm not a pro coder but I do write code for academic purposes, so I am not subjected to the same constraints. Robust code
Re:Priorities? (Score:4, Insightful)
Rigorous testing is helpful, but I think it's the wrong approach. The problem here was lack of requirements and/or rigorous design. In the physical engineering disciplines, much effort goes into thinking about failure modes of designs before they are implemented. In software, for some reason, the lack of pre-implementation design and analysis is endemic. This leads to things like Heartbleed - not language choice, not tools, not lack of static testing.
I would also go as far as saying if you're relying on testing to see if your code is correct (rather than verify your expectations), you're already SOL because testing itself is meaningless if you don't know the things you have to test - which means up-front design and analysis.
That said, tools and such can help mitigate issues associated with lack of design, but the problem is more fundamental than a "coding error."
Re: (Score:2)
If physical product manufacturers ship a design fault, they have to fix those products during the warranty period at their own expense. If on top of that the defect is safety related, they'll have to fix it even beyond the standard warranty period. Whether the product is a car or a coffee grinder, they'll have to recall it, period.
Now contrast
Re: (Score:2)
Rigorous testing is helpful, but I think it's the wrong approach. The problem here was lack of requirements and/or rigorous design.
The real problem is the horrible OpenSSL code, where after reading 10 lines, or 20 lines if you're really hard core, your eyes just go blurry and it's impossible to find any bugs.
There is the "thousands of open eyes" theory, where thousands of programmers can read the code and find bugs, and then they get fixed. If thousands of programmers tried to read the OpenSSL code with the degree of understanding necessary to declare it bug free, you wouldn't end up with any bugs found, but with thousands of progra
Zero memory (Score:2)
Not preventable (Score:2)
Of course, we should find ways to improve quality control in open source software. But the next Heartbleed is going to happen. It's like asking, "How can we prevent crime from happening?" Sure, you can and should take measures to prevent it, but there will always be unexpected loopholes in software that allow unwanted access.
Preventable! (Score:2)
But that's the point, we can and should take measures to prevent it. Even if we never eliminate all vulnerabilities, we can prevent many more vulnerabilities than we currently do.
Re: (Score:3)
No doubt. So why didn't YOU take steps to prevent the Heartbleed vulnerability? The same reason everybody else didn't: time. Finding bugs takes time. Sure, you can automate, but that automation also takes time. So we are caught between two desires: 1) the desire to add or improve functionality, and 2) the desire to avoid vulnerabilities. The two desires compete for the amount of time that is available, so it becomes a trade-off.
It's also an arms race. There is real financial incentive for finding vul
Re:How about (Score:4, Informative)
about as effective as sunshine and puppies.
Re: (Score:2, Insightful)
Don't use C and its variants like C++. C is an extremely unsafe, low-level language that is just one step above assembly language. This makes it great for low-level, performance sensitive programs like OSes, compilers, etc. but the low-levelness also increases bug count for general purpose applications.
Instead use safer languages like Pascal, Eiffel (design by contract), Ada, etc. These languages guard against buffer overflows and don't have the slowness and bloat associated with garbage collected languages
Re: (Score:2, Insightful)
Yeah, we'll just rewrite the Internet in Pascal.
Libraries like OpenSSL are built in C in no small part because C can easily be linked into just about any other language out there. Nothing is going to change that.
And idiots can write bad code in any language. It might not be a buffer overflow, but they could still have screwed up in many other ways.
Re: (Score:2, Insightful)
Or you just learn how to code properly. This particular vulnerability wasn't because there was a mistake, it was because they opted to bypass a function that was meant to keep people safe. It's a bit like bolting the fire escapes closed then wondering why everybody died after the fire.
It's astonishing to me that somebody would put code into a production environment that asked for a certain length of response without bothering to do any validation.
Re: (Score:3)
If that really worked, there would be no QA dept. for software. Unless you can formally prove your software is correct, you should assume there are bugs. And no one has the time, money or ability to formally prove hundreds of millions of lines of code.
And even more astonishing the head maintainer and mer
Re: (Score:2, Insightful)
> If that really worked, there would be no QA dept. for software.
No, that's just poor reasoning.
Quality must be built-in, not added-on. QA expectations and improvement scope are largely imposed on any QA department, therefore the level of 'quality' reached can never be an absolute bar.
Developers in general need to minimise the vector product of bug count/severity that could be exposed before it gets to QA. This allows the bar to be raised, and focus to be spent on where it should be rather than catching
Re: (Score:2)
Instead use safer languages like Pascal, Eiffel (design by contract), Ada, etc. [...] The problem usually is, few people know these languages and they are not portable from one platform to another.
Agreed regarding both the solution and the problem with the solution.
It's probably reasonable to use [insert-super-secure-machine-verifiable-language-here] to develop libraries that are as security-critical as OpenSSL. However, it's unlikely that such libraries will be widely used if they aren't easily callable from the more popular languages (C/C++/ObjectiveC/etc).
Given that, I wonder how difficult it would be to write a library in (e.g.) Ada, but have the Ada compiler compile the code in such a way that
Re: (Score:3)
I'm not sure you can auto-generate a C header file but you can create a library (.dll or .o) file from Ada source and call it from C. You have to hand generate the C header file.
Create DLL library in Ada [stackoverflow.com]
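For what it's worth, the hand-written C side is small. Assuming the Ada library exports a routine with C calling convention (e.g. via pragma Export) under the hypothetical link name ada_checksum, the header and the call look like ordinary C; depending on how the library is built, its Ada elaboration/initialization routine may also need to be called first.

/* ada_checksum.h - hand-written header for a hypothetical routine exported
 * from an Ada library with C calling convention. */
#include <stddef.h>
#include <stdint.h>

extern uint32_t ada_checksum(const uint8_t *buf, size_t len);

/* caller.c - link against the Ada-built library (and its runtime). */
#include <stdio.h>

int main(void)
{
    const uint8_t msg[] = "hello";
    printf("%u\n", (unsigned)ada_checksum(msg, sizeof msg - 1));
    return 0;
}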
Re:How about (Score:5, Funny)
Re: (Score:2)
Going forward, all CPUs shall be required to execute Java bytecode natively.
Well there was the PicoJava [wikipedia.org] from Sun.
Or the MAJC [wikipedia.org] from Sun
Both of which did exactly that.
Alas none are around any more...
Re:How about (Score:5, Informative)
I have personally ported OpenSSL to at least 6 embedded systems, one of which was so proprietary they wrote their own C/C++ compiler. Good luck finding an Ada compiler for that.
This makes it great for low-level, performance sensitive programs like OSes, compilers,
Aaand... performance sensitive like, say... crypto? There isn't much code more performance sensitive than crypto libraries, which is one of OpenSSL's main uses. In fact, there are a whole bunch of native assembler implementations for x86, MIPS, ARM, PPC, etc to achieve that low level performance. Clearly you have never actually looked at the OpenSSL code base...
Re: (Score:2)
Adacore [adacore.com] has a perfectly good implementation of a high-security Ada compiler, which produces executables for multiple platforms. There's nothing difficult about finding such tools. What's difficult is finding programmers and developers who are willing to take the time to actually develop their code to take advantage of the strict typing which is one of Ada's strengths.
John Barnes, author of one of the most-used Ada texts, outlined the meanings of "safe" and "secure" software in a very straightforward manner
Re: (Score:2)
"Multiple platforms" means nothing if it's not MY platforms (which looking at it in general, it's not).
Re:How about (Score:4, Informative)
If your web server is pushing out lots of https traffic then yes it is performance sensitive.
Re: (Score:2)
Performance sensitive? Really? Most crypto is NOT performance sensitive at all and you could easily sacrifice some performance for more secure/reviewed code. I would imagine there are very few, mostly fringe cases where the performance is more critical, in which case they should be using modified versions, not having hacks put into the main code stream.
First: how do YOU know whether crypto is performance sensitive or not "at all"? It's entirely dependent on the use of it.
Second: yes, it's absolutely performance sensitive, because the trend is toward using HTTPS for everything. On a server that means the whole front end can greatly benefit from faster crypto, and on the client side one of the most popular current Internet applications - video streaming - often uses crypto for DRM, so the entire video stream needs to be decrypted in real time. Sorr
Re:How about (Score:4, Interesting)
The US Army will swear that I was once, many moons ago, officially certified in Ada, whether that means anything or not. I never liked it much, even though I did turn in successful code a few times, and I really have a problem with Ada for open source applications - Yes, in theory, Ada has some very strong security functions by design, but it's definitely not going to result in the 'many eyes make all bugs shallow' effect. I actually read your post as deliberately tongue in cheek at first, what with phrases such as 'extremely unsafe'.
But as I think more about it, one of the problems revealed by Heartbleed is open sourcing the target code didn't result in a lot of properly trained eyes passing over that code. I never thought I'd encourage anyone to learn Ada after I got out of the service (just as I never thought I'd encourage anyone to start a cult worshipping many-tentacled, eldritch, blasphemous horrors from beyond space-time as we delusionally try to limit our conceptions of it to preserve our fundamental human sanity, and for much the same reasons), but I have to admit, you may have a damned good argument for Ada there. I don't know if the extensive compile time checking of Ada 2012 could have automatically caught the bug that made Heartbleed possible - the last version of Ada I've really used is 95, but I'd be really interested to hear from someone who's current if they think Ada is just about totally bulletproof against this sort of bug, because even the older versions I recall had some features that would have made it hard to make this sort of mistake.
Re: (Score:3)
But as I think more about it, one of the problems revealed by Heartbleed is open sourcing the target code didn't result in a lot of properly trained eyes passing over that code.
My experience is that reading code isn't a very good way to catch bugs, mainly because reviewers tend not to read it as carefully as the person who wrote it. If you want to find bugs, it's more effective to do white/black box testing of some sort.
Re: (Score:2)
My experience is that reading code isn't a very good way to catch bugs, mainly because reviewers tend not to read it as carefully as the person who wrote it. If you want to find bugs, it's more effective to do white/black box testing of some sort.
That depends. Your reading of code can have three possible results: 1. "There are no bugs". 2. "There are bugs A, B, C and D; go and fix them". 3. "I can't understand the code to a degree that I can say it is bug free".
In case 3, the code should be rejected unless it is code handling some really hard problem that needs a better reviewer. The area where the Heartbleed bug happened was in no way difficult, so code that is hard to understand should have been rejected. If that happens, reviews reduce the num
Re: (Score:2)
If you can say you know how to always write code that is so clear that it never has any bugs, I would like to know how you do it.
Re: (Score:2)
Can you express what you didn't like and why? Perhaps it's a bit verbose and overly strict. But the strictness means you find many bugs during compilation and basic testing. Of course, compiler and runtime errors frustrate many programmers, which is why many prefer C -- fewer warnings and errors. Let the customers deal with the errors.
Re: (Score:3)
I think it's clear to everyone who's actually looked at the situation that the problem here wasn't the language, it was the people who were using the language. They w
Re:How about (Score:5, Interesting)
A quote from the "Insane Coding" blog, which in turn quotes from the book "cryptography engineering":
The issues with higher level languages being used in cryptography are:
- Ensuring data is wiped clean, without the compiler optimizations or virtual machine ignoring what they deem to be pointless operations.
- The inability to use some high-level languages because they lack a way to tie in forceful cleanup of primitive data types, and their error handling mechanisms may end up leaving no way to wipe data, or data is duplicated without permission.
- Almost every single thing which may be the right way of doing things elsewhere is completely wrong where cryptography is concerned.
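The first bullet is straightforward to demonstrate in C, and the usual workaround looks like this (a sketch; functions such as explicit_bzero or memset_s do the same job where they are available):

#include <stddef.h>

/* Wipe sensitive data in a way the optimizer is not allowed to elide.
 * A plain memset() right before free() is a dead store and may be removed;
 * writing through a volatile pointer forces the stores to actually happen. */
static void secure_wipe(void *p, size_t n)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (n--)
        *vp++ = 0;
}

Managed languages generally don't give you an equivalent hook, which is what the second bullet is getting at.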
Re: (Score:2)
Yes. Also, the problem with Rap is that it is in English. If they just wrote their misogynistic statements in a different language all would be well!
Seriously, please stop with the ridiculous claim that the language is the problem. The problem is that nobody is perfect, no process is perfect, and mistakes will always happen. They will happen far more often when the system is implemented by people who understand so little about software development that they thi
Re: (Score:2)
My ID is hardly 'new', but this place has surely gone downhill. :(
Re: Republicans (Score:3)
I have no idea why you're maintaining that "Republicans" create these bugs, and I'm, like, a socialist.
Re: (Score:2)
I have no idea why you're maintaining that "Republicans" create these bugs, and I'm, like, a socialist.
I think he's claiming non-Republicans can't code...
Re: (Score:2)
The Heartbleed vulnerability existed for a long time, then it was fixed quickly when finally discovered.
The recent Internet Explorer vulnerability existed for a long time, then it was fixed quickly when finally discovered.
Re: (Score:2)
We cannot write complex bug-free software. PERIOD. OpenSSL is not Windows. Headlines about OpenSSL bugs are not such a common occurrence. One bug happened at the wrong time, in the wrong place. This could have happened even if the world had opted for a proprietary library for this critical role. The only difference is that there would have been somebody to sue. Big consolation.
New theories come out of IT faculties around the world at regular intervals that promise, if strictly followed, the holy grail of bug-free software. All of them eventually prove ineffective.
The only concrete effect of all these tactics is that the job of the programmer becomes more tedious and less interesting. One thing I can tell you from direct experience is that the lower the programmer's level of interest, the higher the chance that bugs will slip into his or her code.
Actually, it's possible to remove all errors and imperfections, if you would be satisfied with being boring. That's one thing I got from Douglas Crockford's Programming Style and Your Brain. [youtube.com] Sometimes, especially for security-related software, "boring" is exactly what you want.
Unfortunately, SSL is anything but boring. It's barely standardized, and it's prone to getting new features. But just because the standard is exciting, doesn't mean the code has to be exciting. The OpenSSL developers may have received
Re: (Score:2)
No. Software for which you can guarantee that no error exist is not only boring: it is useless.
You do not get my point. You may succeed in rendering it less probable. But you cannot prevent it.
I do get your point, and I disagree. Perhaps my point is not so clear, so I'll rephrase it: For a protocol as complicated as SSL, it's difficult to guarantee that a program is free of bugs, but it is possible to create a program free of exploits. With sufficient discipline [microsoft.com] in specific domains, it's also possible to create bug-free specifications. Computer programs are just math, and a lot of math can be proved. The key is to decompose programs into pieces that humans can reason about. That's what Crockford
Re: (Score:2)
This is a great idea. It also makes testing using fuzzing methods easy (a minimal harness sketch follows the list):
1. Generate random test parameters
2. Feed parameters to variant program A and get results.
3. Feed parameters to variant program B and get results.
4. Both results should match.
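A minimal version of that loop in C (variant_a and variant_b are stand-ins for the two independently developed implementations; here they are two trivial checksum routines so the sketch actually runs):

#include <stdio.h>
#include <stdlib.h>

/* Two independent implementations of the same function.  In a real setup
 * these would be the two separately developed programs from the list above. */
static unsigned variant_a(const unsigned char *in, size_t len)
{
    unsigned sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += in[i];
    return sum;
}

static unsigned variant_b(const unsigned char *in, size_t len)
{
    unsigned sum = 0;
    while (len--)
        sum += in[len];
    return sum;
}

int main(void)
{
    unsigned char in[256];
    srand(12345);                                   /* fixed seed: reproducible */

    for (int iter = 0; iter < 1000000; iter++) {
        size_t len = (size_t)(rand() % (int)sizeof in);   /* 1. random parameters */
        for (size_t i = 0; i < len; i++)
            in[i] = (unsigned char)rand();

        unsigned a = variant_a(in, len);            /* 2. feed variant A */
        unsigned b = variant_b(in, len);            /* 3. feed variant B */

        if (a != b) {                               /* 4. results must match */
            fprintf(stderr, "mismatch at iteration %d (len=%zu)\n", iter, len);
            return 1;
        }
    }
    puts("no mismatches");
    return 0;
}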
Re: (Score:2)
The LLVM static analyzer finds this bug. So would warning about dead code, since the code past the point of the second goto...
Um, no. You're talking about the Apple "goto fail; goto fail;" vulnerability. That's a different vulnerability in a different program. They're both vulnerabilities in TLS/SSL implementations, but they are different programs.
Re: (Score:2)
I'm really glad you're trying to think of alternatives. However, when you say "1) Initialize all allocated memory. Routinely and automatically." - they did. But the Heartbleed bug let you see currently-active memory. In particular, you have to have the private key available somewhere so you can use it.
Some of the weirdness was due to the spec itself (RFC 6520). I agree that error avoidance is better than parameter-checking, but it's not clear that parameter-checking could have been avoided in this
Re: (Score:2)
Profiling w/ 100% code coverage would have caught this bug. - No, code coverage would not have worked in this case. Since the problem was that code was missing, you can run every line or branch without triggering the vulnerability. For more, see: http://www.dwheeler.com/essays... [dwheeler.com]
Input fuzzing in the unit tests under memtest could have located this bug even faster. - No, not in this case. Fuzzers were countered because OpenSSL had its own set of memory allocators. When fuzzing you often are looking for
Re: (Score:2)
I know just enough mathematics to implement my own key exchange and asymmetric encryption functions.
I also know enough cryptographic practice not to attempt to do so. I leave that to the experts who know all the non-obvious mathematical tricks too.