Security

OpenSSL: the New Face of Technology Monoculture

Posted by Soulskill
from the relied-upon-to-a-fault dept.
chicksdaddy writes: "In a now-famous 2003 essay, 'Cyberinsecurity: The Cost of Monopoly,' Dr. Dan Geer argued persuasively that Microsoft's operating-system monopoly constituted a grave risk to the security of the United States, and to international security as well. It was in the interest of the U.S. government and others to break Redmond's monopoly, or at least to lessen Microsoft's ability to 'lock in' customers and limit choice. The essay cost Geer his job at the security consulting firm @stake, which then counted Microsoft as a major customer. These days Geer is the Chief Information Security Officer at In-Q-Tel, the CIA's venture capital arm. But he's no less vigilant about the dangers of software monocultures. In a post at the Lawfare blog, Geer is again warning about the dangers that come from over-reliance on common platforms and code. His concern this time isn't proprietary software managed by Redmond, however; it's common, oft-reused hardware and software packages like the OpenSSL software at the heart (pun intended) of Heartbleed. 'The critical infrastructure's monoculture question was once centered on Microsoft Windows,' he writes. 'No more. The critical infrastructure's monoculture problem, and hence its exposure to common mode risk, is now small devices and the chips which run them.'"
  • by raymorris (2726007) on Wednesday April 23, 2014 @07:06PM (#46829003)

    I guess you're not a programmer, and therefore don't know what a shallow bug is. Conveniently, the rest of the sentence you alluded to explains the term:

    "Given enough eyeballs, all bugs are shallow ... the fix will be obvious to someone."

    If you have to dig deep into the code to figure out what's causing the problem and how to fix it, that's a deep bug. A bug that doesn't require digging is shallow. Heartbleed was fixed in minutes or hours after the symptom was noticed - a very shallow bug indeed. "The fix will be obvious to someone."

The presence or absence of bugs is an orthogonal question. That's closely correlated with the code review and testing process - how many people have to examine and sign off on the code before it's committed, and whether there is a full suite of automated unit tests.

The proprietary code I write is only seen by me. Some GPL code I write also doesn't get proper peer review, but most of it is reviewed by at least three people, and often several others look at it and comment. For Moodle, for example, I post code I'm happy with. I post it with unit tests which exercise the possible inputs and verify that each function does its job. Then anyone interested in the topic looks at my code and comments, typically 2-4 people. I post revisions, and once no one has any complaints it enters official peer review. At that stage, a designated programmer familiar with that section of the code examines it, suggests changes, and eventually signs off on it when we're both satisfied that it's correct. Then it goes to the tester. After that, the integration team. Moodle doesn't get very many new bugs because of this quality-control process. That process is independent of how easily bugs are fixed once they surface - how shallow they are - which depends on how many people are trying to fix the bug.
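The shallow/deep distinction above is easy to see in the class of bug behind Heartbleed: a length field taken from attacker-controlled input and used in a copy without being checked against the bytes actually received. This is a hedged sketch of that bug class, not OpenSSL's real code - the function and parameter names here are hypothetical.

```c
#include <assert.h>  /* for the usage checks below */
#include <stddef.h>
#include <string.h>

/* Illustrative sketch of the Heartbleed bug class: the peer claims a
 * payload length, and the vulnerable code echoes that many bytes back
 * without comparing the claim to the record it actually received.
 * (echo_vulnerable/echo_fixed are hypothetical names, not OpenSSL's.) */

/* Vulnerable pattern: trusts claimed_len blindly. */
static size_t echo_vulnerable(const unsigned char *record, size_t record_len,
                              size_t claimed_len, unsigned char *out)
{
    (void)record_len;                 /* the fatal omission: claim unchecked */
    memcpy(out, record, claimed_len); /* may read far past the real payload */
    return claimed_len;
}

/* Fixed pattern: reject any claim larger than what actually arrived. */
static size_t echo_fixed(const unsigned char *record, size_t record_len,
                         size_t claimed_len, unsigned char *out)
{
    if (claimed_len > record_len)
        return 0;                     /* drop the malformed request */
    memcpy(out, record, claimed_len);
    return claimed_len;
}
```

The fix is a single bounds check in an obvious place, which is why the bug was "shallow" in the essay's sense: once the symptom was known, the patch was quick - even though the bug had sat unnoticed for years.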

  • by greg1104 (461138) <gsmith@gregsmith.com> on Wednesday April 23, 2014 @11:40PM (#46830283) Homepage

    Self-organization [wikipedia.org] is a perfectly reasonable way to run a project. It has several properties that are useful for geographically distributed open source projects, like how it avoids a single point of failure. You can't extrapolate global leadership maxims from the dysfunction of local groups you've been involved in. I'd argue that an open source program that requires "strong leadership" from a small group to survive is actually being led badly. That can easily breed these troublesome monocultures where everyone does the same wrong thing.

    I think the way Mark Shuttleworth organizes Canonical is like the traditional business role of a "strong leader". That's led to all sorts of pissed off volunteers in the self-organizing Debian community. Compare that against the leadership style of Linus Torvalds, who aggressively pushes responsibility downward toward layers of maintainers. The examples of Debian and Linux show volunteers can organize themselves if that's one of the goals of the project.

  • by plover (150551) on Thursday April 24, 2014 @12:49AM (#46830465) Homepage Journal

I think the bigger problem is that everything about encryption software encourages a monoculture. Anyone who understands security will tell you "don't roll your own encryption code, you risk making a mistake." I would still rather have OpenSSL than Joe Schmoe's Encryption Library, simply because at this time I trust it a bit more. Just not as much as I did.

    Another problem is that the "jump on it and fix it" approach is fine for servers and workstations. It's not so fine for embedded devices that can't easily be updated. I'm thinking door locks, motor controllers, alarm panels, car keys, etc. Look at all the furor over the hotel card key system a few years back, when some guy published "how to make an Arduino open any hotel door in the world in 0.23 seconds". Fixing those required replacing the circuit boards - how many broke hotels could afford to fix them, or even bothered to?

The existence of a "reference implementation" of a security module means that any engineer would be seriously questioned for using anything else, and that leads to a monoculture. And in that world, proprietary vs. open doesn't matter nearly as much as "embedded" vs. "network-updatable".

  • by TheRaven64 (641858) on Thursday April 24, 2014 @03:43AM (#46830969) Journal
    OpenSSL is quite shockingly bad code. We often use it as a test case for analysis tools, because if you can trace the execution flow in OpenSSL enough to do something useful, then you can do pretty much anything. Everything is accessed via so many layers of indirection that it's almost impossible to statically work out what the code flow is. It also uses a crazy tri-state return pattern, where (I think - I've possibly misremembered the exact mapping) a positive value indicates success, zero indicates failure, and negative indicates unusual failure, so people often do == 0 to check for error and are then vulnerable. The core APIs provide the building blocks of common tasks, but no high-level abstractions of the things that people actually want to do, so anyone using it directly is likely to have problems (e.g. it doesn't do certificate verification automatically).

    The API is widely cited in API security papers as an example of something that could have been intentionally designed to cause users to introduce vulnerabilities. The problem is that the core crypto routines are well written and audited and no one wants to rewrite them, because the odds of getting them wrong are very high. The real need is to rip them out and put them in a new library with a new API. Apple did this with CommonCrypto and the new wrapper framework whose name escapes me (it integrates nicely with libdispatch), but unfortunately they managed to add some of their own bugs...
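The tri-state return pattern described above can be sketched with a mock verification call. This is an illustration of the pattern the commenter describes, not a real OpenSSL API; verify_mock and both caller functions are hypothetical names.

```c
#include <assert.h>

/* Sketch of the tri-state convention: a positive value means success,
 * 0 means failure, and a negative value means an unusual/internal error.
 * verify_mock stands in for a verification routine; it is hypothetical,
 * not an actual OpenSSL function. */
static int verify_mock(int outcome)
{
    return outcome; /* pretend the library returned this value */
}

/* Buggy caller: treats only 0 as failure, so a negative
 * "unusual error" return is silently accepted as success. */
static int accept_buggy(int outcome)
{
    if (verify_mock(outcome) == 0)
        return 0; /* rejected */
    return 1;     /* "accepted" -- including outcome < 0 */
}

/* Correct caller: anything not strictly positive is a failure. */
static int accept_correct(int outcome)
{
    return verify_mock(outcome) > 0;
}
```

The `== 0` check reads naturally and passes every test that only exercises the success and plain-failure paths, which is exactly why this convention keeps producing vulnerable callers.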
