Security Problems Are Primarily Just Bugs, Linus Torvalds Says (iu.edu)
Linus Torvalds, in his signature voice: Some security people have scoffed at me when I say that security problems are primarily "just bugs." Those security people are f*cking morons. Because honestly, the kind of security person who doesn't accept that security problems are primarily just bugs, I don't want to work with. Security firm Errata Security has defended Linus's point of view.
They're bugs, unless they're not (Score:5, Insightful)
Security by obscurity, government backdoors, etc. Those are not bugs.
Re: (Score:2)
That's not what Linus is talking about either. Stop grinding your axe at every opportunity.
Re:They're bugs, unless they're not (Score:5, Funny)
With Linus it's more like security through obscenity :-)
Re: (Score:3)
It seems like he wasn't actually saying "all security problems are just bugs" in the abstract. He's saying that, in the development of the kernel, security problems should be treated the same as a bug. That is, basically, you shouldn't freak out and make patching those bugs your only priority, breaking other things as you go.
In the world of all security problems, there are security problems that are not "just bugs". However, in the world of developing the Linux kernel, he's saying, "A bug that causes a
Re: (Score:2)
That works for open source too; it's just trickier.
Re: (Score:2)
Yeah, but it probably isn't. There would be much smarter, more widely deployed, and more cost-effective ways to do it.
SELinux is just easy for people to think of because it was designed by the NSA to scratch their particular bureaucratic itch. But, sure, anything is possible.
Re: (Score:3, Interesting)
The backdoor in SELinux isn't in the code, it's in the setup documentation.
Re: (Score:2)
Other than the fact that it has been extensively security reviewed by independent people...
https://security.stackexchange... [stackexchange.com]
https://www.androidauthority.c... [androidauthority.com]
https://www.reddit.com/r/linux... [reddit.com]
Re:All data security is through obscurity (Score:5, Informative)
When we talk about security by obscurity, we mean that the mechanism by which the security is produced is obscured, not that a certain secret, a key, has to be kept secret to use it.
PGP uses a private key; that is not what obscurity means in this context. What obscurity means is that the basic algorithm used to produce the encrypted result is not open to a public audit.
The key is secret. Not the lock. Big difference.
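To make the lock/key distinction concrete, here is a minimal Java sketch (class name and message are invented for illustration): the AES algorithm is completely public and endlessly audited, and the security rests solely on the secrecy of the key.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class Kerckhoffs {
    public static void main(String[] args) throws Exception {
        // The "lock": AES is a published, publicly audited algorithm.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        // The "key": the only thing that must remain secret.
        SecretKey key = kg.generateKey();
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key); // a random IV is generated for us
        byte[] ct = cipher.doFinal("attack at dawn".getBytes(StandardCharsets.UTF_8));
        System.out.println(ct.length + " bytes of ciphertext");
    }
}

Publishing the algorithm costs you nothing; publishing the key costs you everything. That is the opposite of obscurity.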
Re: (Score:2)
The security of quantum cryptography does not depend on obscured information, but on the property of unreproducibility.
It depends entirely on classical sources of trust. In the simplest of terms, quantum crypto is all about refreshing keys amongst communicating peers. You still need trust in the form of hidden secrets to start; otherwise there is no way of having any assurance who is on the other end.
The anti-eavesdropping properties of quantum don't mean squat when you're spilling the beans to Malice instead of Alice.
Re: (Score:2)
I was under the impression we were talking about security, not authentication.
I'm really sorry; I don't read in threaded mode, and going back, it looks like the entire thread amounts to a childish word game over whether the word "obscurity" applies to intentionally kept secrets.
All I read was your post, and I correctly pointed out that it is factually incorrect. I have no interest in the underlying question. Quantum crypto requires classical secrets to be kept, end of story.
Re: (Score:2)
Or, apparently, the post topic:
But that's irrelevant.
If you are going to assert that a comment is factually incorrect without even claiming to be aware of the context in which it was said, be aware that such a claim is very liable to have little to no resemblance to reality.
Quantum crypto only requires secrets to be kept if you need to authenticate... Which is not inherently necessary for cryptography... for exa
Re: (Score:2)
I may dislike systemd, and the direction that it is pushing things, but that isn't sufficient to justify calling it a bug. A flat-head screw isn't a bug for not being a Phillips; it's just designed for different applications.
True, but. (Score:4, Interesting)
Re: (Score:2)
It's true, security problems usually exploit a bug. BUT, in general, there is a systemic problem underneath the bug, which allows a bug in a program to escalate into root-level access. So, it's not just a bug, but a bug running on a system that does not have security built in.
I am assuming Torvalds considers not building security into a system to be a bug. Consider software which does not prevent SQL injection attacks. If there was no attempt to prevent these attacks, technically the code is working as intended; security simply was not a consideration. But in practice I believe it is still fair to consider that a bug.
Re: (Score:2)
Aren't SQL injection attacks usually queued commands? Isn't the ability to queue multiple SQL commands in one string a flaw in itself? Ex: what possible harm would it do to require a "drop table" command to be called on its own, etc.?
Re: True, but. (Score:2, Insightful)
They're usually a case of someone passing unescaped user data to an SQL query, so the end user is able to break out of a string and change the functionality of the query. Incredibly basic stuff.
Re: (Score:2)
That's what I'm saying. Why does SQL allow you to "break out of a string" in the first place?
Re: True, but. (Score:4, Insightful)
Name some interpreted serialization formats that don't.
Re: (Score:3)
Aren't SQL injection attacks usually queued commands? Isn't the ability to queue multiple SQL commands in one string a flaw in itself? Ex: what possible harm would it do to require a "drop table" command to be called on its own, etc.?
The real flaw is giving out DDL grants to a service account that's supposed to be doing DML.
Re: (Score:3)
The real flaw is giving out DDL grants to a service account that's supposed to be doing DML.
Listening to this shit is painful. You should ALL know better.
xkcd 327 is WRONG.
Everyone who thinks SQLi is about cleaning / sanitizing / scrubbing / bathing data is fundamentally wrong and entirely missing the point of what SQLi actually is and how to address it.
SQLi has NOTHING to do with the content of data. "Scrubbing" is entirely irrelevant. You all need to "internalize" this basic fact and stop propagating bullshit.
As for the "real flaw" being handing out DDL grants this reminds me of xkcd 1200.
Re: (Score:2)
I don't read that comic so your post makes little sense. I suppose in general I agree no one should design software around something they read in a web comic, but I'm not clear on why it's painful to suggest that an account that only needs to do DML not have DDL grants. Are you saying it should?
Also, it's a little odd that you rant about how wrong the comic is yet have its episode numbers memorized.
Re: (Score:2)
I don't read that comic so your post makes little sense. I suppose in general I
It is not necessary as relevant context is provided separately. You can ignore the comic references if you'd like.
agree no one should design software around something they read in a web comic, but I'm not clear on why it's painful to suggest that an account that only needs to do DML not have DDL grants. Are you saying it should?
The original remark was: "The real flaw is giving out DDL grants to a service account that's supposed to be doing DML."
The problem with this remark is that it treats a very specific instantiation of a symptom and does nothing to:
1. Resolve the underlying issue.
2. Address the problems caused by the continued existence of the underlying issue. In the simplest of terms, disallowing DDL prevents DROP TABLE ... but not DELETE FROM.
Re: (Score:2)
It is about both; I would say sanitization is more important. Yes, there is no need to grant the process access to drop tables, but that only patches that one problem; you can still do a lot of damage without it.
e.g. '); update account set balance = 1000000;
Or even if you don't give write access (usually the process needs to write), it is still possible that you may be able to extract information that you shouldn't have, like other customers' email or credit card info.
PS: sanitization should not be done manually but bui
Re: (Score:2)
Aren't SQL injection attacks usually queued commands? Isn't the ability to queue multiple SQL commands in one string a flaw in itself? Ex: what possible harm would it do to require a "drop table" command to be called on its own, etc.?
You won't be able to execute non-trivial installation SQL scripts directly through your code. You'll either have to chop the script into individual queries and run each separately, or run the SQL script e.g. from command line.
Also, SQL injection can be useful even without adding extra query. For example, if the login form uses this kind of SQL query: "SELECT * FROM users WHERE username='$username' AND password='$password_hash';", you can log in as arbitrary user without knowing the password just by typing t
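A minimal Java sketch of the vulnerable pattern being described (table, column, and method names are invented for illustration):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

class VulnerableLogin {
    // VULNERABLE: user input is concatenated straight into the query text,
    // so the database cannot tell the developer's SQL from the user's input.
    static boolean login(Connection conn, String username, String passwordHash)
            throws Exception {
        String sql = "SELECT * FROM users WHERE username='" + username
                   + "' AND password='" + passwordHash + "'";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            // Entering   ' OR '1'='1' --   as the username makes the WHERE
            // clause always true and comments out the password check.
            return rs.next();
        }
    }
}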
Re: (Score:2)
Either you can't accept the fact that I do not know SQL very well, or you don't understand the core of my question.
Re: (Score:2)
The same can be said of functional bugs: the code is buggy, but you don't know it until the bug is discovered. The discovery of the bug does not mean the code changed; it means that bug hadn't been caught yet.
So yes, a system that is vulnerable to an as-yet unknown attack is buggy.
Re: (Score:2)
I disagree that you can view lack of security as a bug. Using your example, let's say a novel way to attack databases is developed in 2018. Let's call it relationship mutations. Today we have no idea how it works or how to defend against it, because it hasn't been invented yet. Are all databases released today buggy as a result? Do they become buggy, without any code change whatsoever, at the moment this new exploit is invented?
I am not sure why you don't consider that a bug. If a new way of attacking any SQL command were discovered tomorrow, that would simply mean that 100% of existing SQL commands have a bug in them. It was a previously undiscovered bug, but a bug which needs to be fixed nonetheless. Perhaps the bug is in the SQL syntax or the ODBC interface, but it is still a bug in need of fixing.
Re: (Score:3)
I don't consider my example of a hypothetical new exploit a bug because we can't be sure it is connected to a programmatic mistake. It could be the case that in the future all databases start running in a different environment... that is, our assumptions will have to be changed. This happened in the past - in the past databases
Re: (Score:2)
For example, my goddamn login page stops working when the singularity arrives. Is this a bug?
Re: (Score:2)
We can have a philosophical/semantics debate about how to classify a bug that was not known or prevalent before a certain time, but it's easier to accept reality and just stop assuming that bugs can only be created or closed by changes to the code.
"Doesn't work on a smartphone browser" was a "bug" that popped up for many sites when smartphones became mainstream, despite the sites themselves remaining unchanged (in fact, that was the bug).
By contrast, a bug that causes problems with a particular processo
Re: (Score:2)
I am assuming Torvalds considers not building security into a system to be a bug.
By that measure, the code with the most bugs is the program that hasn't been written yet.
Re: (Score:3)
I am assuming Torvalds considers not building security into a system to be a bug.
By that measure, the code with the most bugs is the program that hasn't been written yet.
If the program hasn't been written yet, it cannot behave in any unintended way. So no, it doesn't have any bugs. A piece of software that when run allows a user to do something they aren't supposed to do is behaving in an unintended way, so that is a bug regardless of whether they put any thought into security when building it.
Re: (Score:2)
I am a 'security guy', but I would agree with Linus: most, maybe not all, but certainly most are just bugs.
SQLi is a perfect example. The code does NOT work as intended. If SQLi is possible, then code that was supposed to allow the input of a string somewhere does not handle certain strings properly, or does not correctly control the input domain when certain values are not supposed to be allowed! Take your pick.
The first time that name field, for example, encounters a name with an apostrophe in it, D'Arc
Re: (Score:2)
You seem to have higher opinions of your Standard Developer[tm] than warranted.
Re: (Score:2)
Escaping SQL seems to be an almost impossible task in this era of Unicode and a gazillion encoding types; the only safe way to go is prepared statements or something similar.
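A minimal sketch of that approach in JDBC (identifiers invented for illustration): the query shape and the data travel separately, so no quoting or encoding trick in the input can change the statement's structure.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class SafeLogin {
    static boolean login(Connection conn, String username, String passwordHash)
            throws Exception {
        // The statement's shape is fixed before any user data is attached.
        String sql = "SELECT 1 FROM users WHERE username = ? AND password = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, username);     // sent as data, never parsed as SQL
            ps.setString(2, passwordHash); // D'Arc and friends are handled fine
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}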
Re: (Score:2)
A bug is never just a mistake. It represents something bigger. An error of thinking that makes you who you are.
-- Mr. Robot
Security problems are NOT just bugs (Score:2, Insightful)
Re:Security problems are NOT just bugs (Score:5, Informative)
Linus's context is entirely in terms of the kernel. If you ignore that, you write comments that are complete non-sequiturs.
Re:Security problems are NOT just bugs (Score:5, Interesting)
A few years ago I spent some time studying ontology technologies. In a nutshell ontology is a branch of philosophy having to do with "being" and existence, but in an information technology context it refers to models of reality that are built around taxonomic models (e.g. statements like "security problems" are a kind of "software bug"). This has most obvious applications in object oriented class hierarchies, but taxonomic models are also a big part of database design and also implicitly arise in the design of data interchange formats.
Here's what I took away from my dive into the intersection of metaphysics and software engineering: taxonomic models are only valid within a specific domain of application. Even if you intend to model objective reality, you end up modeling just the parts you work with.
This is a perfect example. Torvalds is effectively saying that while some security problems may not be bugs, for practical purposes nearly all of them are. Clearly this is true for him, so true that he literally doesn't know how to work with people concerned with non-bug security problems. What he is saying really has more to do with what he does on a day-to-day basis than with the overall field of security. In that field you also have to deal with issues like trust delegation, agency, physical security, and social engineering. Clearly Torvalds must know these things exist, but for him they might as well not.
People are very seldom concerned with some kind of universal model of capital T Truth; they're almost always concerned with creating models that help them get their job done. This is inevitable, and it creates problems when you try to glue data from different sources together. The unnecessary problems that arise come from people who don't accept that their useful domain-specific models don't describe all of objective reality.
Re: (Score:2)
Here's what I took away from my dive into the intersection of metaphysics and software engineering: taxonomic models are only valid within a specific domain of application. Even if you intend to model objective reality, you end up modeling just the parts you work with.
Usually because you collapse the model in all the directions that don't really matter to you. Like, say, you're classifying animals: you split them into mammals and reptiles and so on... but then you've got aquatic creatures like fish and whales, land-based ones like lizards and humans, flying animals like bats and birds. You've got predators and herbivores, nocturnal animals and day-dwellers, bipeds and hexapods, and any number of other abstractions which might be useful in a particular context. And you always end up with
Re: (Score:2)
Usually because you collapse the model in all the directions that don't really matter to you.
You can think of it this way in principle, and that's important, because otherwise when you move data from one domain to another you'd never be able to reconcile the different viewpoints under which the data was constructed. But it doesn't take too much of that for the complexity to overwhelm your ability to get anything useful done, so even if it were possible to deal with all the issues like differences in opinion and limitations of human knowledge, you still can't hope to capture any kind of complex reality
Re:Security problems are NOT just bugs (Score:4, Insightful)
Well, I certainly wouldn't want to endorse Torvalds' attitude here. But you encounter it, minus the armor of overwhelming fame, all the time when you work with multiple groups of stakeholders. As a system designer a lot of what you do when you develop system requirements is make localized concerns globally visible. But there are always people who don't see the needs of other users as important, and depending on how they're situated they can create a lot of grief.
People actually confuse "objective" and "subjective". I actually had a client once who even used those terms: we should focus on what's "objectively" important, by which he meant things that seemed obviously important to him. Things that were important to other stakeholders were "subjective" concerns. People do that a lot more than they realize, even if they don't use those terms. What's rare is having enough status to be an asshole about it.
Re: (Score:2)
The practical problem is that people don't understand the undesirable effect of propagating constraining logic between application domains.
Re: (Score:2)
Intentionally. However it's very easy to do that unintentionally.
For example, suppose your state bars felons from various things like owning a gun and voting. It exchanges lists of convicted felons with another state; however, the states don't agree on whether specific crimes are felonies or misdemeanors. Depending on how you ask for the data, you can end up with different results.
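A toy Java sketch of that mismatch (the state tables and crime code are invented): the same conviction record answers the question "is this person a felon?" differently depending on whose classification table you consult.

import java.util.Map;

public class Classification {
    // Each state maps the same crime code to its own severity class.
    static final Map<String, String> STATE_A = Map.of("THEFT_3", "FELONY");
    static final Map<String, String> STATE_B = Map.of("THEFT_3", "MISDEMEANOR");

    public static void main(String[] args) {
        String crime = "THEFT_3"; // one shared conviction record
        System.out.println("State A says: " + STATE_A.get(crime)); // FELONY -> barred
        System.out.println("State B says: " + STATE_B.get(crime)); // MISDEMEANOR -> not barred
    }
}

Exchange the raw conviction records and each side silently applies its own table; exchange a pre-computed "felon" flag and you export the sending state's assumptions. Either way, the taxonomic baggage travels with the data.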
Re: (Score:2)
My conclusion from my study is that using ontology technology for inter-organizational standards was a really bad idea because it exported baggage from each organization that creates problems for the other organizations. On the other hand the technology had potential uses internally (e.g. purging sensitive information from datasets being shared).
Basically RDF is really potentially useful in a lot of situations where people use XML, but OWL is asking for trouble.
Re: (Score:2)
Linus's context is entirely in terms of the kernel. If you ignore that, you write comments that are complete non-sequiturs.
More importantly, Linus's context is the particular discussion. If you lift the comment to the context of the kernel as a whole, it's wrong.
In the full context, I think he has a point; he was arguing against panicking the kernel when an out-of-spec situation is found. The security guy's (Kees) patch presumed that out-of-spec indicated an attack, when it most likely just indicates a bug. Being a security guy myself, I sympathize with Kees, we tend to think about things in terms of how to mitigate possible
Re: (Score:2)
One solves security issues primarily by architecture. This ensures that the damage a bug can do is minimized. I do wish mainstream operating systems went with a microkernel, or a more structured, compartmentalized system. It might make writing drivers tougher, but it would keep something like a USB flash drive from masquerading as a keyboard and mouse when it shouldn't.
The Linux kernel has been pretty good when it comes to security, but what threats are out there might just trying to patch bugs without
Re: (Score:2)
I don't think you understand the difference between a monolithic and a micro kernel. The latter in no way prevents a driver or device from masquerading as something it is not. The very idea doesn't even make sense.
Re: (Score:2)
He said they were "primarily" bugs. By "problem", I would guess he is talking about issues in properly set up software.
You are right about there being other issues in practice, but you might argue better without using a strawman.
Re:Security problems are NOT just bugs (Score:5, Informative)
He is demonstrably wrong. True, some security problems are bugs, but there are also security problems that are bad design choices, that are misconfigurations, that are continued use of old technology (e.g. RSA 1024), that are poor use cases (nobody follows policy, because it is too complex and/or convoluted). You can't secure systems with just code reviews and patching. No way, no how.
A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. [wikipedia.org] You may disagree with this definition of a software bug from Wikipedia, but I think it lines up with what I consider a bug. The bad design choices you mention are merely another potential cause of a bug.
The context of Linus's statements must also be considered. He is talking about product-level security (the Linux kernel in this case), not enterprise-scale system-level security. At my company some security concerns are at the product level, such as ensuring users can only see the appropriate fields / records. Others are at the operations level, such as properly verifying the identity of a customer over the phone before releasing certain information. I agree with Linus that security problems at the product level are primarily just bugs. Security problems at the level of corporate policies are sometimes bugs, but also sometimes the result of people not following protocol. All the training in the world will never prevent employees from making mistakes, and sometimes it isn't possible to put checks and balances everywhere.
Re:kernel Security problems are NOT just bugs (Score:2)
Granted, from Linus' kernel perspective, _MOST_ security problems are caused by bugs. Userspace has far more bugs, and proportionally more caused by poor design & implementation. However, loadable kernel modules are a security hazard that has been designed in.
Re: (Score:2)
All design decisions are intentional. Bugs are by definition unintentional defects.
All design decisions are intentional, but design decisions nearly always have unintentional effects. Bugs are often caused by those unintentional effects.
Anyway, if you read any books on architecture, design, or security, they all say that you cannot test for architectural or design flaws because, by definition, your tests are written to spec and should not fail if they follow the architecture and design.
Do you have an example of such a book? Our testing team routinely catches architectural or design flaws when their tests identify edge cases which were not considered during architectural or design reviews. It is certainly much harder to catch these defects, but it is common to do so.
Re: (Score:2)
He is demonstrably wrong. True, some security problems are bugs, but there are also security problems that are bad design choices, that are misconfigurations, that are continued use of old technology (e.g. RSA 1024), that are poor use cases (nobody follows policy, because it is too complex and/or convoluted). You can't secure systems with just code reviews and patching. No way, no how.
You are completely missing Linus' point. He is saying in the context of kernel development that security issues don't get privileged treatment. There is one set of rules for all issues, be they outright bugs, bad design choices in any aspect, misconfiguration in any aspect, etc.
Vocab [Re:Security problems are NOT just bugs] (Score:2)
Is there any objective and consistent working definition and/or test of "bug" versus "bad design"? I suspect there is a lot of gray area, such that Layne's Law [rationalwiki.org] will reign over such discussions.
Re: (Score:2)
To me, bad design is understanding the undesirable consequences and proceeding with them anyway. For example, leaving default hard-coded credentials for the service team to remotely access your product. You can't call this a bug - the functionality is intentional.
Re: (Score:2)
Bad design choices
Like choosing to use insecure-by-design languages.
The vast majority of security bugs would disappear if languages like Ada, PL/1, FORTRAN and COBOL were used instead.
Very few public, large-scale projects are coded using the older 'safe' languages. (Ignoring Rust for the moment, which is safe, but it's also new and is gaining wider distribution through Firefox). That might be a clue that there's an inherent problem with many safe languages -- whether it be performance, difficulty in actually getting useful things done, or something else.
Re: (Score:2)
Nah, it's Comp Sci snobbery.
Bug or feature? (Score:2)
The alternative would be "features"
Linus is mostly right (Score:2)
At least when you take into account that people should design security in today. So from the coding angle, it's pretty much "just bugs". From the testing angle it is often vastly different: in functionality testing you check for the presence of functionality, but in security testing you check for the absence of functionality. Individual tests are still pretty similar, but getting test coverage is very different and a lot more difficult.
Of course, the "just bugs" view also requires that the developers actually under
Re: (Score:2)
We will NEVER, EVER have 100% of all developers understand security at the level required to make 100% secure programs.
What we need is OSes and languages that have security built in, the same way programmers don't know assembly and UEFI and yet can still code and make programs.
Re: (Score:2)
So? Why would "100% secure programs" be desirable? This is the thinking of a complete amateur in the security-space. Security is risk management. You never do risk mitigation to "100%" in reality. It is stupid.
And "OS and languages that have security built-in"? Have you completely ignored all attempts and all research on that for the, oh, last 40 years or so? It cannot be done and asking for it is, again, stupid.
Re: (Score:2)
There is also a 3rd class of coders, which one of my Application Security students pointed out last year: those that do not really understand security, but know it is difficult and get help from an expert when they need it. While this may be a bit underwhelming as a result of my teaching efforts, I have to say he really got it.
Re: (Score:2)
And there's reality, where there's deadlines and a lack of funds but the project must still be delivered. Functionality wins over security almost every time.
Re: (Score:2)
It is. You do not have to cut through the BS as with so many other people. Without that, Linux would never have grown to the quality level it is currently at. While not perfect, it is pretty good, provided competent system administration and competent coding.
Assuming he has a clear requirement for security (Score:2)
Doesn't matter (Score:3)
You can word it the way you want. If it's not secure, it's not secure.
Here's a more complete discussion of the issue. (Score:5, Informative)
https://www.theregister.co.uk/... [theregister.co.uk]
Re: (Score:2)
Which is reasonable. Killing processes is the right thing to do only if it is the only way to stop major attack activity in progress. The update in the article by Errata Security is spot on though: most security people are not developers, and that severely limits their perspective. Personally, as a security person who is also an (occasional) developer, I cannot imagine how you can actually do good things in the security space without at least some real hands-on experience of software development. Yet
Didn't Linus approve this? (Score:2)
I thought that Linus personally approves all the changes to the kernel. So didn't he approve the changes he is complaining about?
The context of his statement is refusing to approv (Score:5, Informative)
The context of the statement is that somebody submitted changes to the kernel and he denied the request.
Somebody wanted to add "security" code that would kill off a process, or even panic the whole kernel, if it detected something that might be a security concern. So with the proposed change, programs would crash without warning if this new code detected a possible security problem. Most of what it detects aren't attacks, though; they are just bugs in software that needs to be tweaked to explicitly follow the security rules. The new security code should first warn about these bugs rather than crashing the system, Linus said. Quoting him:
So the hardening efforts should instead _start_ from the standpoint of
"let's warn about what looks dangerous, and maybe in a _year_ when
we've warned for a long time, and we are confident that we've actually
caught all the normal cases, _then_ we can start taking more drastic
measures".
In his world, he is right (Score:2)
His world being the world of the Linux Kernel. When you use this context then of course any security breach is due to a bug, simply because, well, what else should it be?
Outside of that context... no.
For some value of "bug" (Score:2)
There's "you forgot to check array bounds here" bugs, and then there's "your entire design is fundamentally fucked and insecure" bugs...
Re: For some value of "bug" (Score:2)
And then there's "it's 2017 and we've never heard of UTF-8" bugs.
Re: For some value of "bug" (Score:2)
No preview on mobile. :(
linus vs grsecurity (Score:2)
As usual ... (Score:3)
... Linus doesn't mince words when it comes to pointing out bad software development.
As usual, he's right.
And before you object, think for a moment: it could actually be the case that he knows what he is talking about and may even be a better programmer than you. And that you should maybe listen to what he has to say.
My 2 cents.
Perspective (Score:2)
Perspectives will vary by profession. Once you get outside of software development, I suggest that most security problems are the result of failing to promptly update software, failing to properly configure software, or incomplete risk analysis. For example:
The Pentagon leak [slashdot.org] appears to be a failure to properly configure access controls.
Equifax [slashdot.org] was a failure to update software after a bug was found/fixed.
Fukushima [slashdot.org] was a failure to consider the risks during the design process.
Linus' perspective is f
OpenBSD has said this for years (Score:3)
OpenBSD has promoted this belief for years. The description of their audit process [openbsd.org] states...
"Another facet of our security auditing process is its proactiveness. In most cases we have found that the determination of exploitability is not an issue. During our ongoing auditing process we find many bugs, and endeavor to fix them even though exploitability is not proven. We fix the bug, and we move on to find other bugs to fix. We have fixed many simple and obvious careless programming errors in code and only months later discovered that the problems were in fact exploitable. (Or, more likely someone on BUGTRAQ would report that other operating systems were vulnerable to a `newly discovered problem', and then it would be discovered that OpenBSD had been fixed in a previous release). In other cases we have been saved from full exploitability of complex step-by-step attacks because we had fixed one of the intermediate steps."
Linus is back :) (Score:2, Insightful)
It is great to see that the "kinder, gentler Linus" has gone away and the good old "kick 'em in the ass" Linus is back.
Linus' outrageous remarks serve kernel development well
Out of context (Score:3)
You've got to read Linus' comment in the context of his post; otherwise it's a gross generalization and you're just arguing about semantics and opinions.
A better summary of what Linus said is: take security aspects into account when designing a feature, so you don't rely on a kernel panic (or exceptions) when some rule is not observed.
Here is something analogous I ran into recently regarding a Java SDK that was not designed with security in mind. Java has a SealedObject to protect sensitive data while in memory - great feature, but then things got messy when it came to dealing with String instances. In Java it is considered bad practice to use String type to represent any kind of sensitive data like passwords because the String is immutable (i.e. it can be visible in the heap for quite a while before getting garbage collected, and if a heap dump is triggered you are screwed). What it boiled down to was the current SDK had signatures like the following:
setPassword(String pwd); // BAD!!!
instead of:
setPassword(char[] pwd); // better!
If the SDK had been designed with setPassword(char[]) to begin with, SealedObject library usage would have been much simpler and cleaner - no silly security rules. But thanks to the cluelessness of setPassword(String) in the SDK, the SealedObject library design became much messier due to the security rule of throwing an exception whenever String instances were used to represent sensitive data.
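A minimal sketch of the char[] convention (the class name is invented and the key-derivation step is elided): unlike an immutable String, the array can be wiped the moment the password has been used, shrinking the window in which a heap dump would expose it.

import java.util.Arrays;

class PasswordHandling {
    static void setPassword(char[] pwd) {
        try {
            // ... derive a key or verify a hash from pwd here ...
        } finally {
            // Wipe the secret as soon as it is no longer needed; a String
            // would linger in the heap until garbage collected.
            Arrays.fill(pwd, '\0');
        }
    }
}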
Re: (Score:3)
The patch submitter agreed with him; I don't know why everyone is jumping in to white-knight for him.
Torvalds' point is that it can wait, and that it can be phased in. The proposal is a hardening scheme, and there's a long history of hardening schemes inadvertently breaking valid usage. Torvalds' perspective is that it can be done carefully, it's a nice-to-have, but it's not going to save the world, and it's not so terrible for it to wait a little while to make sure it is right. The patch submitter said that he d
Re: (Score:2)
I prefer Base64 encoding the inputs, and continuing in my meandering way to build things from SQL strings. Quite frankly, no one expects the Base64 encoding, and it works on, I think, everything... you just need to write an extra function to decode the data when looking for stuff in the database.
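For what it's worth, the workaround described above would look roughly like this (a sketch of the poster's approach, not a recommendation; the prepared statements discussed earlier remain the standard fix). Base64 output is limited to A-Z, a-z, 0-9, +, / and =, so the encoded value cannot contain a quote to break out of the SQL literal:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

class Base64Workaround {
    // Encode before splicing the value into the query string.
    static String sqlFor(String userInput) {
        String encoded = Base64.getEncoder()
                .encodeToString(userInput.getBytes(StandardCharsets.UTF_8));
        return "INSERT INTO notes (body) VALUES ('" + encoded + "')";
    }

    // Reading it back requires the extra decode step the poster mentions.
    static String decode(String stored) {
        return new String(Base64.getDecoder().decode(stored), StandardCharsets.UTF_8);
    }
}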
Re: (Score:2)
The question was *how* Equifax was hacked. Was it through a measure that this would have prevented? Probably not; it was probably something much more mundane.
The patch may be a nice improvement and ultimately a good idea, but it's a hardening improvement, not a fix for a specific vulnerability, so caution must be taken. You can't just invoke the 'security' card as a 'nothing else matters' trump when adding security features.
Security vulnerabilities are urgent; security mitigation features are important, but