What Happens When Software Companies Are Liable For Security Vulnerabilities? (techbeacon.com) 221
mikeatTB shares an article from TechRepublic:
Software engineers have largely failed at security. Even with the move toward more agile development and DevOps, vulnerabilities continue to take off... Things have been this way for decades, but the status quo might soon be rocked as software takes an increasingly starring role in an expanding range of products whose failure could result in bodily harm and even death. Anything less than such a threat might not be able to budge software engineers into taking greater security precautions. While agile and DevOps are belatedly taking on the problems of creating secure software, the original Agile Manifesto did not acknowledge the threat of vulnerabilities as a problem, but focused on "working software [as] the primary measure of progress..."
"People are doing exactly what they are being incentivized to do," says Joshua Corman, director of the Cyber Statecraft Initiative for the Atlantic Council and a founder of the Rugged Manifesto, a riff on the original Agile Manifesto with a skew toward security. "There is no software liability and there is no standard of care or 'building code' for software, so as a result, there are security holes in your [products] that are allowing attackers to compromise you over and over." Instead, almost every software program comes with a disclaimer to dodge liability for issues caused by the software. End-User License Agreements (EULAs) have been the primary way that software makers have escaped liability for vulnerabilities for the past three decades. Experts see that changing, however.
The article suggests incentives for security should be built into the development process -- with one security professional warning that in the future, "legal precedent will likely result in companies absorbing the risk of open source code."
"People are doing exactly what they are being incentivized to do," says Joshua Corman, director of the Cyber Statecraft Initiative for the Atlantic Council and a founder of the Rugged Manifesto, a riff on the original Agile Manifesto with a skew toward security. "There is no software liability and there is no standard of care or 'building code' for software, so as a result, there are security holes in your [products] that are allowing attackers to compromise you over and over." Instead, almost every software program comes with a disclaimer to dodge liability for issues caused by the software. End-User License Agreements (EULAs) have been the primary way that software makers have escaped liability for vulnerabilities for the past three decades. Experts see that changing, however.
The article suggests incentives for security should be built into the development process -- with one security professional warning that in the future, "legal precedent will likely result in companies absorbing the risk of open source code."
Nada (Score:2)
Re: (Score:3, Insightful)
You cannot build a secure application without planning the whole thing out first. This ADHD / MBA / lazy fuck / quick profits / fuck the customer approach to development ("agile") is cult Kool-Aid, and all the young ones drank it.
It will take computer science decades to recover from this, if it ever does. I think we may have already peaked.
Re: (Score:2)
What sorcery is this [imgur.com]?
File contents stored alongside metadata in the filesystem, because it fits? Not sure which FS you're using, and not sure how Windows 8/8.1/10 report on such things.
The price will skyrocket (Score:3)
Just look at medical devices. They don't cost that much to make but have to go through a long certification process that needs to be paid back.
Same with software. Something like SOX, PCI or HIPAA will pop up to certify "secure software" and software that is patched on a regular basis and people will end up paying for it. And on top of that every piece of software will be "certified" on some platform, similar to a game console. If you run it outside of the certified hardware you lose the ability to sue.
You're all idiots if you think you will be able to run software on some of the ridiculous configurations I've seen in my time and expect vendors to pay for it when it breaks because of your stupidity.
Re:The price will skyrocket (Score:5, Informative)
And yet, ironically, that certification process does not cover security. The software on medical devices is well known for being almost ludicrously insecure.
Re: (Score:2)
This price hike won't either.
And just like the price hike with medical devices, it will be mostly to cover the additional cost for legal battles and settlements. Life has a price, ya know...
Re: (Score:2)
Doesn't matter. After the lawsuits of the 80's and 90's there are now "best practices" and "standards of care" and standards for almost everything because you can't just sue. You have to prove someone did something wrong.
Same here. Industry will make up some best practices, it will be a certification or some other process that costs a lot of money, it will mean hiring people to push the paper and make sure the paperwork is right and everyone will pay.
Re: (Score:2)
The whole medical ecosystem is seriously screwed up, starting with the reimbursement models. They scream high costs, but one particular med device company I worked for spent $600 per device full up for production and FDA overhead. They sold for $15K and the company was barely breaking even. Where did the other $14K+ go? Mostly sales and marketing, also lobbying for increased reimbursements.
Answer: It won't happen. (Score:2)
If not already, there will be a clause added to the Terms and Conditions saying, "(a) We're not liable and (b) all disputes will be settled via arbitration."
Re: (Score:2)
Only if the laws of your country allow that.
Re: (Score:2)
"(c) Residents of jurisdictions that do not recognize disclaimers of implied warranty or mandatory arbitration are not eligible to license this software or purchase this device."
The maturing of the profession (Score:4, Interesting)
... software takes an increasingly starring role in an expanding range of products whose failure could result in bodily harm and even death. Anything less than such a threat might not be able to budge software engineers into taking greater security precautions.
What you are seeing is the maturing of software engineering as a profession. A few hundred years ago if you needed surgery you would go to your barber [wikipedia.org]. The reason for this was that they were usually in possession of the right tools. The medical profession eventually matured to what we have today, where a surgeon is a specialized physician. But that didn't happen overnight and lots of people died in the process. In fact, we didn't even have a theory of infectious disease until the 1830s.
The point is that right now hardware, including its firmware components, is oftentimes made without the involvement of a software engineer. It wasn't that long ago that software engineers didn't even exist and in time as the profession matures we will get to the point where developing a piece of hardware without the participation of a software engineer will be unthinkable. But we are not there yet.
An important side note is that there is a difference between a coder, a developer, a programmer, a software engineer, and several other specialized disciplines in the software arena. I think that a precondition to solving the problem identified by the article has less to do with things like development methodology (that is not central to the problem at hand) and more to do with establishing minimum standards for someone who claims to be a software engineer. For instance, a surgeon in 2017 has to meet vastly different minimum qualifications than a surgeon did in 1917. We didn't even have software engineers a hundred years ago, so who knows what the field will actually look like by the time it really starts to mature.
"Security" and "move toward agile development" ? (Score:3, Interesting)
Sorry, I stopped there, at "Even with the move toward more agile development and DevOps". What's the supposedly positive link between the two?
Neither "old" nor "new" methods will ever mean better software than the people using them.
Bad engineers using old methods (V-model? Tons of documents?) or new methods (you said agile, as in "get as many things done as possible, as quickly as possible, using a shiny web app like Trello or Kanban-something"?) won't make secure software.
Maybe with good engineers you can achieve good results, whatever the method is.
More or less related: ISO 9001 doesn't mean that the certified company makes good products; it means that it always produces the same quality, good or bad.
This may sound a bit like a troll, but I'd like to add that, since young engineers favor agile methods, and considering their lack of experience combined with the messiness I sense in agile methods, I tend to think that agile methods will produce less secure software...
Re:"Security" and "move toward agile development" (Score:4, Insightful)
Couldn't agree more. The notion that the road to computing security runs through agile and Devops seems to me to be as unlikely as the notion that the way to get you and your bicycle from New York to Bermuda is to head off on said bike for the Bering Strait (in Winter) so you can get to Singapore then think out your next step.
FWIW, I think the road to computing security probably is ill paved, difficult, unpleasant and involves shrinking attack surfaces by eliminating unneeded capabilities (e.g. #@$%^ Javascript) . It probably also requires shrinking the toolset to a bare minimum of proven libraries and protocols. That's not much fun, so it probably won't happen until we've exhausted the long list of entertaining but ineffective alternatives.
Software Engineers Failed? (Score:4, Insightful)
Software engineers have largely failed at security. Even with the move toward more agile development and DevOps, vulnerabilities continue to take off. More than 10,000 issues will be reported to the Common Vulnerabilities and Exposures project this year.
How about you get what you pay for? Many management teams have decided that adding security costs money and it's more cost effective not to spend many cycles on it, but rather to just deal with problems as they pop up.
I don't think you can spin that as software engineers "failing." If the management wants security, they can pay for training, consultants, audits, bug bounties, etc. There are lots of ways to address this issue. Besides, perhaps the number of bugs is skyrocketing as a natural consequence of all of the new software projects and products.
Re: (Score:2)
And since companies are not really known for absorbing costs themselves, expect software (and hardware) prices to skyrocket.
Re:Software Engineers Failed? (Score:5, Insightful)
So very much this. Given the notion that out of fast, cheap, and $x, you can have any two; it's practically a truism that PHB/MBA types will always choose fast and cheap, no matter the value of $x. The only exceptions are when you're contractually or legally obligated to have $x, such as in PCI or HIPAA environments. And even then, fast or cheap is only given up for $x very begrudgingly, and sometimes only on paper but not in reality.
Re: (Score:3)
Software hasn't had its "Pinto" moment yet, where a jury decides that a company needs to be punished for that type of calculus.
Re: Software Engineers Failed? (Score:2)
If it's not secure, it's not working. (Score:2)
Automated tests should include known attack vectors; if the build is vulnerable, the build fails.
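A minimal sketch of that CI idea in Python, using sqlite3 from the standard library (the `lookup_user` function and the payload list are illustrative assumptions, not anything from the article): feed known attack payloads to the code under test and fail the build on any regression.

```python
import sqlite3

def lookup_user(conn, username):
    # Parameterized query: user input never becomes SQL text.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def test_known_attack_vectors():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # Classic injection payloads; the build fails if any of them
    # returns a row or damages the schema.
    payloads = ["' OR '1'='1", "alice'; DROP TABLE users; --"]
    for p in payloads:
        assert lookup_user(conn, p) is None
    # The table must still exist afterwards.
    assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

test_known_attack_vectors()
print("ok")
```

A real suite would pull payloads from a maintained corpus and run on every build, which is exactly the "vulnerable build fails" gate the comment describes.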
Re: (Score:2)
If it's not secure, it's not working.
So you're saying no one has ever written a working product? If that is your definition of working, then there will never be any working products; security can never be absolute.
Re: (Score:2)
The problem with that is that most security issues are a result of attack vectors that were not known yet when the software was under development.
Software patches address that: software is tested against known attack vectors when released, and security patches are also tested against known attack vectors, just a few more of them as more become known over time.
Simple (Score:4, Funny)
We'll essentially get shrink-wrap contracts that basically say "This software can't do shit, if you use it regardless, sucks to be you."
In other words, what we already have.
Re: (Score:2)
Isn't that in most EULAs already?
You can always find something along the lines of "no guarantee, explicit or implied, that this software is fit for any purpose."
Re: (Score:2)
Compare the price of a SpaceX Falcon 9 to the average PC. It's not only the rocket fuel, ya know...
Who are these "experts"? (Score:5, Informative)
Reading the article, it's all people with an interest in peddling solutions to the problem, naturally. This is a marketing paper.
Claiming that software engineers have "failed" at security is akin to claiming that police have "failed" at stopping crime. And the courts aren't going to suddenly start blaming companies for the actions of threat actors unless there is some representation that the products they're creating are unhackable.
Re: (Score:2)
Same here. Because in actual reality it is mostly managers that have failed to hire competent people and then give them the time to create secure code.
That depends on how you define company (Score:2)
If it were literally only companies which were on the hook, you'd see a bloom (not a renaissance, because there has not been a dark age yet) of OSS. If companies are liable, but J. Random Coder on the street is not, then you're going to see FoSS take center stage simply due to lack of liability.
Re: (Score:2)
Uh, no. Given a choice between "use product from A and if there is a problem they are liable" and "use FOSS product and if there is a problem I am liable", who do you think is going to go with the second option?
Re: (Score:3)
Uh, no. Given a choice between "use product from A and if there is a problem they are liable" and "use FOSS product and if there is a problem I am liable", who do you think is going to go with the second option?
You haven't read a EULA lately, have you? That's how it is now.
Re: (Score:2)
So you're saying that there are EULAs today where the developers ACCEPT liability? No, today EULAs deny liability, just like FOSS. As far as liability is concerned, today proprietary and FOSS are equivalent.
But you are talking about a situation where they are unequal: proprietary software can be held legally responsible while FOSS cannot. In that case, one would have to be nuts to choose FOSS.
Software has already killed people (Score:2)
Not a security vulnerability - as in, no malicious actor exploiting a security flaw - but killing people? BT,DT. https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:3)
The problem, then, was poor product (hardware) engineering and a series of lapses in judgment.
Re: (Score:2)
Nope, that's not an excuse.
The earlier models had two, redundant, safety mechanisms in place to prevent killing patients, one in software and one in hardware. Yes, it was an unforgivable management decision to deliberately compromise that redundancy by removing the hardware safety mechanism in Therac-25, but that does not excuse the bug in the software safety mechanism.
The software was responsible (not solely responsible, before Therac-25, but still responsible) for preventing fatal radiation doses, and i
Re: Software has already killed people (Score:2)
Programming practices. (Score:2)
If (and that's a big "if") companies become liable for software failures then it will be most likely that there will be a guideline of standard programming practices. Likely it would restrict companies to using programming languages that have already been heavily analyzed and their security weaknesses identified. CMU has composed guidelines for multiple languages and platforms [cert.org] which could easily be identified programmatically. Such regulation would be a deathblow to companies using script kiddies to scra
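Checking such guidelines programmatically can be as simple as scanning source for banned constructs. A toy sketch in Python (the two banned calls are a tiny illustrative subset inspired by CERT C guidance, not the actual rule set):

```python
import re

# Tiny illustrative subset of "banned" C calls (cf. CERT C guidance
# against gets() and unbounded strcpy()).
BANNED = {"gets": "unbounded read", "strcpy": "unbounded copy"}
CALL = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

def scan(source: str):
    """Return (line_number, function, reason) for each banned call found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for m in CALL.finditer(line):
            hits.append((lineno, m.group(1), BANNED[m.group(1)]))
    return hits

code = 'char b[8];\ngets(b);\nstrcpy(b, src);\n'
print(scan(code))
# -> [(2, 'gets', 'unbounded read'), (3, 'strcpy', 'unbounded copy')]
```

Real rule checkers parse the AST rather than grepping text, but the point stands: a large class of guideline violations is mechanically detectable.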
Re: (Score:2)
That is BS. The CMU guides are pretty reasonable, but they cover maybe 10% of the problem. And people that have what it takes to write secure code do not actually need them.
This is not a problem that will be fixed by "best practices" anytime soon.
Hopefully good things (Score:2)
Oh shit, now wireless cameras cost $49.99. But they're secure? Works for me.
Security is not Safety (Score:2)
Safety and security are independent requirements. An expensive insurance bill or loss of operational trust is not a measure of safety.
In the real world of buildings and machinery security is only occasionally a factor and often clashes with safety. Safety is always number one at the expense of security.
Re: (Score:2)
So who're you going to sue? The retailer? The distributor? The importer? Or will you try going after the producer - overseas, different jurisdiction, and possibly out of business already (or simply shut this company and moved on to the next).
Re: (Score:2)
I think you might have replied to the wrong posting.
Oh give me a break. (Score:4, Insightful)
Even with the move toward more agile development and DevOps, vulnerabilities continue to take off
Last time I checked, both of those fads tend to harm security, not help it.
Doesn't matter. (Score:2)
It's bad (Score:2)
Look what happened to general aviation when Cessna, Piper, et al. got the pants sued off them. A small four-place plane used to cost about as much as a mid-range Cadillac; after the lawyers got through with them, they cost $200K.
Let me fix the summary for you: (Score:3, Insightful)
"Even with the move toward more agile development and DevOps, vulnerabilities continue to take off..."
Should read as follows:
"Because of the move toward more agile, less detail-oriented, lower quality development and DevOps, vulnerabilities continue to take off..."
Security will only happen ... (Score:2)
... in response to litigation.
I predict the waiver-of-liability clauses in EULAs are going to be struck down.
It's similar to signs in the parking lots of Walmart saying, "Not responsible for damage from shopping carts."
While that sign may discourage some shoppers from filing damage claims, it certainly does not protect Walmart from liability in all cart-related matters.
As TFS suggests, computing devices are becoming more critical and damages more damaging.
Agile? (Score:2)
Anyone care to explain how agile is supposed to improve security?
It's not the software companies that get hosed (Score:2)
To which I say, good fucking riddance.
The sane solution is of course... (Score:2)
... to define a "state of the art" regarding security. It should contain things like not mixing user input with SQL-queries, unless it goes through a whitelist of characters or is escaped by a proven to work function.
Essentially that "state of the art" should always be a bit above what idiots do in order to weed out idiots. Ideally it's defined in a way that that compilers can prove it working. (in the above case, user input strings and SQL-queries could have different types)
Slowly but surely, you'd raise the bar.
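The parent's type idea — give raw user input and query-safe strings distinct types so a checker rejects accidental mixing — can be sketched like this in Python; `TaintedStr`, `SafeStr`, and `whitelist` are hypothetical names for illustration:

```python
from typing import NewType

TaintedStr = NewType("TaintedStr", str)   # raw user input
SafeStr = NewType("SafeStr", str)         # whitelisted / escaped

ALLOWED = set("abcdefghijklmnopqrstuvwxyz0123456789_")

def whitelist(s: TaintedStr) -> SafeStr:
    # Only characters on the whitelist survive; everything else is dropped.
    return SafeStr("".join(c for c in s if c in ALLOWED))

def build_query(name: SafeStr) -> str:
    # Accepts only SafeStr; a type checker flags build_query(TaintedStr(...)).
    return f"SELECT id FROM users WHERE name = '{name}'"

user_input = TaintedStr("alice'; DROP TABLE users; --")
print(build_query(whitelist(user_input)))
# -> SELECT id FROM users WHERE name = 'aliceusers'
```

Python's NewType is only enforced by a static checker like mypy, not at runtime; in a compiled language the same separation would be a hard compile error, which is the "compilers can prove it" property the comment asks for.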
Large software corporations would benefit (Score:2)
Large corporations have armies of attorneys to cover their asses. Liability for software faults would benefit them because they have the resources to kill most any lawsuit against them. The open source world, however, would wither and die because no weekend coder is going to risk everything over a mistake. Expect large corporations to fully endorse software liability laws, since it will remove the one kind of competition that they can't beat on cost or functionality.
Re: (Score:2)
Outside Actors? (Score:2)
If the brakes go out on your car and you crash into a tree, is person who made the brakes liable?
If the software 'goes out' on your car and you crash into a tree, is the person who made the software liable?
If you build a car with an easily hackable lock, and someone breaks in, are you liable for theft? (tennis ball trick)
if you build a car with an easily hackable electronic lock, and someone breaks in, are you liable for theft?
Do you see the parallels here? Just because someone can do something bad to yo
Agile and Devops? (Score:2)
Are we sure? (Score:2)
[citation needed] Seriously, I'm a software developer and often have to be involved in a variety of security-related aspects of development, and I've been doing it for twenty years. My anecdotal evidence is that security exploits are way *way* down in terms of risk and severity compared to when I entered the industry... I could be wrong (the plural of anecdote is not data), but to me it feels like the opposite of what the summary claims.
Re:You get what you didn't ask for (Score:4, Insightful)
It would have to be a legal obligation, for large software or certain domains, to pass either:
- HP Fortify (HPFOD)
- IBM AppScan
- Coverity
or similar security scans.
Open source itself would not need to pass this, but users* of those libraries / programs
should, and would then have to pass the scan.
Some of these companies provide "free scan" for Open Source / public GitHub projects.
It happened to us that HPFOD would find bugs in some Maven Java library JAR files
that we had to patch ourselves in order to put that JAR in production.
The other problem is that you can pass this year,
but then next year the software will fail as new security issues are discovered
and newer best practices must be followed.
Example from 2014:

LOG.error("Order name: " + orderName); /* Log4J */
LOG.error("Order name: {}", orderName); /* SLF4J */

Those passed before; now you would get a "Log Forging" security issue. The correct Log4J version:

LOG.error("Order name: "
    + (orderName == null ? null : orderName.replaceAll("[\\x1B\\x00\\r\\n\\t\\f]+", "_")));

Control characters, newline, and line feed must be stripped
from any @Tainted String / Object before it goes into the production logs.

Otherwise, someone could do this:

orderName = "my order\n\n[INFO] The user logged in successfully.\n\n";

or mount attacks where you open the server logs in VIM or a shell and an
"ASCII ESC sequence" does things to your terminal...
https://www.owasp.org/index.php/Log_Injection
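The same stripping rule can be shown in a few lines of Python (the `sanitize` helper is an illustrative assumption): any run of control characters in user input is collapsed before the value reaches a log line, so a forged "[INFO]" entry cannot start on its own line.

```python
import re

# Control characters, including \n, \r, \t, \f, and ESC (0x1B).
CONTROL = re.compile(r"[\x00-\x1f\x7f]+")

def sanitize(value: str) -> str:
    # Collapse any run of control characters to a single "_".
    return CONTROL.sub("_", value)

order_name = "my order\n\n[INFO] The user logged in successfully.\n\n"
print("Order name: " + sanitize(order_name))
# -> Order name: my order_[INFO] The user logged in successfully._
```

The forged "logged in successfully" text still appears, but flattened onto the same line as the real entry, where it no longer looks like a separate log record.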
Re: You get what you didn't ask for (Score:3)
If it's such a security issue, shouldn't it already be done correctly in the library or the logging system? These sorts of things are exactly what a developer shouldn't have to worry about. If the underlying system receives a string from a log library, it should either be cleaned already or the underlying system should clean it up.
Re: (Score:2, Insightful)
If you think Fortify, AppScan, or Coverity will magically make your app secure, you are about as far from secure as you can get.
Re: (Score:2)
And our software does have serious problems. People still write SQL injection bugs, for unknown reasons.
Re:You get what you didn't ask for (Score:4, Informative)
LOL, HP Fortify, the tool that marks almost every line as a vulnerability to cover its own ass. It generates so many false positives that it is beyond useless. We'll just keep doing our own reviews.
And if junk in your log manages to cause a hack, then it is not your software at fault; it is the log viewer software that is at fault. If that happens to be VIM or your shell, then yes, I boldly claim that is a bug in those pieces of software.
Re:You get what you didn't ask for (Score:5, Insightful)
If your company were going to be held liable for security vulnerabilities, finding and plugging these holes during development would be part of your job. As things are, there's no reason to look for or deal with them unless there's a way to make your customers pay for it. This holds true for all custom software, either open or closed source.
Re: You get what you didn't ask for (Score:5, Informative)
If your company were going to be held liable for security vulnerabilities, finding and plugging these holes during development would be part of your job. As things are, there's no reason to look for or deal with them unless there's a way to make your customers pay for it. This holds true for all custom software, either open or closed source.
It really depends on how big the company is, how often they get busted, and what exactly they are liable for. As it stands now, the average small company can go 20 years without an incident. The small company that skimps on security can likely outcompete and outlast the small company that doesn't. Sure, if they get unlucky and have a security incident it could bankrupt them, but the odds are in their favor that skimping on security gives them a competitive advantage over the company that doesn't.
Re: (Score:2)
I'm glad that someone here recognizes this fact. I don't know how many companies I've seen that did things "right" go under or get bought by companies that took every software shortcut known to man.
The basic fact is that if the customer is ignorant of the intangibles like quality, they'll prioritize reputation and then price. If you're a smaller company, you won't survive long enough to get a reputation for excellence if you don't go cheap enough to allow you to undercut all your competitors. And (in
Re:You get what you didn't ask for (Score:4, Interesting)
The developers aren't at fault. The people in charge have to be the ones to demand security. Blaming pros and cons on Agile or DevOps misses how companies really work. If the management puts security as a required feature, then it'll get added in even with Agile. Nobody should be dumb enough to allow bottom tier developers to set their own goals.
You also need management to actually hire security experts. A lot of failures come from having novices work on security (novices can mean those with decades of software experience but only a superficial understanding of security and zero academic understanding of crypto).
Re: (Score:2)
No, Agile won't allow security to be built in. Agile builds dirty snowballs with little integration other than slapping one feature on top of another. There is no mechanism for going back and developing a model of how the features integrate to produce security holes. DevOps is no better.
Continuous delivery will always outpace security integration.
Re: (Score:2)
Sure. Just up the price to cover the legal problems and settlement fees and you're set. Why bother change the software? In the end, the whole shit is a matter for risk management, not engineering.
it's done all the time in aviation (Score:3, Informative)
In spite of people confusing in-flight entertainment systems with avionics: yes, it is done all the time in the aviation industry. Every piece of software that controls the airplane must be built to RTCA DO-178B/C design processes. Among other things, every input and output to every module is specified in the design process, and out-of-bound input responses are chosen. Then, in writing the software, the inputs are checked and then validated against random and maliciously crafted input. Bogus states are injected
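In miniature, that "every input checked, out-of-bound response specified up front" discipline looks something like this (the altitude limits and fallback value are illustrative assumptions, not from DO-178B/C itself):

```python
# Specified valid range and the specified response for out-of-bound input.
ALT_MIN_FT, ALT_MAX_FT = -1000.0, 60000.0
FALLBACK_ALT_FT = 0.0

def read_altitude(raw: float) -> tuple[float, bool]:
    """Return (value, valid). Out-of-bound or NaN input yields the
    specified fallback and valid=False, never an exception."""
    if raw != raw:  # NaN check (NaN is the only value unequal to itself)
        return FALLBACK_ALT_FT, False
    if raw < ALT_MIN_FT or raw > ALT_MAX_FT:
        return FALLBACK_ALT_FT, False
    return raw, True

print(read_altitude(35000.0))   # -> (35000.0, True)
print(read_altitude(1e9))       # -> (0.0, False)
```

The point is that the out-of-range behavior is decided and documented at design time, then tested against random and malicious input, rather than discovered in production.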
Re:it's done all the time in aviation (Score:4, Insightful)
It's not really that much more expensive, as mature engineers aren't really more expensive than programmers, are a lot more effective, and the debug cycle is a lot faster when it's designed in at the front.
Dude, I worked in this industry. It's FUCKING INCREDIBLY EXPENSIVE. Like on the order of 100x more expensive than writing line-of-business commercial software. A 10-line subroutine can EASILY require 100 hours of engineering and testing to meet spec. Everything has to be written out in some sort of design document beforehand: every requirement flowed down to the lines of code that cover it, documented test cases that cover each requirement, total coverage of all possible inputs at every call boundary, etc.
I mean, yes, theoretically it would be great if all of this was done in every piece of software, but software's PURPOSE is to be flexible and quickly and efficiently implement functions in a way that can be modified without exorbitant cost. If you force things to the level of safety of flight critical software then you might as well literally build dedicated silicon for everything, because it will be cheaper.
The truth is software will probably never achieve this kind of level of reliability and security in general. It just isn't worth it. Even safety critical software needs to be cheaper than that. If we want the functionality and convenience of embedding software in cars, airplanes, etc then we better be willing to accept the consequences. The only alternative is likely automated software development performed entirely by AIs, but I doubt that will fix the problem either. There's always some guy that can make the smarter AI that can figure out the security hole in the software your dumber one wrote.
Re: (Score:2)
According to the sites about the CGA I've found, it does cover software. However, it doesn't cover anything sold "for commercial use", so business software wouldn't be covered. An OS or program bought for home use would be, though. Open source would not be covered, as only items sold "in trade" are. Private sales are not covered, so custom software made on contract would not be covered.
Re: New Zealand (Score:2)
Re: (Score:2)
And then everyone comes around and cries out, "Why is the US price for [item] $500 while I'm paying $1000 for it?!"
There
Re:Liability (Score:5, Insightful)
Liability is what's gonna kill the free software movement. Many reasons.
Liability for general purpose computing is not going to happen. It would make software way more expensive, and mean locked down desktops and laptops that prevent users from downloading, connecting, and configuring. People are not going to accept that.
For safety critical software, such as automotive control (not infotainment), elevator systems, etc. we already have liability regulations.
Liability for an insulin dispenser makes sense. Liability for a free webapp does not.
Re:Liability (Score:4, Interesting)
Liability is what's gonna kill the free software movement. Many reasons.
Liability for general purpose computing is not going to happen. It would make software way more expensive, and mean locked down desktops and laptops that prevent users from downloading, connecting, and configuring.
In addition to that, we have the most vulnerable OS being the biggest OS, and the Chinese building the Internet of Things out of essentially open systems, so what would we do? Sue them?
It isn't to blame the victims here, but the ascendancy of personal computing for the masses means that most computing devices are owned by people with very little idea of security. In a world where people click on random stuff they get in email, it's gonna be very hard to get any real security.
Re: (Score:2)
This kind of thinking pisses me off.
It's a goddam computer.
You and I know what best practices are, so why the fuck don't we "AI" the computing devices?
Meta Code (I never met a code I didn't like):
- See email attachment
- Examine attachment code
- Predict consequences
- Vet the code against Isaac Asimov's "Three Laws of Robotics":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Re: (Score:3)
Re: (Score:3)
Then you're not the person to tackle this issue.
Microsoft did (as mentioned above) glue their hardware together.
That fixes a lot of problems.
Microsoft will find a solution to security when it's cost-prohibitive to continue to ignore the problem.
Re: (Score:2)
Then you're not the person to tackle this issue.
Then again, who is? Or are you suggesting a "VolksComputer".
Re: (Score:2)
Then again, who is?
That person or persons will be revealed when litigation is applied.
I see the security issue following a similar trajectory to product liability litigation and public safety standards.
For reference, see fire and building codes, aircraft and automobile safety standards.
Qualified people were located when the cost of doing nothing became expensive.
Re: (Score:2)
Re: (Score:3)
Re:Liability (Score:4, Interesting)
Actually, I am a programmer (retired) and I agree with you that there is no such thing as "AI." The AI part was a dig at those who are delusional in that regard.
Still, you and I have the skill sets to write "play-like" algorithms that can single-step through an executable without allowing anything to actually happen.
If the code says it's going to start some shit, we can tell it, "No, you're not."
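Something like this toy dry-run interpreter, say (the instruction set is made up for the sketch; a real x86 single-stepper would sit on an emulator, but the idea is the same):

```python
# Toy illustration of single-stepping code without letting anything
# actually happen: a minimal stack machine that records, instead of
# performing, any side-effecting instruction. Opcode names are invented.
SIDE_EFFECT_OPS = {"WRITE_FILE", "OPEN_SOCKET", "EXEC"}

def dry_run(program, max_steps=1000):
    """Step through `program` (a list of (op, arg) tuples); return the
    final stack and the list of side effects that were refused."""
    stack, blocked = [], []
    for step, (op, arg) in enumerate(program[:max_steps]):
        if op in SIDE_EFFECT_OPS:
            blocked.append((step, op))   # "No, you're not."
        elif op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
    return stack, blocked

stack, blocked = dry_run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
                          ("WRITE_FILE", "/etc/passwd")])
print(stack, blocked)   # [5] [(3, 'WRITE_FILE')]
```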
Re: Liability (Score:3)
Solved the halting problem, there, did ya' pal?
Re: Liability (Score:2)
Re: (Score:2)
Please cite the various laws in computer science that nullify the very solution your second sentence offers.
Re: (Score:2)
The Secure Model of Computation would be defined over a less flexible subset of a Turing machine that deliberately avoids the properties on which the proof of the halting problem's undecidability rests.
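A minimal illustration of the idea (the syntax here is invented; real-world relatives are total languages and primitive recursive functions): if the only loop construct takes a literal bound, every program terminates, so "will it halt?" is trivially decidable:

```python
# Toy language with no unbounded loops: 'repeat' takes a literal count,
# never a computed one, so every program provably terminates and the
# halting problem never arises. Illustrative sketch only.
def run(program, env=None):
    """program: list of ('set', name, value) | ('add', name, value)
       | ('repeat', count, subprogram). Always terminates."""
    env = env if env is not None else {}
    for instr in program:
        if instr[0] == "set":
            env[instr[1]] = instr[2]
        elif instr[0] == "add":
            env[instr[1]] += instr[2]
        elif instr[0] == "repeat":   # count is a literal, not computed
            for _ in range(instr[1]):
                run(instr[2], env)
    return env

print(run([("set", "x", 0), ("repeat", 5, [("add", "x", 2)])]))  # {'x': 10}
```

The price, as the proof of undecidability implies, is that such a language can no longer express everything a Turing machine can.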
Re:Liability (Score:4, Insightful)
This kind of thinking pisses me off.
It's a goddam computer.
You and I know what best practices are, so why the fuck don't we "AI" the computing devices?
It's a market problem, plus it's a We problem, plus the unknown person/group problem.
The profit margin is pretty thin for many devices and the software that runs them, and the lifetime of a device or program is likewise very short. Security is about the last thing on the vendors' minds: they're busy milking whatever profit can be had out of Product A while Product B gets ready for release.
Then there is the "we" issue. The collective we is still using stupid passwords like Password1 and doesn't think twice about clicking on email links. At this point, it is obvious that the collective "we" is not going to be of much help in matters of computing security.
It's nothing short of amazing that the 30-year-old SMBv1 is still being shipped toggled on (it is finally being removed from the OS). This part might be conjecture: it's been known to be a gaping security hole for years, so why was it still there? Microsoft had no problem making a shitload of peripherals obsolete with Vista, and no issue abandoning Windows 7 users. But SMBv1? That must be included, and it must be turned on by default. So it's not hard to imagine that someone wanted it turned on by default.
Re: (Score:2)
So it's not hard to imagine that someone wanted it turned on by default.
I agree, and state it a little differently:
It's easy to imagine that no one felt compelled to even fuck with it.
Obvious to both of us is that Microsoft now feels compelled.
I think the reason for that is that "We" are dancing on the rim of litigation.
Re: (Score:2)
It's a goddam computer.
You and I know what best practices are, so why the fuck don't we "AI" the computing devices?
Oh, very simple: Because it is not possible today and it may never be possible. Strong AI is a dream/nightmare, but not anything that we can reasonably expect to ever exist at this time. There is actually no indication that it is even possible in this universe. And should it be possible, it may well come with self-awareness and free will and may flat-out refuse to work for you.
Re: (Score:3)
Well, we've both been at this a long time and I agree with you to a large degree.
The "intelligence" part of AI was initially equated with human intelligence.
Of course, an intelligent algorithm would commit suicide when it determined that Facebook was down.
So, while the buzzword remains, the definition of AI has changed to omit the unrealistic goal of being actually intelligent.
However, where we may disagree on a point:
I submit that a computer that can "mentally" make hundreds of thousands of moves in chess
Re: (Score:2)
Re:Liability (Score:5, Insightful)
Liability for "free" software rests with the people who use it to make money. They are the ones on the hook to ensure that the "free" software is suitable for the purpose for which they are selling it.
Organizations which use "free" software directly are themselves responsible for whatever happens as a result of using that "free" software.
GPL is rather long-winded - take a look at the MIT license for a notion of where liability for "free" software lies.
Before you say "that's gonna change when liability comes into the picture" - no, not at all - people writing software who don't know how it is going to be used cannot conceivably be held liable, any more than Sir Isaac Newton's estate could be held liable for a mishap on the Space Shuttle.
Re: Liability (Score:2)
Re: Liability (Score:4)
Re: Liability (Score:3)
Yes, and liability stops with whoever put it in that safety-critical system without assurances from a third party that the software was fit for such use.
Similarly, a lumber yard is not liable if someone uses particle board where high-tensile, fire-resistant, waterproof material is indicated.
Re: (Score:3)
That is unmitigated nonsense. FOSS software is used, sometimes heavily, in industries where there is strong liability for security breaches, for example banking, medical, and insurance.
Re: (Score:2)
It's management that won't pay for properly written, properly tested software. That takes time (measured in metric shit-tons), and that makes it too expensive in every case I've ever seen.
Security cannot possibly be the result of dotting every i and crossing every t. It cannot require exhaustive testing, massive expense, or unfailing attention to every detail. Any approach requiring these things for success is almost certainly guaranteed to fail. Security must never require human perfection.
The only realistic way to get to a secure system is by pursuing designs which are inherently secure, where coders are required to intentionally create a vulnerability or otherwise kn
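The "inherently secure design" point has a familiar concrete example: parameterized database queries, where the safe path is the default and a coder has to go out of their way (string concatenation) to create an injection hole. A quick sketch using Python's standard sqlite3 module:

```python
# Inherently secure by design: with placeholders, hostile input stays
# data and is never interpreted as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The ? placeholder binds `name` as a value, not as SQL text.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))             # [('admin',)]
print(find_user("alice' OR '1'='1"))  # [] -- injection attempt fails
```

A coder using this API would have to deliberately switch to string formatting to reintroduce the vulnerability, which is exactly the property the parent describes.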
Microsoft never innovates, but does improve ideas (Score:2)
Could have been the entry point for an insightful analysis. You didn't comment (or notice) that it's not a spurious correlation. The EULA was perfected by Microsoft so that liability was eliminated as a real concern for the developers.
I think the other major wrinkle perfected by MS was selling to the makers, not the actual users.
There are other possibilities, but because the rules of the game are biased for YUGE companies, not increasing consumer choice and freedom, we're screwed. I suppose Trump's voters h
Re: (Score:2)
Indeed. DevOps has basically failed (few people can actually do it, and those who can were already doing it before the name existed). Agile is mainly a method of keeping management from standing in the way of developers too much, but again, it needs highly competent people to work well.
As such, claiming that the failure of two hype movements is responsible for insecure software is either excessively stupid or a marketing lie. I suspect the latter.
Re: (Score:2)
If you disagree, please do show me a practical way how to write completely secure code.
There is no need for that, and asking for it shows you are a novice at software security. In actual reality, it just needs to be harder to break in than what your target adversary can do or can afford. That is often pretty easy to reach, given competent architects, designers, and implementers. The real problem is that most software is written by incompetent people without the first clue about security. Hence breaking in is often excessively simple. Just look at the recent Intel vulnerability (management engine
Re: (Score:2)
And that is complete nonsense, because 1) it is not doable and 2) it would not result in secure software if it were doable.
Re: (Score:2)
The problem isn't the vulnerabilities you know about; the problem is the ones you do not know about.