Following Best Coding Practices Doesn't Always Mean Better Security
wiredmikey writes "While some best practices, such as software security training, are effective in getting developers to write secure code, following best practices does not necessarily lead to better security, WhiteHat Security has found. Software security controls and best practices had some impact on the actual security of organizations, but not as much as one would expect, WhiteHat Security said in its Website Security Statistics Report. The report correlated vulnerability data from tens of thousands of Websites with software development lifecycle (SDLC) activity data obtained via a survey. But there is good news: as organizations introduced best practices in secure software development, the average number of serious vulnerabilities found per Website declined dramatically over the past two years. 'Organizations need to understand how different parts of the SDLC affect how vulnerabilities are introduced during software development,' said Jeremiah Grossman, co-founder and CTO of WhiteHat. Interestingly, of all the Websites tested in the study, 86 percent had at least one serious vulnerability exposed to attack every single day in 2012, and on average, resolving a vulnerability took 193 days from the time an organization was first notified of the issue."
Just a thought (Score:2, Interesting)
Isn't that why they're called Best Practices and not Perfect Practices?
Re:In the eye of the beholder (Score:3, Interesting)
Well... 'best coding practices' are all in the eye of the beholder. What one calls a best practice might look awful to another. There really are no 'best coding practices'.
For overall coding, you're right - it's all in the eye of the beholder. For secure coding, one simple rule (which is unfortunately much harder to follow than it should be) will avoid 99% of the problems:
DON'T EXECUTE CODE WRITTEN BY YOUR USERS!
What makes it so damn hard is the temptation (if not active encouragement by your platform) to "stringly type" all your data, combined with the temptation (if not active encouragement by your platform) to build up executable code by pasting strings together, all smothered in a rich sauce of inconsistent, confusing, and poorly-documented rules for how to escape what characters where.
Good studies and bad studies (Score:4, Interesting)
Re:Having good engineers (Score:3, Interesting)
sprintf is a minefield of bad. You *have* to know how to use it correctly.
For example:
char xyz1 = 1;
unsigned int xyz2 = 2;
long long xyz3 = 3;
short xyz4 = 4;
char buffer[50];
sprintf(buffer, "%d %d %d %d", xyz1, xyz2, xyz3, xyz4);
That is a bad statement (especially if you are porting between platforms), with several places for overruns and underruns and an incorrect signed type. Your code, by the way, returns a pointer from the stack, which means it will just 'go away' and is subject to change. You may get lucky and it works for a while, until you call something else.
Each datatype has its own conversion: a short is %hd, an unsigned int is %u, a long long is %lld, and most people do not know that (and it varies between different CRTs). In the code above, char and short are promoted to int in a varargs call, so those two %d's happen to work, but %d against the unsigned int misreads the sign for large values, and %d against the long long reads only part of the value and throws every later argument off, pulling garbage off the stack (probably from the buffer var or the padding between them). Make those numbers bigger and I would overflow the buffer.
I learned this the *VERY* hard way (400+ statements, all with overruns and underruns). Use printf/sprintf the right way: read the docs completely and match your types exactly. It is also different on Windows vs. Linux vs. random embedded platforms. After looking at about 6 different sprintf implementations, their quality varies wildly, from stupid to able to handle the above statement with aplomb.
Also, just because you are using a 'type safe' language does not mean you are safe from this. Many languages just pass format strings along to sprintf/printf, so you are still subject to the same rules, sometimes with even less control.