Microsoft's New Plan For Keeping the Internet Safe 302
itwbennett writes "Microsoft Corporate Vice President for Trustworthy Computing Scott Charney used to think it was the responsibility of ISPs to keep hacked PCs off the Internet. Now, he says the burden should be on consumers. Speaking at the RSA Conference, Charney suggested that the solution may be for consumers to share trusted certificates about the health of their personal computer: 'The user remains in control. The user can say I don't want to pass a health certificate,' he said. 'There may be consequences for that decision, but you can do it.'"
Pathetic (Score:5, Insightful)
From TFA:
"A bank could ask customers to sign up for a program that would scan their PC for signs of infection during online sessions"
hello ? privacy issues anybody ?
So basically organizations that do business with consumers would be allowed to scan the consumer PC. Great idea...
Next step, you have to allow the government, banks, Ebay, Paypal and what not to scan your PC otherwise they will refuse to do business with you. Since they may not have a linux or other OS scanners, you would be required to use Windows of course.
This guy is a genius!
Re: (Score:2)
Re:Pathetic (Score:5, Insightful)
Re: (Score:2)
Re:Pathetic (Score:4, Informative)
Re: (Score:3)
It's hard to trust locks from a company that hands out copies of the key to anyone who says "that's my lock" and gives them $50.
Re:Pathetic (Score:5, Insightful)
I think it would have to be a third-party company that both the consumer and the bank trust, like how we trust Verisign to prove the identity of an HTTPS provider.
I don't think it's a good solution, though.
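The third-party trust model mentioned above is the same one TLS already uses: the client doesn't trust the server directly, it trusts a CA that vouches for the server. A minimal Python sketch of what "trusting the CA" means in practice (this illustrates the general model, not any specific health-certificate scheme):

```python
import ssl

def make_trusted_context() -> ssl.SSLContext:
    """Build a TLS context that only accepts servers vouched for
    by a CA in the system trust store -- the third-party-trust
    model the parent comment compares to Verisign."""
    ctx = ssl.create_default_context()
    # create_default_context() already enables both checks; set
    # explicitly here to make the trust decision visible.
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

A connection wrapped with this context fails with a certificate-verification error unless a trusted CA signed the server's chain and the hostname matches.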
There's another glaring problem with this idea. Those of us who study computer security and take steps to use our systems responsibly don't want to be burdened by all of these requirements intended for those who don't. I'm sorry that a few bad people defraud others of their money, but the minimum requirements for any proposed solution include not punishing those who are doing things correctly by imposing such intrusive measures.
As far as banks are concerned, securing their own systems is all I would expect from them. As their customer, I really don't want my bank getting into the end-user computer security business and telling me how I should run my systems. I want them to stick with what they know. I also don't want to pay the higher fees and less favorable interest rates it would take to cover this expense. That's not even considering the support costs, as the users for whom this is really intended are the same ones who need the most handholding.
If Microsoft really wants to do something helpful, they can stop marketing Windows as "the easiest thing ever!" to non-technical users. They can start being more realistic and up-front about the basic competency required to safely use a worldwide untrusted network. They can harden the Windows codebase and require that software be built with address randomization, non-executable pages, and other stack-smashing protections before it is allowed to use the little Windows certified logo. They could do a much better job of treating data from the network as untrusted and potentially malicious (the sandboxing they are beginning to implement for IE is a step in that direction).
Hell, for that matter they could split the company up into separate corporations which make competing operating systems that all implement the Win32/64 API. Perhaps some of them could be based on *BSD like Mac OSX. Getting rid of the "write once, infect everywhere" Windows monoculture would be a decently effective way to limit the spread of malware.
There are many options to be considered before we even think about universally intruding into everyone's PC and making this into a common practice that is somehow considered acceptable. Normally that's what the bad guys who write malware are trying to do. This is a terrible precedent. Not to mention that if average users get used to the idea of some company (that they don't get to audit) scanning their systems, what's to stop the organized criminals from just running their own scanning companies and collecting any financial data they find? This could change the nature of the attacks but has little or no hope of preventing attacks.
Re: (Score:2)
They can harden the Windows codebase and require that software be built with address randomization, non-executable pages, and other stack-smashing protections before it is allowed to use the little Windows certified logo.
Shouldn't this be done via the kernel and OS support libraries?
Re: (Score:2)
They can harden the Windows codebase and require that software be built with address randomization, non-executable pages, and other stack-smashing protections before it is allowed to use the little Windows certified logo.
Shouldn't this be done via the kernel and OS support libraries?
Yes, the way I worded that was sloppy of me. Still, for address randomization you'd have to compile the applications with position-independent (i.e. relocatable) code. So I should have said require that software built for Windows is compatible with such security measures. While they're at it, they can place canaries at the end of buffers like GCC's SSP to offer an additional layer of protection in userspace.
Microsoft should take realistic, do-able steps like this to actually address its security problems
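The position-independence requirement mentioned above is visible right in a binary's headers: on ELF platforms, a PIE executable has type ET_DYN instead of ET_EXEC. A toy parser illustrating the distinction (assumes a little-endian ELF; field names from the ELF spec):

```python
import struct

ET_EXEC = 2  # fixed-address executable: the image can't be relocated
ET_DYN = 3   # position-independent: relocatable, so ASLR can move it

def elf_type(header: bytes) -> int:
    """Read e_type from an ELF header (2 bytes at offset 16, right
    after the 16-byte e_ident block)."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # '<H' assumes a little-endian ELF; byte 5 of e_ident records this.
    return struct.unpack_from("<H", header, 16)[0]

def is_position_independent(header: bytes) -> bool:
    return elf_type(header) == ET_DYN
```

A logo-certification check like the one proposed could refuse any binary whose type is still ET_EXEC.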
Re: (Score:3)
Re: (Score:3)
You mean like ASLR which has been implemented in Windows 7 and DEP which is supported in Windows XP and beyond for certain system libraries and all x64 applications.
The issue with Windows security isn't technical; it's trying to maintain compatibility and ease of use, with compatibility being the biggest holdup. I bet they could fix it if they behaved like Mac and Linux do, doing the whole "I'm sorry, your older program doesn't work with the newest libraries, tough shit. Get the program updated."
At work, I'm still dealing with custom
Re:Pathetic (Score:4, Insightful)
MS obviously does not consider backward compatibility a defining feature for many users anyway. After all, XP mode is only available with the business versions of Windows 7. Most copies of Windows sold to consumers specifically and intentionally leave out a great deal of XP compatibility that MS is sitting on the code for.
So, No. Backward compatibility has NOTHING to do with any security problems Windows may or may not have.
Re:Problem (Score:4, Interesting)
You're really on to something. Take it up a conceptual level.
"Those of us who study (Airport) security and take steps to use our (Airport) systems responsibly don't want to be burdened by all of these requirements intended for those who don't. I'm sorry that a few bad people defraud others of their (Flight Safety), but the minimum requirements for any proposed solution include not punishing those who are doing things correctly by imposing such intrusive measures."
One of the best descriptions of the TSA problem I've ever seen!
Re:Pathetic (Score:5, Insightful)
So, this guy wants to run a program on an untrusted machine, which will report back to a website on whether or not the machine should be trusted? Presumably he also thinks banks should employ people to stand at the front door and ask "are you a bankrobber?" rather than employing security guards.
Re:Pathetic (Score:5, Interesting)
"So, this guy wants to run a program on an untrusted machine, which will report back to a website on whether or not the machine should be trusted?"
No, you're missing what they are actually proposing.
They are proposing that everyone must have a Trust chip locking down their computer. This Trust chip is most commonly known as a Trusted Platform Module or TPM. The Trust chip contains a unique identity code (PubEK) that can be used to securely track your computer and your identity. The Trust chip contains a master key (PrivEK) to lock down identity control. You are FORBIDDEN to know your own master key locking down your identity. This key is REQUIRED to be securely locked down inside the chip to deny the owner knowledge or control of this key. The chip also contains a key (RSK) to lock down files on your computer. You are FORBIDDEN to know your own master storage key. This key is REQUIRED to be securely locked down inside the chip to deny the owner the ability to read or modify his own files, except as permitted by the Trust chip. The Trust chip also scans the software you run on your computer, and it does this for two purposes:
(1) It spies on and logs the software running on your computer in order to send over the internet Trusted spy reports (Remote Attestation) telling other people exactly what hardware and software you are running. For example a website can ask for a Remote Attestation spy report to check if you're running any sort of Ad Blocker. If you have any sort of Ad Blocker, or if you're running an unapproved web browser, or if you are running an unapproved operating system, or if you don't have a Trust chip, or if you refuse to send the spy report, then you are blocked from viewing the web pages.
(2) It logs exactly what software you are running in order to DENY YOU THE ABILITY TO READ OR MODIFY YOUR OWN FILES unless you are running the exact unmodified software that is APPROVED for reading or modifying the files. For example the Trust chip can make it impossible to play music downloads unless you play them with the exact unmodified RIAA Approved DRM-enforcing music player. The Trust chip can also make it impossible to view streaming video unless you are running the exact unmodified MPAA Approved DRM-enforcing web browser. Other people can store and modify data on your computer, but it's impossible for you to read or modify that data except to outright delete it. Of course, deleting the files will cause stuff on your computer to stop working.
This is the "Security System" Microsoft originally codenamed Palladium. This is the "Security System" the government has been talking about for the last several years to secure the National Information Infrastructure. This is the "Security System" that underlies the Trusted Identity System that the White House has been talking about for the last several years. This is the "Security System" that Microsoft has been promoting to secure corporate networks. This is the "Security System" that the copyright industries have been pushing to lock down music and video and book and websites and to enable a "rental" model for software.
The subject of the article is that Microsoft is backing off on the idea of having ISPs DENY YOU INTERNET ACCESS unless you have a Trust chip and run an Approved operating system along with Mandatory Approved software to "secure" your computer. The argument is that this is a "Health Check", that if you fail the "Health Check" then your computer might be infected by a virus, and that it is appropriate for ISPs to shut off your internet access if you have an infected or vulnerable machine. See? Doesn't that sound wonderful? The system comes wrapped in a bright shiny box advertising it as a GOOD thing to protect you and everyone else on the internet against viruses.
The article here is merely saying that Microsoft noticed that some people (like me) have been calling out this evil Trust chip plan, in particular pointing out the blatantly evil step of having ISPs deny you internet access if you resist. The ar
Re: (Score:2)
Re:Pathetic (Score:4, Informative)
I love that they keep trying to bring this up. It's their Pinky and the Brain-style take over the world plan. The TCPA FAQ [cam.ac.uk], while somewhat old by now, is still relevant (and shows just how long they've been trying this).
Re:Pathetic (Score:5, Informative)
That simply means you need a "trusted" box to reply to the challenge. It doesn't have to be THE box. This sounds like something a Windows VM and some packet sniffing/injection could very easily defeat.
Nope. The entire point of Trusted Computing is to make exactly that sort of thing impossible. It's impossible to virtualize the Trust chip unless you know the master keys locked inside the silicon. No amount of packet sniffing/injection will enable you to forge a Trusted communication. They are cryptographically signed by keys inside the chip. Trying to run a normal computer plus a second box to reply to challenges generally does you no good because everything gets encrypted or signed. The second box won't sign the stuff you need signed, and it won't decrypt what you need decrypted. The master keys are locked inside the silicon, and the lower level keys are generally encrypted before they leave the chip and only decrypted when they are loaded back into the Trust chip.
Trying to use a two-box setup would be extremely difficult and it wouldn't achieve much. Let's say your ISP wants a Trusted Health Check on your computer before giving you a connection. You use the Trust box to authenticate. During the authentication the ISP sends an encrypted internet session key. It is encrypted in such a way that it can only be decrypted by the Trust chip, INSIDE the Trust chip, using a decryption key locked inside the Trust chip. You can't sniff the internet session key because it's been encrypted with the Trust chip's key, which you don't know. You now connect your "real" box and try to use your internet connection. Except now your ISP expects some or all of your outbound packets to have a validation code embedded. These validation codes can only be generated using the secret internet session key. You can't send packets because your "real" box doesn't know the internet session key needed to validate those packets, and your secondary Trust box refuses to validate them for you.
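The per-packet validation codes described here can be sketched with an HMAC. This is an illustration of the scheme the poster describes, not the actual TPM wire protocol; the key and payload are placeholders:

```python
import hashlib
import hmac
import os

# Shared between the ISP and the Trust chip after the encrypted
# handshake; the untrusted "real" box never learns it.
session_key = os.urandom(32)

def tag_packet(key: bytes, payload: bytes) -> bytes:
    """Append the validation code the ISP expects on outbound packets."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def isp_accepts(key: bytes, packet: bytes) -> bool:
    """The ISP recomputes the MAC; without the session key, a
    second box can't forge a packet that passes this check."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

Sniffing tagged packets doesn't help: the MAC covers each payload, so replaying a tag on different data fails verification.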
Do not underestimate Trusted Computing. I'm a programmer, I've read the 300+ page technical specification on this chip, I know DRM is impossible and the reasons it Always Fails. Trust me, software attacks are almost completely nullified. Any successful software attack is generally confined to temporarily exploiting localized bug affecting specific data belonging to that specific affected program, and they can FORCE down patches fixing the bug. It is essentially impossible to fundamentally defeat the system with any software attack. Only a hardware attack will truly defeat the system, and they are moving the Trust chip INSIDE THE CPU ITSELF. Not even the god of all modchips and motherboard hacks can do squat when the Trust chip is inside the CPU.
The only way to break the system is to literally rip open the CPU itself. That will indeed blow the Trust system wide open, but then there's another problem. You have to be insanely careful never to allow them to detect that you have beaten the system and that you can do stuff you're not supposed to be able to do. Almost anything you do can be traced back to the specific Trust identity code involved. If they ever detect you doing anything you shouldn't, then that identity code goes on a revocation list. You can still access the data you've already broken, but for all practical purposes that computer is dead. It can no longer access any new Trusted data, and all other Trusted devices will refuse to speak to it.
By revoking the hacked identity key they can make it cost you (up to) the price of an entire new computer, plus the difficulty of physically dissecting the new CPU chip to extract a new set of keys. You have to do this each and every time they catch anything anomalous relating to your cracked system.
And you're really screwed if you have to use your real identity during the Certificate Authority process required to enable a new chip. They may refuse to let you activate a new system, or they may send the feds to arrest you for violating the DMCA o
Re: (Score:3)
The problem with trusted computing is that you, the owner of the computer, are not trusted, while the service providers and government are ...
The companies and governments think this is a good idea ... but it will not actually cure any of the problems it claims to ...
It will be a very bad idea for computer users: it will make the system more expensive and less flexible (no alternative OS, no self-authored apps, etc.) and you will no longer have full access to your own computer, but other people will ...
Re: (Score:3)
I understand that they're cryptographically signed however that still doesn't answer the previous posts' point about why spoofing the correct authentication that the chip should provide the server with wouldn't work.
That is difficult but possible with a hardware hack in between the Trust chip and the CPU, but it won't work if the Trust chip is inside the CPU. There are a lot of layers and technical details, but I'll try to boil it down to the key steps. I'm going to gloss over a lot.
First step: The Trust chip watches the software that gets loaded. It logs the BIOS, the operating system, and drivers. Microsoft or some Third Party examines that list and certifies your system as Trusted, and they set up a secret key that'
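The logging step described here works by hash-chaining: each loaded component is "extended" into a register, so the final value commits to the entire boot sequence in order. A simplified sketch of the TPM-style extend operation (real TPMs use fixed PCR banks and measure real BIOS/OS/driver images; the component names here are placeholders):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(component)).
    Extends can only be appended; a loaded component can never
    be removed from the record."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# The boot sequence is measured in load order: BIOS, OS, drivers...
pcr = b"\x00" * 32
for component in (b"BIOS image", b"OS kernel", b"driver blob"):
    pcr = extend(pcr, component)
# A verifier with the same approved component list can recompute
# 'pcr'; any modified, missing, or reordered component changes it.
```

This is why certification works on the whole list at once: a single altered driver yields a completely different final register value.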
Re: (Score:3)
So now Microsoft can put me on the untrusted database for using linux and banks will not want to give me a loan. I'm so building my next computer from scratch.
Nobody will stop you from NOT getting a certificate by installing an "untrusted" OS on "trusted" hardware, and you probably won't be able to get non-trusted hardware, just like you can't get a monitor without HDCP (over DVI/HDMI/DP) or a DVD/Blu-ray player without CSS/AACS.
The point is that they're pushing to make this a requirement for using any major corporate or government service and turn you into a digital caveman. You will get a top-to-bottom locked down system because it's the only thing that'll work. And because
Re: (Score:2)
Re: (Score:2)
Think of it this way: would you mind if a web site ran their own programs on your computer, before they let you use their site? Maybe that's your bank, that's one example. Maybe he wants this extended to the cloud, like Microsoft's Office365. Taken to the extreme, what if social networking sites (Facebook?) decide to do this?
Charney's proposal to put the onus on the end user is going to get old really fast. And I see it causing more problems than it solves. If users have web sites running their "scan" sof
Re: (Score:2)
Maybe you shouldn't trust either.
Trusted Platform Module (Score:2, Informative)
The ZDNet article (http://www.zdnet.com/blog/security/microsoft-continues-push-for-infected-computers-to-be-quarantined/8164) is a little more informative.
Combining trusted software such as hypervisors and hardware elements such as a Trusted Platform Module (TPM) could further enable consumer devices to create robust health certificates and ensure the integrity of user information
Re: (Score:2)
From TFA:
"A bank could ask customers to sign up for a program that would scan their PC for signs of infection during online sessions"
I think "program" here means an initiative by the bank that a customer can optionally participate in, rather than an executable running on the customer's PC. It might be a port scan done from the bank's servers.
Still, I doubt this is actually useful: if these scans become common practice, malware can stay undetected by not responding, or by faking another protocol/application unless the contact is initiated in a particular way that only the malware control network can perform. For example a TCP connection would
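A server-side "scan" of the kind guessed at above could be as little as probing a handful of ports from the bank's side. A minimal sketch (the port list is hypothetical, picked to suggest old backdoors and C&C channels):

```python
import socket

SUSPECT_PORTS = [6667, 12345, 31337]  # hypothetical: IRC C&C, old backdoors

def open_ports(host, ports, timeout=0.5):
    """Return which of the given TCP ports accept a connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
    return found
```

Which also demonstrates the objection: malware that simply doesn't listen on a port, or answers only to its own handshake, is invisible to this kind of probe.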
Re: (Score:2)
Most malware doesn't open incoming ports; it connects out to a C&C server (over IRC, IM, or even Twitter).
Re: (Score:2)
I would like to see banks hand out Live 'Nix CDs with their website loaded up in the browser when it's booted into X. This option would make it brainless for most to use, and there should be a better assurance that the computer doesn't have a "virus", unless BIOS ones are still around. It would be much easier to implement than some new certificate system.
Re: (Score:2)
Re:Pathetic (Score:4, Informative)
How about if banks hand out tokens? Mine does. I log on with a username\Password\token number that changes once every 30 seconds. So if the hacker has managed to get the https traffic unencrypted in record time, they only get 30 seconds to play.
The other feature is that the "transfer money" feature requires re-entry of the token number.
Re: (Score:2)
The largest bank in Brazil has been doing this for years - with a small Java program that at least says it's checking your computer (and takes only a few seconds). I've never tried denying it, but I'm pretty sure you just can't access their online banking without allowing it to run.
I have never heard of anyone complaining about it.
virtual pc (Score:2)
Re: (Score:3)
I swear, this guy will do anything to get the spotlight off Microsoft, even if it means he has to turn off his brain while taking the Glenn Beck approach to his outcry.
Come on Microsoft, the problem is you. I see it every day in my shop. Stop blaming the customer.
This Microsoft guy is so out of touch with the consumer.
You could just NOT FREAKIN' USE IT (Score:2)
Did that ever occur to you?
It drives me nuts that every reply to every new product idea assumes:
1) The product is seriously being worked on
2) The product will be released to the public, and soon, and
3) They'll be forced to use the product, as if some thug was holding a gun to their head
In this case, Microsoft's not even likely at step 1, much less step 3. Frickin' relax, ok?
Re:Pathetic (Score:5, Insightful)
Do you consider it a "violation of your privacy" to tell your prospective sexual partners whether you have an STD or not? Because this is the computational equivalent.
Not really. It's more like letting potential partners draw a couple of test-tubes of blood and run them through the local medical lab to see if you have any diseases, and maybe get a stool and urine sample for good measure.
It is perfectly reasonable for anyone coming in virtual contact with your data to request that you prove that your data is sanitary.
ROTFL.
Re: (Score:3, Funny)
It is perfectly reasonable for anyone to whom you can not prove you are sanitary to tell you to go fuck yourself.
You've never been laid, right? (Score:5, Informative)
The problem is that this isn't about "proving" that you're clean.
This is about proving that you have, in the past, purchased condoms (anti-virus).
And that you are currently wearing a condom (anti-virus is running).
NOT that you don't have a disease.
Or that you have any symptoms.
Or that anyone you've had sex with had a disease.
The BANKS are the ones that should be dealing with whether they can sanitize anything they receive from you (and anyone else) AND verify that it really is you initiating the transaction.
Sex is NOTHING like an on-line purchase. Try it and see.
Re: (Score:2, Informative)
>>>coming in virtual contact with your data to request that you prove that your data is sanitary.
Then you don't mind if I sit in my bankofamerica.com cubicle, and review the naked photos of your wife (or possibly daughter) that I just scraped off your/her machine?
Re: (Score:2)
All they need is to DL and run a checker that reports Pass/Fail and nothing more. Uploading my data en masse or spelunking my files with their eyes would not be reasonable. Nor would it be at all profitable for them to do it.
Re: (Score:2)
Do you intend to audit all of the network traffic to ensure that "pass/fail" is all it's reporting? Do you think an average user who can't be bothered to learn basic secure practices has the skill or the inclination to do that? This is assuming of course that the traffic isn't encrypted -- it would probably use SSL for the communications to ensure that no one has tampered with the results.
You didn't go far enough. (Score:3)
More to the point, there isn't a single AV product available today that catches 100% of the mal-ware currently out there.
AV is a reactive process.
First comes the mal-ware.
Then comes the infections.
Then comes the signature file.
Then comes the download of the signature file.
Then comes the protection.
Saying that an AV scan found nothing on your computer is really pretty meaningless.
Remember
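The reactive pipeline described above boils down to matching files against a list of previously seen samples; anything not yet in the signature file sails through. A toy illustration (the signature set is a stand-in; real scanners use far richer patterns and heuristics):

```python
KNOWN_SIGNATURES = {
    b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",  # the classic AV test string
}

def scan(data: bytes) -> bool:
    """Return True only if a *known* signature appears. Brand-new
    malware -- by definition not in the list yet -- always passes,
    which is why a clean scan result is weak evidence."""
    return any(sig in data for sig in KNOWN_SIGNATURES)
```

The signature file has to be written, shipped, and downloaded before the scanner catches anything, which is the lag the steps above spell out.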
Re: (Score:2)
Re: (Score:2)
And do you have a checker that runs on Linux?
BSD?
Android?
Symbian?
MacOS?
Just windows is it? OK... Thanks, but your banking product is not secure enough for me to use. I'll head up the road to the other bank that supplies a token for logging on to their web site. Shame you don't get access to my savings as collateral for your loans.
Re: (Score:3)
- If the system is already compromised, you can not trust anything an application says inherently, the execution of the downloaded checker can be altered
- A checker will not know about every possible running program in existence, in order to truly validate a system, you need to work with white lists, not blacklists
- There is no checker that will run on all possible operating systems
- You still need to trust the checker itself
Re: (Score:2)
I consider it a violation of privacy if a guy comes into my house looking everywhere to see if I have the state-approved remedy for a disease, regardless of my utter absence of symptoms or the existence of better cures.
"virtual contact with your data"? there is transmission so the receiver must sanitize all incoming data, not scan the official source which is insufficient, for obvious reasons.
Re: (Score:3)
your data is sanitary.
The solution is plain text. While it is possible to insert malware in word, excel, html and maybe even opendocument files via scripting, it is not possible to insert viruses into plain text and CSV files. It just can't be done. Do not accept files that are not plain text and the problem of "unsanitary data" goes away.
Re: (Score:2)
The solution is plain text. While it is possible to insert malware in word, excel, html and maybe even opendocument files via scripting, it is not possible to insert viruses into plain text and CSV files. It just can't be done. Do not accept files that are not plain text and the problem of "unsanitary data" goes away.
Of course it's possible to have plain text viruses - plain text editors are subject to buffer overflows and other errors that all programs are subject to. That's like saying that it's impossible to have viruses embedded in images, which has been proven to be false. An editor doesn't have to allow macros in its file format to be subject to virus attacks (though it does make it easier)
Re: (Score:2)
SQL is plain text. So is perl.
Any source of data input can be hacked to cause problems to software.
Pushing a virus-check for a new exploit is easier than patching the server, when you're talking about thousands of high-availability servers, and thousands of new exploits per year.
Re: (Score:2)
To be fair, the parent poster did imply that it was scripting ability that was the problem, so obviously text files that you're going to use as scripts are going to have the same problem. If you're going to point out text files that can be hazardous when used as intended, you could argue that an MS Office XML file is plain text as well.
Re: (Score:3)
Any source of data input can be hacked to cause problems to software.
I don't believe that is true, at least with SQL injection attacks. I work with the stuff all day long and as long as you VALIDATE THE GODDAMN DATA you're in the clear. Obviously, I cannot overstate V.A.L.I.D.A.T.I.O.N.
If you are just passing values into an SQL statement, you are asking, nay begging, for an ass raping by some random sociopath out there.
I always, always, always, take each individual value and validate it. Strip out weird characters. Enforce value ranges where appropriate. Then there is a BLOB
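Validation helps, but the standard belt-and-braces fix for the injection scenario above is parameterized queries, which keep user values out of the SQL text entirely. A sketch with sqlite3 (table and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

def get_balance(conn, name):
    """The '?' placeholder sends 'name' as data, never as SQL, so a
    value like "alice' OR '1'='1" cannot rewrite the query."""
    row = conn.execute(
        "SELECT balance FROM accounts WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None
```

String-concatenating the same value into the statement is the "passing values into an SQL statement" the poster warns against; the placeholder makes that class of attack structurally impossible.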
Re: (Score:3)
Wait.
Do you consider it a "violation of your privacy" to tell your prospective sexual partners whether you have an STD or not?
Because this is the computational equivalent.
It is perfectly reasonable for anyone coming in virtual contact with your data to request that you prove that your data is sanitary.
Yes, it's always "for the children", "to prevent terrorism", and "for your safety" isn't it? Since you have nothing to hide, why would you possibly object to a full cavity search every time you enter any building? Do you want the evil terrorists/criminals/hackers to win or something? This is the computational equivalent.
The difference between this and your scenario is simple: the prospective sexual partners are giving mutual consent. If they don't like that arrangement, they can always decide that casua
Re: (Score:2)
The difference between this and your scenario is simple: the prospective sexual partners are giving mutual consent. If they don't like that arrangement, they can always decide that casual sex with strangers is inherently risky, or they could do something crazy like have sex with someone they love, trust, and know very well. By contrast, if this system is implemented, every bank and probably lots of other corporations are going to require it in order to do business. It's rather difficult to live in a modern world without ever doing business with banks and other corporations, which is why this would be forced on us with or without consent.
That can't be the difference. It's also rather difficult to turn down sex with possibly diseased strangers.
Re: (Score:2)
It is perfectly reasonable for anyone coming in virtual contact with your data to request that you prove that your data is sanitary.
One of the rules in computer security is to never trust the client. A server should always fully validate the data regardless of what assurances the client gives about it, so it is pointless to send those assurances in the first place.
Re: (Score:2)
It would never work to deny access... which is what you're trying to scare us into thinking.
Why wouldn't it? If a Bank thinks that only people that can provide the certificate have computers that are trustworthy, why would they accept logins from a computer that doesn't present the certificate?
Some banks have already been known to only allow those using MSIE to access their site, so why is it so unthinkable that they would restrict access to those that can provide this certificate of trust? Especially if it reduces their liability for bank fraud.
Re: (Score:2)
True, banks have also been known to only let you do your banking in person. Of course, that was long ago - just like the times when they only allowed IE was long ago. It seems like other operating systems would simply need to also have a health certificate method. Seems reasonable. Other operating systems have an SSL method. They have cryptographic methods. Why should they not have health certificates?
The key will be having someone "trusted" sign the cert. Microsoft will be "trusted", Apple will be "trusted", not sure if Google/Android will be trusted, but perhaps if Motorola succeeds in preventing rooting on their Android phones, then they will be trusted.
It seems like something that OS X and Linux folks should be able to have deployed about the time that Microsoft goes into Beta with theirs, right?
It's unlikely that any open source vendor will be trusted by the banks to sign a cert since the very nature of open source makes it hard to validate that an installation meets the security standards set by the vendor. I'm not saying that an open source
I can see it now (Score:2)
MS has their own good free AV and they will not le (Score:2)
MS has their own good free AV, and they will not let themselves be locked out of any plan.
What if my "PC" is an old VAX (Score:5, Insightful)
Yeah, this will work real well on my old VAX that I use to surf the web using Lynx.
Re:What if my "PC" is an old VAX (Score:4, Insightful)
Re:What if my "PC" is an old VAX (Score:5, Insightful)
That's an important point - Charney probably expects this to apply to Windows only, because that's all he sees. What about Linux? What about Mac?
More importantly, what about iPads, or smartphones, or tablets, etc that are increasingly used to access the web? Will Charney's plan work for all these devices? Apple doesn't like third-party apps to execute on the iPad - so good luck getting this to work with iPads. And if all it takes to "bypass" the scan is to fake your browser's user agent string to that of an iPad Safari browser, this won't be very effective.
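Spoofing the user agent really is a one-liner, which is why a UA-based "bypass" would gut the scheme. A sketch with urllib (the UA string is an approximation of an iPad Safari header, and the URL is a placeholder):

```python
import urllib.request

IPAD_UA = ("Mozilla/5.0 (iPad; CPU OS 4_3 like Mac OS X) "
           "AppleWebKit/533.17.9 (KHTML, like Gecko) Mobile Safari")

def ipad_request(url: str) -> urllib.request.Request:
    """Any client can claim to be iPad Safari; the server only sees
    the header, not what browser actually sent it."""
    return urllib.request.Request(url, headers={"User-Agent": IPAD_UA})
```

If the health check exempts devices that "can't run the scanner," every infected Windows box can claim the same exemption with one header.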
Naturally. (Score:5, Insightful)
Re: (Score:2)
The responsibility goes to the consumer,
That's right...after all, it is the consumer that keeps using a vulnerable operating system. Same degree of responsibility as in paying a certain vendor for the use of said vulnerable system (and possibly generating extra CO2 by running a crappy AV solution to protect that OS).
Re: (Score:3)
Re: (Score:2)
Granted, it is easier to exploit Windows. But it is even easier to exploit stupid users than it is to exploit Windows.
Right. At least, you don't need to pay for the OS and be exploited while running Ubuntu d:)
Re: (Score:2)
The responsibility goes to the consumer,
That's right...after all, it is the consumer that keeps using a vulnerable operating system
However, the consumer doesn't have a choice in the matter - or at least none that they are aware of. Most consumers buy their PCs at big box retailers, where Windows is the only option. They can't buy a PC with Linux on it, they can't buy a PC with DOS on it, nor can they buy a PC with no OS at all. They might be able to buy a Mac - depending on where they are shopping - but they might not be inclined to pay that much for a PC. Windows is sold as a working OS, but it is provided as something not quite
Re: (Score:2)
The difference between a computer and a refrigerator is that a refrigerator doesn't get to talk to its buddies on the phone.
Another specific difference: I don't know anyone who puts together a fridge from components bought separately, but I know lots of people who build their own PC that way.
Re: (Score:2)
Re: (Score:2)
If you squish trojans, viruses, and worms all together, then Windows is clearly more vulnerable than, say, OSX or Linux, which don't get viruses.
(if you didn't catch it ... people tend to lump all Windows attacks together: plugins, social, and executables-that-you-download-and-run-yourself, and then compare it to "real" viruses on Linux; downloading an rpm or deb and installing it yourself "doesn't count")
I don't know if the OP is stating that, he may have valid arguments for why Windows is still more insec
Re: (Score:2)
The responsibility goes to the consumer, when Microsoft is assigning responsibility (blame). After all, the highly vulnerable operating system clearly has nothing to do with it, hence the company behind said vulnerable operating system shouldn't have any liability either.
In a way they have a point. Those customers have created a market where those who make highly vulnerable operating systems are rewarded with literally billions of dollars and greater than 90% marketshare. It's a logical extension of this reality for Microsoft to assign responsibility as you describe.
Re: (Score:3)
It's pretty amazing how they've managed to get their customers to swallow the line that it's reasonable to be expected to pay a third party for "anti-virus" software to fix their errors and vulnerabilities.
Re: (Score:2)
I sold you a frozen hotdog.
Windows is sold as a fully working operating system - a "fully cooked hotdog" would be a better analogy, really. In which case, if eating the fully cooked hotdog occasionally caused unexplained death, then the risk might be equivalent.
It's an OS, not a hot dog. (Score:2)
You cannot store an OS "improperly". It doesn't catch germs just by normal decay.
Microsoft's decisions have placed "user friendly" above "security" for years.
That is a problem.
Re: (Score:2)
> Microsoft's decisions have placed "user friendly" above "security" for years.
Exactly. Case in point: Even Win7 still hides known file extensions by default. Users can be easily manipulated into clicking on something they think is legit.
http://www.google.com/search?q=Win+7+still+hides+known+file+extension+type [google.com]
e.g.
http://www.f-secure.com/weblog/archives/00001678.html [f-secure.com]
Granted, you can't protect ignorance from stupid, but c'mon, why make it harder than it needs to be.
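The hidden-extension trick the comment and the F-Secure post describe boils down to this: with "hide known extensions" on, "statement.pdf.exe" displays as "statement.pdf", so the user sees a document name and double-clicks an executable. A minimal sketch (file name is illustrative):

```python
# With known extensions hidden, Explorer shows the name minus its final
# extension - exactly what os.path.splitext strips off.
import os

real_name = "statement.pdf.exe"
shown_name, hidden_ext = os.path.splitext(real_name)
print(shown_name)   # statement.pdf  <- what the user sees
print(hidden_ext)   # .exe           <- the part that gets hidden
```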
Bad moderation, bad (Score:2)
I like how all of their solutions assume... (Score:5, Interesting)
I like how all of Microsoft's solutions to this Internet-wide problem assume that absolutely everybody is using their software. Honestly, half the problem would go away if everybody stopped using their software.
Re: (Score:2)
Yeah, that about sums it up ... Microsoft's "Trustworthy" computing has always been about locking the damn thing down so tightly you can't use it, relying on their own proprietary technologies so that everybody pays them, and pretending like it's not the security holes in their OS that is the root pr
Hate to be a grammar Nazi but... (Score:2)
You misspelled Linux. Funnily, it came out as Microsoft. Go figure. A Freudian slip perhaps?
FTFY. Monocultures are bad, m'kay?
Their definition of "security" isn't yours or mine (Score:5, Insightful)
When Microsoft talks about "security" they're talking about securing the property and rights of digital rights owners (BSA, MPAA, etc) from the untrustworthy users who licensed the software and DVD.
It's not at all about keeping the computer user safe.
It's about keeping data safe from the computer user.
Re: (Score:2)
And that may happen if Charney's plan goes into effect on popular web sites. At least, I predict a sizeable community of Windows users leaving for other options.
This concept will immediately raise the perceived TCO for running Windows. Maybe not in cost, but even "general" users will see the delays and effort required just to access basic services (the Web) from Windows. If my mom has to let her bank, or Facebook, or her Yahoo!Mail run their virus software on her computer before she can access her favorite
Trustworthy? (Score:2)
99% of the time (Score:2)
Already kinda exists in user-agent header (Score:2)
Translation (Score:2)
The user remains in control. The user can say I don't want to run Microsoft's operating system. There may be consequences for that decision, but you can do it.
Re: (Score:3)
So let me get this straight...in order to buy or sell anything I need to bear the mark of Microsoft on my hardware...
The Burden Is On Consumers... (Score:2, Informative)
I agree completely with that part of things. The burden is on consumers (or citizens, as we used to be called). Don't buy Microsoft products and the Internet will be a much safer place.
What are they smoking? They sell the buggiest, shittiest, most useless (some people find it useful...I don't; the last time I tried to use MS Office I spent 15 minutes dicking around w/ the application just to set some bullet points, and decided that 15 minutes could have been better spent downloading and installing OpenOf
It will come down (Score:2)
Disproportionate burden (Score:4, Insightful)
If you require positive proof of system health then this will penalize every minority operating system or device that does not have the scanning software/certificate available for it yet. But aren't these minority systems the ones that are least risky, compared to the millions of zombie WinXP boxes?
Sure, Microsoft systems will be supported by the bank (using the example given in the article) but what about everyone else (and I do mean everyone). Do we really want a presumption of "disconnect" or "limit"?
Re:Disproportionate burden (Score:4, Interesting)
If you require positive proof of system health then this will penalize every minority operating system or device that does not have the scanning software/certificate available for it yet.
I get your point, however, I must point out two things:
1) Zero Day exploits occur frequently.
2) An infected machine can obviously not be trusted.
Infected machines especially can not be trusted to scan themselves and report on their state of infection. Suppose you run a completely different machine in order to check the validity of another. Could not the machine doing the scan also be infected? Would not the validation apparatus be required to have a signing key somewhere within it? Would not simply extracting such a key, and forging your own certificates also be an option?
The only thing reliable about Windows security is that it has been, and will continue to be broken.
Honestly, MS does not have a good track record when it comes to cryptographically signing the system & software in order to validate that the machine is genuine... WGA certified my Linux machine as "Genuine Microsoft Windows" [slashdot.org], this is odd to me because I entirely switched to Linux after suffering a WGA false positive [zdnet.com] (no, my hardware had not been changed / upgraded).
TFA assumes that MS can deliver a system capable of detecting insecurities -- forgive me if I'm sceptical -- if so, would not Windows itself just do this and no longer be vulnerable at all?
AV: Are there any viruses in this directory?
Rootkit: Nope, I'm not in this directory.
AV [to bank]: All clear!
AV [to user]: Proceed to enter your banking credentials!
TL;DR: If ( ( Linux || Rootkit ) == false_negative && MS_defective_spyware == false_positive ) { MS_Plan != Secure }
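The AV/rootkit dialogue above captures the fundamental flaw: a scanner running on a compromised machine only sees what the compromised OS shows it. A minimal sketch of that failure mode, with all names invented:

```python
# Why self-scanning can't be trusted: a rootkit that controls the machine
# also controls the scanner's view of the machine. Hypothetical example.
INFECTED_FILES = {"C:/Windows/evil.dll"}

def honest_listdir():
    """What is actually on disk."""
    return ["C:/Windows/kernel32.dll", "C:/Windows/evil.dll"]

def rootkit_listdir():
    """The rootkit hooks the file-listing call and filters itself out."""
    return [f for f in honest_listdir() if f not in INFECTED_FILES]

def scan(listdir):
    """The 'health scanner' only sees what the OS shows it."""
    return "infected" if any(f in INFECTED_FILES for f in listdir()) else "clean"

print(scan(honest_listdir))   # infected
print(scan(rootkit_listdir))  # clean -- the forged 'all clear' the bank gets
```

This is why attestation schemes push the measurement into hardware (TPM-style) rather than trusting software on the machine being measured - and even then, key extraction or forgery, as the comment notes, remains on the table.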
How do they know a machine is safe? (Score:4, Insightful)
If they have a magic scanning technology that tells them if a machine is "safe", then why doesn't Microsoft just deploy that technology to everyone? When I managed a helpdesk, I saw many fully patched machines with updated antivirus software still manage to become infected by malware. I didn't know we were already past the age of zero-day exploits.
Security theatre (Score:2)
Maybe he got the idea while standing in a queue for an airport security check...
Burden is on the manufacturers (Score:4, Interesting)
Just like in the auto industry, if a car maker creates a car that is prone to wrecks, it's not the driver's fault.
Proper maintenance is the responsibility of the user; fundamental manufacturing flaws that create security problems are not.
The user can say I don't want to run Windows (Score:5, Insightful)
The user can say I don't want to pass a health certificate,' he said. 'There may be consequences for that decision, but you can do it.
The user can say I don't want to run Windows. There may be consequences, but you can do it.
There, fixed that for you, M$.
(Oh, did we forget to mention that that health certificate, de facto, requires you to run M$ Windows? That although there are Linux solutions around, 95% of ISPs don't support it?)
Just another attack vector (Score:2)
Anything like this 'trusted certificate' or 'health scanning app' will just become another attack vector.
Microsoft should just build a new operating system from the ground up that is secure. If MS applied everything they should have learnt from all the security problems they have had over the last 20 years, they could probably make something quite good.
Wouldn't this solve 95% of the problems with infected PC's? Of course that would require reinvesting some of the billions they make from selling their curren
Sounds a lot like ... (Score:2)
...getting tested for STDs as a condition of employment in a porn studio. Who hands out those certificates? Do you really want to trust them as you are getting ready to pull that train?
Network Access Protection (Score:2)
Blame still at the wrong feet (Score:2)
Hacked PCs are the fault of the OS vendor, not the user or the ISP.
Blaming the user is like blaming the driver for their car's recall-worthy shoddy components.
Blaming the ISP is like blaming the highway department for a car's recall-worthy shoddy components.
Who does car recalls? The manufacturer, who usually passes on the cost of it to the vendors who provided the faulty parts (see Toyota and the Tacoma frame rusting). All the OEMs should pass on the cost of their support for Redmond's flawed OS's to..
Solution form (Score:2)
Modified from this [craphound.com]:
Your post advocates a
( X ) technical ( ) legislative ( ) market-based ( ) vigilante
approach to computer security. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
( X ) Remote access and other legitimate computer uses would be affected
( ) No one will be able to fi
Translation (Score:2)
2-Profit
3
4 Who cares, we already got profit
Re:Microsoft's next step (Score:4, Insightful)
Re: (Score:3)
You're funny. I've been doing security as a profession since the days when "windows" referred to the glassy panes you have in your house. I've had exactly one system of mine compromised in that entire time. But unlike you, I don't believe that I should be responsible for installing the brakes, airbag, ABS and safety belts in my car, even if I happen to be a mechanic. If the car is inherently unsafe, it's not because the owner failed to install his own brakes; it's because cars ought to have brakes.
And if you