Calif. Attorney General: We Need To Crack Down On Companies That Don't Encrypt
tsamsoniw writes "California Attorney General Kamala Harris says her office will start cracking down on companies in the Golden State that don't encrypt customer data and fall victim to data breaches; she's also calling on the state to pass a law requiring companies to use encryption. That's just one of the recommendations in the state's newly released data breach report, which says 131 companies in California suffered data breaches in 2012, affecting 2.5 million residents."
Re: (Score:2, Funny)
EBG13 SGJ!
Re: (Score:2)
Double rot13 for the win! ;)
One thing: Encryption of laptop drives and external USB/hard disks is useful against simple loss/theft. Encryption of company servers just burns CPU cycles, since the key is available to the users who have to use it.
Re: (Score:2)
I detect someone who doesn't know what an HSM is, nor what one is used for.
Here, have a Wikipedia link and learn something today: http://en.wikipedia.org/wiki/Hardware_security_module [wikipedia.org]
Re:Encryption (Score:4, Insightful)
So instead of burning cpu cycles, you are burning crypto processor cycles plus you have the cost of buying the hardware in the first place and possibly the bus overhead of sending data to/from the device.
If the server gets compromised while it's running, the data is accessible because the server needs access to the data in order to function.
If the server gets physically stolen, it's likely the crypto hardware will be stolen with it. If you store the key somewhere it can be automatically obtained and used, then the key can be stolen too; if you enter the key manually on bootup (i.e. how you would on a laptop) then you require physical intervention if the server reboots for any reason.
Encryption has its uses, but it's not a magic bullet, and poor/inappropriate use of encryption is damaging - not only does it waste resources unnecessarily, but it also brings a false sense of security and encourages lazy thinking... People will simply implement the bare minimum required to comply with the law, which will probably mean encrypting the data while leaving the key on the same box.
You will also end up with a "one size fits all" attitude, which is clearly ridiculous...
You need to consider *what* data you're storing, *why* you're storing it and *what* needs to access it.
You can segregate the data so that some is only accessible by those systems that need it.
You can tokenize the data, eg for repeat billing of a credit card you can store a token agreed only between you and your payment processor.
You can store rarely referenced data with public/private keys, leaving only the public key online and keeping the private offline for use when necessary.
No, pushing a one-size-fits-all "encrypt your data" mandate is stupid and will only make things worse; each individual case needs to be designed by someone who understands the needs and is technically competent.
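The tokenization approach mentioned above can be sketched in a few lines. This is a hypothetical illustration (the class and method names are made up, and a plain dict stands in for the payment processor's vault): the merchant keeps only an opaque token, and only the processor can resolve it back to a card number.

```python
import secrets

# Illustrative sketch of card tokenization: the merchant stores only an
# opaque random token; the mapping to the real card number lives solely
# at the payment processor (simulated here as a dict).
class ProcessorVault:
    def __init__(self):
        self._vault = {}

    def tokenize(self, card_number: str) -> str:
        token = secrets.token_urlsafe(16)   # random, carries no card data
        self._vault[token] = card_number
        return token

    def charge(self, token: str, amount_cents: int) -> bool:
        # Only the processor can resolve the token back to the card.
        return token in self._vault

vault = ProcessorVault()
token = vault.tokenize("4111111111111111")
assert "4111" not in token          # the token reveals nothing about the card
assert vault.charge(token, 999)     # the merchant bills using the token alone
```

If the merchant's database is stolen, the attacker gets tokens that are useless anywhere except at that one processor, which is exactly the point of the "repeat billing" scheme described above.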
Re: (Score:3)
While you are correct about the impact of anything currently running on the server, you are dead wrong about physical theft. An HSM should be hardened against picking the key out of it and should actually destroy the key if tampering is detected. Encryption on the server is still of limited benefit since the data key could probably be abused in most remote exploits on a running system, but for powered down security, such as physical breach, it is very significant, even if the chances of someone breaking
Re: (Score:2)
Way too much text.
Security:
-Something you know (password)
-Something you have (HSM? hardware with key).
-Something you are (biometrics, or gummy bear)
You steal the server, you steal the HSM. It is like requiring a hardware token with a laptop, and then storing the token with the laptop. An HSM does have its uses, but again, key management is the trick.
When somebody uses the words "tamper-resistant HSM", realize that it means killing the data so nobody can steal it. That is NOT always the best scenario :) :'(
Re: (Score:2)
You are mistaken. Security is not something you know, have or are. That's authentication. HSM has nothing to do with authentication. It is key management and secure storage. Your understanding of how an HSM is used is also mistaken. The idea with an HSM is that it does all encryption and decryption operations without ever releasing the key and takes care of requiring proper authorization before performing decryption operations.
When initially configuring an HSM, a key should be created and backed up in
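The "performs operations without ever releasing the key" idea can be illustrated with a software stand-in. This is a toy, not a real HSM (a real one enforces the boundary in tamper-resistant hardware), and HMAC signing is used here purely as an example operation:

```python
import hmac, hashlib, os

# Toy illustration of the HSM interface described above: the key is
# generated inside the object and callers can only request operations on
# data, never the key itself. (Python name mangling is a convention, not
# a security boundary -- the point is the shape of the API.)
class ToyHSM:
    def __init__(self):
        self.__key = os.urandom(32)          # created inside, never exported

    def sign(self, data: bytes) -> bytes:
        return hmac.new(self.__key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(data), tag)

hsm = ToyHSM()
tag = hsm.sign(b"customer record")
assert hsm.verify(b"customer record", tag)
assert not hsm.verify(b"tampered record", tag)
```

The design point is that client code gets sign/verify (or encrypt/decrypt) as services, so compromising the application does not hand the attacker the key material for offline use.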
Re: (Score:2)
Only in this case, convenience wins out...
Needing a staff member to physically intervene in order to boot the server is far too much of an inconvenience, so it will be configured either to obtain the key from the HSM automatically on boot (and thus the attackers could do the same), or it will be a network based system as you mention, in which case when you steal the server you need to steal the key server too.
Even if you do have someone physically enter the key, you have the added inconvenience of managing who
Re: (Score:2)
You don't physically enter the key, you physically enter credentials that activate the HSM. Even if you have the ability to activate the HSM, getting the key out is (near) impossible. It is limited to doing decryptions with whatever restrictions are on the data (for example, you could require that user password be entered to access user data if the system stores data accessed by user accounts.)
Also, even if you do have to use a network based device, it means that they have to either a) steal the networked
Re: (Score:2)
In which case you're no longer relying on encryption, you're now relying on obfuscation provided by the HSM... It just takes someone with the right skills/equipment to crack it, and once one person works it out they can provide details of the hack to others.
Re: (Score:2)
I don't disagree that it is not relying on the encryption exclusively. You have to trust the HSM to do its job correctly. It's a little more than obfuscation though, as it is an independent, hardened system with limited I/O, intrusion detection and a hair trigger for self destruction. It may be possible to still extract the key, but there would be a fair degree of luck involved and there is no redo button if you make a mistake trying to extract it. That's a fair bit better than simple obscurity, particu
Re: (Score:2)
Put another way, calling an HSM security by obscurity is a bit like saying that having a server protected by armed guards 24/7 with a block of C4 strapped to it inside the basement of the Pentagon is security through obscurity, since, if someone knew every security measure and was very, very lucky, they might be able to make it through everything.
For that matter, by the same token, encryption itself is security through obscurity since there might be some technology or math trick out there that can decrypt i
Re: (Score:1)
Dude, we steal the server, HSM and all, set it up in our lab, then we have all the time in the world to try bus-based exploits.
So? Signatures happen on the HSM, which also stores the key material; only the cleartext data going in and the signatures going out are on the bus.
And if you mess up in the lab, the HSM kills its keystore and game-over. (Or, if it doesn't, the folks on the other side were insufficiently paranoid / excessively cheap and it's Their Own Damned Fault).
Re: (Score:3)
I think you have some misconceptions about the CPU cycles involved in encryption. It's basically free. It's just a few clock cycles per byte.
The part everyone is concerned about is key stretching, where a CPU needs to do about half a second worth of processing to hash a password. There is simply no reason to do key stretching on the server. That's a dumb architecture. Instead, make the clients do it. By default, Microsoft does the key stretching on the server, and it's only for about a millisecond, if
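Wherever key stretching runs, it looks roughly like this with Python's standard library. The iteration count below is an assumption (pick one that hits your latency budget): cheap for one legitimate login, expensive for an attacker guessing millions of passwords.

```python
import hashlib, os, time

# Key stretching with PBKDF2-HMAC-SHA256 from the standard library.
# The iteration count is illustrative; tune it so one derivation takes a
# noticeable fraction of a second on your target hardware.
salt = os.urandom(16)
start = time.perf_counter()
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 600_000)
elapsed = time.perf_counter() - start
print(f"derived {len(key)}-byte key in {elapsed:.3f}s")
```

Doing this on the client, as the comment suggests, moves the CPU cost off the server; the server then only needs a single fast hash of the stretched result.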
Re: (Score:2)
Yea, I work in the security industry and I don't really agree. I hear what you're saying about considering each application and you're not wrong, but I think the potential benefits of this easily outweigh the negatives. It will apply pressure to companies who really do need to encrypt their data and just cannot get the will from the business to do it.
It's not a magic bullet, but especially in the absence of any legitimate way to wipe data from databases in a secure manner it's a reasonable compensating con
Re: (Score:2)
No, it won't. It will cause every non-corporate-run website run by individuals within the state of California to shut down because of the inability to pay the
Re: (Score:2)
The big problem is that the database uses a shared hosting plan and a shared database server run by my ISP. I have no control over whether the database is encrypted on disk or in transit between the shared hosting server and the database server.
You're freaking out over nothing. Hosting providers are not going to leave people high and dry. Actually, it would be nice if they started encrypting their databases. Shared hosting will live on and solutions will be generated.
In order to add that protection, I would have to crank my hosting plan up to a dedicated server at a monthly cost that is equivalent to several years on my current hosting plan and buy a multi-subdomain SSL cert that also costs (annually) as much as several years worth of service.
You're being extremely, extremely silly. SSL certs can be had for next to nothing. Do they provide as much assurance as better certs? No, but they encrypt the traffic and the root cert is trusted by common platforms. Depending on the law you could use self signed certs as well.
NSA (Score:4, Funny)
We Need To Crack Down On Companies That Do Encrypt
Re: (Score:1)
It is a shame that even popular open source projects don't bother. For example, Mozilla's Thunderbird chat has no OTR support, and Mozilla Firefox and other browsers treat self-signed certs as WORSE than unencrypted and put big scary messages up. Instead there really should be three different "modes": what exists now in all browsers for the certificate authorities, for those who want to talk to their banks etc.; self-signed certs, which should just work like an unencrypted link (no full secure icon etc); then pl
Re:NSA (Score:5, Insightful)
Mozilla Firefox and other browsers treats self encrypted certs as WORSE than unencrypted and put big scary messages up
I think it is a reasonable action for a certificate whose source you don't know. You can always add the certificate to your browser [mozilla.org] and avoid the error. The rationale for the pop-up is that an unknown self-signed certificate is as bad as no encryption - totally open to a man-in-the-middle attack, but people have a higher expectation of security from SSL.
Re:NSA (Score:5, Insightful)
Is "as bad as no encryption" a reason for yelling at the user and presenting it like the worst security problem ever? Even if I accept the premise that it is as bad as no encryption, the obvious conclusion is that the browser should present it the same as no encryption.
Actually, it is not as bad. It still keeps you safe from passive attacks (like your ISP collecting all data for a three-letter agency, which analyses them later).
Re: (Score:2)
Well it depends on what you are doing. Using your own private service over the internet secured by SSL, no big deal, register the cert. Using your online banking and the cert is self signed, better check on that. The reason is that no encryption is clear that it is unsafe and most people will (hopefully) not do anything sensitive. But putting trust in a self signed certificate is a gamble, especially when you assume that this SSL connection is being used to transfer secure data. The reason why it is conside
Re: (Score:2)
Re: (Score:2)
It isn't the same as no encryption. The site is making a claim that cannot be verified, and which often points to fraud. Treating it as unencrypted would open up all sorts of man-in-the-middle attacks by criminals, ISPs and three-letter agencies quietly intercepting and replacing security certificates. Do you think people check for HTTPS and a valid cert every time they connect to their bank or email account?
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
I think part of the rationale is that a self-signed certificate very well might be a sign that you're the victim of a man-in-the-middle attack, and it needs to be treated as a serious potential threat.
Personally, I don't think the problem is that web browsers treat self-signed certs as dangerous. I think the real problem is that the only infrastructure we have for authenticating certificates is expensive and unwieldy. We need to have a way of generating and authenticating certificates that's easy enough
Re: (Score:1)
I think part of the rationale is that a self-signed certificate very well might be a sign that you're the victim of a man-in-the-middle attack, and it needs to be treated as a serious potential threat.
This sounds good in theory, but the reality is that self-signed certificates (or those signed by an authority your browser does not recognize) are several orders of magnitude more common than MiTM attacks.
Otherwise, I agree that a big part of the problem is unusable UI for managing certificates in almost all existing browsers.
Re:NSA (Score:5, Insightful)
people have a higher expectation of security from SSL.
I think the GP's point was that it does not have to be all or nothing - that you can have SSL with a self-signed cert without the error message and without giving any "expectation of [high] security" (to quote GP: "no full secure icon").
The rationale for the pop-up is that an unknown self-signed certificate is as bad as no encryption
In light of the Snowden revelations and subsequent fallout, this rationale has very few legs to stand on. An unencrypted connection is less desirable than an encrypted one, even one that is only self-signed. The only argument I have seen against this rationale is that people may be lulled into a false sense of security if they believe self-signed certs are as secure as CA-issued ones, falling for MITM attacks on their bank traffic etc. The counter to that is simple and sensible: no, not if the browser does not try to tell them they have a top secure connection, and treats it like it is a plain text connection.
self-signed certificate is... totally open to a man-in-the-middle attack
The current SSL system is also totally open to man-in-the-middle attacks by state sponsors, as has been reported here various times. And yes, self-signed certs are also very vulnerable to the same attack - but the point here is to encrypt the majority of data. State sponsors can always target individuals, but with blanket always-on encryption they are unable to perform mass illegal capture and storage.... that is the point of not raising an error message on self-signed certs.
Any way I cut these arguments, browsers appear to be in the wrong on this one - throw in cosy relationships with CAs, state departments etc and we could have a conspiracy here.
Re: (Score:2)
The current SSL system is also totally open to man-in-the-middle attacks by state sponsors, as has been reported here various times. And yes, self-signed certs are also very vulnerable to the same attack - but the point here is to encrypt the majority of data.
They're not vulnerable to "the same attack". One attack requires hacking a CA or exerting very substantial influence over them. The other doesn't. The set of malicious actors who can -- and do -- MitM you if you use self-signed certs is much, much larger than the set of actors who can do it if you use CA-signed certs.
The only argument I have seen against this rational is that people may be lulled into a false sense of security if they believe self signed certs are as secure as CA issued ones, falling for MITM attacks for their bank traffic etc. The counter to that is that is simple and sensible: no, not if the browser does not try to tell them they have a top secure connection - and treats it like it is a plain text connection.
Yes, that's pretty much the argument. The danger is that they could think that their connection is somehow more secure than plaintext. You cannot safely fix this without determining user intent, and even the user can't usually be trusted to determine their intent.
Re: (Score:2)
The danger is that they could think that their connection is somehow more secure than plaintext.
It is a danger *only* if the browser is giving some indication of security. If the browser does not give any indication or expectation of privacy with self-signed certs then there is no danger. Most browsers already do not show the protocol being used for plain text (no http:// display).
You cannot safely fix this without determining user intent, and even the user can't usually be trusted to determine their intent.
You can safely fix it by not giving any change to the normal unencrypted experience. If they intended to use HTTPS to get real security but instead were presented with a self-signed certificate, and the browser defaulted into plain text view (no ssl icon or indication of security), then the user does not need any extra warning.
Re: (Score:2)
If they intended to use HTTPS to get real security but instead were presented with a self-signed certificate, and the browser defaulted into plain text view (no ssl icon or indication of security) then the user does not need any extra warning.
When I make a request to a https url I expect the information contained within that request (parts of the url other than the hostname, post data if any, cookies if any) to be sent over an encrypted and authenticated link. By the time I can "look for the padlock" the potentially private information has already been sent. So if the connection cannot be authenticated the browser MUST warn me* BEFORE it continues with the request.
I support systems that allow encrypted but unauthenticated connections to be prese
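Python's ssl module happens to make this same distinction explicit: certificate and hostname checks happen during the TLS handshake, before any request bytes leave the machine, and "encrypted but unauthenticated" has to be opted into deliberately. A minimal sketch:

```python
import ssl

# Default client context: the certificate chain and hostname are verified
# during the TLS handshake, i.e. before any HTTP request data is sent.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname

# Opting into "encrypted but unauthenticated" (what silently accepting a
# self-signed cert amounts to) requires two explicit steps, in this order:
lax = ssl.create_default_context()
lax.check_hostname = False
lax.verify_mode = ssl.CERT_NONE
```

Note that `check_hostname` must be disabled before setting `verify_mode = ssl.CERT_NONE`, which is itself a small example of an API forcing the user to acknowledge what they're giving up.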
Re: (Score:2)
When I make a request to a https url I expect the information contained within that request (parts of the url other than the hostname, post data if any, cookies if any) to be sent over an encrypted and authenticated link. By the time I can "look for the padlock" the potentially private information has already been sent. So if the connection cannot be authenticated the browser MUST warn me* BEFORE it continues with the request.
It sounds to me like invoking a very special, very peculiar and rare case to support the current status quo: That of communication of private data during initial handshake. How can a user be sending private information (credit card info in form post data for example) with an expectation of privacy on their part if they have not even accessed the webpage, ever, yet?
Re: (Score:2)
OT
Is there a list of CAs that have been compromised, including evidence? I.e., it would post two signed and valid certificates for google.com for the same time period, but one of them with obviously the wrong IP address?
Re: (Score:2)
Dude... companies do this all the time, if for no other reason than to compress network traffic. They just buy boxes like this one [sourcefire.com]. All you do is override DNS and CA. It's standard practice.
Re: (Score:2)
Of course... you need to override CA. I do that too using Charles Proxy to inspect HTTPS traffic locally.
But the big unsubstantiated thought out there is "governments have backdoors in to CAs". This is very easy to prove with evidence, but with no evidence we should probably assume the null hypothesis.
Re: (Score:2)
Re: (Score:2)
Can you, really? I mean, we have a big enough problem with training users to type credentials in a login box served by http://www.myfavoritebank.com/ [myfavoritebank.com] all insecure-like. This area where security intersects user interaction design is a tricksy one.
Re: (Score:3)
On the other hand, a self signed certificate which you have explicitly accepted is in many cases *BETTER* than a ca verified cert. In the former case you have explicitly chosen to trust a single party, whereas in the latter you are reliant on a large number of organisations.
Re: (Score:2)
On the other hand, a self signed certificate which you have explicitly accepted is in many cases *BETTER* than a ca verified cert. In the former case you have explicitly chosen to trust a single party, whereas in the latter you are reliant on a large number of organisations.
A self-signed certificate is better only if you can independently verify that you've got the correct certificate and that it is still valid. Otherwise it is worse, because you've got no way at all to figure out if it is correct and whether it has not been rescinded yet (e.g., because of a break-in on the server). You're far better off to have a private CA run by someone you trust and to explicitly only trust that CA to issue for a particular service, rather than some random other CA. (The downside? That doe
Re: (Score:2)
True. If a corporate cert goes out of date then the warnings pop up and it's a bit confusing at times to figure out how to proceed. Ie, the choice in Firefox between "I know what I'm doing" and "get me out of here" certainly doesn't instill confidence when trying to add the cert.
wait (Score:5, Funny)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Well, what was 'good' for shareholders was getting a product out the door, and not spending all of that money on implementing any actual security.
If there's no penalty for not having good security and/or encryption, why would a company spend money on it? From that perspective, it would be bad for shareholders and business.
If you make it costly to have data breaches, it becomes good for shareholders to imp
Re: (Score:2)
Does she also realize that ROT13 isn't sufficient "encryption" though?
HMRC (UK tax office) lost a CD with 15 million people's personal data on it. They released a statement saying it was password protected. A password protected MS Office document is not really "protected" in any meaningful sense.
Re: (Score:2)
Depends on what version of Office/Word. A document secured with a 32+ character password in a recent version (Office 2003 and newer) can use SHA 512 and AES-CBC.
Of course, using a weak password, all bets are off.
If one needed to distribute data on CD encrypted the "right" way, I'd either use a large PGP archive, ship the CD with a TC volume and a keyfile encrypted to the receiving site public keys, or use a commercial utility like PGP Disk and have a volume only openable with the receiving site keys.
Done r
I thought credit card info had to be already (Score:2)
encrypted or the credit card companies won't do business with you. (PCI compliant or something like that)
That leaves social security number and email address/password, but really, you should not use the same password for your Gmail account and Oily Pete's V1agra Online. As for social security, never give it out to anyone under any circumstances unless it's a bank (real one, not a Nigerian prince bank) and you're asking for a loan or opening a checking account.
Just drill that into the head of the IQ challeng
Re: (Score:2)
Re: (Score:3)
Since when is credit card data, e-mail address, and password the only "customer data" a company keeps about you?
Re: (Score:2)
They require that you "encrypt" the data, but they also typically require that you send the data unencrypted (albeit tunnelled over ssl) to actually process a payment, so while the data may be encrypted on disk the server typically also has the ability to decrypt it on demand in order to make use of it... So it's just a case of a hacker working out how, and then triggering the same process to extract the data.
Re: (Score:2)
PCI isn't what it's cracked up to be. It looks like yet another scheme to 'do something' about 'the problem'.
Client side strong encryption, please. (Score:1)
Server side encryption in could is no good. It prevents Calvin reading Susy's diary, but nothing much more than that.
And no proprietary closed-up systems, but publicly verifiable implementations end to end, or it's not even worth trying to get people to trust it.
Re: (Score:2)
No, could is correct, as in "it could have been secure but we put it on that silly clusterfuck network instead".
Re: (Score:2)
While I agree with your points, I think the public is unfortunately pitifully trusting. This whole NSA spying stuff will pass through the news cycle and soon not be covered again. It's only making a big splash because Fox News likes making fun of the Obama administration, but before the public actually starts demanding their right to privacy, Fox News will bury the issue and convince their watchers that the government is not spying on them. All of the systems we have in common usage are total crap, and a
technical challenges (Score:1)
Usually at some point the server needs to be able to decrypt the data so it can be displayed to a user, so the key needs to be handy. So if you have the key and the data on the same server, it's of little security value.
If you want to have this data in some kind of database, there is a good chance you want to be able to search and index it. Is it possible to index and pre-sort encrypted data without giving away the content?
Yes, maybe encrypt some sensitive parts, but encrypting all customer data is counter
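One partial answer to the indexing question is a keyed "blind index": store a deterministic HMAC of the plaintext next to the ciphertext, so exact-match lookups work without decryption. The key and field names below are illustrative, and note the real limitation: this gives equality search only, not ranges or sorting.

```python
import hmac, hashlib

# Illustrative key; in practice this would be a separate secret kept away
# from the database holding the indexed values.
INDEX_KEY = b"separate-key-kept-away-from-the-db"

def blind_index(value: str) -> str:
    # Deterministic keyed hash: equal (normalized) plaintexts yield equal
    # indexes, enabling exact-match lookups without storing the plaintext.
    return hmac.new(INDEX_KEY, value.lower().encode(), hashlib.sha256).hexdigest()

# The table stores (blind index -> encrypted row); here the ciphertext is
# just a placeholder string.
rows = {blind_index("alice@example.com"): "<ciphertext blob>"}
assert blind_index("Alice@Example.com") in rows   # lookup without decryption
```

Pre-sorting is the hard part: order-preserving and searchable encryption schemes exist but leak information by design, which is exactly why blanket "encrypt everything" mandates need engineering judgment case by case.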
Re: (Score:2)
I detect someone who doesn't know what an HSM is, nor what one is used for.
Here, have a Wikipedia link and learn something today: http://en.wikipedia.org/wiki/Hardware_security_module [wikipedia.org]
Re: (Score:3)
So... explain how that helps when someone hacks into the server and requests data using the same mechanisms and level of authority as the server software (which must ultimately manipulate unencrypted data).
Because that's what happens.
Re: (Score:2)
I detect someone who has no idea what the actual problem is but he has a hammer and the world looks like a nail.
Encrypt everything (Score:3, Interesting)
Don't just encrypt private details.
Get rid of users' private data, so there is nothing to steal in the first place.
Use eccentric authentication*. Replaces passwords with anonymous client certificates.
Check my: http://eccentric-authentication.org/ [eccentric-...cation.org]
Re: (Score:1)
I offer a way to create accounts anonymously. And much easier than the email-address password combination.
When customers sign up for an account, they create a nickname. That gets signed into the client certificate. The web server receives that nickname from the crypto-authentication libraries as the username. Do with that username what you want.
Solution for Microsoft and Oracle (Score:2, Redundant)
Sell to California companies for major money.
More importantly, (Score:1)
We need to crack down on government agencies that spy.
Re: (Score:2)
Will that include the decryption key?
too much government (Score:1)
...and in other news, CA police will be cracking down on people who don't lock their house doors and who don't lock their car doors.
Re: (Score:1)
Re: (Score:2)
Not using IDEA is a good call.
http://en.wikipedia.org/wiki/International_Data_Encryption_Algorithm#Security [wikipedia.org]
"As of 2007, the best attack which applies to all keys can break IDEA reduced to 6 rounds (the full IDEA cipher uses 8.5 rounds).[1] Note that a "break" is any attack which requires less than 2^128 operations; the 6-round attack requires 2^64 known plaintexts and 2^126.8 operations.
Bruce Schneier thought highly of IDEA in 1996, writing, "In my opinion, it is the best and most secure block algorithm availa
Re: (Score:2)
Re: (Score:2)
> > NO IDEA
Sigh.
Encryption is easy. Key management, not so much (Score:2)
Using encryption is easy. Managing the encryption keys however, not so much. The number of developers I see posting questions (to StackOverflow) on encryption with NO IDEA on basic key management is very worrying.
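One basic key-management pattern worth knowing is a key hierarchy: derive per-record keys from a single master key that lives elsewhere (an HSM or KMS), so no bulk table of keys sits next to the data. A sketch using stdlib HMAC as the derivation function (the key value and record IDs are purely illustrative):

```python
import hmac, hashlib

# Illustrative only: in production the master key would live in an HSM or
# a key-management service, never as a literal in source code.
MASTER_KEY = b"in-production-this-lives-in-an-hsm-or-kms"

def record_key(record_id: str) -> bytes:
    # Derive a distinct 256-bit key per record from the master key.
    return hmac.new(MASTER_KEY, record_id.encode(), hashlib.sha256).digest()

k1 = record_key("customer:42")
k2 = record_key("customer:43")
assert k1 != k2 and len(k1) == 32   # distinct per-record keys, nothing stored
```

Because keys are derived on demand, revoking or rotating the master key invalidates everything at once, and there is no key database for an attacker to exfiltrate alongside the ciphertext.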
Re: (Score:1)
Users can't do it wrong anymore. Play with the demo at: http://eccentric-authentication.org/blog/2013/06/07/run-it-yourself.html [eccentric-...cation.org] or take a look at the walkthrough.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Create your self-signed server certificate and publish it in DNS. Sign it with DNSSEC.
You can't beat free. See http://datatracker.ietf.org/doc/rfc6698/ [ietf.org]
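For reference, the DANE TLSA record that RFC 6698 describes can be as simple as a SHA-256 digest of the certificate. For usage 3 (DANE-EE), selector 0 (full certificate), matching type 1 (SHA-256), the association data is just the digest of the cert's DER encoding. A sketch with placeholder bytes (real code would read the actual DER file):

```python
import hashlib

# Placeholder DER bytes; real code would load the certificate, e.g. the
# contents of cert.der produced by your CA-free self-signing step.
cert_der = b"\x30\x82..."

# TLSA "3 0 1": usage DANE-EE, selector full cert, matching type SHA-256.
digest = hashlib.sha256(cert_der).hexdigest()
print(f"_443._tcp.example.com. IN TLSA 3 0 1 {digest}")
```

Clients that validate DANE then check the served certificate against this DNSSEC-signed record instead of (or in addition to) the CA chain.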
Smart and Hot - where's the crazy (Score:2)
She seems to make a good point, and according to The Wiki she's pretty hot. I feel bad for her significant other (assuming she has one), I bet she's totally nuts.
Re: (Score:3)
The crazy is in thinking she can regulate better security onto any random industry. It doesn't work like that. Security is too complicated to magically fix by insisting on blind usage of a particular tool.
If you look at the article, a huge number of the breaches are to do with credit card leaks. Well, duh, credit cards are a pull model not a push model. Bitcoin is more sensible, but the California DFI is busy harassing Bitcoin companies. So if she really cares about upgraded security, maybe she should get t
Re: (Score:1)
Dictate penalties and properties not methods (Score:5, Insightful)
Good laws of this sort are those which do not impose technical solutions but rather provide general systems level requirements.
The problem with "duh, use encryption" is that there is no guarantee of any kind that simply applying encryption makes a system more secure against a specific threat.
Every time you get into the weeds you are guaranteed to codify errors and hurt those who choose to innovate using different but better or equally valid approaches.
Attorney Generals are good things (Score:4, Interesting)
I've dealt with cleaning up some nasty data breaches over the years, and I've had conversations with Attorneys General when the breaches were bad enough. Companies fear Attorneys General about as much as they fear being on the wrong end of the international news.
I've been involved with companies where data breaches happened and the Attorneys General would or would not get involved. The difference is night and day for things like encryption, notification of consumers, risk mitigation and other such steps. Pause and think about it for a moment: do you really think California is breached that much more often than other locations, or do people simply find out because the companies fear being on the wrong end of the Attorney General's pointy stick?
Attorneys General that give a damn are good things; they give the security professionals at the companies in their states the leverage they need to actually do the things that they want to do (encryption etc).
Confused Identity (Score:2)
Tradition normally holds that a person who does a bad act is the guilty party. These days that is becoming rather twisted. If a person steals data then doesn't the guilt fall upon the thief? What they are doing is similar to the rather absurd gun law that can find a person negligent for simply using one lock to secure a gun. A home owner locks his windows and doors and drives off to the market. Mr. bad guy breaks in the back door and steals the gun and later that day shoots someone. Out of the
exactly wrong (Score:2)
We need to make companies liable for any information they are so careless as to lose. Intruding on their business process is the wrong way to go about it: punitive liability judgements (and tighter disclosure laws) are the right way.
Part of the problem here is this horribly mistaken meme that everyone and everything is hackable. It makes people feel not responsible, and it's only true in the sense that every newborn baby has started dying, or that the universe will cool/stop. Not concerned with this meme
Re: (Score:2)
For things like the electric grid, there shouldn't even be any access at all. It's that critical. It is critical enough that they should have private FIBER following every power line.
For people info like SSN and bank account numbers, the system should be revised so that the number alone only serves to IDENTIFY and is not treated as AUTHORIZATION. Lots of people have other people's SSNs for various reasons. Using the identification number for authorization is totally wrong. This also goes for credit car
Encryption keys ... (Score:2)
... are essential to the servers that handle the data. They can't actually operate on the encrypted data. They have to UN-encrypt it first (and RE-encrypt it to put it back if there are any changes). So what does this mean to me? It means I have to grab the encryption key(s) when I break in to get the data.
This reminds me of an incident with a state web site. Someone broke in and did some defacing. The state's top IT director answered a reporter's query with "This needs to be investigated because we b
But we all now know encryption means... (Score:2)
.... nothing to the NSA
Re: (Score:2)
Re: (Score:2)
How many mails have you received that were official and digitally signed (not just a signature block)?
I work in a company where people are pretty security savvy, but email somehow is an exception. When I ask how they know the mail came from John Doe, they tell me it must be genuine because the email address is John.Doe@example.com.
Quickest way around that: send out a few emails as the company CEO, and set the Reply-to address to a random colleague.
Loads of fun, and all you need is a command line on a server somewhere.
Don't blame me if you lose your job, blame RFC 822...
Long overdue (Score:1)
This is way, way, way overdue. Due diligence is what it is. And not encrypting sensitive data is not due diligence.