New Software To Balance Privacy and Security?
An anonymous reader writes "Claiming to provide both security and privacy, researchers at UCLA say they have developed a system to monitor suspicious online communication that discards communications from law-abiding citizens before they ever reach the intelligence community." From the article: "The truly revolutionary facet of the technology is that it is a new and powerful example of a piece of code that has been mathematically proven to be impossible to reverse-engineer. In other words, it can't be analyzed to figure out its components, construction and inner workings, or reveal what information it's collecting and what information it's discarding -- it won't give up its secrets. It can't be manipulated or turned against the user."
Evil potential here (Score:5, Insightful)
It also means that lawful citizens who do fit the parameters are reported on, just as if the agencies were grepping the traffic themselves.
a savvy person may be able to tell that the program is running in the background ... by distributing this software all over the Internet to providers and network administrators, you can easily monitor a huge data flow
How will this software be "distributed"? Virus? Payload in a Sony rootkit? Thousands of patriotic sysadmins? Plenty of potential for evil to be done here!
Re:Evil potential here (Score:2)
And this says almost nothing. The following would also be true:
. Unlawful U.S. Citizens who don't fit the parameters are automatically ruled out.
. Lawful U.S. Citizens who fit the profile are automatically ruled in.
. Unlawful non-U.S. Citizens who don't fit the profile are automatically ruled out.
And equally content-free.
So one state we want, plus three we don't, equals...
To be able to judge the value of this,
This magic software only finds bad guys? (Score:2, Insightful)
The problem is not Privacy vs. Security. You will never have Security. Not yours. You can have privacy, though.
The problem is, and always has been, balancing privacy and convenience.
Re:This magic software only finds bad guys? (Score:1)
Re:This magic software only finds bad guys? (Score:1)
you're going to hell.
Re:Scary, but encouraging... (Score:1)
Who wants to bet the secret filter might end up being
spin doctors (Score:5, Insightful)
The software is infallible (Score:1, Funny)
Secret evidence. Secret law. Secret court. Secret government.
Re:The software is infallible (Score:2)
Re:spin doctors (Score:2)
What good is this? (Score:5, Insightful)
I mean, the captured documents could already have been altered, and there's no way to prove that they weren't.
Not to mention that the way it works amounts to what is essentially an eternal wiretap of everyone; guilt, innocence and suspicion matter not.
Re:What good is this? (Score:2)
Is this blogvertising? Dubious software that promises the world but really only delivers a large amount of funding to a particular group of individuals?
To me it sounds just like your typical advertising spyware. (Now, didn't the Department of Homeland Security hire that particularly nasty fellow of spyware fame? Are we now starting to see
Mathematical proof of code is a tough business (Score:5, Insightful)
Re:Mathematical proof of code is a tough business (Score:3, Insightful)
Re:Mathematical proof of code is a tough business (Score:1)
Yeah, but it's a totally bogus claim. I want thirty minutes in a room with the "mathematician" who "proved" this and a blackboard. It isn't possible to make code that can't be reverse engineered because, fundamentally, the processor that executes the code has to know what the low-level operations are in order to execute them. Quite aside from that, in order to prove that code can't be reverse-engineered, you'
Re:Mathematical proof of code is a tough business (Score:5, Interesting)
I'd like to see the demonstration. Until such time, I call bollocks and I refuse to believe an "impossible to reverse-engineer" piece of code ever exists.
I second your bullshit and raise! The problem with proofs such as this is that they assume broad axioms that in reality might not hold in the hardware. For example, they may well have proved the theorem under the assumption that all operations of a certain set take the same length of time, but in reality they might not. The processor might take a ten-billionth of a second longer to do one operation than it does another, or it might release more heat when it does one operation than it does when it performs another, or it might emit a certain magnetic field when it does one operation and not another.
Side-channel attacks, as these are called, are often totally devastating. There was one attack [schneier.com] where simply heating the computer up can cause a system to get owned. If the proof is correct, it's certainly interesting but practically we're a long way from getting to this gold standard.
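Just to make the hardware point concrete, here is a deliberately naive comparison (a toy of my own, nothing to do with the UCLA code) whose running time depends on the data -- exactly the kind of detail a clean mathematical model can fail to capture:

#include <stddef.h>

/* Early-exit comparison: every additional matching leading byte costs one
   more loop iteration, so an attacker who can time this learns how much of
   a guess is correct, one byte at a time. */
static int leaky_equal(const unsigned char *a, const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;    /* bail out at the first mismatch */
    return 1;
}

Crypto libraries go out of their way to replace loops like this with constant-time versions for precisely this reason.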
Simon
Re:Mathematical proof of code is a tough business (Score:3, Interesting)
Nah, side-channels have nothing to do with it. Even though the article doesn't mention it, the authors are doing rigorous program obfuscation. In the security model for this problem, the adversary gets access to the code and can do whatever he wants to it: run it (on whatever architecture he pleases) on different inputs, insert or delete instructions, slow it down, speed it up, whatever.
Re:Mathematical proof of code is a tough business (Score:2)
Re:Mathematical proof of code is a tough business (Score:2)
It doesn't, and it doesn't try to. You are allowed to have full access to the obfuscated code. Still, having all this access doesn't allow you to learn (for example) what strings the code is "grepping" for in the input. Whether there is a match or not, the execution path of the code remains the same, but it produces different outputs.
If this sounds like magic, yes, it pretty much is. But so does
Re:Mathematical proof of code is a tough business (Score:2)
There's always
Re:Mathematical proof of code is a tough business (Score:2)
You mention a dictionary attack, which is certainly something an adversary could attempt. There are two models: in the "plain" obfuscation model, a dictionary attack may succeed, but this doesn't break anything -- if you were given a black-box that truthfully answers "is the password X?" then you could also run a dictionary attack on that. So that doesn't count as "figuring
Re:Mathematical proof of code is a tough business (Score:2)
if(hash(word)==hash_were_looking_for)
This will translate into machine code as a cmp followed by a conditional branch (with a bit more cruft it won't be quite that simple, but there has to be a branch somewhere). We know whether we got a hit by which branch we follow -- the path we follow 99.99999% of the time is the no-match branch. The path we follow once in a blue moon is the match branch. So by seeing
Re:Mathematical proof of code is a tough business (Score:2)
if(hash(word)==hash_were_looking_for)
The code doesn't have to do any such thing, and you are arguing from incredulity ("I don't see how it could be done any other way"). Fortunately, solid mathematical proofs destroy this fallacy.
In the "plain" obfuscation model, it is not successful reverse-engineering to discover when you've got the correct password, because you can discover that when you have a black-box too.
Re:Mathematical proof of code is a tough business (Score:2)
"both matching and non-matching documents appear to be treated precisely the same way. The machine, or anyone else who views the execution is totally unaware if condition is satisfied, as it is executed as a strait-line [sic] code, where condition is never known unless you can break the underlying encryption scheme."
So, there are no conditional statements in the resulting code.
Re:Mathematical proof of code is a tough business (Score:2)
Re:Mathematical proof of code is a tough business (Score:2)
Of course every processor has branch instructions. But the obfuscated programs don't ever use those branch instructions -- at least not when comparing the input to the desired keywords. You do acknowledge that it is possible to write a computer program that doesn't perform any branches, don't you?
As I've said several times before, the output is basically an encrypted yes/no bit. However, there is no place or time in the program execution w
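To back up the claim that a comparison need not compile down to a conditional branch, here is a minimal straight-line sketch (my own toy code, not the authors' construction -- it only removes the branch, it does not hide the keyword behind encryption the way the real scheme does):

#include <stddef.h>
#include <stdint.h>

/* Returns 0xFF if the two n-byte buffers are equal and 0x00 otherwise,
   executing exactly the same instruction sequence in either case. */
static uint8_t match_mask(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);           /* accumulate any difference */
    /* fold to 0xFF (equal) or 0x00 (different) without branching */
    return (uint8_t)(((unsigned)diff - 1u) >> 8);
}

The "did it match?" answer exists only as a data value, which downstream code can fold into an encrypted output without ever taking a different path.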
Re:Mathematical proof of code is a tough business (Score:1)
Reading the article it seems to me that they are talking about a TPM/TCPA/palladium type application. If running on a TPM equipped machine then debugging is no help nor is a decompiler. The OS either won't let you run the debugger or decompiler while this programme is running or will refuse access to the secure memory area where it is running. Cracking the "secure box" where the data is stored is simply a case of breaking whichever type of strong encryption is used. The same goes for trying to decrypt the
Re:Mathematical proof of code is a tough business (Score:2)
They're solving the problem of program obfuscation for a certain class of programs. It's known that obfuscation is impossible in general, but for certain classes of programs (like "check if the input text equals a specific string") it can be done. (If you're willing to believe some new, non-standard number theoretic assumptions that seem plausible.)
Yes, it is tough to prove that the obfus
Re:Mathematical proof of code is a tough business (Score:1)
Re:Mathematical proof of code is a tough business (Score:2)
I imagine they are using something like Bruce Schneier's Clueless Agents:
http://www.schneier.com/paper-clueless-agents.html [schneier.com]
I read the paper a few years ago, but the gist of how it might work in this case is that instead of the agent comparing the actual keywords to a database, it parses the document and tries decrypting some of its code against hashes of the keywords (or, say, sorted sub-sets of keywords).
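Heavily simplified sketch of that gist (the hash and keystream below are toy stand-ins of mine -- the actual paper would use a real key-derivation function and cipher): the agent carries a payload encrypted under a key derived from the watched keyword, tries each word it encounters as a candidate key, and only a document that actually contains the keyword decrypts the payload into something recognizable. The keyword itself never appears in the agent's code.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy FNV-1a hash standing in for a real key-derivation function. */
static uint32_t derive_key(const char *word)
{
    uint32_t h = 2166136261u;
    while (*word) { h ^= (uint8_t)*word++; h *= 16777619u; }
    return h;
}

/* Toy stream "decryption": the right word yields a recognizable marker,
   any other word yields noise. */
static int try_word(const char *word, const uint8_t *ct, size_t n, uint8_t *pt)
{
    uint32_t k = derive_key(word);
    for (size_t i = 0; i < n; i++) {
        k = k * 1664525u + 1013904223u;            /* toy keystream step */
        pt[i] = ct[i] ^ (uint8_t)(k >> 24);
    }
    return n >= 6 && memcmp(pt, "MAGIC:", 6) == 0; /* did the keyword fit? */
}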
Re:Mathematical proof of code is a tough business (Score:2)
Social Engineering (Score:3, Insightful)
Re:Social Engineering (Score:1)
> has it also been mathematically proven impossible to socially engineer?
I'd like to see a definition of reverse engineering that excludes social engineering without making a special exception for it. Any such definition would be so narrow that it would also exclude numerous other common types of reverse engineering. (No, it won't do to say that the information obtained by the process has to be obtained directly from the i
Re:Social Engineering (Score:2)
Like the Stasi? (Score:2)
And who gets to define what a "law-abiding" citizen is? It may be OK now, but what happens when the law is that you do not oppose the state? Whoops, too late -- there is already the infrastructure in place to find out where those damned pro-democracy scum are and what they are up to.
Next, when we're all watching TV and doing our VoIP on the net, and all have our home security systems on the net, the government 'sees' everything, 'knows' everything, and you have entered the police state where you can't even
Re:Like the Stasi? (Score:3, Interesting)
Prior to the occupation of Europe, Dehomag (IBM's European Subsidiary) tabulated the census data of unoccupied European Countries at their behest. This seemingly innocent data was then co-opted by the Nazi state, with the help of IBM. IBM had recently introduced Hollerith machines and the Nazis were IBM's best punch card customer. In 1937 Thomas J. Watson was decorated by Hjalmar Schacht, the Nazi Economics Minister with the Me
Re:Like the Stasi? (Score:2)
Since they can't have what doesn't exist, the best protection is to avoid producing it in the first place. Affording oneself greater protection isn't difficult, but it *is* a matter of shedding some of the conveniences to which people have grown so accustomed. "Dangerously easy", or "inconveniently saf
Nonsense! (Score:2)
Maybe they have a mathematical proof that makes reverse-engineering impossible. Fine. But it is still possible to find out what it does in practice, since the nature of the data it processes is known. Just run it in a simulation and see what it does. No reverse-engineering required. From there onwa
How to run encrypted code without the key? (Score:3, Interesting)
The core claim in the article is that an attacker with access to the code has no possibility of knowing if a given input will be flagged or not. I can see how someone with access only to the data storage could be prevented from knowing if the gigabyte of noise it stores just changed randomly or if his message was stored there in public key encrypted form. I can _not_ see how the applying
Re:How to run encrypted code without the key? (Score:2)
Actually it is not that simple. It is possible to make the functionality impossible to understand; Gödel's incompleteness theorem states as much. But for most practical purposes you are quite correct, especially if the input data set is known.
Re:How to run encrypted code without the key? (Score:2)
So if it's comparing hashes of keywords against stored hash sets, you can't know what it would match until it does, even if you have the code (unless you run all possible keyword sets through it, which could be quite a large search space).
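A bare-bones version of that "ship only the hashes" idea (toy code of mine, with FNV-1a standing in for a real cryptographic hash): anyone reading the code sees only opaque hash values, and recovering the actual watch words means mounting a dictionary search over candidate keywords.

#include <stdint.h>
#include <stddef.h>

/* Toy hash; a real deployment would use a proper cryptographic hash. */
static uint32_t fnv1a(const char *s)
{
    uint32_t h = 2166136261u;
    while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
    return h;
}

/* Only hashes of the watch words are embedded -- the values here are made up. */
static const uint32_t watch_hashes[] = { 0x9a8b7c6du, 0x1f2e3d4cu };

static int is_watched(const char *word)
{
    uint32_t h = fnv1a(word);
    for (size_t i = 0; i < sizeof watch_hashes / sizeof watch_hashes[0]; i++)
        if (h == watch_hashes[i])
            return 1;
    return 0;
}

Note that this is exactly the naive approach the researchers claim to go beyond: the branch in is_watched() tells an observer when a match occurred, which the obfuscated construction is supposed to avoid.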
Re:How to run encrypted code without the key? (Score:2)
I can give an example to illustrate how such a system can work. Of course my example will be extremely simplified... analyzing my simplified system to figure out what it is secretly tracking will merely be an interesting (and hopefully non-obvious) puzzle,
Re:Nonsense! (Score:1)
Professor Ostrovsky (cited in the article) has written in the past about public key encryption with keyword search (PEKS). Here's the abstract [ucla.edu] as well as the paper [ucla.edu] itself (warning PDF file).
INAC (I'm Not A Cryptologist), so take my
Re:Nonsense! (Score:1)
> The filter cannot be broken in the same sense that one cannot crack time-tested
> public-key encryption functions such as those already used for Internet commerce
> and banking applications.
In _what_ same sense? Public-key cryptography relies on the attacker not having any access to the computer system where the private key is stored. Otherwise, it can be broken very, very easily. If this filter cannot be broken in the "same sense", it implies only that it cannot be
Re:Nonsense! (Score:1)
> data it processes is known. Just run it in a simulation and see what it does.
> No reverse-engineering required.
Actually, that's one of the most common and useful reverse engineering techniques: run it in a controlled environment and see what it does. The people who are claiming that they have a mathematical proof that this thing _can't_ be reverse engineered are either very dishonest or very ignorant about c
And what are the criteria? (Score:2)
Impossible to reverse engineer! (Score:2, Interesting)
Brrrrrr.. spooky! This sounds like an incredible misinterpretation of whatever the original paper/research is actually doing though. Devices may be reverse engineered without even looking inside if you have access to its inputs and outputs and can continually test and hypothesize and retest, etc. A device that distinguishes between 'evil' and 'regular' packets (as input) an
Re:Impossible to reverse engineer! (Score:2, Informative)
A glance at the paper titles suggests "Private Searching on Streaming Data" as being the closest to the original article.
Impossible to reverse engineer? (Score:3, Informative)
Which executable format does it use?
Unless it's running on dedicated hardware with really strong encryption (and even then, that's no guarantee), it is possible to reverse engineer any piece of code piece by piece (for example, start with the first instructions the program executes and unwrap it from there). If you wanted to go deep, you could use an ICE or similar (or a software emulator with a built-in debugger that can't be detected from the emulated side).
How it works? (Score:1)
Parent is correct.
If I understand this correctly, if it's running locally, you can be spied upon successfully, because encryption prevents you from analyzing the operation of the program, yet it has access to all your data (presumably including encryption keys):
Trusted Network Connect (Score:1)
If it's running elsewhere, say at the ISP, then use a clean-built system, and encrypt your communications
And not get a routable IP address at all because your PC doesn't have an active TPM [slashdot.org].
Here's a more informed article on this software (Score:1, Informative)
Proof? (Score:1)
It can't be manipulated or turned against the user (Score:2)
For crying out loud, this is spyware, by definition.
Re:It can't be manipulated or turned against the u (Score:1)
No, spyware by definition runs on the user's computer. I don't think that's the case here.
What's the bet that Vista will have it? (Score:2)
One step closer to Big Brother ..... (Score:1)
So let me get this straight (Score:1)
Re: (Score:1)
It really is possible to stop reverse-engineering (Score:4, Interesting)
Let me tell you, as a cryptographer, that these claims are false. The recent field of program obfuscation gives surprisingly strong ways to prevent reverse-engineering, in a very rigorous and strong way.
Not every program can be obfuscated (this has been proven). However, programs that fit a certain template (like: "check if the input string matches the user's password") can be obfuscated. What this means is that you can give the program's entire code to the adversary -- he can run it on his own computer (no DRM required) on whatever inputs he likes, alter it, stretch it, twist it, whatever. After all this he still will not be able to guess the password, any more than if he had some mathematically-perfect black-box that truthfully answered the question: "is [X] the password?" (Actually the definition is even stronger than this, but that's the gist of it.)
Yes, this seems extremely hard to do -- after all, the adversary has complete and total power over the code that is running. Yet it can be done, rigorously and provably, if you're willing to believe that there are some number-theory problems out there (like RSA) that are hard to solve.
For the work described in the article, it sounds like the "black-box" does something like the following: if your input string contains some "watch words," then the output is the same as the input, but encrypted under the government's key. If your input string is "benign," then the output is just "THIS WAS A BENIGN INPUT", encrypted in the government's key -- i.e., it ignores any benign input and replaces it with a placeholder. By running the obfuscated program and looking at the output, you can't tell if the input was flagged or not. Even while watching the program run, you can't tell if the program is flagging the input or not (or learn anything about the government's key). When the government collects the output and decrypts it, it only sees the flagged inputs, as the rest have been ignored.
As I've said, none of this depends on the program requiring any DRM or TPM or any other specialized hardware. It only relies on the mathematics.
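To make the flagged-versus-benign behaviour concrete, here is a toy shape of the pipeline (my own sketch, not the actual construction): assume an obfuscated, branch-free matcher has already produced a 0xFF/0x00 mask; the filter then assembles its output identically for every message, and that output would finally be encrypted under the government's public key with a randomized scheme, so an observer can't tell a forwarded message from an encrypted placeholder.

#include <stddef.h>
#include <stdint.h>

#define MSG_LEN 64

/* mask is 0xFF if the message contained a watch word, 0x00 otherwise,
   as produced by the obfuscated matcher. The same code runs either way. */
void build_filter_output(uint8_t out[MSG_LEN],
                         const uint8_t msg[MSG_LEN], uint8_t mask)
{
    static const char placeholder[MSG_LEN] = "THIS WAS A BENIGN INPUT";

    for (size_t i = 0; i < MSG_LEN; i++)          /* branch-free selection */
        out[i] = (uint8_t)((msg[i] & mask) |
                           ((uint8_t)placeholder[i] & (uint8_t)~mask));

    /* The real system would now encrypt 'out' under the government's public
       key; because that encryption is randomized, flagged and benign outputs
       are indistinguishable on the wire. */
}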
Re:Care to offer more information? (Score:2)
The newness and usefulness is the strength of the definition of obfuscation. It goes far beyond just "hash and compare equality" -- a lot of sophistication is needed to construct an obfuscator that satisfies the rigorous definition of security.
A second aspect is that for this application, the adversary is not told whether the input matched the keywords or not, bec
Mathematical certainty not important here (Score:1)
Alright, so say you're running this software for whatever reason, maybe just to keep up appearances. But you don't want your traffic flagged, and you don't want to filter at the router. We can still decompile, though. So... what about extracting the placeholder and the public key, then replacing the software with your own version that ALWAYS outputs the encrypted placeholder regardless of the input?
Just a th
Re:Mathematical certainty not important here (Score:2)
Obfuscation doesn't prevent you from changing the functionality of the code. If you have control over the code that is running on a router, then of course you can just delete the filtering code entirely, or replace it with whatever you want.
All that obfuscation does is prevent you from understanding what the original code does, eve
wow, is this for real? (Score:1)
Oh brother.
Um...WTF (Score:3, Insightful)
BREAKING NEWS: The government has devised a foolproof plan to protect your privacy. They will simply garrison an intelligence agent in your house, recording everything you do, to make sure that the government doesn't inappropriately invade your privacy. (For your own safety, please do not attempt to resist; you will have to be beaten to protect your own privacy, after which you will be dumped in a shallow unmarked grave -- again, for your privacy.)
Abiding by the law and mathematical proof. (Score:2)
> online communication that discards communications from law-abiding citizens
> before they ever reach the intelligence community."
"Law-abiding": which laws might that be? The laws intended to prevent disruption of society, like the ones used to jail many civil-rights activists in the 50s and 60s? The laws that declared a black man couldn't marry a white woman? Or the ones that declared a woman can't own real property?
So
Re:Abiding by the law and mathematical proof. (Score:1)
Considering it's coming out of somewhere like UCLA, you can be fairly certain it's the latter. In case you're interested, the related paper appears to be at http://www.cs.ucla.edu/~rafail/PUBLIC/Ostrovsky-Skeith.html [ucla.edu].
Re:Abiding by the law and mathematical proof. (Score:2)
> it's the latter.
Except for the fact that this is a news piece, not a direct report from the scientists. You never know what the scientists actually said after the reporters get through with it.
> In case you're interested, the related paper appears to be at
> http://www.cs.ucla.edu/~rafail/PUBLIC/Ostrovsky-Skeith.html [ucla.edu].
Many thanks, I'll look it over.
Can't be reverse-engineered, eh? (Score:1)
Re:Can't be reverse-engineered, eh? (Score:4, Insightful)
Worst case, pull an SCO and sue them for violating your stuff, and demand un-obfuscated *everything* during discovery.
On the fun side, wait until RIAA/MPAA gets their agenda piggybacked into these little boxes.
Pointer to the actual paper (PDF, sorry) (Score:2)
Postscript: http://www.cs.ucla.edu/~rafail/PUBLIC/Ostrovsky-S
This is a scam (Score:3, Insightful)
Uhm, excuse me, but this is exactly the situation right now. Since when do terrorists ever KNOW that security is on to them until they're caught? Terrorists take precautions against being detected by ANYTHING. Terrorists with the slightest brains do not talk about operations in the clear at any time. What then is this software supposed to detect? Where is the benefit?
Supposedly the benefit is that "harmless" communication is never seen by the Fed. Bullcrap. The parameters of the software are SET by the Fed - they can see anything they want. That's obvious from the article as it glosses entirely over the matter of "criteria" in the first place.
This software would only be safe in the hands of someone who IS safe. In the words of the DRM enthusiasts, it only "keeps honest people honest." And since the criteria are changeable -- as is the appointment (or election) of the people who set them -- this is no security at all.
In the hands of George Bush, Dick Cheney and General Hayden, you're screwed, blued and tattooed.
This is nothing more than a propaganda piece put out at this time because Bush is in danger of being impeached over the spying issue. That's the bottom line.
False Premise (Score:3, Insightful)
The tradeoff is between privacy and totalitarianism. Solutions that attempt to split the difference are not helpful.
George Orwell says "I told you so" (Score:1)