
New Software To Balance Privacy and Security?

An anonymous reader writes "Claiming to provide both security and privacy, researchers at UCLA say they have developed a system to monitor suspicious online communication that discards communications from law-abiding citizens before they ever reach the intelligence community." From the article: "The truly revolutionary facet of the technology is that it is a new and powerful example of a piece of code that has been mathematically proven to be impossible to reverse-engineer. In other words, it can't be analyzed to figure out its components, construction and inner workings, or reveal what information it's collecting and what information it's discarding -- it won't give up its secrets. It can't be manipulated or turned against the user."
  • by ribuck ( 943217 ) on Wednesday January 25, 2006 @04:57AM (#14555876)
    That means lawful U.S. citizens who don't fit the parameters are automatically ruled out.

    It also means that lawful citizens who do fit the parameters are reported on, just as if the agencies were grepping everything themselves.

    a savvy person may be able to tell that the program is running in the background ... by distributing this software all over the Internet to providers and network administrators, you can easily monitor a huge data flow

    How will this software be "distributed"? Virus? Payload in a Sony rootkit? Thousands of patriotic sysadmins? Plenty of potential for evil to be done here!

    • That means lawful U.S. citizens who don't fit the parameters are automatically ruled out.

      And this says almost nothing. The following would also be true:
      - Unlawful U.S. citizens who don't fit the parameters are automatically ruled out.
      - Lawful U.S. citizens who fit the profile are automatically ruled in.
      - Unlawful non-U.S. citizens who don't fit the profile are automatically ruled out.

      And equally content-free.

      So one state we want, plus three we don't, equals...

      To be able to judge the value of this,

  • If that isn't putting the priest in charge of Sunday School, I don't know what is.

    The problem is not Privacy vs. Security. You will never have Security. Not yours. You can have privacy, though.

    The problem is, and always has been, balancing privacy and convenience.
  • spin doctors (Score:5, Insightful)

    by Hakubi_Washu ( 594267 ) <robert.kostenNO@SPAMgmail.com> on Wednesday January 25, 2006 @05:02AM (#14555895)
    So, it collects all data fitting the criteria set by the agency, without any chance of anyone ever knowing what those criteria were? How is the "law-abiding" citizen to know he hasn't accidentally fit one? They say it improves privacy, but it actually removes it, since you can never know you haven't been deemed a "terrorist".
  • What good is this? (Score:5, Insightful)

    by NoMoreNicksLeft ( 516230 ) <john.oyler@noSpAm.comcast.net> on Wednesday January 25, 2006 @05:02AM (#14555899) Journal
    So, when I take my case to court -- claiming that they're illegally intercepting my communications just to look for dirt to ruin my political campaign -- it's impossible to reverse engineer the thing and prove that they were only looking for terrorists?

    I mean, the captured documents could already have been altered, and now there's no way to prove that they weren't.

    Not to mention that the way it works amounts to an eternal wiretap on everyone; guilt, innocence, and suspicion matter not.
    • Oh, it's not for everyone; it's never for everyone. The few, as selected by that institution pretending to be your government, will never have a problem avoiding it.

      Is this blogvertising? Crappy hypothetical software that promises the world but really delivers only a large amount of funding to a particular group of individuals?

      To me it sounds just like your typical advertising spyware (now, didn't the Department of Homeland Security hire that particularly nasty fellow of spyware fame? Are we now starting to see

  • by javaDragon ( 187973 ) on Wednesday January 25, 2006 @05:07AM (#14555914) Homepage
    Their grepping thing is not interesting in itself, but I'd like to see this:
    [...]a new and powerful example of a piece of code that has been mathematically proven to be impossible to reverse-engineer[...]
    I'd like to see the demonstration. Until such time, I call bollocks and I refuse to believe an "impossible to reverse-engineer" piece of code ever exists.
    • by Ckwop ( 707653 ) * on Wednesday January 25, 2006 @05:58AM (#14556067) Homepage

      I'd like to see the demonstration. Until such time, I call bollocks and I refuse to believe an "impossible to reverse-engineer" piece of code ever exists.

      I second your bullshit and raise! The problem with proofs like this is that they assume broad axioms that might not actually hold in the hardware. For example, they may well have proved the theorem under the assumption that all operations in a certain set take the same length of time, when in reality they might not. The processor might take a ten-billionth of a second longer to do one operation than another, or it might release more heat when it performs one operation than another, or it might emit a particular magnetic field during one operation and not another.

      Side-channel attacks, as these are called, are often totally devastating. There was one attack [schneier.com] where simply heating the computer up can cause a system to get owned. If the proof is correct, it's certainly interesting but practically we're a long way from getting to this gold standard.
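
      To make the timing point concrete, here's a toy C sketch (mine, nothing to do with TFA's construction; the names are invented for illustration). An early-exit comparison leaks, through timing, how many leading bytes of a guess are correct; a constant-time comparison doesn't:

        #include <stddef.h>

        /* Leaky: returns as soon as a byte differs, so the running time
         * depends on the length of the matching prefix. */
        int leaky_equals(const unsigned char *a, const unsigned char *b, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                if (a[i] != b[i])
                    return 0;            /* early exit == timing side channel */
            return 1;
        }

        /* Constant-time: always touches every byte, so timing reveals
         * nothing about where the first mismatch occurs. */
        int ct_equals(const unsigned char *a, const unsigned char *b, size_t n)
        {
            unsigned char diff = 0;
            for (size_t i = 0; i < n; i++)
                diff |= a[i] ^ b[i];
            return diff == 0;
        }

      Time both over many guesses on real hardware and only the first one's running time correlates with how good the guess is.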

      Simon

      • The problem with proofs such as this is that they assume broad axioms that in reality might not be true in the hardware.

        Nah, side-channels have nothing to do with it. Even though the article doesn't mention it, the authors are doing rigorous program obfuscation. In the security model for this problem, the adversary gets access to the code and can do whatever he wants to it: run it (on whatever architecture he pleases) on different inputs, insert or delete instructions, slow it down, speed it up, whatever.
        • Ok. But how does this stop me from installing a debugger and using a disassembler to read the code in assembly? And from there using standard reverse engineering techniques to find the interesting parts, and disassemble those? It can't. Nothing is impossible to reverse engineer, because at some point a computer with a known ISA is going to have to interpret it. If the computer can, so can you.
          • But how does this stop me from installing a debugger and using a disassembler to read the code in assembly?

            It doesn't, and it doesn't try to. You are allowed to have full access to the obfuscated code. Still, having all this access doesn't allow you to learn (for example) what strings the code is "grepping" for in the input. Whether there is a match or not, the execution path of the code remains the same, but it produces different outputs.

            If this sounds like magic, yes, it pretty much is. But so does
            • But it must be searching for a match to something- for example, a hash with a known algorithm. This something must be unique enough to have few to no false positives in order for it to be useful. Due to that combination of qualities, you could find out what hash they were looking for (or whatever they use in place of a hash) and do a dictionary attack to find what it matches. Or feed it large amounts of fake data on a local network (so it can't report back) and find out by trial and error.

              There's always
              • In fact, these matching algorithms work much like password hashing (though they require much more sophistication).

                You mention a dictionary attack, which is certainly something an adversary could attempt. There are two models: in the "plain" obfuscation model, a dictionary attack may succeed, but this doesn't break anything -- if you were given a black-box that truthfully answers "is the password X?" then you could also run a dictionary attack on that. So that doesn't count as "figuring
                • It doesn't work like that. At some point in the program, the program does a check equivalent to:

                  if (hash(word) == hash_were_looking_for)

                  This will translate into machine code as a cmp followed by a conditional branch (with a bit more cruft; it won't be quite that simple, but there has to be a branch somewhere). We know whether we got a hit by which branch we follow: the path we follow 99.99999% of the time is the no-match branch, and the path we follow once in a blue moon is the match branch. So by seeing
                  • It doesn't work like that. At some point in the program, the program does a check equivalent to:

                    if (hash(word) == hash_were_looking_for)


                    The code doesn't have to do any such thing, and you are arguing from incredulity ("I don't see how it could be done any other way"). Fortunately, solid mathematical proofs destroy this fallacy.

                    In the "plain" obfuscation model, it is not successful reverse-engineering to discover when you've got the correct password, because you can discover that when you have a black-box too.
                    • Here is the money quote from the paper:
                      "both matching and non-matching documents appear to be treated precisely the same way. The machine, or anyone else who views the execution is totally unaware if condition is satisfied, as it is executed as a strait-line [sic] code, where condition is never known unless you can break the underlying encryption scheme."

                      So, there are no conditional statements in the resulting code.
                    • That shows an utter lack of knowledge of processors. There IS a branch, unless you're coming up with a type of ISA that does not currently exist on the market. At some point, you need to output A if it's true or B if it's not. That REQUIRES a branch, or setting a predicate register. Both of these can be detected. It is NOT possible to make it non-reversible.
                    • C'mon, now you're showing an utter lack of knowledge of this work.

                      Of course every processor has branch instructions. But the obfuscated programs don't ever use those branch instructions -- at least not when comparing the input to the desired keywords. You do acknowledge that it is possible to write a computer program that doesn't perform any branches, don't you?
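
                      Here's a toy branch-free sketch in C (my own illustration, emphatically not the actual construction, which operates on encrypted values) showing that picking between two outputs needs no conditional jump at all:

                        #include <stdint.h>

                        /* Branch-free equality: returns 1 if x == y, else 0.  For any
                         * nonzero d, (d | (0 - d)) has its top bit set. */
                        uint32_t eq_u32(uint32_t x, uint32_t y)
                        {
                            uint32_t d = x ^ y;
                            return 1u ^ ((d | (0u - d)) >> 31);
                        }

                        /* Branch-free select: returns a when flag == 1, b when flag == 0. */
                        uint32_t select_u32(uint32_t flag, uint32_t a, uint32_t b)
                        {
                            uint32_t mask = 0u - flag;       /* all-ones or all-zeros */
                            return (a & mask) | (b & ~mask);
                        }

                      Chain eq_u32 into select_u32 and you have compared and "branched" with a fixed instruction trace; only the data flowing through it differs.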

                      As I've said several times before, the output is basically an encrypted yes/no bit. However, there is no place or time in the program execution w
    • Reading the article, it seems to me that they are talking about a TPM/TCPA/Palladium-type application. If it's running on a TPM-equipped machine, then debugging is no help, nor is a decompiler. The OS either won't let you run the debugger or decompiler while this programme is running, or will refuse access to the secure memory area where it is running. Cracking the "secure box" where the data is stored is simply a case of breaking whichever type of strong encryption is used. The same goes for trying to decrypt the

    • Until such time, I call bollocks and I refuse to believe an "impossible to reverse-engineer" piece of code ever exists.

      They're solving the problem of program obfuscation for a certain class of programs. It's known that obfuscation is impossible in general, but for certain classes of programs (like "check if the input text equals a specific string") it can be done. (If you're willing to believe some new, non-standard number theoretic assumptions that seem plausible.)

      Yes, it is tough to prove that the obfus
    • That's why I'm reading the paper these guys published. Scholar.google.com lists it first [google.com]; the IACR's e-print archive is kind enough to supply the full PostScript document... [iacr.org]
    • mathematically proven to be impossible to reverse-engineer

      I imagine they are using something like Bruce Schneier's Clueless Agents:
      http://www.schneier.com/paper-clueless-agents.html [schneier.com]

      I read the paper a few years ago, but the gist of how it might work in this case is that instead of the agent comparing actual keywords against a database, it parses the document and tries decrypting part of its payload using hashes of the keywords (or, say, sorted sub-sets of keywords) as keys.
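
      For the curious, here's a toy C sketch of that idea (my own guess at the mechanism, not Schneier's actual protocol; the FNV-1a hash and XOR "cipher" are deliberately trivial stand-ins for a real KDF and a real cipher):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Toy key derivation from a word (FNV-1a). */
        static uint64_t kdf(const char *word)
        {
            uint64_t h = 1469598103934665603ULL;
            for (; *word; word++) {
                h ^= (unsigned char)*word;
                h *= 1099511628211ULL;
            }
            return h;
        }

        /* Toy XOR "cipher" keyed by the derived key. */
        static void xor_crypt(char *buf, size_t n, uint64_t key)
        {
            for (size_t i = 0; i < n; i++)
                buf[i] ^= (char)(key >> (8 * (i % 8)));
        }

        int main(void)
        {
            /* Creator side (offline): encrypt the payload under a key derived
             * from the secret keyword.  Only `blob` ships with the agent; the
             * keyword itself appears nowhere in its code. */
            char blob[16] = "MAGIC:report-in";
            xor_crypt(blob, sizeof blob, kdf("tripwire"));

            /* Agent side: try every word seen in the input stream; only the
             * right one yields a payload with a valid header. */
            const char *inputs[] = { "weather", "lunch", "tripwire" };
            for (int i = 0; i < 3; i++) {
                char tmp[16];
                memcpy(tmp, blob, sizeof blob);
                xor_crypt(tmp, sizeof tmp, kdf(inputs[i]));
                if (memcmp(tmp, "MAGIC:", 6) == 0)
                    printf("keyword hit on \"%s\": %s\n", inputs[i], tmp + 6);
            }
            return 0;
        }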
    • I second your comments. That claim reeks of bullshit.
  • Social Engineering (Score:3, Insightful)

    by mwvdlee ( 775178 ) on Wednesday January 25, 2006 @05:17AM (#14555954) Homepage
    So, it has been mathematically proven impossible to reverse engineer... has it also been mathematically proven impossible to socially engineer?
    • > So, it has been mathematically proven impossible to reverse engineer...
      > has it also been mathematically proven impossible to socially engineer?

      I'd like to see a definition of reverse engineering that excludes social engineering without making a special exception for it. Any such definition would be so narrow that it would also exclude numerous other common types of reverse engineering. (No, it won't do to say that the information obtained by the process has to be obtained directly from the i
    • has it also been mathematically proven impossible to socially engineer?
      Give me some time with the guy in charge of the program, a wooden chair, some duct tape, a pair of pliers, and a ball peen hammer. I'll tell you exactly what the program does.

  • And who gets to define what a "law-abiding" citizen is? It may be OK now, but what happens when the law is that you do not oppose the state? Whoops, too late: the infrastructure is already in place to find out where those damned pro-democracy scum are and what they are up to.

    Next, when we're all watching TV and doing our VoIP on the net, and all have our home security systems on the net, the government 'sees' everything, 'knows' everything, and you have entered the police state where you can't even
    • Re:Like the Stasi? (Score:3, Interesting)

      by DrSkwid ( 118965 )
      My only option is to recall Nazism, so please don't apply Godwin's Law to this =)

      Prior to the occupation of Europe, Dehomag (IBM's European subsidiary) tabulated the census data of unoccupied European countries at their behest. This seemingly innocent data was then co-opted by the Nazi state, with the help of IBM. IBM had recently introduced Hollerith machines, and the Nazis were IBM's best punch-card customer. In 1937 Thomas J. Watson was decorated by Hjalmar Schacht, the Nazi Economics Minister, with the Me
      • You have just pointed out what I believe is one of THE major issues with information. Once someone has it, there is no telling, or even controlling, who else may acquire it or otherwise gain access to it.

        Since they can't have what doesn't exist, the best protection is to avoid producing it in the first place. Affording oneself greater protection isn't difficult, but it *is* a matter of shedding some of the conveniences to which people have grown so accustomed. "Dangerously easy", or "inconveniently saf
  • In other words, it can't be analyzed to figure out its components, construction and inner workings, or reveal what information it's collecting and what information it's discarding -- it won't give up its secrets.

    Maybe they have a mathematical proof that makes reverse-engineering impossible. Fine. But it is still possible to find out what it does in practice, since the nature of the data it processes is known. Just run it in a simulation and see what it does. No reverse-engineering required. From there onwa
    • If it can be run, it can be read. If it can be read, it can be decompiled. If it can be decompiled, it can be understood.

      The core claim in the article is that an attacker with access to the code has no possibility of knowing if a given input will be flagged or not. I can see how someone with access only to the data storage could be prevented from knowing if the gigabyte of noise it stores just changed randomly or if his message was stored there in public key encrypted form. I can _not_ see how the applying
      • If it can be run, it can be read. If it can be read, it can be decompiled. If it can be decompiled, it can be understood.

        Actually, it is not that simple. It is possible to make the functionality impossible to understand; Gödel's incompleteness theorem points in that direction. But for most practical purposes you are quite correct, especially if the input data set is known.
      • As I mentioned in another post http://it.slashdot.org/comments.pl?sid=175056&cid=14558431 [slashdot.org], I imagine it's something like Clueless Agents.

        So if it's comparing whether hashes of keywords match stored hash sets, you can't know what it would match until it does, even if you have the code (unless you run all possible keyword sets through it, which could be quite a large search space).
      • It is possible. Here's a link to the research paper itself, [ucla.edu] but it's extremely technical and dense. I wouldn't recommend even trying to read it unless you're very familiar with this sort of research paper on abstract math and cryptography.

        I can give an example to illustrate how such a system can work. Of course my example will be extremely simplified... analyzing my simplified system to figure out what it is secretly tracking will merely be an interesting (and hopefully non-obvious) puzzle,
    • From the article:

      The filter cannot be broken in the same sense that one cannot crack time-tested public-key encryption functions such as those already used for Internet commerce and banking applications. In that aspect, it's essentially a bullet-proof technology.

      Professor Ostrovsky (cited in the article) has written in the past about public-key encryption with keyword search (PEKS). Here's the abstract [ucla.edu] as well as the paper [ucla.edu] itself (warning: PDF file).

      INAC (I'm Not A Cryptologist), so take my

      • > From the article:
        > The filter cannot be broken in the same sense that one cannot crack time-tested
        > public-key encryption functions such as those already used for Internet commerce
        > and banking applications.

        In _what_ same sense? Public-key cryptography relies on the attacker not having any access to the computer system where the private key is stored. Otherwise, it can be broken very, very easily. If this filter cannot be broken in the "same sense", it implies only that it cannot be
    • > But it is still possible to find out what it does in practice, since the nature of the
      > data it processes is known. Just run it in a simulation and see what it does.
      > No reverse-engineering required.

      Actually, that's one of the most common and useful reverse engineering techniques: run it in a controlled environment and see what it does. The people who are claiming that they have a mathematical proof that this thing _can't_ be reverse engineered are either very dishonest or very ignorant about c
  • How much do you want to bet that some of the criteria include using encrypted communications, anonymizing proxies, and other legitimate security measures that people will start adopting exactly because this kind of snooping system exists? It's a self-fulfilling prophecy.
  • ...it is a new and powerful example of a piece of code that has been mathematically proven to be impossible to reverse-engineer...

    Brrrrrr... spooky! This sounds like an incredible misinterpretation of whatever the original paper/research is actually doing, though. Devices may be reverse engineered without even looking inside if you have access to their inputs and outputs and can continually test, hypothesize, retest, etc. A device that distinguishes between 'evil' and 'regular' packets (as input) an
    • Based on the very little information available in the article, this sounds like an offshoot of the work on interactive proofs, and the UCLA professor quoted does indeed seem to have done some work in the field - see http://www.cs.ucla.edu/~rafail/PUBLIC/index.html for his publications.

      A glance at the paper titles suggests "Private Searching on Streaming Data" as being the closest to the original article.

  • by jonwil ( 467024 ) on Wednesday January 25, 2006 @06:40AM (#14556194)
    Which CPU does it run on?
    Which executable format does it use?
    Unless it's running on dedicated hardware with really strong encryption (and even then, that's no guarantee), it is possible to reverse engineer any piece of code piece by piece (for example, start with the first instructions the program executes and unwrap it from there). If you wanted to go deep, you could use an ICE or similar (or a software emulator with a built-in debugger that can't be detected from the emulated side).
  • How can they claim it has been "mathematically proven to be impossible to reverse-engineer" without having first submitted the code to peer review? My house can be mathematically proven to be impossible to break into. But tell that to the guy with the ski mask and the crowbar.
  • Of course not -- after all, it's already against the user.

    For crying out loud, this is spyware, by definition.

  • "With this new technology, based on highly esoteric mathematics, the software can be distributed to many machines on the Internet, not necessarily trusted or highly secure. The software works by analyzing all of the data and then having the appearance of putting all the data into a 'secure box.' A secret filter inside the box dismisses some data as useless and collects only relevant data according to the confidential criteria that can be programmed into the software. And because it's all done inside encrypt

  • Isn't this like profiling? Everything can be reversed. Just wait until the dark side gets a hand in this.
  • This new software selects which communications are of interest to the intelligence community using an undisclosed algorithm. This algorithm "cannot" be reverse engineered. We just have to take the government's word for it that the selection criteria are correct and are unrelated to anyone's personal or political agendas. And this will somehow "ease some of these privacy concerns by making the tracking of terrorist communications over the Internet more efficient, and more targeted, than ever before"? I d
  • Many commenters are claiming "it is always possible to reverse-engineer a program!", using such reasons as "you can always watch the processor perform the instructions and eventually figure it out."

    Let me tell you, as a cryptographer, that these claims are false. The recent field of program obfuscation gives surprisingly strong, rigorous ways to prevent reverse-engineering.

    Not every program can be obfuscated (this has been proven). However, programs that fit a certain template (like: "check if the input string matches the user's password") can be obfuscated. What this means is that you can give the program's entire code to the adversary -- he can run it on his own computer (no DRM required) on whatever inputs he likes, alter it, stretch it, twist it, whatever. After all this he still will not be able to guess the password, any more than if he had some mathematically-perfect black-box that truthfully answered the question: "is [X] the password?" (Actually the definition is even stronger than this, but that's the gist of it.)
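
    To make the password template concrete, here's a toy C sketch (my own illustration, not the real construction; a real obfuscator would use a cryptographic hash and comes with an actual proof, which this toy FNV-1a hash does not). The code that ships contains only a salt and a hash value, never the password:

      #include <stdint.h>
      #include <stdio.h>

      /* Toy hash of salt||guess (FNV-1a; think SHA-256 in real life). */
      static uint64_t toy_hash(const char *salt, const char *s)
      {
          uint64_t h = 1469598103934665603ULL;
          for (; *salt; salt++) { h ^= (unsigned char)*salt; h *= 1099511628211ULL; }
          for (; *s; s++)       { h ^= (unsigned char)*s;    h *= 1099511628211ULL; }
          return h;
      }

      /* The shipped checker knows (salt, target), not the password.
       * Reading or modifying it teaches you nothing you couldn't learn
       * from a black-box "is X the password?" oracle. */
      static int check_password(const char *guess, const char *salt, uint64_t target)
      {
          return toy_hash(salt, guess) == target;
      }

      int main(void)
      {
          /* Real deployments ship only (salt, target); we compute target
           * here just to keep the demo self-contained. */
          uint64_t target = toy_hash("x9@q", "opensesame");
          printf("%d\n", check_password("letmein",    "x9@q", target));   /* 0 */
          printf("%d\n", check_password("opensesame", "x9@q", target));   /* 1 */
          return 0;
      }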

    Yes, this seems extremely hard to do -- after all, the adversary has complete and total power over the code that is running. Yet it can be done, rigorously and provably, if you're willing to believe that there are some number-theory problems out there (like RSA) that are hard to solve.

    For the work described in the article, it sounds like the "black-box" does something like the following: if your input string contains some "watch words," then the output is the same as the input, but encrypted under the government's key. If your input string is "benign," then the output is just "THIS WAS A BENIGN INPUT", encrypted in the government's key -- i.e., it ignores any benign input and replaces it with a placeholder. By running the obfuscated program and looking at the output, you can't tell if the input was flagged or not. Even while watching the program run, you can't tell if the program is flagging the input or not (or learn anything about the government's key). When the government collects the output and decrypts it, it only sees the flagged inputs, as the rest have been ignored.
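
    In toy C form, the *functionality* looks like this (my sketch of what the box computes; the hard part -- hiding the watchword and making both paths look identical -- is exactly what is not shown, and toy_encrypt stands in for public-key encryption under the government's key):

      #include <stddef.h>
      #include <string.h>

      /* Stand-in for encryption under the government's public key. */
      static void toy_encrypt(const char *in, char *out, size_t n, char key)
      {
          for (size_t i = 0; i < n; i++)
              out[i] = in[i] ^ key;
      }

      /* Flagged inputs are forwarded (encrypted); benign ones become an
       * encrypted placeholder.  An observer sees ciphertext either way;
       * only the key holder learns which inputs were flagged. */
      void filter(const char *input, char *out, size_t outsz, char gov_key)
      {
          const char *msg = strstr(input, "watchword") != NULL
                                ? input
                                : "THIS WAS A BENIGN INPUT";
          size_t n = strlen(msg) + 1;
          if (n > outsz)
              n = outsz;
          toy_encrypt(msg, out, n, gov_key);
      }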

    As I've said, none of this depends on the program requiring any DRM or TPM or any other specialized hardware. It only relies on the mathematics.
    • I've an idea, then, for how to circumvent this that doesn't require defeating the mathematics involved.

      Alright, so say you're running this software for whatever reason, maybe just to keep up appearances. But you don't want your traffic flagged, and you don't want to filter at the router. We can still decompile, though. So... What about extracting the placeholder and the public key, then replacing the software with your own version that ALWAYS outputs the encrypted placeholder regardless of the input?

      Just a th
      • What about extracting the placeholder and the public key, then replacing the software with your own version that ALWAYS outputs the encrypted placeholder regardless of the input?

        Obfuscation doesn't prevent you from changing the functionality of the code. If you have control over the code that is running on a router, then of course you can just delete the filtering code entirely, or replace it with whatever you want.

        All that obfuscation does is prevent you from understanding what the original code does, eve
  • If this can be done, and I see no reason why it can't, then wait till the bad guys find out how it works and start making worms that cannot be reverse-engineered.

    Oh brother.
  • Um...WTF (Score:3, Insightful)

    by Hard_Code ( 49548 ) on Wednesday January 25, 2006 @10:19AM (#14557694)
    Uh, have we entered some new bizarre Orwellian Twilight Zone? So basically an uncrackable secret black box that the government can install on any machine to intercept any traffic, with no ability for the surveilled party to repudiate the content (or perhaps even be aware of the surveillance), is somehow a win for privacy? WTF.

    BREAKING NEWS: The government has devised a foolproof plan to protect your privacy. They will simply garrison an intelligence agent in your house, recording everything you do, to make sure that the government doesn't inappropriately invade your privacy. (For your own safety, please do not attempt to resist; you will have to be beaten to protect your own privacy, after which you will be dumped in a shallow unmarked grave -- again, for your privacy.)
  • > "...researchers at UCLA say they have developed a system to monitor suspicious
    > online communication that discards communications from law-abiding citizens
    > before they ever reach the intelligence community."

    "Law-abiding": which laws might that be? The laws intended to prevent disruption of society, like the ones used to jail many civil-rights activists in the 50s and 60s? The laws that declared a black man couldn't marry a white woman? Or the ones that declared a woman can't own real property?

    So
  • Sounds like Famous Last Words...
  • This is a scam (Score:3, Insightful)

    by Master of Transhuman ( 597628 ) on Wednesday January 25, 2006 @12:20PM (#14559258) Homepage
    "Because the code cannot be analyzed, terrorists using the Internet to communicate will never know if the filter has pinpointed their data or not."

    Uhm, excuse me, but this is exactly the situation right now. Since when do terrorists ever KNOW that security is on to them until they're caught? Terrorists take precautions against being detected by ANYTHING. Terrorists with the slightest brains do not talk about operations in the clear at any time. What, then, is this software supposed to detect? Where is the benefit?

    Supposedly the benefit is that "harmless" communication is never seen by the Fed. Bullcrap. The parameters of the software are SET by the Fed -- they can see anything they want. That's obvious from the article, which glosses over the matter of "criteria" entirely.

    This software would only be safe in the hands of someone who IS safe. In the words of the DRM enthusiasts, it only "keeps honest people honest." And since the criteria are changeable -- as is the appointment (or election) of the people who set them -- this is no security at all.

    In the hands of George Bush, Dick Cheney and General Hayden, you're screwed, blued and tattooed.

    This is nothing more than a propaganda piece put out at this time because Bush is in danger of being impeached over the spying issue. That's the bottom line.

  • False Premise (Score:3, Insightful)

    by tom's a-cold ( 253195 ) on Wednesday January 25, 2006 @02:47PM (#14561066) Homepage
    There is no tradeoff between privacy and security, so there is no need to "balance" them. An individual is not secure if their privacy is being routinely violated.

    The tradeoff is between privacy and totalitarianism. Solutions that attempt to split the difference are not helpful.

  • "Any sound that Winston made, above the level of a very low whisper, would be picked up by [the telescreen], moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they
