Encryption Security

Battling Steganography 195

An anonymous reader submitted a fairly thin little story about a researcher who is Battling Steganography. I can certainly see the appeal of the study but it really seems like a needle in a haystack sort of project. And when you actually can detect one technique, new and better techniques will crop up and take their place.
  • ... I wanna start *using* steganography! Won't some enterprising Karma Whore throw us a couple links?
  • Wait a minute (Score:5, Insightful)

    by imAck ( 102644 ) on Thursday August 16, 2001 @12:08PM (#2111109) Homepage
    Was it just me, or did the article make it seem like anyone that would use steganography would be a criminal? Since when in a 'free' country should the ability to hide a message be of interest to the "legal community"?

    • Re:Wait a minute (Score:4, Insightful)

      by DeadVulcan ( 182139 ) <dead.vulcan@pob[ ]com ['ox.' in gap]> on Thursday August 16, 2001 @12:48PM (#2113599)

      Was it just me, or did the article make it seem like anyone that would use steganography would be a criminal?

      The article didn't say this at all. In fact, the types of criminal activity that were mentioned were "political and corporate espionage or illegal pornography."

      Talking on the phone is not criminal, but wiretaps are used all the time in fighting organized crime.

    • I got that feeling at first, too. Then I thought about it, and decided that the entire article had
      "give me a grant" written all over it. When you're doing grant writing, you want to make something sound as important or cool as you possibly can. Even if it means you have to play up the problem you're solving a bit.

      Of course, that's PR in general these days... I feel sort of dirty after I write things like that.
    • I think the legal community would be interested in anything that might help them find and interpret potential evidence. When evidence is properly confiscated, we now have the techniques to break locks on doors and safes, we now have the techniques to crack certain types of cryptography, and hopefully we'll soon have the techniques to FIND a stack of steganographically-hidden evidence. It's pretty relevant to our legal system.
    • Re:Wait a minute (Score:4, Insightful)

      by twitter ( 104583 ) on Thursday August 16, 2001 @01:56PM (#2135705) Homepage Journal
      You are right, the article did have that feeling.

      We might expect this of a promotional article. Breaking crypto to fight perverts sounds more exciting than studying patterns to detect private messages. Others have proposed better promotion, like making crypto stronger by breaking weak methods.

      A good analogy for fighting the underlying assumption of the negative promotion is clothing. The assumption is that only criminals have something to hide. Bull. Try working words like "naked" and "bare" into your thoughts. Examples: "What, are you still sending naked email?", "Are you foolish enough to trust bare telnet logins?". People will get the idea.

      Society does not work, and its individuals are debased, when privacy is eliminated. It's impossible to have frank discussions when you may be overheard by people who may misunderstand. It's impossible to invest or plan without privacy.

  • Stay with me, here...

    Let's say I wanted a message to be available to a wide number of people, hidden with steganography, and encoded as well. I pick an image, such as an X10 ad, that could be easily found from a "legit" source. I encode my message, then hide the encoded message in the least significant bit of the color for each pixel of the image - net effect, the ad looks just about the same, but there is data encoded in it (a rough sketch of this is at the end of this comment).

    If I knew messages were being passed this way, I might be able to get the message. First, I'd have to acquire the source image. Then, I would do my own diffs, and try to find the meaningful data. At that point, it's a decryption problem.

    But how do I detect the data hiding in the first place? I would have to detect that a stream of data is very similar to another stream of data, but with minor differences.

    Let's say I've solved that problem, and now have some signature, such that all identical data streams have the same signature, and very close streams have very close signatures. Then, I have to catalog data streams as they pass by, assign signatures, count instances of signatures, and call a hit when signatures are significantly close but not the same. A quick visual check can confirm the match.

    Back to my original thought - instead of a data stream representing an image, what if the data stream represented the subject line of an e-mail, or the e-mail itself? A central database could manage signatures, automatically reported by e-mail clients that generate the signatures. When I get a new e-mail, I can get the signature for the header, and send it to the database. It could then report "that might be spam", and I could delete it without downloading the whole message. I could also download the message, upload the signature, and the database could say "that's probably spam", and it could be deleted or moved before it shows up in my Inbox. With many people uploading signatures, the database could quickly generate the average signature and the variance of the signatures, with people double-checking "Yes, I consider this to be spam".

    A couple of benefits would be that, hopefully, the signature doesn't give much info about the text, so it would be safe to upload signatures for personal email. Also, it may be fairly easy to get enough responses to be statistically certain that email with a particular signature is spam, so that many would benefit from a randomly chosen few who choose to respond that an email is spam.

    Of course, it may be impossible to generate that signature, or the signature may be long enough to identify the text of messages. Still, I could see that as a benefit of this kind of research. I'd also like a way to auto-respond "You have been found guilty of forwarding hoax emails. Please stop and desist." to just about everything my family sends me...
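
    As promised above, here's a rough sketch of that LSB trick, assuming Pillow and numpy. The file names, the message, and the fixed-length extraction are placeholders of mine, and the cover has to be saved in a lossless format such as PNG for the bits to survive:

    import numpy as np
    from PIL import Image

    def embed(cover_path, message, out_path):
        # overwrite the least significant bit of each colour byte with message bits
        pixels = np.array(Image.open(cover_path).convert("RGB"))
        flat = pixels.reshape(-1)
        bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
        if bits.size > flat.size:
            raise ValueError("cover image too small for this message")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        Image.fromarray(flat.reshape(pixels.shape)).save(out_path)  # lossless output only

    def extract(stego_path, n_chars):
        # read the LSBs back out and repack them into bytes
        flat = np.array(Image.open(stego_path).convert("RGB")).reshape(-1)
        return np.packbits(flat[:n_chars * 8] & 1).tobytes().decode()

    embed("x10_ad.png", "meet at noon", "x10_ad_stego.png")
    print(extract("x10_ad_stego.png", len("meet at noon")))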

  • 1.) Usenet/slashdot post seems oddly coherent.
    2.) There's that certain, special _sumthin'_ about the fractal glint in Asia's lower lip....
    3.) Lip-reader of your acquaintance says, "While he's doing that to Anthony Perkins in that doctored photo, Gore appears to be saying, "Al, your Bates are belong to U.S.."
    4.) Snowcrashing.
  • by dingbat_hp ( 98241 ) on Thursday August 16, 2001 @12:07PM (#2114018) Homepage
    ... The secondary image, woven into the primary one, would not be possible to detect by peeling up one corner of the main image (as has been done here merely for illustrative purposes).

    Excuse me? Did I wander into The Onion [theonion.com] by mistake?


  • I like the way he claims a 90% success rate. Either the researcher is a moron or else the person writing the article has already beaten him there.

    What if there were three encrypted messages in each image he processed? Finding one is useless, because the sender could put an easy message in and two extra that won't get caught.

    Better yet: his algorithm could be giving him garbage hits and not be finding anything real. The pictures could be just pictures. Novel concept.

    *whew* Moron alert - eleventy three o'clock.
  • A large number of people in this discussion are entirely missing the point of what Farid does.

    Let's put it this way. If Farid alone can crack a variety of steganography, then so can the NSA or whoever it is that really wants to invade your privacy. If he was trying to crack RSA or DES or PDF's ROT13 encryption, he would be praised - do you really think that steganography is somehow special?

    So the article was rather uninformative. I've met Farid. He's a very cool guy. He's working against things like SDMI - which is a form of steganography. As part of a lecture he gave, he showed how to defeat various watermarking techniques for images (without getting arrested, even.)

    Consider that when you say "battling steganography is battling privacy! We must hate him!" you are using the same logic that put the DMCA in place. Congratulations.
  • by Fencepost ( 107992 ) on Thursday August 16, 2001 @12:52PM (#2115793) Journal
    I haven't actually done any digging on this, but I suspect that for almost any graphic image there are detectable patterns in the ordering of the lowest bits. There will of course be some files (particularly small ones) where there isn't enough information to identify patterns, and there will be others where the distribution truly is random, but that just means that identifying files with steganographically-encoded information won't be a 100% accurate process.

    That lack of certainty really isn't that big an issue, because with a good idea of what percentage of images are false positives it would be fairly simple to look for image sources where the percentage was well outside the norm.

    All of this would of course be very resource intensive and would require access to large amounts of data (Omnivore, anyone?) but it's far from outside the capabilities of most governments.

    Possibly also of interest is Benford's Law, which concerns the distribution of leading digits in real-world data - it turns out that in many areas it's very simple to tell real data from random data, because real data has some definite non-random properties.
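
    For what it's worth, here is a crude version of that lowest-bit check, assuming Pillow and numpy (the file name is a placeholder). It's the pair-of-values idea behind the classic chi-square attack: embedding random-looking bits in the LSBs tends to even out the counts of each pair of values 2k and 2k+1, so a suspiciously low score is the tell:

    import numpy as np
    from PIL import Image

    def lsb_pair_statistic(path):
        # histogram of grayscale values, split into (2k, 2k+1) pairs
        vals = np.array(Image.open(path).convert("L")).ravel()
        hist = np.bincount(vals, minlength=256).astype(float)
        even, odd = hist[0::2], hist[1::2]
        expected = (even + odd) / 2.0            # what heavy LSB embedding drives both toward
        mask = expected > 0
        return float(np.sum((even[mask] - expected[mask]) ** 2 / expected[mask]))

    print(lsb_pair_statistic("some_photo.png"))  # low value = pairs suspiciously equalized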

    • Nice idea, but it is easily thwarted.
      My friends and I generate every image with random trash in it (the output of /dev/random). We do this to EVERY image, and we generate several versions of each image with trash in it. We make a neat-o plugin for the GIMP that does this quietly, without the user even knowing, and we do the same for Photoshop. Over a year's time, 5-10 people could spread hundreds of thousands of false-positive images onto the net. Now you send a message, a real one. There is no way to tell whether it is a decoy or the real thing.

      And this is where prof-bean's idea falls on its face, as anyone using this system for real work is doing what I just mentioned, or something that generates massive amounts of decoys in a more efficient manner. (Hell, the decoys now become perfect carriers too, especially if you generated several versions of the decoys with different junk in them.)

      It's simple to defeat steganography detection: you saturate the detector to the point where the real items get through.
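
      The plugin itself is left as an exercise, but a batch-processing sketch of the same decoy idea is easy enough, assuming Pillow and numpy (the directory names are placeholders). It overwrites the LSB plane of every lossless image with random bits, so a statistical detector should flag all of them whether or not they carry anything:

      import os
      import numpy as np
      from PIL import Image

      def lsb_noise(src_dir, dst_dir):
          # fill the least significant bit of every colour byte with random trash
          os.makedirs(dst_dir, exist_ok=True)
          rng = np.random.default_rng()
          for name in os.listdir(src_dir):
              if not name.lower().endswith((".png", ".bmp")):
                  continue                     # lossy formats would not keep the bits anyway
              pixels = np.array(Image.open(os.path.join(src_dir, name)).convert("RGB"))
              noise = rng.integers(0, 2, size=pixels.shape, dtype=np.uint8)
              Image.fromarray((pixels & 0xFE) | noise).save(os.path.join(dst_dir, name))

      lsb_noise("originals", "decoys")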
      • It's simple to defeat steganography detection: you saturate the detector to the point where the real items get through.

        Only, if that information isn't detectable, you may not want it there for other reasons. For instance, MP3 and Ogg try to drop information that listeners won't detect, for reasons of efficient representation. I expect these types of lossy methods will get better (i.e., less undetectable information that can safely be dropped) over time, particularly where the original information was analog.

        Over a year's time, 5-10 people could spread hundreds of thousands of false-positive images onto the net. Now you send a message, a real one. There is no way to tell whether it is a decoy or the real thing.

        Something along the lines of Ron Rivest's Chaffing and Winnowing technique? http://theory.lcs.mit.edu/~rivest/chaffing.txt [mit.edu]

  • Anyone wanna bet that Farid is the AC who submitted the story? I took a programming class from him a few years ago...he seemed pretty full of himself then, too.
  • pointless (Score:2, Insightful)

    by mj6798 ( 514047 )
    Good steganography is essentially the same as adding random noise to an image. You can structure the noise any way you like. There are lots of images that plausibly contain lots of noise, for example images taken in low light and images scanned from film. As long as you don't insist on a very efficient steganographic embedding, there are undetectable steganographic methods. Farid's research is pointless, and it is scary to think that courts may start relying on it.
  • Hiding the message would be a form of encryption protected by that dastardly DMCA....want my hidden message? I'll sue you!
  • Try the following: 1. Go to the Dartmouth home page, 2. Search for Farid, 3. Click on Farid's link, 4. Click on the address for his home page. Obviously this 404 error page must have a hidden message. Results of wavelet compression analysis will be posted later.
  • by gehirntot ( 133829 ) on Thursday August 16, 2001 @01:19PM (#2120056)
    I am a bit surprised. I released stegdetect [outguess.org] in early February this year. It automatically detects steganographic content in images. It can even determine which program was used to embed hidden content.

    You might also want to check the techreports [umich.edu] that I published about my research.

    At HAL 2001, I presented on Detecting Steganographic Content on the Internet [umich.edu]. You might like that.

    Dartmouth certainly seems to know how to do PR. I would just like to know where their publications are.

    • Actually, the forum this story was extracted from is pretty much geared towards only generating PR, and not scientific exchange. Attacking Farid and/or Dartmouth for this is silly... this is how institutions generate attention and money for grants.

      But it is especially silly since he does such a bangup job of putting his technical work on-line:

      Farid's Publications [dartmouth.edu]

  • by (void*) ( 113680 ) on Thursday August 16, 2001 @12:37PM (#2120095)
    Suppose one gets caught with such an image. According to him, the technique has a 90% chance of success. So what about the 10% where one has no message encoded in an image but triggers the alarms anyway? If you get caught by the FBI, what can you say?

    You might say that 90% is pretty significant. But considering how many images are actually out there with no steganographic message at all, I think you'll actually end up persecuting more innocent people.

    I just hope more evidence than this is required for a warrant to be issued.

    • "So what about the 10%, wherein, one has no message encoded in an image, but triggers tha alarms anyway? If you get caught by the FBI, what can you say?"

      Last I heard, the FBI doesn't go around busting people for passing around what might be secret messages. I know there's been complaining about a general erosion of rights and privacy in the US, but I doubt it's gotten that bad.

      Suppose one gets caught with such an image. According to him, the technique has a 90% chance of success. So what about the 10% where one has no message encoded in an image but triggers the alarms anyway?

      The 10% miss rate in and of itself should still represent plausible deniability. If you take standard legal practices, a 90% probability of a "match" is still weak enough that it would require other supporting evidence, circumstantial or otherwise, to present a reasonable case.

      If you get caught by the FBI, what can you say?

      Caught how? It's not illegal to embed hidden messages in images, just as it's not illegal to hide a plot in pornography - though both are equally unlikely.

      I just hope more evidence than this is required for a warrant to be issued.

      IANAL, but a 90% probability that you're engaging in a perfectly legal activity doesn't seem, on its face, to meet the burden of probable cause necessary to perform a legal search and seizure.

    • by Anonymous Coward
      A 10% miss rate doesn't mean that there is also a 10% false alarm rate.
      • Sorry, my bad. The article does claim it is 90% accurate.

        But my observation stands. Since the population of non-encoded images is presumably very high, the vast majority of alarms will still be false alarms.
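
        To put rough numbers on that (the one-in-ten-thousand prevalence is invented purely for illustration, and 90% is read as both the hit rate and the accuracy on clean images):

        prevalence  = 1 / 10_000  # assumed fraction of images that really carry a message
        hit_rate    = 0.90        # P(flagged | message present)
        false_alarm = 0.10        # P(flagged | no message)

        p_flagged = hit_rate * prevalence + false_alarm * (1 - prevalence)
        print(hit_rate * prevalence / p_flagged)  # P(message | flagged) is roughly 0.0009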

  • Impossibility (Score:4, Informative)

    by zpengo ( 99887 ) on Thursday August 16, 2001 @12:04PM (#2123204) Homepage
    Steganography is nothing new. People have been hiding secret messages in innocuous objects since time began. Naturally, various people want to prevent this, but the method's very nature makes it almost impossible to simply track.
    • An Analogy (Score:3, Interesting)

      by underwhelm ( 53409 )
      Imagine trying to decipher the hidden messages in "The 5000 fingers of Dr. T." [a-movie-to-see.com]. It is a movie and as such contains the symbolism and iconography and messages of many individuals. Some of them are apparent, some of them covert, and some of them downright indecipherable.

      Also, think about the Blade Runner/Ridley Scott "Is Deckard a replicant" business that lasted, well, right up until he told the world the answer. It is that sort of interpretation that someone hoping to decipher steganography would have to perfect. It's not just stuff like: Hi Everyone Likes Punch!

      The only way to get messages out of such texts is intimate knowledge of the author(s) or intended recipients of the hidden meanings. By asking them, or sodium pentothal, or the NSA's computer simulation of everybody's brain.

      I'm no cryptographer, but the most reliable and cost effective way to discover a secret is likely to investigate the people that know the secret, rather than try to divine meaning from a text that came into your hands.
  • It seems like Dr. Farid's research would have wide application in detecting and "battling" related technologies like digital watermarking in sounds and images. I wonder if the media companies will try to use the DMCA to bully him out of his research, as we've seen in similar cases.
  • Not Quite Useless (Score:3, Insightful)

    by lblack ( 124294 ) on Thursday August 16, 2001 @12:18PM (#2125289)
    While it's true that human beings can interpret images to mean something that a machine could never pick up on, that's not the thrust of the research being done here.

    He is doing research into a very particular kind of steganography, whereby messages are concealed within an image by slightly altering its least significant bits.

    When you encode information in this way, somebody knowing how to extract it can pull out a message which is not subjective (as in the example of interpreted images given by another poster), but rather is very concrete.

    There is some evidence that this form of encoding has been used to communicate information throughout terrorist cells.

    What the researcher is doing is developing a method to detect when the LSBs in an image have been manipulated slightly. He is not trying to decode the message, but only to flag particular images as being suspicious.

    Decoding would be a matter for someone completely different -- like the FBI, for instance.

    His method does have applications, and if it is through alteration of LSBs that a message is embedded in an image, it will apparently detect it 90% of the time.

    This is a vast improvement over any existing methods I know of for detecting LSB manipulation.

    So he's not quite looking for a needle in a haystack. He's examining millions of haystacks, and pinpointing the ones that probably *do* have needles in them.

    Quite a large difference, really.

    -l
  • battling privacy? (Score:1, Insightful)

    by Anonymous Coward
    So is this guy also battling privacy?

    I don't see how anyone with a conscience could decide to intentionally try to destroy methods with which people can protect their privacy.

    • Re:battling privacy? (Score:2, Interesting)

      by invenustus ( 56481 )
      If I'm feeding a troll, I apologize, but....
      I don't see how anyone with a conscience could decide to intentionally try to destroy methods with which people can protect their privacy.
      That's the paradox that's inherent in almost all of the cryptology field. If you want to make cryptography better, trying to break cryptography is a great way to go about it. It's better if the good guys do it first. If anyone ever figures out a polynomial-time algorithm to factor a big number, it's going to fsck up a whole lot of the world's cryptosystems, but whoever figures it out is going to be a well-known name in the crypto community.

      The same applies to steganography, IMHO. SOMEONE has to break it - it might as well be me.

    • This is more about the perception of privacy.

      If I were using a technique to protect my privacy that could be cracked, I would want to know about it and it takes this kind of research to find out.

      Having said that, this guy comes off as somewhat of a tool in this article. Not all people who wish to protect their privacy are criminals. Moreover, law enforcement does not necessarily represent the side of good (and corporations almost never do). This is also a method used by people to protect themselves from the abuses of both.

      But regardless of the motives of the research, this knowledge will ultimately lead to more privacy through innovation. And if this guy can crack it, who's to say the FBI hasn't been doing it for years?

    • Wake up. Unless you are happy to settle permanently for the present steganographic techniques, you need people like this dude to figure out how they can be defeated. Imagine if you had said the same thing when "ROT-13" was invented.
  • Remember, a good encryption algorithm will render its output indistinguishable from random bits. So, his techniques will work only as long as the data is not encrypted. Once the baddies(?) start encrypting data and putting it in there, he won't be able to detect the presence using statistical techniques.

    I know, there's the problem of key distribution. But you could include the key itself as plain text in the first x number of bytes of your payload, followed by the actual data encrypted using DES/AES/TwoFish. Unless the decoder knows the length and the location of the key (something you can decide on beforehand), s/he won't be able to decode it.
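
    A minimal sketch of the encrypt-before-embedding step, assuming the Python cryptography package and a pre-shared key; the in-band key variant described above would just prepend the key bytes at the agreed offset, which hides the key only by obscurity:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key   = os.urandom(32)   # pre-shared 256-bit key (placeholder)
    nonce = os.urandom(16)   # carried with the payload; it does not need to be secret

    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    payload = nonce + encryptor.update(b"meet at noon") + encryptor.finalize()

    # `payload` is what goes into the carrier's least significant bits; to a
    # statistical test its bits should look like coin flips.
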
  • If you strongly encrypted the data before placing it into a data stream, then you have made the task all that much harder, because there is now no sensible data to extract, just seemingly random noise. Just a thought...
  • > This process led to the creation of a computer
    > program that can determine the likelihood that
    > a secret message has been hidden within an
    > image.

    So he can show that something is in there? That's not as big a deal as the article makes it out to be...half the time, you'll know the data has encrypted information embedded in it. The hard part is getting the info OUT OF the data, which the article doesn't really address.
  • by crisco ( 4669 ) on Thursday August 16, 2001 @12:56PM (#2128058) Homepage
    The reason we have effective encryption (when it is implemented right) available to use is because of the large amount of research that has gone into breaking encryption. Because of the community of mathematicians and others actively trying to break weak algorithms we know the strengths and weaknesses of various ways to encrypt data.

    Now we have more people looking at steganography. This can only make it more effective. Sure, the methods we have now might be broken, but what about the next ones, the ones that don't show up in the statistical analysis that he appears to be using?

  • It occurs to me that all lossy compression involves quantization.

    If you consider the case of rounding 0.5 to an integer, it's clear that either possible choice, 1 or 0, is equally good, and in fact the best answer in that case is usually to pick one value at random so as not to add a consistent bias. Therefore, in these rare cases the resulting bits must, by definition, be completely orthogonal to any properties of the resulting image - you could change them all you like.

    A steganography routine that did its own compression and only changed these bits would, by definition, be undetectable.

    So, with some fairly heavy constraints, undetectable steganography is inherently possible.

    There must be various ways of making steganography routines that use this property, even routines that don't do the original compression, by finding LSBs that by some measure are really good candidates for having originally been rounded from near 0.5, and only touching those.

    What cha all think?
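
    A toy sketch of that rounding-tie idea, using numpy; the float array just stands in for whatever a lossy codec is about to quantize, and the tolerance is an arbitrary choice of mine:

    import numpy as np

    rng = np.random.default_rng(0)
    coeffs = rng.uniform(0.0, 255.0, size=4096)            # pretend pre-quantization values
    frac = coeffs - np.floor(coeffs)
    carriers = np.flatnonzero(np.abs(frac - 0.5) < 0.01)   # near-ties: either rounding is fine

    payload_bits = rng.integers(0, 2, size=carriers.size)  # the bits to hide
    quantized = np.rint(coeffs).astype(np.int64)
    # at each near-tie, rounding down encodes a 0 and rounding up encodes a 1
    quantized[carriers] = np.floor(coeffs[carriers]).astype(np.int64) + payload_bits

    # the catch: the recipient needs the same carrier positions, and without the
    # original floats they cannot recompute them - that's the hard part.
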
  • ...new and better techniques will crop up and take their place.
    Two responses to this observation come to mind: "Duh." And, "So?" Obviously, once an encryption scheme is cracked, people will stop using that method and try to find a new method. But this will only happen after it is known that the encryption is being broken. Thus, there is a window of time, however short, during which the encryption cracker will be able to intercept and read encrypted messages as plain text. Therefore, cracking encryption is a useful enterprise. It's stupid to act like it doesn't make any sense to defeat one encryption scheme just because another one will eventually replace it.
  • Wasn't that what the kids in Along Came a Spider used to chat in class? Some kind of over-simplified version of it, anyways ^_^

  • ...if his research leads to easy ways to decode and search image files for hidden messages.

    Can you imagine using his techniques to search through Google's image archives, or perhaps a gnutella network just to see what is sitting out there?

    This sounds like it could uncover yet another seedy underbelly of world culture.

    I imagine there could potentially be millions of hidden messages out there that no one knows about.

      > I imagine there could potentially be millions of hidden messages out there that no one knows about.

      ...but HipCrime would still be an idiot trying to do a DOS attack on USENET through open SOCKS proxies ;)

  • some thoughts (Score:3, Interesting)

    by Proud Geek ( 260376 ) on Thursday August 16, 2001 @12:18PM (#2131394) Homepage Journal
    First, Taco's comment about "new and better techniques" is ill-informed. This is an information-theoretic method, where the inclusion of hidden information alters the nature of the information in the original document. What this technique does not give you is any hint on how to extract the hidden information.

    Second, I'm not sure how to react to this. I don't use steganography to hide information, nor do I encrypt my email normally. I guess it's good to know if the techniques used to do this are detectable or breakable, but if it was actually used on a large scale you can bet I'd be screaming, "Big Brother!!!"

  • The fact that an altered image can be detected via a mathematical function is true, but can it really be detected without having the source image to begin with? What if I take a picture of a random scene and then stuff an encrypted message into the image? Voila, undetectable. Randomness makes the perfect concealment.

    I can see detecting the output of some of the crude software packages out there, but not the better ones that make sure the applied file is expanded to the size of the image and reversed.
    • I'd assume that they're working with photographs and photograph-like images (as opposed to stick-man drawings or something like that). In that case, the function could look at certain things that would appear in a photograph - colour borders, gradient, etc. If the picture consistently doesn't show what's expected, then that could be used to show that there's been some sort of change made to it. I don't know much about graphics analysis, so I couldn't say for sure, but I can see this working.
    • What if I take a picture of a random scene and then stuff an encrypted message into the image? Voila, undetectable.
      Nope, you're missing the point. All normal images have common mathematical characteristics. I.e., a picture I take with my digital camera and one that you scan with a scanner will exhibit common mathematical characteristics, differing from one that has had some sort of steganography applied to it. This way, if you intercept a random image and run the mathematical analysis on it, you can tell whether someone has fiddled with the bits. I don't know that this helps you determine what did the fiddling, but it would just be the first step in decrypting the hidden message. Although the article doesn't say specifically, I would think you could even detect random bit twiddling.
  • Mix steganography with good encryption and/or coding and it seems impossible unless you know beforehand what the unadulterated image is on a bit-for-bit basis.

    For instance, two different JPEG encoders, both at the same quality level, will produce subtly different encodings of the same source image. If you take these two images, calculate the difference at each decoded pixel, and amplify the difference (so that you can easily detect minute intensity differences), you'll see the signature of the differences between the encoding engines.

    Now suppose I encode a message in the image (a 1-megapixel image, small by today's standards, can carry a 1-megabit steganographic message assuming only a 1-bit change in colour). If you could get the source image and do the difference calculation described above, you would see the pattern representing the message.

    If you pick the wrong source image (it LOOKS identical but was compressed slightly differently), you'll only reveal a combination of the signature and the message.

    Do whatever statistical examination of this noisy signature you want; I don't see how you can determine that the image concealed data. Well, unless you do an impressively poor job of concealing the data. Encoding your message in a pure white GIF, JPG or PNG would be a bad idea, for instance.
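
    A sketch of that diff-and-amplify check, assuming Pillow and numpy; the two file names are placeholders for two same-sized JPEG encodings of the same scene:

    import numpy as np
    from PIL import Image

    a = np.array(Image.open("scene_encoder_a.jpg").convert("RGB"), dtype=np.int16)
    b = np.array(Image.open("scene_encoder_b.jpg").convert("RGB"), dtype=np.int16)

    diff = np.abs(a - b)                    # per-pixel difference of the decoded images
    amplified = np.clip(diff * 32, 0, 255)  # exaggerate minute intensity differences
    Image.fromarray(amplified.astype(np.uint8)).save("difference_x32.png")

    # against the true source you would see the embedded pattern; against a merely
    # similar source you see that pattern mixed with the two encoders' signatures.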

  • Seems to me that a watermark is a form of steganography. I wonder if these techniques would work for watermark detection?
  • niels provos (openbsd and openssh developer) has a stego detector based on similar principles (i.e., look for statistical anomalies in jpeg files).

    in fact he is presenting a paper on the subject at the usenix security conference tomorrow.

    unlike the dartmouth folks, who apparently think press reports are the proper medium for scientific interchange, provos makes his results publicly available; see

    http://www.citi.umich.edu/techreports/

    reports 01-1 and 01-4.

    nobody
  • Comment removed based on user account deletion
  • And when you actually can detect one technique, new and better techniques will crop up and take their place.

    That's like saying 'if somebody can break 56-bit keys, you can just increase the key length'. In other words, it's really not that simple. Firstly, you're assuming that there will always be new techniques. Secondly, you're suggesting that these new techniques will always be harder to detect than previous techniques. Thirdly, you're assuming the licensing model of such techniques will allow them to take the place of existing techniques.

    In short, until you know what you're talking about, or are able to engage your brain, please shut up with your opinion, and just deliver articles and facts. Thanks.
  • by Bonker ( 243350 ) on Thursday August 16, 2001 @12:10PM (#2132346)
    The article stated that the guy used an algorithm to detect statistical variations and predict whether an image had steganographically hidden data 90% of the time.

    How about a GIMP or Photoshop plugin to randomly insert junk data in any JPEG saved in order to make this technique useless? It'd be fun to see the NSA sit and fret over an image that apparently had a list of Warez traders and DMCA violators but instead contained the lyrics to 'Penny Lane'.

    Better yet, how about an Apache module that does this same thing to every JPG it serves?

    The point is, that as soon as it becomes common procedure to intercept images to check for steganography, those who use steganography will switch methods. I bet PGP data encoded in a JPG is a lot harder to detect, and infinitely harder to extract.

    • Hey, remember the site on the net that had the lyrics to many songs that got shut down? Embedding the lyrics to Penny Lane is illegal :-(

      Robert
    • How about a GIMP or Photoshop plugin to randomly insert junk data in any JPEG saved in order to make this technique useless?

      You can't do that. JPEG/DCT (as is the norm for files adhering to JFIF) is a lossy compression scheme, which means LSBs are lost in the process.

      This is one reason why I think it is not practical to embed messages in image files posted over the Internet. De-facto standards are JPEG and GIF, and although LZW is lossless, you don't want to mess with LSBs in a 256-color paletted image (except if your "color" palette is an ordered grayscale palette). A TIFF file with either grayscale, RGB or CMY/CMYK data would do the trick, but who sends TIFFs? If someone already has an eye on you, that would definitely look suspicious.

    • An Apache module which automatically inserted noise into JPEG images to simulate steganographically hidden messages is a good idea...

      The problem is that it would corrupt any real steganographically hidden messages in the images, hence rendering images a bit of an unreliable mechanism for storing hidden text... ;)

      • No, this would actually be really cool. Make an Apache module which automatically inserts something steganographically into every JPG it serves. Some people put encrypted data into the images, and others just direct it to read from randomly encrypted gibberish. Then the government has to deal with lots of script kiddies who think they are cool by embedding Britney Spears MP3s into the images from their webpages.
      • well.. if the junk is properly steganographed so you can retrieve that junk (although penny lane isn't junk, good song :) then you can use the lyrics and the knowledge of the steganographic technique to restore the picture to its former state, at which point you can run the steganography detector again and get the real secret...

        of course it gets more cunning when the data you remove steganographically is itself an image with steganographed data in it, and that data is...

        and echelon has a machine to do all this but completely missed your bombing plans, which were the subject of the picture itself and not the steganographed data... hiding the wood in the trees, as it were.

        dave
      • An Apache module that automatically inserts noise in a jpeg is a BAD idea. As soon as it exists, the feds will pass a law stating that ALL jpgs being transmitted on the internet have to go thru such a filter...
  • http://members.tripod.com/steganography/stego.html [tripod.com]
    is a great place and has a software archive.
  • How could something like this be held up as any sort of evidence? From what I can tell, what this guy is trying to do is check whether there may be hidden data by looking at compression rates and randomness comparisons. But what if the photograph or audio file is inherently noisy? Or what if you use a poor implementation of the compression algorithm?

    With standard encryption, if you are in court you can be ordered to decrypt it, but if there is a chance that there is nothing there, they can't force you to do anything.

    This just seems to be a waste of time to me.

  • by graybeard ( 114823 ) on Thursday August 16, 2001 @12:12PM (#2133676)
    u cn b a stngrfr!
    • Re:F u cn rd ths ... (Score:5, Interesting)

      by dschuetz ( 10924 ) <davidNO@SPAMdasnet.org> on Thursday August 16, 2001 @12:55PM (#2121039)
      If steganography can be made "turnkey", it'll work
      for most of today's privacy requirements.

      You might think that it'd be easy to detect,
      or simple to prevent, but that's simply not true.
      Unless someone lists all the ways in which one

      can hide information, and a fantastically fast
      approach to testing any given communication on the
      net against those techniques. Otherwise, to

      read a steganographically-encoded message,
      each recipient will need to figure out which of
      all the messages intercepted even includes the
      data you're looking for, and what was used in

      this particular instance. Hell, one might even
      have two or more different techniques applied
      in a single message. Like this message does.
      Sort of.

      ....

      • Maybe a subliminal technique (executed right when explicit and weirdly aberrated information triggers your opponent's unencrypting reader) can obscurely manipulate mental awareness, negating data.
      • Very clever... (Score:2, Insightful)

        by Anonymous Coward
        I must commend you. For those not talented enough (and those who wish to not take the time) to find the hidden message"s":

        1) Take the first letter of each line.

        2) Take the first word of each paragraph.

        • Thanks for the compliment. It was a lot harder than I'd expected it to be.

          Certainly there are tools out there that put together random, sensical-looking text with specific patterns in word usage, punctuation, spacing, whatever, to encode messages, but to actually tweak a message with intrinsic meaning in itself is a bit more difficult.....
    • Sorry. Not steganography. That's compression. Steganography adds data. Compression removes it.
  • by Anonymous Coward
    Name your attachment letter.doc.pif and send it with the message "I send you this file in order to have your advice"
  • The article sure does make it seem like steganography is the work of the devil. But watermarking documents and sound files is endorsed by such fine members of the establishment as the RIAA and SDMI. So is steganography evil or good? It's just neutral, despite what the article says.

    This guy should still be afraid of violating the DMCA. If he tries to detect steganographic content in a sound file, he might run afoul of the RIAA. He shouldn't even think about publishing his research.
  • How does open-sourcing a steganographic technique impact its usefulness? I suppose it would depend on the nature of the technique. For example, it seems open-source public-key encryption techniques don't compromise their usefulness simply by sharing their algorithms/source. Is this equally possible with steganography or must the methods remain more secretive?
  • by Contact ( 109819 ) on Thursday August 16, 2001 @12:13PM (#2134214)
    Disclaimer: I'm not an encryption expert by any stretch of the imagination...

    This is an interesting idea, but surely any good encryption produces an output which is indistinguishable from random noise. So, how can the algorithms mentioned in the article (which is interesting, but rather short on facts...) distinguish between the noise added by a steganographically embedded encrypted message and the noise caused by a slightly underspecced A to D converter?

    I'm honestly curious... has anyone got any links to a more detailed report on this?

    • by bartle ( 447377 ) on Thursday August 16, 2001 @12:34PM (#2119591) Homepage

      So, how can the algorithms mentioned in the article (which is interesting, but rather short on facts...) distinguish between the noise added by a steganographically embedded encrypted message and the noise caused by a slightly underspecced A to D converter?

      You're right, there isn't too much of a difference between random noise and an encrypted communication. If you had a pure digital stream that had just been converted from analog, you could stick data in the least significant bits and no one would be the wiser. For example, a CD is just a sequence of 16-bit samples, 44,100 of them per second per channel; you could just replace the least significant bit in each sample with bits from your hidden message and it would be indistinguishable from random noise.

      The problem arises when you try to compress digital information. These compression algorithms use the most efficient way to represent the data that they can find and discard the least significant data, so they would completely destroy the aforementioned hidden message. To hide data in a compressed file you need to play with how the compression mechanism stores the data, and the resulting file is most probably not going to be optimally compressed when you're done. What this guy is doing is looking at how the information was compressed, extracting the overlying data that was being stored, and checking whether the compression was indeed optimal. If there are any odd quirks in the compressed data, or it doesn't look like the compression was optimal, it may be because data is hidden inside.

      I hope this is a good enough explanation. I'm short on the examples but the underlying ideas are pretty basic.
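
      For a concrete version of the CD example, here is a sketch for an uncompressed WAV, using numpy and the standard-library wave module. The file name is a placeholder, 16-bit PCM is assumed, and any lossy re-encode (MP3, Ogg) afterwards would of course wipe the hidden bits out:

      import wave
      import numpy as np

      message_bits = np.unpackbits(np.frombuffer(b"meet at noon", dtype=np.uint8))

      with wave.open("cover.wav", "rb") as w:          # assumes 16-bit PCM samples
          params = w.getparams()
          samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16).copy()

      assert samples.size >= message_bits.size, "cover too short for this message"
      samples[:message_bits.size] &= ~np.int16(1)                    # clear each LSB...
      samples[:message_bits.size] |= message_bits.astype(np.int16)   # ...and write ours

      with wave.open("cover_stego.wav", "wb") as w:
          w.setparams(params)
          w.writeframes(samples.tobytes())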

      • well, only lossy compression chucks data. if you used gzip to compress an executable file then it had better come out the other end looking identical or someone will be annoyed. now if someone mp3'd that audio then fair enough. but you shouldn't generalise that all audio compression is lossy.

        dave

      • Here's an interesting article that mentions some steganographic pictures hidden on some ebay auctions! Bin Laden at work? ;-)

        NSA, Pentagon, Police Fund Research Into Steganography [info-sec.com]
    • Let's say you take a picture with a very high quality digital camera and save the picture as an uncompressed BMP. When that file is converted into a .jpg there are specific patterns in the file that show that it was compressed as a JPG. Colors are related to the colors next to them, and you end up with odd compression fragments when the file is decompressed. If a coded message is inserted into the JPG it will alter those compression patterns. The article talks about altering the least significant bit of color; in the JPG algorithm a small change like that would have drastic effects on how the image was compressed. By analyzing those patterns they can tell if something odd was inserted into the file. They can't tell what it was, but they can tell that the picture was altered in some way. At least that's what I interpreted from the article; as always, I could be wrong.
  • Resource Intensive (Score:3, Interesting)

    by Gregoyle ( 122532 ) on Thursday August 16, 2001 @12:13PM (#2137399)
    I agree with the "needle in a haystack" idea. It doesn't seem like this technique would be practical given the relation between bandwidth and image size.

    Given a certain state of network bandwidth, the quality of images transferred over the network is likely to increase as the ability to transmit that data increases. This means that anyone trying a large-scale data mining for steganographic data, for example in a Carnivore-type application, would need to have many times the bandwidth of ALL the senders/receivers in order to analyze that much data.

    That would make it so the only real application of this method would be for people you already suspect of sending steganographic data. You could direct the search toward them. However, then it is still trial and error to find which steganographic protocol they used, etc., and you're back to square one.

    Maybe if the steganographic checking system was actually *integrated* into the Carnivore system you could get somewhere. It might be a good way to search for messages that were "suspicious".

    It is interesting, though, that this method is possible without knowing the individual steganographic protocols. It just seems that it would be too resource-intensive to deploy on a wide scale, and a wide scale is the only place it would be really more useful than trial and error.

  • What it boils down to is this:

    The more the corporations, and their lackeys in government, restrict freedom, the more determined those who want to preserve it will become, and the less effective those efforts at restriction will be.

    For one thing, it's a challenge, and nothing inspires great accomplishments from hackers more than waving a red flag.
  • I suggest that we flood the net with documents containing hidden bogus messages. Maybe an innocuous worm or virus would do the trick. It could seek out audio and image files and insert random messages. That should keep the spying computers of the government and other freedom hating organizations busy.

    But wait a minute, seeing they can enact freedom squashing laws like the DMCA with impunity, what's to keep them from making steganography illegal? Resist Big Brother. Demand freedom always!
  • There's that steganography tool, Outguess [outguess.org], that claims it can hide info in a pic without changing the pic's statistical properties (entropy et al, I surmise). I wonder if it's Outguess that makes false (or misinformed) claims, or if Prof. Farid's research on statistical analysis is already out of date...

    Personally, no matter what, I wish Prof. Farid a lot of luck. His work might be what will save our collective ass from SDMI-like schemes down the road.