How Mailinator Compresses Its Email Stream By 90%
An anonymous reader writes "Paul Tyma, creator of Mailinator, writes about a greedy algorithm he used to analyze the huge amount of email Mailinator receives and the ways he found to reduce its memory footprint by 90%. Quoting: 'I grabbed a few hundred megs of the Mailinator stream and ran it through several compressors. Mostly just stuff I had on hand: 7z, bzip, gzip, etc. Venerable zip reduced the file by 63%. Not bad. Then I tried the LZMA/2 algorithm (7z), which got it down by 85%! Well. OK! Article is over! Everyone out! 85% is good enough. Actually — there were two problems with that result. One was that LZMA, like many compression algorithms, builds its dictionary based on a fixed dataset. As it compresses, it builds a dictionary of common sequences, improves it, and uses that dictionary to compress everything thereafter. That works great on static files — but Mailinator is not a static file. It's a big, honking, several-gigabyte cache of ever-changing email. If I compressed a million emails, and then some user wanted to read email #502,922 — I'd have to "seek" through the preceding half-million or so to build the dictionary in order to decompress it. That's probably not feasible.'"
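To make the seek problem concrete, here is a minimal Python sketch (not from TFA) of a "solid" LZMA stream, where reading one email means decompressing everything before it:

import lzma

# A minimal sketch: many emails compressed as one continuous ("solid")
# LZMA stream. The decoder's state depends on the whole prefix, so
# reading email #700 means decompressing emails 0..699 first.
emails = [f"Subject: welcome #{i}\n\nplease confirm your account\n".encode()
          for i in range(1000)]
lengths = [len(e) for e in emails]

comp = lzma.LZMACompressor()
stream = b"".join(comp.compress(e) for e in emails) + comp.flush()

decomp = lzma.LZMADecompressor()
plain = decomp.decompress(stream)               # the entire prefix comes back
offset = sum(lengths[:700])
email_700 = plain[offset:offset + lengths[700]]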
Really Dumb (Score:5, Funny)
Is how I feel after reading the referenced article.
Re: (Score:1, Flamebait)
Re:Really Dumb (Score:4, Insightful)
Alright, I apologize. I was in the wrong. It also came off significantly more sarcastic than I meant it to. The point that I tried (and failed) to make was that there really is nothing that should make anyone feel dumb; it's really just a lack of learning that can be fixed. Thank you for calling me out.
Easy compression: (Score:3)
Just code "Prince of Nigeria" for 1 bit and you've got (17*8==136):1 compression. Continue with that line of thinking... "expand your manhood", "Pass this along to a friend", "Dear beloved in Christ", etc.
Related anecdote: Way back when, as relatively innocent SW listeners, some friends and I thought it would be awesome to listen in on phone calls. They were all over; radiotelephone, on C-band satellite, etc. You just had to figure out where they were. Well, after about an hour of actual listening, we deter
Re: (Score:2)
I used to think that the difference between the bright ones and the not so bright was really just education. That intelligence was really a measure of curiosity and drive to learn. Attempting to teach people material has cured me of this fantasy.
For every person who just needs education, or needs the material presented to them in another way that clicks for them, there are thousands who simply don't and won't get it no matter how it is presented.
Not that I'm saying the fp is in one group or the other.
Re: (Score:1)
Yep... Compression algorithm designed with knowledge of what it's compressing does better than compression algorithm without... Also, the Pope is Catholic.
Re: (Score:3)
I don't know that I understand it entirely, having only read it this fast, but basically no. He used algorithms stored into arrays and some database engineering voodoo.
Re: (Score:2)
Close, actually. Very close. They split the headers, it's true, but he compresses the emails together so that emails that are exactly (or almost exactly) the same ("get viagra now!" or a newsletter) don't have to be stored in different places in memory. Only large emails get LZMA (much better than bzip fyi).
Re:You cut off at the good part. (Score:5, Informative)
Actually there is even more to it than that.
Re: (Score:3)
Actually, the core of his compression scheme seems to be constructing an LZW dictionary but using line patterns instead of bits or letters. The reason it works is that he jumps ahead of all those arrangements that would otherwise have been built up letter by letter ("a", "an", "and", "and ", "and h", etc.), which is what makes LZW and its variants slow.
It was clever, but any Theory of Information course should tell you that choosing your "default" symbols correctly is very important. He did just that (:
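For the curious, a rough Python sketch of LZW run over a line alphabet rather than bytes (an illustration of the idea, not Mailinator's actual code):

# LZW where the alphabet is whole lines, so repeated multi-line patterns
# collapse into single codes.
def lzw_compress_lines(lines):
    # Seed the dictionary with every distinct line: the "default symbols".
    dictionary = {}
    for line in lines:
        if (line,) not in dictionary:
            dictionary[(line,)] = len(dictionary)
    codes = []
    w = ()
    for line in lines:
        wl = w + (line,)
        if wl in dictionary:
            w = wl
        else:
            codes.append(dictionary[w])
            dictionary[wl] = len(dictionary)
            w = (line,)
    if w:
        codes.append(dictionary[w])
    return codes, dictionary   # a real system would also transmit the seed alphabet

codes, _ = lzw_compress_lines(
    "Dear user\nClick here\nThanks\nDear user\nClick here".splitlines())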
Re: (Score:2)
Only large emails get LZMA (much better than bzip fyi).
For text Bzip2 is actually quite good. It is also substantially faster than LZMA, which means he may have been able to hit his 10MB/s mark and compress everything. Further, Bzip2 actually operates in blocks (max 900kb) using up to 6 dictionaries. I'd actually assume pretty much all compression algorithms at least support a mode amenable to streaming, if it's not baked in from the get go. In general, more dictionaries are actually better, if you can get away with the overhead. A super giant dictionary, for g
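To illustrate the block idea (in the spirit of bzip2's independent blocks, not its real container format), a small Python sketch that compresses emails in groups so any single group can be decoded without touching the rest:

import bz2

BLOCK_TARGET = 900 * 1024          # roughly bzip2's block size

def pack_blocks(emails):
    # emails: list of bytes, assumed free of NUL bytes (used as delimiter here)
    blocks, current, size = [], [], 0
    for e in emails:
        current.append(e)
        size += len(e)
        if size >= BLOCK_TARGET:
            blocks.append(bz2.compress(b"\x00".join(current)))
            current, size = [], 0
    if current:
        blocks.append(bz2.compress(b"\x00".join(current)))
    return blocks

def read_block(blocks, i):
    # Only block i has to be decompressed to reach the emails inside it.
    return bz2.decompress(blocks[i]).split(b"\x00")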
Re: (Score:1)
Close enough. He essentially used deduplication instead of real compression (well, he also used some compression in the end). He split messages into headers/body, then used line deduplication, then compressed any messages above a certain threshold.
It's an interesting article, you should read it.
Re:You cut off at the good part. (Score:4, Interesting)
Deduplication is compression.
Re: (Score:1)
Meh. Compression is replacing byte sequences with a more optimal representation (it's a vocabulary transformation), while deduplication is replacing identical byte sequences with a singular token (it's a storage transformation). While the approaches are very similar, the difference is in the size of the atom and the scope of the data set.
Compression in computing already has a very specific meaning (it's a stream operation), and in general technical people do not like overloading (cue the "copyri
Re: (Score:2)
I agree with your sentiment wholeheartedly: precision in language and conversation is important.
Compression is an aspect of Information Theory or Entropy. From this perspective, it is the reduction of redundant bits of information in a given corpus (which is usually a stream because that's natural, but I don't know that there is an inherent requirement). All software
Re: (Score:3, Funny)
Why do the men I love always lie to me?
Re: (Score:2)
Maybe next time you could actually read TFA.
Re: (Score:1)
Ok, I read the article and it was interesting. But the summary is IMO appalling. That said, it is Slashdot and I should have known better...
Thanks for the tip.
Because nobody RTFA (Score:5, Informative)
The end result is that he made his own compression-for-emails: it scans the strings in every email and stores recurring strings only once in memory, with each email storing only pointers to those strings.
For large emails (he says >20k as an estimate), he applies LZMA on top of that, with a sliding dictionary based on the emails from the last few hours or so.
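For the curious, a rough Python sketch of that scheme with assumed details (the ~20k cutoff is the figure from TFA; the rest is illustrative, not the actual Mailinator code):

import lzma

# Shared pool of distinct lines; each small email is stored as a list of
# indices into the pool, while large emails go straight to LZMA instead.
LARGE_THRESHOLD = 20 * 1024        # the ~20k figure mentioned in TFA

pool = {}                          # line -> index
pool_lines = []                    # index -> line

def intern_line(line):
    idx = pool.get(line)
    if idx is None:
        idx = len(pool_lines)
        pool[line] = idx
        pool_lines.append(line)
    return idx

def store(email):
    if len(email) > LARGE_THRESHOLD:
        return ("lzma", lzma.compress(email.encode()))
    return ("pool", [intern_line(line) for line in email.splitlines()])

def load(record):
    kind, payload = record
    if kind == "lzma":
        return lzma.decompress(payload).decode()
    return "\n".join(pool_lines[i] for i in payload)

Near-identical registration emails collapse to the same handful of line indices, which is where the bulk of the savings comes from.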
All in all a very good read for someone (like me) who has an interest in data compression but knows little about it yet. I like to read other people's thought processes.
Reminds me of another good read I found in someone's /. comment about "compressing" random data: http://www.patrickcraig.co.uk/other/compression.htm [patrickcraig.co.uk]
Re:Because nobody RTFA (Score:5, Informative)
If you enjoyed reading that, you might also enjoy reading this [ejohn.org] and the follow-up [ejohn.org] about efficiently storing a dictionary of words and dealing with memory v.s processing trade-offs.
Re: (Score:2)
I did very much enjoy reading that, thank you. The genius of many people trying to solve their own specific problems never ceases to amaze me. (I was also regrettably unaware of what a "trie" was until now; learn something new every day.)
Re: (Score:2)
[ TP neglected to mention the ejohn articles are covering compression using Javascript/Node.js ]
Simple and Good (Score:2)
Tries and Bloom filters are wonderful data structures because they are simple. If you want something a tad more complicated, use Locality-Sensitive Hashing [stanford.edu] to find similar documents in a big set of documents.
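For anyone curious what that looks like, a bare-bones MinHash-plus-banding sketch in Python (parameters arbitrary, purely to show the shape of LSH for near-duplicate documents):

import hashlib

NUM_HASHES = 64
BANDS = 16                         # 16 bands x 4 rows each

def shingles(text, k=5):
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash(text):
    sig = []
    for seed in range(NUM_HASHES):
        sig.append(min(
            int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)))
    return sig

def lsh_buckets(docs):
    # Documents agreeing on every row of some band share a bucket,
    # and bucket-mates become candidate near-duplicate pairs.
    rows = NUM_HASHES // BANDS
    buckets = {}
    for name, text in docs.items():
        sig = minhash(text)
        for b in range(BANDS):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets.setdefault(key, []).append(name)
    return buckets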
Re: (Score:2)
Patrick appears to be in the wrong in that article, incidentally. He acknowledges the fact that filesystems use more space for multiple files than for a single file, but doesn't seem to understand that the only way his "compressor" could have worked is by using a delimiter or marker of some kind to indicate where data was stripped, and that said delimiter must take up some amount of space.
His rebuttal is basically "it's not my fault that modern filesystems have to use non-zero space to store information".
It
Re: (Score:2)
I don't think he was ever trying to get anyone to believe he's not wrong; he seems to just be having a bit of fun by making a work-around that *almost* looks like it works.
Re: (Score:2)
He would have done a much better job checking if the original.dat could be found to have a square, cube, etc. number in the first N characters. I mean, if the first 520 hex characters comprise a hex number that you can take the cube (or higher) root of, you would be able to use that root as a magic number, and the operation to "exponentize" it again would contain the "hidden information". With a large enough number and a large enough root, the difference between the two might be large enough to net you s
Re: (Score:2)
I'm tempted to try to write a compressor based on this now to try to win that challenge:
for N = 200 to bytesInFile Do (
    if (
        IsInt( cubeRoot(readBytes(N)) )
    ) Then (
        Output("Magic number is " & cubeRoot(readBytes(N)))
        TruncateBytesFromFile(N)
    )
)
The decompressor would be astonishingly small: just append cube(MagicNumber) to the truncated file.
In fact, as I think about this, given enough CPU time and a large enough file (let's say 50MB), there is almost no file that you could not compress -- set the minimum length hex number to check, and look for roots that yield integers, starting with X^0.33 and counting down to X^0.01. Eventually you would find a root that would work, and with the size of the numbers you would be working with, the space savings would be incredible. You could even write a general decompressor, and make the first 20 bytes of the file record what the magic number and the exponent were.
A quick check (cubing 0x9999 9999 9999) reveals that you could drop from 47 bytes to 12 bytes if your first "hit" was at byte 47. Imagine if your first hit was at 200 bytes :)
Can anyone comment on how this is working, and where the information is being hidden in this scheme?
Re: (Score:2)
The problem is that your odds of finding a decent sized cube in a random number are pretty abysmal.
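A back-of-the-envelope check of just how abysmal: there are roughly 2^(n/3) perfect cubes below 2^n, so a random n-bit prefix is a cube with probability on the order of 2^(-2n/3).

# Density of perfect cubes among random n-bit numbers.
def cube_probability(bits):
    return 2 ** -(2 * bits / 3)

for bits in (48, 160, 400):
    print(bits, cube_probability(bits))   # 48 bits: ~2e-10; 400 bits: ~1e-80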
Re: (Score:2)
Part of the problem is that Mike Goldman makes as if to outline precise technical constraints on the problem (data file of such size, you tell me this, I send you that, you send me those, they output so-and-so) but leaves the spirit of the bet implicit. The challenge is about compression, yes, but if you start to give precise constraints on how the bet can be won, you start to imply that any activity within the constraints is fair game.
The confusion here is about the nature of human communi
Re: (Score:2, Interesting)
One was that LZMA, like many compression algorithms, builds its dictionary based on a fixed dataset. As it compresses, it builds a dictionary of common sequences, improves it, and uses that dictionary to compress everything thereafter.
What?! LZMA keeps a dictionary of recent data, not a "fixed dataset".
It's a big, honking, several-gigabyte cache of ever-changing email. If I compressed a million emails, and then some user wanted to read email #502,922 — I'd have to "seek" through the preceding half-million or so to build the dictionary in order to decompress it. That's probably not feasible.
This is called a solid archive; what the author wants is a non-solid archive.
Seriously.
7zip. LZMA2. Whatever speed/compression setting you want (I always roll 9). Non-solid mode or a solid block size limited to whatever size you want (or whatever number of files you want, or both).
LZMA2 automagically does its dictionary thing, and the non-solid nature does it per file, or if you limit solid block size it does it per group of n files or per group of files that fit in size x, or both. If you have a lot of duplication across files so far apart that they won't share a dictionary und
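A quick Python illustration of the underlying trade-off (not 7-Zip's container format, just the principle): per-file compression gives random access but gives up the redundancy shared across near-identical emails.

import lzma

# 500 near-identical "registration" emails.
emails = [f"Welcome, user{i}! Click here to confirm your account.\n".encode() * 20
          for i in range(500)]

per_email = sum(len(lzma.compress(e)) for e in emails)   # "non-solid": one stream each
together = len(lzma.compress(b"".join(emails)))          # "solid": one shared stream
print(per_email, together)   # the shared stream is typically far smaller here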
Re: (Score:1)
Did you RTFA?
He said 7zip was too slow/CPU-intensive, and got worse compression with a solid archive (85%) than his custom solution (90%). AFAICT, going non-solid and backing off the compression setting would make it even worser, right?
And W/R/T this:
If you have a lot of duplication across files so far apart that they won't share a dictionary under LZMA2, you can get some improvement by first creating a master dictionary (across all files, ignore non-solid mode or solid block limits) for those duplicated chunks and then writing down all the pointer locations for them, then sending the rest of the data to LZMA2 to be compressed.
Which would more-or-less do what he's accomplishing, with two very big differences:
Re: (Score:2)
Did you RTFA?
He said 7zip was too slow/CPU-intensive, and got worse compression with a solid archive (85%) than his custom solution (90%). AFAICT, going non-solid and backing off the compression setting would make it even worser, right?
And W/R/T this:
If you have a lot of duplication across files so far apart that they won't share a dictionary under LZMA2, you can get some improvement by first creating a master dictionary (across all files, ignore non-solid mode or solid block limits) for those duplicated chunks and then writing down all the pointer locations for them, then sending the rest of the data to LZMA2 to be compressed.
Which would more-or-less do what he's accomplishing, with two very big differences:
You can tune the performance however you want, and use whatever filters in whatever order you want.
If you would RTFA for 7Zip, you would realize that filters can have multiple output streams. You can have an "already compressed" stream that skips the LZMA2 compressor, you can have a "requires compression" stream that gets hit by LZMA2 afterward, you can have debug/control streams, whatever the fuck you want.
I simply gave a basic example of how to use 7zip with 2 encoding methods. PPMd is specifically for
Re: (Score:2)
Wow, I hope you have a good THAC0!
Pfff easy (Score:1)
Just delete the emails that are not on the compressing dictionary.
Mailinator Rocks (Score:4, Informative)
I use mailinator all the time; it is fantastically useful. Sometimes I encounter a website that won't accept mailinator addresses; some even go to the effort of tracking the alternate domains he uses and blocking them too. I find mailinator so useful that when a website refuses mailinator addresses, I just won't use that website.
The Mailinator Man's blog is also pretty good, the guy is articulate and has a knack for talking about interesting architectural stuff. This latest entry is just another in a great series, if you like this sort of stuff and haven't read his previous entries you should take the time to read through them.
To cut down on sockpuppetry (Score:2)
And anyone who blocks them. They know they're a scummy site and REALLY want to be sending you spam.
That or a site that uses a service that blocks over 4,000 disposable e-mail address domains [block-disp...-email.com] might just want a more persistent identifier to cut down on sockpuppet registrations.
Re: (Score:2)
They have a pretty awesome FAQ too.
He has a simple solution for that problem. (Score:2)
If I compressed a million emails, and then some user wanted to read email #502,922 — I'd have to "seek" through the preceding half-million or so to build the dictionary in order to decompress it. That's probably not feasible.
What the summary does not say is that email number 502,922 is special-cased and stored in plain text at the head of the compression dictionary. So it will trivially fetch email number 502,922.
Well, now I know what a mailinator is. (Score:1)
That service is pretty cool. Never realized there was something out there like that.
It's not the algorithm, it's the data (Score:5, Informative)
Mailinator can achieve high compression rates because most people use it for registration emails. Those mails differ from each other in only a few words, making the data set highly redundant, and easily compressible.
Re: (Score:3)
Re: (Score:3, Informative)
Mailinator can achieve high compression rates because most people use it for registration emails. Those mails differ from each other in only a few words, making the data set highly redundant, and easily compressible.
The accomplishment here is that he determined a very tactical set of strategies for solving a real-world problem of large scale. No, it didn't take a math PhD with some deep understanding of Fourier analysis to invent this algorithm, but it most certainly took a software developer who was knowledgeable, creative, and passionate about his task. So yeah... it's not the 90% compression that's impressive, it's the real-time performance that's cool.
Re: (Score:1)
A common email phrase dictionary based on typical spam....I mean content, would resemble:
* Viagra
* prince
* you won!
* sexy
* Nigerian
* Please send your check to
* Free free free!
* $9.95
* Survive Armageddon
Just use compressed tokens for those and most spam, I mean emails, would be just a few bytes.
Re: (Score:2)
Mailinator can achieve high compression rates because most people use it for registration emails. Those mails differ from each other in only a few words, making the data set highly redundant, and easily compressible.
Which reminds me, Facebook is backed up onto a single LTO-5
This is how much i've read (Score:1, Funny)
Paul Tyma, creator of Mailinator, writes about a greedy algorithm he used to analyze the huge amount of email Mailinator receives and the ways he found to reduce its memory footprint by 90%. Quoting: 'I grabbed a few hundred megs of the Mailinator stream and ran it through several compressors. Mostly just stuff I had on hand: 7z, bzip, gzip, etc. Venerable zip reduced the file by 63%. Not bad. Then I tried the LZMA/2 algorithm (7z), which got it down by 85%! Well. OK! Article is over!
TFA right there.
A similar result (with much less effort...) (Score:1)
I run a similar (though waaaay less popular) site - http://dudmail.com/ [dudmail.com]
My mail is stored on disk in a MySQL db, so I don't have quite the same memory constraints as he does.
I had originally created this site naively stashing the uncompressed source straight into the db. For the ~100,000 mails I'd typically retain, this would take up anywhere from 800MB to slightly over a gig.
At a recent rails camp, I was in need of a mini project so decided that some sort of compression was in order. Not being quite so clever I
Re: (Score:2)
Not really - each file uses up an inode and at least one filesystem block, so there's roughly 4k gone per file. A better solution would be to store an arbitrary number of emails in each file, compressed and then concatenated, and just store the filename:offset:length of each one in the db. Each individual email is quickly recovered, and way fewer inodes are used.
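A sketch of that layout in Python (illustrative only): append each compressed email to a big file and keep just (filename, offset, length) in the database row.

import zlib

def append_email(path, email):
    # email: bytes. Compress it and append to the shared blob file.
    blob = zlib.compress(email)
    with open(path, "ab") as f:
        f.seek(0, 2)                   # make sure we're at the end of the file
        offset = f.tell()
        f.write(blob)
    return path, offset, len(blob)     # the three values to store in the DB

def read_email(path, offset, length):
    with open(path, "rb") as f:
        f.seek(offset)
        return zlib.decompress(f.read(length))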
Re: (Score:1)
yeah, this was going to be my original approach. (I had a previous project where I had stored images in a db, which showed the limitations of this approach).
However, I ended up chucking them in the database for simplicity. I'm able to just move database dumps from production to dev and that's a complete snapshot of the application - no need to worry about also having to sync an emails directory. It also means I don't have to worry about error handling for when an email body is not found (if the db record i
I love Mailinator (Score:1)
If anybody responsible for the site comes this way, thank you for the excellent (and free) service.