More on Bayesian Spam Filtering 251
michaeld writes "The "Bayesian" technique for spam filtering recently publicized in Paul Graham's essay A Plan for Spam doesn't actually seem to have anything Bayesian about it, according to Gary Robinson (an expert on collaborative filtering). It is based on a non-Bayesian probabilistic approach. It works well enough, because it is frequently the case that technology doesn't have to be 100% perfect in order to do something that really needs to be done. The problem interested Robinson, and he posted his thoughts about fixing the problems in the Graham approach, including adding an actual Bayesian element to the calculations."
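For the curious, the "non-Bayesian" combining rule Graham's essay describes can be sketched in a few lines. This is an illustrative Python sketch of the formula only, not Graham's actual code; the function name and the 0.5 fallback for an empty token list are my own assumptions.

```python
def combined_spam_probability(token_probs):
    """Combine per-token spam probabilities p_i via
    P = prod(p_i) / (prod(p_i) + prod(1 - p_i)),
    the naive combining rule from "A Plan for Spam"."""
    prod_spam = 1.0
    prod_ham = 1.0
    for p in token_probs:
        prod_spam *= p
        prod_ham *= 1.0 - p
    if prod_spam + prod_ham == 0.0:
        return 0.5  # no evidence either way (assumed fallback)
    return prod_spam / (prod_spam + prod_ham)
```

Note how two mildly spammy tokens (0.9 each) already push the combined score near 0.99, which is why the formula tends toward extreme verdicts.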
Post your results here (Score:5, Interesting)
I'd like to hear about modifications to this system. I removed Graham's doubling of "good" word frequencies, and I trained my filter using digrams. I also tried all the various methods supplied by the program "rainbow", with good results, but the implementation was too slow and clunky to place in the middle of my email delivery system. What are other possible modifications?
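For anyone wanting to try the digram modification mentioned above, the feature extraction is simple. This is a hypothetical helper assuming plain whitespace tokenization, not the poster's actual code:

```python
def digrams(text):
    """Return adjacent word pairs ("digrams") from a message body,
    to be used as filter features in place of single words."""
    words = text.lower().split()
    return [f"{a} {b}" for a, b in zip(words, words[1:])]
```

Digrams let the filter key on combinations like "click here" rather than the individual words, at the cost of a larger token table.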
The proof of the pudding... (Score:5, Interesting)
We will now have many slashdot posts saying "I've not tested this but I think A (or B, or C, or X)"
Here's where the scientific method comes into its own. Anyone who cares enough can actually test and post their results. I'd be interested in seeing what they look like. I don't have a database of spam to test against (and please don't volunteer to sign me up for some!)
Naive Bayesian Learning (Score:2, Interesting)
Re:Post your results here (Score:3, Interesting)
Whatever Jaguar (Mac OS X 10.2) uses works! (Score:1, Interesting)
I, myself, am not sure, but the new Mail.app is smart and it does learn. After a week of "learning" it has correctly identified messages as spam more than 99 times out of 100.
Neural Net Spam Filtering (Score:3, Interesting)
Our approach worked pretty well (95-97% accuracy), and we had to deal with the same issues that the above "Bayesian" approach did. I.e., weighting the neurons so that false positives occur much less frequently than false negatives, etc. We built it using data on spam collected from the UCI machine learning repository.
It ties in with procmail. I'm not really a Windows guy, so if anyone knows how to put a filter between an IMAP server and Microsoft Outlook/Netscape Communicator, I'd be interested in hearing how it's done.
The README for it is at: http://www-cse.ucsd.edu/~wkerney/spamfilter.READM
And you can download it at:
http://www-cse.ucsd.edu/~wkerney/spamfilter.
-Bill Kerney
wkerney at ucsd.edu
SpamAssassin - duh (Score:3, Interesting)
With so many people using SpamAssassin these days, I can't see how this is a timely or newsworthy item. More like from the been-there-done-that dept.
Re:Post your results here (Score:5, Interesting)
You can grab the source here [saturn5.com], but it is specific to the exact way that my mail gets delivered (via offlineimap into maildirs).
keyword matching isn't the answer (Score:2, Interesting)
i don't see why they can't implement some system that scans incoming mail for its users' mailboxes, maybe does a checksum for each message or something, and if it finds that a number of its users are receiving exactly (or nearly exactly) the same message, assume it's spam. nuke the messages, and any new incoming ones.
yeah, if such a system only scans a small number of mailboxes, it may filter out mailing list posts and so on. but it gets more and more reliable the more mailboxes it tracks.
this avoids searching for certain keywords and eliminates false positives. after all, how well would these keyword searching methods do if i were to quote a spam message in an email to a friend?
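the idea above can be sketched in a few lines. this is an illustrative sketch only: it assumes exact matching after whitespace normalization, while real distributed systems of this kind use fuzzy checksums to catch near-duplicates, and the threshold of 5 copies is an arbitrary assumption.

```python
import hashlib
from collections import Counter

SPAM_THRESHOLD = 5  # assumed: copies seen before we call it bulk mail

seen = Counter()

def normalize(body):
    # Collapse whitespace and lowercase so trivially varied copies match.
    return " ".join(body.lower().split())

def is_bulk(body):
    """Count how many times this (normalized) body has been delivered
    across mailboxes; flag it once it crosses the threshold."""
    digest = hashlib.md5(normalize(body).encode()).hexdigest()
    seen[digest] += 1
    return seen[digest] >= SPAM_THRESHOLD
```

the weak point, as noted, is that spammers can defeat exact checksums by randomizing each copy slightly, which is exactly why fuzzy checksums exist.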
Re:How do you pronounce "Bayesian" anyways? (Score:2, Interesting)
Yes and no.
To defeat a bayesian filter, the spammer needs to make his email contain similar words, and combinations of words, to your genuine email, while at the same time making sure that the words used are different to those in known spam.
So saying 'click here to make $$$' won't work any more, since most of your regular emails don't contain the word combinations 'click here' and 'make $$$', whereas known spam emails will.
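For reference, the per-word "spamminess" that drives this is estimated from corpus counts, roughly as in Graham's essay. This is a sketch, not anyone's production code; the 0.4 default for unseen tokens and the 0.01/0.99 clamps follow the essay, while the function name is assumed (and Graham's doubling of good counts is omitted here).

```python
def token_spam_probability(bad_count, good_count, nbad, ngood):
    """Estimate P(spam | token) from how often the token appears in
    nbad spam messages vs. ngood legitimate messages."""
    b = min(1.0, bad_count / nbad)
    g = min(1.0, good_count / ngood)
    if b + g == 0.0:
        return 0.4  # default for tokens never seen before
    # Clamp to avoid certainty from sparse data.
    return max(0.01, min(0.99, b / (b + g)))
```

A token like "$$$" that appears only in spam gets clamped to 0.99, while a token split evenly between the corpora scores a neutral 0.5.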
However, we're already beginning to see spammers making their emails less obviously spam.
For example, the spammer may use an email along the lines of:
"How's things?
Have you seen yet?
Don't forget to mail me those documents.
Regards,
A Spammer"
Even a bayesian filter will struggle to distinguish that from:
"Have you seen the story on slashdot yet?
Don't forget those reports.
Regards,
Your Boss"
Re:Post your results here (Score:3, Interesting)
I continue to train.
Or did you train it initially and leave it that way? How large is your training set?
I started off with a base.
Details! My training set was 300 spams and 3500 not-spams.
I started with a little more than 300 spam, and around 1000 valid messages.
My count is now:
Good messages read: 1194
Bad messages read: 644
That's because I only train on deleted mail, and I don't tend to delete my mailing lists except once a month or two...
With digrams, my filter traps 618 out of 621 spams in my spam folder, which is 99.5%
Against my start set, I nailed about 97%, including refiling 2 false positives from my old anti-spam system as being not spam. I've noticed that the system is really good at nailing stuff it already knows about, but the learning curve is a little steep for 'new spam types'. Still, I'm pretty happy with it.
Re:Post your results here (Score:3, Interesting)
My experience is that I get a few percent false negatives and about 1% false positives. I'm not seeing zero false positives, like many people are, but that probably has to do with the training sets used. Statistically speaking, you always have to trade off false negatives against false positives, so it's reasonable in my 'real world' tests.
As a side note, everyone should test out of sample. E.g. set aside half your good e-mails and half your spam e-mails, build the filter on one half, and then test on the other half. That's the only way to get a fair test of the filter.
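The out-of-sample procedure above is simple to mechanize. A minimal sketch, assuming messages are held as a list and a fixed seed for reproducibility (both are my assumptions):

```python
import random

def train_test_split(messages, seed=0):
    """Shuffle a corpus and split it in half: build the filter on one
    half, then score it on the held-out half."""
    rng = random.Random(seed)
    shuffled = messages[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]
```

Testing on messages the filter trained on will always flatter it; the held-out half is what tells you how it handles mail it has never seen.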
For my "good" email corpus, I dumped my entire e-mail archive since 1995. That included personal e-mail, receipts from online shopping, some mailing lists, etc. The few things that get flagged as spam (a) are almost always sent in HTML format, and (b) very short with little real content. (E.g., "Hey, looking forward to seeing you this weekend. Call me if you go out. My number is... Bye.")
The spam corpus I took from an online resource while I build up my own. The e-mails that slip by unflagged are usually (a) short and (b) phrased like a friend making a suggestion. (E.g., "Hi, I just thought you'd be interested in hearing about this new, cool website, http://...") It seems to be close enough to a real message to slip through. Thankfully, few of them are like that.
I'm including subject lines, from addresses, and the body so far. I'm not parsing ip addresses or html tags specially, however, just basic words using a simple perl regexp.
Interestingly, "COLOR" is one of the most often flagged words indicating spam. HTML formatting text seems to be the biggest culprit in my false positives. I might explicitly exclude the ones that show up in good mail (e.g. from friends who use crappy e-mail programs like AOL) like COLOR, FONT, FACE, etc., but leave in the ones that spammers use like TD, TR, etc.
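The tokenization described above translates to something like the following. This is a hypothetical Python rendering of the "simple perl regexp" approach, and the HTML stoplist contents are illustrative, not the poster's actual list:

```python
import re

HTML_NOISE = {"color", "font", "face"}  # assumed: formatting words to drop

def tokens(message):
    """Pull basic word tokens (letters, $, apostrophes) from subject,
    from-address, and body text, skipping HTML formatting words that
    also show up in legitimate mail."""
    words = re.findall(r"[A-Za-z$']+", message.lower())
    return [w for w in words if w not in HTML_NOISE]
```

Leaving TD, TR, and the like in the token stream preserves their value as spam indicators while removing the formatting words that caused false positives.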
-XDG
Re:Tutorial on Bayesian Inference (Score:3, Interesting)
On the web, see: Assoc. for Uncertainty in Artificial Intelligence [auai.org] -- this is the primary conference devoted to belief networks, which are a class of graphical (in the circles and arrows sense) Bayesian probability models. There are tutorials and other papers on the main AUAI web page, and links to the last several years of conference proceedings. By the way, Heckerman and Horvitz, now doing belief networkish work at MS Research, are in the AUAI crowd.
In print, my favorite reference is E.T. Jaynes, "Probability Theory: The Logic of Science", which is due out soon. See this web site devoted to Jaynes' work [wustl.edu] for the status. I am also fond of Castillo, Gutierrez, & Hadi, "Expert Systems and Probabilistic Network Models".
There are a vast (well, maybe just large) number of alternative models to classify things; a good introduction is Hastie, Tibshirani, & Friedman, "Elements of Statistical Learning". Incidentally, they use spam classification to illustrate several kinds of models.
Finally, if you're wondering what the heck is the difference between Bayesian probability and any other kind -- just google the posts in sci.stat.math; there is a Bayesian vs frequentist flame war about once a year. :^)