Plan for Spam, Version 2

bugbear writes "I just posted a new version of the Plan for Spam Bayesian filtering algorithm. The big change is to mark tokens by context. The new version decreases spams missed by 50%, to 2.5 per 1000, even though spam has gotten harder to filter since the summer. I also talk about how spam will evolve, and what to do about it."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday January 21, 2003 @02:22PM (#5128108)
    I'm using the filters in Moz 1.3 alpha and the Base64-encoded emails are not being recognized and flagged as spam. I've trained and trained and trained.

    They almost always get through.

    Anyone else experience this?

    Also, how can you flag an ad that is an image? Block all HTML email?

    I dunno.

  • January 2003

    (This article was given as a talk at the 2003 Spam Conference. It describes the work I've done to improve the performance of the algorithm described in A Plan for Spam, and what I plan to do in the future.)

    The first discovery I'd like to present here is an algorithm for lazy evaluation of research papers. Just write whatever you want and don't cite any previous work, and indignant readers will send you references to all the papers you should have cited. I discovered this algorithm after ``A Plan for Spam'' [1] was on Slashdot.

    Spam filtering is a subset of text classification, which is a well established field, but the first papers about Bayesian spam filtering per se seem to have been two given at the same conference in 1998, one by Pantel and Lin [2], and another by a group from Microsoft Research [3].

    When I heard about this work I was a bit surprised. If people had been onto Bayesian filtering four years ago, why wasn't everyone using it? When I read the papers I found out why. Pantel and Lin's filter was the more effective of the two, but it only caught 92% of spam, with 1.16% false positives.

    When I tried writing a Bayesian spam filter, it caught 99.5% of spam with less than .03% false positives [4]. It's always alarming when two people trying the same experiment get widely divergent results. It's especially alarming here because those two sets of numbers might yield opposite conclusions. Different users have different requirements, but I think for many people a filtering rate of 92% with 1.16% false positives means that filtering is not an acceptable solution, whereas 99.5% with less than .03% false positives means that it is.

    So why did we get such different numbers? I haven't tried to reproduce Pantel and Lin's results, but from reading the paper I see five things that probably account for the difference.

    One is simply that they trained their filter on very little data: 160 spam and 466 nonspam mails. Filter performance should still be climbing with data sets that small. So their numbers may not even be an accurate measure of the performance of their algorithm, let alone of Bayesian spam filtering in general.

    But I think the most important difference is probably that they ignored message headers. To anyone who has worked on spam filters, this will seem a perverse decision. And yet in the very first filters I tried writing, I ignored the headers too. Why? Because I wanted to keep the problem neat. I didn't know much about mail headers then, and they seemed to me full of random stuff. There is a lesson here for filter writers: don't ignore data. You'd think this lesson would be too obvious to mention, but I've had to learn it several times.

    Third, Pantel and Lin stemmed the tokens, meaning they reduced e.g. both ``mailing'' and ``mailed'' to the root ``mail''. They may have felt they were forced to do this by the small size of their corpus, but if so this is a kind of premature optimization.

    Fourth, they calculated probabilities differently. They used all the tokens, whereas I only use the 15 most significant. If you use all the tokens you'll tend to miss longer spams, the type where someone tells you their life story up to the point where they got rich from some multilevel marketing scheme. And such an algorithm would be easy for spammers to spoof: just add a big chunk of random text to counterbalance the spam terms.

    Finally, they didn't bias against false positives. I think any spam filtering algorithm ought to have a convenient knob you can twist to decrease the false positive rate at the expense of the filtering rate. I do this by counting the occurrences of tokens in the nonspam corpus double.
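
    A minimal sketch of how a per-token probability with that bias might be computed (illustrative Python, not the author's code; the .01/.99 clamp follows the original ``A Plan for Spam'', while note [6] below describes the wider thresholds used now):

        def token_probability(token, good_counts, bad_counts, ngood, nbad):
            """Spam probability of a single token, biased against false positives
            by counting occurrences in the nonspam (good) corpus double."""
            g = 2 * good_counts.get(token, 0)     # the knob: double the good count
            b = bad_counts.get(token, 0)
            if g + b < 5:                         # too rare to say anything about
                return None
            good_freq = min(1.0, g / ngood)       # ngood, nbad: mails in each corpus
            bad_freq = min(1.0, b / nbad)
            p = bad_freq / (good_freq + bad_freq)
            return min(0.99, max(0.01, p))        # clamp away from 0 and 1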

    I don't think it's a good idea to treat spam filtering as a straight text classification problem. You can use text classification techniques, but solutions can and should reflect the fact that the text is email, and spam in particular. Email is not just text; it has structure. Spam filtering is not just classification, because false positives are so much worse than false negatives that you should treat them as a different kind of error. And the source of error is not just random variation, but a live human spammer working actively to defeat your filter.

    Tokens

    Another project I heard about after the Slashdot article was Bill Yerazunis' CRM114 [5]. This is the counterexample to the design principle I just mentioned. It's a straight text classifier, but such a stunningly effective one that it manages to filter spam almost perfectly without even knowing that's what it's doing.

    Once I understood how CRM114 worked, it seemed inevitable that I would eventually have to move from filtering based on single words to an approach like this. But first, I thought, I'll see how far I can get with single words. And the answer is, surprisingly far.

    Mostly I've been working on smarter tokenization. On current spam, I've been able to achieve filtering rates that approach CRM114's. These techniques are mostly orthogonal to Bill's; an optimal solution might incorporate both.

    ``A Plan for Spam'' uses a very simple definition of a token. Letters, digits, dashes, apostrophes, and dollar signs are constituent characters, and everything else is a token separator. I also ignored case. Now I have a more complicated definition of a token:

    Case is preserved.

    Exclamation points are constituent characters.

    Periods and commas are constituents if they occur between two digits. This lets me get ip addresses and prices intact.

    A price range like $20-25 yields two tokens, $20 and $25.

    Tokens that occur within the To, From, Subject, and Return-Path lines, or within urls, get marked accordingly. E.g. ``foo'' in the Subject line becomes ``Subject*foo''. (The asterisk could be any character you don't allow as a constituent.)
    Such measures increase the filter's vocabulary, which makes it more discriminating. For example, in the current filter, ``free'' in the Subject line has a spam probability of 98%, whereas the same token in the body has a spam probability of only 65%.

    In the Plan for Spam filter, all these tokens would have had the same probability, .7602. That filter recognized about 23,000 tokens. The current one recognizes about 187,000.
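
    As a rough sketch of a tokenizer along those lines (illustrative Python; the regular expressions and the price-range rule are my approximations of the rules above, not the author's code):

        import re

        # Constituents: letters, digits, dashes, apostrophes, dollar signs and
        # exclamation points; periods and commas count only between two digits.
        TOKEN_RE = re.compile(r"(?:[A-Za-z0-9$!'\-]|(?<=\d)[.,](?=\d))+")
        PRICE_RANGE_RE = re.compile(r"^(\$\d+)-(\d+)$")

        def tokenize(text, context=None):
            """Tokenize one chunk of mail, preserving case.  If a context such as
            'Subject', 'To', 'From', 'Return-Path' or 'Url' is given, each token
            is marked accordingly, e.g. 'Subject*foo'."""
            out = []
            for tok in TOKEN_RE.findall(text):
                m = PRICE_RANGE_RE.match(tok)
                parts = [m.group(1), "$" + m.group(2)] if m else [tok]  # "$20-25" -> "$20", "$25"
                out.extend(f"{context}*{p}" if context else p for p in parts)
            return out

        # tokenize("FREE!!! Only $20-25", context="Subject")
        # -> ['Subject*FREE!!!', 'Subject*Only', 'Subject*$20', 'Subject*$25']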

    The disadvantage of having a larger universe of tokens is that there is more chance of misses. Spreading your corpus out over more tokens has the same effect as making it smaller. If you consider exclamation points as constituents, for example, then you could end up not having a spam probability for free with seven exclamation points, even though you know that free with just two exclamation points has a probability of 99.99%.

    One solution to this is what I call degeneration. If you can't find an exact match for a token, treat it as if it were a less specific version. I consider terminal exclamation points, uppercase letters, and occurring in one of the five marked contexts as making a token more specific. For example, if I don't find a probability for ``Subject*free!'', I look for probabilities for ``Subject*free'', ``free!'', and ``free'', and take whichever one is farthest from .5.

    Here are the alternatives [7] considered if the filter sees ``FREE!!!'' in the Subject line and doesn't have a probability for it.
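
    As a sketch of that lookup (illustrative Python, not the author's code; the .4 default for tokens with no known degeneration follows the original ``A Plan for Spam''):

        def degenerations(token):
            """Less specific versions of a token.  Specificity comes from a context
            mark ('Subject*free!'), terminal exclamation points, and uppercase."""
            context, sep, word = token.rpartition("*")
            forms = []
            for w in (word, word.rstrip("!")):             # strip trailing '!'
                for v in (w, w.capitalize(), w.lower()):   # as-is, Initial caps, lowercase
                    if v and v not in forms:
                        forms.append(v)
            marked = [context + "*" + w for w in forms] if sep else []
            return marked + forms

        def lookup(token, probs):
            """Use the token's own probability if known, otherwise that of the
            degeneration whose probability lies farthest from the neutral .5."""
            if token in probs:
                return probs[token]
            known = [probs[t] for t in degenerations(token) if t in probs]
            return max(known, key=lambda p: abs(p - 0.5)) if known else 0.4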

    If you do this, be sure to consider versions with initial caps as well as all uppercase and all lowercase. Spams tend to have more sentences in imperative voice, and in those the first word is a verb. So verbs with initial caps have higher spam probabilities than they would in all lowercase. In my filter, the spam probability of ``Act'' is 98% and for ``act'' only 62%.

    If you increase your filter's vocabulary, you can end up counting the same word multiple times, according to your old definition of ``same''. Logically, they're not the same token anymore. But if this still bothers you, let me add from experience that the words you seem to be counting multiple times tend to be exactly the ones you'd want to.

    Another effect of a larger vocabulary is that when you look at an incoming mail you find more interesting tokens, meaning those with probabilities far from .5. I use the 15 most interesting to decide if mail is spam. But you can run into a problem when you use a fixed number like this. If you find a lot of maximally interesting tokens, the result can end up being decided by whatever random factor determines the ordering of equally interesting tokens. One way to deal with this is to treat some as more interesting than others.

    For example, the token ``dalco'' occurs 3 times in my spam corpus and never in my legitimate corpus. The token ``Url*optmails'' (meaning ``optmails'' within a url) occurs 1223 times. And yet, as I used to calculate probabilities for tokens, both would have the same spam probability, the threshold of .99.

    That doesn't feel right. There are theoretical arguments for giving these two tokens substantially different probabilities (Pantel and Lin do), but I haven't tried that yet. It does seem at least that if we find more than 15 tokens that only occur in one corpus or the other, we ought to give priority to the ones that occur a lot. So now there are two threshold values [6]. For tokens that occur only in the spam corpus, the probability is .9999 if they occur more than 10 times and .9998 otherwise. Ditto at the other end of the scale for tokens found only in the legitimate corpus.
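
    In code, the two ends of the scale and the selection of interesting tokens might look something like this (illustrative Python, not the author's implementation):

        def one_corpus_probability(token, good_counts, bad_counts):
            """Thresholds [6] for tokens seen in only one corpus: frequent ones get
            the more extreme value, so they sort ahead of rare ones."""
            g, b = good_counts.get(token, 0), bad_counts.get(token, 0)
            if b and not g:
                return 0.9999 if b > 10 else 0.9998
            if g and not b:
                return 0.0001 if g > 10 else 0.0002
            return None                               # token occurs in both corpora

        def most_interesting(token_probs, n=15):
            """The n tokens whose probabilities lie farthest from the neutral .5."""
            ranked = sorted(token_probs.items(),
                            key=lambda kv: abs(kv[1] - 0.5), reverse=True)
            return ranked[:n]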

    I may later scale token probabilities substantially, but this tiny amount of scaling at least ensures that tokens get sorted the right way.

    Another possibility would be to consider not just 15 tokens, but all the tokens over a certain threshold of interestingness. Steven Hauser does this in his statistical spam filter [8]. If you use a threshold, make it very high, or spammers could spoof you by packing messages with more innocent words.

    Finally, what should one do about html? I've tried the whole spectrum of options, from ignoring it to parsing it all. Ignoring html is a bad idea, because it's full of useful spam signs. But if you parse it all, your filter might degenerate into a mere html recognizer. The most effective approach seems to be the middle course, to notice some tokens but not others. I look at a, img, and font tags, and ignore the rest. Links and images you should certainly look at, because they contain urls.
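
    One way to take that middle course (illustrative Python using the standard html.parser module, not the author's code): keep all visible text, pull tokens from a, img, and font tags, and drop every other piece of markup.

        from html.parser import HTMLParser

        class SpamHtmlSource(HTMLParser):
            """Collects the pieces of an html body worth tokenizing."""
            KEEP = {"a", "img", "font"}

            def __init__(self):
                super().__init__()
                self.chunks = []

            def handle_starttag(self, tag, attrs):
                if tag in self.KEEP:                    # a, img, font: keep tag and attributes
                    self.chunks.append(tag)
                    self.chunks.extend(v for _, v in attrs if v)  # hrefs, srcs, colors...

            def handle_data(self, data):                # visible text is always kept
                self.chunks.append(data)

        # p = SpamHtmlSource(); p.feed(html_body); then tokenize " ".join(p.chunks)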

    I could probably be smarter about dealing with html, but I don't think it's worth putting a lot of time into this. Spams full of html are easy to filter. The smarter spammers already avoid it. So performance in the future should not depend much on how you deal with html.

    Performance

    Between December 10 2002 and January 10 2003 I got about 1750 spams. Of these, 4 got through. That's a filtering rate of about 99.75%.

    Two of the four spams I missed got through because they happened to use words that occur often in my legitimate email.

    The third was one of those that exploit an insecure cgi script to send mail to third parties. They're hard to filter based just on the content because the headers are innocent and they're careful about the words they use. Even so I can usually catch them. This one squeaked by with a probability of .88, just under the threshold of .9.

    Of course, looking at multiple token sequences would catch it easily. ``Below is the result of your feedback form'' is an instant giveaway.
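
    A minimal sketch of what looking at pairs could mean (illustrative Python; the filter described in the article still uses single words):

        def token_pairs(tokens):
            """Treat each adjacent pair of tokens as a token in its own right,
            so a phrase like 'feedback form' can carry its own probability."""
            return [a + " " + b for a, b in zip(tokens, tokens[1:])]

        # token_pairs(['Below', 'is', 'the', 'result'])
        # -> ['Below is', 'is the', 'the result']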

    The fourth spam was what I call a spam-of-the-future, because this is what I expect spam to evolve into: some completely neutral text followed by a url. In this case it was from someone saying they had finally finished their homepage and would I go look at it. (The page was of course an ad for a porn site.)

    If the spammers are careful about the headers and use a fresh url, there is nothing in spam-of-the-future for filters to notice. We can of course counter by sending a crawler to look at the page. But that might not be necessary. The response rate for spam-of-the-future must be low, or everyone would be doing it. If it's low enough, it won't pay for spammers to send it, and we won't have to work too hard on filtering it.

    Now for the really shocking news: during that same one-month period I got three false positives.

    In a way it's a relief to get some false positives. When I wrote ``A Plan for Spam'' I hadn't had any, and I didn't know what they'd be like. Now that I've had a few, I'm relieved to find they're not as bad as I feared. False positives yielded by statistical filters turn out to be mails that sound a lot like spam, and these tend to be the ones you would least mind missing [9].

    Two of the false positives were newsletters from companies I've bought things from. I never asked to receive them, so arguably they were spams, but I count them as false positives because I hadn't been deleting them as spams before. The reason the filters caught them was that both companies in January switched to commercial email senders instead of sending the mails from their own servers, and both the headers and the bodies became much spammier.

    The third false positive was a bad one, though. It was from someone in Egypt and written in all uppercase. This was a direct result of making tokens case sensitive; the Plan for Spam filter wouldn't have caught it.

    It's hard to say what the overall false positive rate is, because we're up in the noise, statistically. Anyone who has worked on filters (at least, effective filters) will be aware of this problem. With some emails it's hard to say whether they're spam or not, and these are the ones you end up looking at when you get filters really tight. For example, so far the filter has caught two emails that were sent to my address because of a typo, and one sent to me in the belief that I was someone else. Arguably, these are neither my spam nor my nonspam mail.

    Another false positive was from a vice president at Virtumundo. I wrote to them pretending to be a customer, and since the reply came back through Virtumundo's mail servers it had the most incriminating headers imaginable. Arguably this isn't a real false positive either, but a sort of Heisenberg uncertainty effect: I only got it because I was writing about spam filtering.

    Not counting these, I've had a total of five false positives so far, out of about 7740 legitimate emails, a rate of .06%. The other two were a notice that something I bought was back-ordered, and a party reminder from Evite.

    I don't think this number can be trusted, partly because the sample is so small, and partly because I think I can fix the filter not to catch some of these.

    False positives seem to me a different kind of error from false negatives. Filtering rate is a measure of performance. False positives I consider more like bugs. I approach improving the filtering rate as optimization, and decreasing false positives as debugging.

    So these five false positives are my bug list. For example, the mail from Egypt got nailed because the uppercase text made it look to the filter like a Nigerian spam. This really is kind of a bug. As with html, the email being all uppercase is really conceptually one feature, not one for each word. I need to handle case in a more sophisticated way.

    So what to make of this .06%? Not much, I think. You could treat it as an upper bound, bearing in mind the small sample size. But at this stage it is more a measure of the bugs in my implementation than some intrinsic false positive rate of Bayesian filtering.

    Future

    What next? Filtering is an optimization problem, and the key to optimization is profiling. Don't try to guess where your code is slow, because you'll guess wrong. Look at where your code is slow, and fix that. In filtering, this translates to: look at the spams you miss, and figure out what you could have done to catch them.

    For example, spammers are now working aggressively to evade filters, and one of the things they're doing is breaking up and misspelling words to prevent filters from recognizing them. But working on this is not my first priority, because I still have no trouble catching these spams [10].

    There are two kinds of spams I currently do have trouble with. One is the type that pretends to be an email from a woman inviting you to go chat with her or see her profile on a dating site. These get through because they're the one type of sales pitch you can make without using sales talk. They use the same vocabulary as ordinary email.

    The other kind of spams I have trouble filtering are those from companies in e.g. Bulgaria offering contract programming services. These get through because I'm a programmer too, and the spams are full of the same words as my real mail.

    I'll probably focus on the personal ad type first. I think if I look closer I'll be able to find statistical differences between these and my real mail. The style of writing is certainly different, though it may take multiword filtering to catch that. Also, I notice they tend to repeat the url, and someone including a url in a legitimate mail wouldn't do that [11].

    The outsourcing type are going to be hard to catch. Even if you sent a crawler to the site, you wouldn't find a smoking statistical gun. Maybe the only answer is a central list of domains advertised in spams [12]. But there can't be that many of this type of mail. If the only spams left were unsolicited offers of contract programming services from Bulgaria, we could all probably move on to working on something else.

    Will statistical filtering actually get us to that point? I don't know. Right now, for me personally, spam is not a problem. But spammers haven't yet made a serious effort to spoof statistical filters. What will happen when they do?

    I'm not optimistic about filters that work at the network level [13]. When there is a static obstacle worth getting past, spammers are pretty efficient at getting past it. There is already a company called Assurance Systems that will run your mail through Spamassassin and tell you whether it will get filtered out.

    Network-level filters won't be completely useless. They may be enough to kill all the "opt-in" spam, meaning spam from companies like Virtumundo and Equalamail who claim that they're really running opt-in lists. You can filter those based just on the headers, no matter what they say in the body. But anyone willing to falsify headers or use open relays, presumably including most porn spammers, should be able to get some message past network-level filters if they want to. (By no means the message they'd like to send though, which is something.)

    The kind of filters I'm optimistic about are ones that calculate probabilities based on each individual user's mail. These can be much more effective, not only in avoiding false positives, but in filtering too: for example, finding the recipient's email address base-64 encoded anywhere in a message is a very good spam indicator.
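
    Checking for that indicator takes a little care, because how the address is encoded depends on where it falls in the base-64 stream. A sketch (illustrative Python; names are mine): try each of the three possible byte alignments and search the still-encoded body for the resulting text.

        import base64, re

        def b64_needles(addr):
            """Base-64 text that must appear in an encoded body containing addr,
            one needle for each possible 3-byte alignment."""
            raw = addr.encode()
            needles = []
            for skip in range(3):                 # leading bytes shared with the previous group
                chunk = raw[skip:]
                chunk = chunk[: len(chunk) // 3 * 3]   # whole 3-byte groups only
                if chunk:
                    needles.append(base64.b64encode(chunk).decode())
            return needles

        def mentions_address_base64(body, addr):
            body = re.sub(r"\s+", "", body)       # base-64 bodies are usually line-wrapped
            return any(n in body for n in b64_needles(addr))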

    But the real advantage of individual filters is that they'll all be different. If everyone's filters have different probabilities, it will make the spammers' optimization loop, what programmers would call their edit-compile-test cycle, appallingly slow. Instead of just tweaking a spam till it gets through a copy of some filter they have on their desktop, they'll have to do a test mailing for each tweak. It would be like programming in a language without an interactive toplevel, and I wouldn't wish that on anyone.

    Notes

    [1] Paul Graham. ``A Plan for Spam.'' August 2002. http://paulgraham.com/spam.html.

    Probabilities in this algorithm are calculated using a degenerate case of Bayes' Rule. There are two simplifying assumptions: that the probabilities of features (i.e. words) are independent, and that we know nothing about the prior probability of an email being spam.

    The first assumption is widespread in text classification. Algorithms that use it are called ``naive Bayesian.''

    The second assumption I made because the proportion of spam in my incoming mail fluctuated so much from day to day (indeed, from hour to hour) that the overall prior ratio seemed worthless as a predictor. If you assume that P(spam) and P(nonspam) are both .5, they cancel out and you can remove them from the formula.
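
    In symbols (my notation, not from the article): writing p_i for the probability that a mail is spam given that it contains token t_i, dropping the equal priors and assuming the tokens independent leaves the familiar naive-Bayesian combination

        \[
          P(\mathrm{spam} \mid t_1,\ldots,t_n)
            = \frac{\prod_{i=1}^{n} p_i}
                   {\prod_{i=1}^{n} p_i + \prod_{i=1}^{n} (1 - p_i)} .
        \]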

    If you were doing Bayesian filtering in a situation where the ratio of spam to nonspam was consistently very high or (especially) very low, you could probably improve filter performance by incorporating prior probabilities. To do this right you'd have to track ratios by time of day, because spam and legitimate mail volume both have distinct daily patterns.

    [2] Patrick Pantel and Dekang Lin. ``SpamCop-- A Spam Classification & Organization Program.'' Proceedings of AAAI-98 Workshop on Learning for Text Categorization.

    [3] Mehran Sahami, Susan Dumais, David Heckerman and Eric Horvitz. ``A Bayesian Approach to Filtering Junk E-Mail.'' Proceedings of AAAI-98 Workshop on Learning for Text Categorization.

    [4] At the time I had zero false positives out of about 4,000 legitimate emails. If the next legitimate email was a false positive, this would give us .03%. These false positive rates are untrustworthy, as I explain later. I quote a number here only to emphasize that whatever the false positive rate is, it is less than 1.16%.

    [5] Bill Yerazunis. ``Sparse Binary Polynomial Hash Message Filtering and The CRM114 Discriminator.'' Proceedings of 2003 Spam Conference.

    [6] In ``A Plan for Spam'' I used thresholds of .99 and .01. It seems justifiable to use thresholds proportionate to the size of the corpora. Since I now have on the order of 10,000 of each type of mail, I use .9999 and .0001.

    [7] There is a flaw here I should probably fix. Currently, when ``Subject*foo'' degenerates to just ``foo'', what that means is you're getting the stats for occurrences of ``foo'' in the body or header lines other than those I mark. What I should do is keep track of statistics for ``foo'' overall as well as specific versions, and degenerate from ``Subject*foo'' not to ``foo'' but to ``Anywhere*foo''. Ditto for case: I should degenerate from uppercase to any-case, not lowercase.

    It would probably be a win to do this with prices too, e.g. to degenerate from ``$129.99'' to ``$--9.99'', ``$--.99'', and ``$--''.

    You could also degenerate from words to their stems, but this would probably only improve filtering rates early on when you had small corpora.

    [8] Steven Hauser. ``Statistical Spam Filter Works for Me.'' http://www.sofbot.com.

    [9] False positives are not all equal, and we should remember this when comparing techniques for stopping spam. Whereas many of the false positives caused by filters will be near-spams that you wouldn't mind missing, false positives caused by blacklists, for example, will be just mail from people who chose the wrong ISP. In both cases you catch mail that's near spam, but for blacklists nearness is physical, and for filters it's textual.

    In fairness, it should be added that the new generation of responsible blacklists, like the SBL, cause far fewer false positives than earlier blacklists like the MAPS RBL, for whom causing large numbers of false positives was a deliberate technique to get the attention of ISPs.

    [10] If spammers get good enough at obscuring tokens for this to be a problem, we can respond by simply removing whitespace, periods, commas, etc. and using a dictionary to pick the words out of the resulting sequence. And of course finding words this way that weren't visible in the original text would in itself be evidence of spam.
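
    A crude sketch of that step (illustrative Python; a greedy longest-match against a word list, which handles separators but not the added or omitted letters discussed below):

        import re

        def visible_words(text, dictionary, max_len=20):
            """Strip separators, then greedily pull the longest dictionary words
            out of the remaining run of characters."""
            run = re.sub(r"[\s.,_\-*]+", "", text.lower())
            words, i = [], 0
            while i < len(run):
                for j in range(min(len(run), i + max_len), i, -1):
                    if run[i:j] in dictionary:
                        words.append(run[i:j])
                        i = j
                        break
                else:
                    i += 1                    # nothing starts here; skip a character
            return words

        # visible_words("m.o.r t-g_a*g.e rates", {"mortgage", "rates"})
        # -> ['mortgage', 'rates']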

    Picking out the words won't be trivial. It will require more than just reconstructing word boundaries; spammers both add (``xHot nPorn cSite'') and omit (``P#rn'') letters. Vision research may be useful here, since human vision is the limit that such tricks will approach.

    [11] In general, spams are more repetitive than regular email. They want to pound that message home. I currently don't allow duplicates in the top 15 tokens, because you could get a false positive if the sender happens to use some bad word multiple times. (In my current filter, ``dick'' has a spam probability of .9999, but it's also a name.) It seems we should at least notice duplication though, so I may try allowing up to two of each token, as Brian Burton does in SpamProbe.

    [12] This is what approaches like Brightmail's will degenerate into once spammers are pushed into using mad-lib techniques to generate everything else in the message.

    [13] It's sometimes argued that we should be working on filtering at the network level, because it is more efficient. What people usually mean when they say this is: we currently filter at the network level, and we don't want to start over from scratch. But you can't dictate the problem to fit your solution.

    Historically, scarce-resource arguments have been the losing side in debates about software design. People only tend to use them to justify choices (inaction in particular) made for other reasons.

    Thanks to Sarah Harlin, Trevor Blackwell, and Dan Giffin for reading drafts of this paper, and to Dan again for most of the infrastructure that this filter runs on.
  • Stop spam? (Score:5, Interesting)

    by slykens ( 85844 ) on Tuesday January 21, 2003 @02:26PM (#5128135)
    Filtering is nice, I've been using SpamAssassin with reasonable results for the last few months. It has nearly no false positives but has recently been missing more. Perhaps I should update.

    Anyway, I've said a few times that the only way to effectively stop spam is to make it more expensive for the companies having it done. Filtering, blocking ports, and refusing mail from RBL'd hosts all help, but it will not stop until it is fully against the law and people bring legal action to stop it.

    Even people who are supposed to be clueful don't get it. I got spammed to buy EZ-Pass for the PA Turnpike. I sent a nastygram to the state DoT. The keyboard monkey responded that I should look closely at the email, that I signed up to receive it. If I had a dollar for every site that claimed I signed up with them I would be rich. What an idiot.

  • by GGardner ( 97375 ) on Tuesday January 21, 2003 @02:26PM (#5128138)
    Conventional wisdom seems to say that we can't outlaw spam. I don't understand why this is. My state has a do not call list. Since signing up for it, I have gotten zero phone solicitations, down from 2 or 3 a day. It is illegal to make a phone solicitation to a cell phone, and also, I get zero phone spams on my cell phone.

    Some states, like California, have anti-spam laws, but curiously, they only cover spam sent from California to California. My state's telephone do-not-call list covers all calls to my number, no matter where they originate.

    Now, I understand that there would be problems with international spam, but stopping domestic spam would be a huge boon to everyone. It seems like this legislation would be wildly popular, and easy to pass.

  • by twemperor ( 626154 ) on Tuesday January 21, 2003 @02:27PM (#5128140) Homepage
    I really like this analytic approach. I've been using Hotmail's spam filtering, which merely removes e-mails from addresses not in my address book. While this is effective most of the time and very easy to implement, there does seem to be a major problem with false positives, i.e. I give my e-mail address to someone who's not in my address book.

    Does anyone think AOL or Hotmail could start using such a system as the one outlined in the article?
  • by Rojo^ ( 78973 ) on Tuesday January 21, 2003 @02:29PM (#5128160) Homepage Journal
    This is a wonderful tool that is being developed. However, I don't think any one tool will succeed in eliminating spam. From a spammer's point of view, if my income depends on messages making it through filters, by damn I will bypass those filters by whatever means I can. These assholes send penis enlargement advertisements to my mother -- If her gender doesn't stop them, neither will an email filter.

    On a different subject, in a story about a week ago, someone posted a link to a peer-to-peer network of spam emails for MS Outlook, available at http://www.cloudmark.com, that will trap a significant amount of emails based on (and this is overly simplified, of course) users' votes. Does such a solution exist in the open source world?
  • by Anonymous Coward on Tuesday January 21, 2003 @02:33PM (#5128189)
    I have just set up a system which parses spam email, locates any Web addresses, strips out the parameters, and then visits the Web site. Just think if we ALL did this. So rather than the poor spammer only getting a .001% hit rate, they get an astounding 100% hit rate. So 1 million emails sent, 1 million instant Web page hits. And it is not like they can complain about this, after all they are ASKING for the hits.

    Even better is that my domain gets multiple spams from the same company.
  • Bayesian filtering (Score:5, Interesting)

    by blakestah ( 91866 ) <blakestah@gmail.com> on Tuesday January 21, 2003 @02:37PM (#5128220) Homepage
    The basics are, you take all good mails, and create a database of words used in them. Make a different database for spam mails. Then, for each incoming mail, compare to each database, and classify as spam or non-spam.

    The algorithm starts out conservative, i.e. you get most of the mail classified as good. For each "good" email that is actually spam, you manually re-classify it.

    Then, after a few weeks, the filter does all the work. It is basically using word-databases to compare emails and classify them the way you, the user would. Periodically you will receive another spam email, then you re-classify it, and never see an email like it again (in your inbox).

    Bogofilter and CRM114 are among the more successful efforts so far, but there are many. And they are FAR more successful than blacklist/whitelist/fixed token comparison filters. But Bayesian filtering is just a near-optimal way to replicate the classifications the user would make, which is also why it works so well.
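
    A bare-bones sketch of that workflow (illustrative Python; the score function stands in for the per-token probability combination described in the article above, and the .9 threshold is the one the article mentions):

        from collections import Counter

        good_words, spam_words = Counter(), Counter()     # the two word databases

        def train(tokens, is_spam):
            """Add one hand-classified mail's tokens to the matching database."""
            (spam_words if is_spam else good_words).update(tokens)

        def classify(tokens, score, threshold=0.9):
            """Mails the filter misfiles are simply fed back through train()."""
            return "spam" if score(tokens, good_words, spam_words) > threshold else "good"
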
  • by delta407 ( 518868 ) <slashdot@nosPAm.lerfjhax.com> on Tuesday January 21, 2003 @02:39PM (#5128231) Homepage
    Also, how can you flag an ad that is an image?
    Razor [sourceforge.net].

    Vipul's Razor marks MIME parts individually, so an ad, a picture of Viagra, or even the "Unsubscribe" button can be marked spam and contribute to the overall score of the message.
  • by notsoanonymouscoward ( 102492 ) on Tuesday January 21, 2003 @02:42PM (#5128262) Journal
    This makes no sense to me... spam to me is primarily 1) friends sending stories, jokes, quizzes, etc., or 2) someone trying to sell you something. Now if we all cc'd everyone on everything, we'd have even more spam by my 1st definition of spam, and it wouldn't affect the 2nd definition at all. How is this supposed to help?
  • by Anonvmous Coward ( 589068 ) on Tuesday January 21, 2003 @02:52PM (#5128340)
    I hope you all realize that at best you're buying time, not solving the Spam problem. It won't take long for these guys to find ways through the filter.

    The problems need to be solved on a different level. The problem is not the messages themselves, it's that people are allowed to send these messages to anybody they want without any real challenges as to their authenticity.

    Let me explain how I have things set up right now, and hopefully my stance on this issue will be a little clearer. All my messages come into the same mailbox. I have a bunch of email aliases, though. If I sign up for Slashdot, for example, then I create a new alias like 'slashdot@insertdomainnamehere.com'. I then add that email address into my 'email allowed' list so that it gets funneled through into a visible folder. If that address gets abused, I shut down the email alias.

    My personal friends are treated a little differently. Once they email me, I add their address into my list of friends, and they get put into a friends folder. I treat this differently than a registration place because my friends all need one address to contact me at, I don't mind them sharing it with each other. If my address changes, then their messages still get through.

    I plan on going farther down the road. I'm going to give people an email address, and when they email it they get an automated message with instructions on how to 'request permission' to send me email. When permission is granted, they don't get that message anymore. It basically means that the only messages that get through to me are the ones that have a human behind them to read the response and then go through the proper channels to reach me.

    I'm not claiming to have done anything new here. I'm basically mimicking the way IM works, and I'm doing it without having to do anything real fancy. Outlook's Rules Wizard is doing quite a bit of the work here. But since people actually have to take the time to request my authorization, it means that it's a message meant for ME as opposed to a message meant for anybody who's out there. With an approach like this, it'd be a lot harder for spammers to get through.
  • Actually - (Score:4, Interesting)

    by sean.peters ( 568334 ) on Tuesday January 21, 2003 @02:52PM (#5128343) Homepage
    You don't speak for everyone. On the contrary, I think that most people realize that e-mail delivery isn't guaranteed - and therefore they expect that truly vital messages will need to be backed up with a phone call or some other means, to be sure the message was delivered.

    I would prefer to lose one or two legitimate mails in return for a virtually zero rate of missed detections.

    Sean
  • by hrieke ( 126185 ) on Tuesday January 21, 2003 @03:03PM (#5128418) Homepage
    Way down in the footnotes:
    [13] It's sometimes argued that we should be working on filtering at the network level, because it is more efficient. What people usually mean when they say this is: we currently filter at the network level, and we don't want to start over from scratch. But you can't dictate the problem to fit your solution.
    This is where the problem of spam will be solved: by having a web of trust between mail servers that sign messages in a manner which makes it easier to backtrack a message. If these servers also do filtering, we kill two birds with one stone. The problems are:
    • CPU intensive
    • Need to look at every message
    • Seeding the filter database
    • Building trust with other servers
    And others of course.
    Think about this one: what does the typical email a porn star gets look like? What we think of as spam might not be spam to someone else.
    How would the system scale?
    And what would stop a spammer from installing a server with a bogus filter database, or just signing off on each message as being legit?

    Perhaps filtering based on each user's personal corpus of valid email is the only workable solution; or perhaps spammers will kill off email as a usable means of communication.

  • by IWantMoreSpamPlease ( 571972 ) on Tuesday January 21, 2003 @03:17PM (#5128499) Homepage Journal
    Because of where I work, I have to use Outlook Express. I know, sucks to be me. OE does have a filter setting so I can at least start putting keywords in and have mail sent to different boxes. I have found that a large fraction (greater than 95%) of the spam sent to me is "personalized", meaning that somewhere in the spam is my name.

    Co-workers, friends, family, don't call me by my name, so I add my name to the kill-filter list and most spam goes bye bye. I only wish OE had an option to kill-filter anything with HTML in it since nearly 100% of my incoming spam contains HTML, sound, images and whatnot.

    I'd love to see M$ get their act together and fix OE and Outlook and include modern filtering techniques (such as discussed in the main article) but I doubt it'll ever happen.
  • by zootread ( 569199 ) <zootread@NOsPaM.yahoo.com> on Tuesday January 21, 2003 @03:27PM (#5128585)
    Simply use a free account for any registration-required sites / internet posting and only check it when necessary to confirm registration. Use another account for regular everyday things, and make sure it isn't something simple like abc123@hotmail.com. I do that and never get spam to my real accounts. This whole spam thing is way overblown.

    Well, that won't work in a lot of cases. I can create an e-mail account on my ISP (Roadrunner) and within hours I am getting spam without having even used it. They must be allowing easy access to the account list. Free accounts are worse (Hotmail, Yahoo): create an account and you're guaranteed to get spam, even if you've kept the e-mail address a complete secret.

    On the other hand, at work, I don't get a single piece of spam because I am careful with the address.
  • by minas-beede ( 561803 ) on Tuesday January 21, 2003 @03:35PM (#5128654)
    "Also, isn't it easy for a spammer to workaround a spam honeypot -- create a hotmail account, add it to your spam list, and verify that it did go through."

    Yes. So far many don't (I don't know of any that do, but spammers do, eventually, stop sending to a honeypot.) Ralsky never caught on to the Moscow honeypot that was whacking him last year (I think he's the one who told Shiksaa - visit NANAE to find out who she is - that SPEWS was killing him, just at the time of the major whacking Ralsky was getting.) (Chuckle.) I looked for spammer dropbox addresses in trapped spam 3 years ago - I figured they'd use the same address every once in a while in the list of victims. I sorted the list of recipients, sorted it again removing duplicates, and compared the two. No differences: each victim showed up once. They could do it, but they don't. Years of experience have taught them that they can test for open relays and abuse them incautiously - nobody does anything to counter them. They think they own the internet because people ignore their attempts to relay. It's easy to knock the smirk off their faces: pay attention to illicit connection attempts.

    There is a project already in motion to collect all recipient addresses for honeypot-collected spam in a central location. If any address shows up too frequently then that's a suspicious address. The real problem isn't what the spammers do or could do, it is that too few people use this very simple method to wreck the spam path.

    My original honeypot went down last week (I retired in 2001; I haven't really checked to see what the current managers are doing with it.) This year I only captured relay messages, delivered nothing. When it went down last week it had captured over 100 relay test messages in January. You can also go after spammers with these (and I did - no results yet to report, I'm hoping for some big results.) Spammers could detect that - but too late.

    There's a sneakier version of what you suggested that the spammers could use. I won't tell them what it is.

    Volume is the key - many honeypots are needed, quickly, to whack them before they adapt. Same for open proxies. It is an absolutely simple approach. You could set up Granny's system to run a honeypot and it would work, if she has a connection to a segment the spammers search for open relays. http://jackpot.uk.net/

    Try Jackpot and see for yourself, if you can.
  • by knobmaker ( 523595 ) on Tuesday January 21, 2003 @03:36PM (#5128667) Homepage Journal
    This whole spam thing is way overblown.

    Maybe having spamtrap addresses works if you only use the internet as a personal communication medium. But what if you run an online business, and need to keep email addresses on your websites?

    That's why antispam technology is important to me.

  • MATH THEORY (Score:2, Interesting)

    by EEgopher ( 527984 ) on Tuesday January 21, 2003 @03:38PM (#5128686) Homepage
    (exhales loudly as he reclines the brown chair)
    Upon reading these extremely fine articles, my mind picks and dances at one particular point, and that is the SIZE of the corpora to use for the training. It seems to me that with infinitely large bodies of training material, both spam and non-spam tokens would have equal chances of being passed or rejected. Even for large (4,000-message) corpora, would you really want to be training with equal numbers of spam examples vs. non-spam examples? It seems to me that the filter could cycle unto itself, giving the word "the" superior priority to "mortgage", and so on and so forth, such that the filter would have learned so many words -- regardless of good vs. bad -- that the filter would again (raises fist to clear throat) turn in on itself; cycle unto its own voidance.
    Does anyone have any ideas on this? If I missed something from the article, such as the "weighting" system he gives to known "good" text (which I still see as being futile at large sample sizes) please inform me.
  • by WatertonMan ( 550706 ) on Tuesday January 21, 2003 @03:43PM (#5128723)
    The big problem with most current spam filters is that they work at the server level or else require an extra "intermediary" pop-like server between you and your regular mail server. This is a problem because they assume a "one size fits all" approach to spam. The problem is that one man's spam is another man's interesting offer. Further, they require the maintainer of the server to continually update the corpus that trains the filter.

    The real fact of the matter is that for most people the hassle is nearly as bad as the spam! I don't want to spend the time setting up such things. And when people have set them up *for* me I get too many false positives, if only because my interests differ from theirs. Thus any filter has to be trained with user data and be trainable in an unobtrusive, easy fashion.

    The only software I know of that does this is Apple's Mail program in OS X. Unfortunately the program has many limitations and annoyances. (Damn that drawer.) However, Apple's approach to spam ought to be followed by all other email clients. Adding Bayesian inference to an email client is very easy. Putting it in the server is a mistake because you *can't* easily click and label an email as spam. As with unfortunately too much Open Source software, the interface has been ill conceived.

  • by bheer ( 633842 ) <rbheer AT gmail DOT com> on Tuesday January 21, 2003 @03:56PM (#5128833)
    The kind who are taken in by the stupidest spam tricks, like the "future spam" he describes (nonsensical but grammatical set of English text designed to slip past Bayesian filters, followed by a URL.) What kind of a moron would click on such a URL?
    No kidding. Here's an example from my mailbox -- Moz's 1.3a spam filter didn't recognize this one. Note that I actually *know* people who write like this IRL.

    Frank

    You've gotta see this website: http://www.geocities.com/lordrings179/

    I downloaded Lord of the Rings: The Two Towers and I'm now watching it on my computer. Picture quality is great and it was tottally free.
    They've got a whole bunch of other games and movies as well. Take a look. Also, please forward this email to anyone you think would be interested.

  • by Kjella ( 173770 ) on Tuesday January 21, 2003 @04:03PM (#5128886) Homepage
    ...at least in any version I've looked at, is a "language" filter. Maybe 90% of the email I receive is in Norwegian, with hardly any spam. Most of my English mail is spam, simply because I have very little legitimate mail in English. Is there a filter that makes a language guesstimate (a la WinXP's "language recognition")? By the way, that function is a major PITA for writing English references in a Norwegian paper.

    Kjella
  • by orthogonal ( 588627 ) on Tuesday January 21, 2003 @04:17PM (#5128988) Journal
    I'd like to experiment with my own anti-spam software; to do so I'd like to be able to modify a pop/smtp proxy.

    Anyone know of a decent GPL'd (BSD'd, MIT'd) pop and smtp proxy coded in C or (better) C++?

    How about one that runs under MS-Windows?

    Thanks.
  • by bergeron76 ( 176351 ) on Tuesday January 21, 2003 @04:19PM (#5129006) Homepage
    I wonder what the implications on the OpenSource community are going to be because of this? Details can be found here [uspto.gov].
  • False positives (Score:4, Interesting)

    by stemcell ( 636823 ) on Tuesday January 21, 2003 @04:55PM (#5129335)
    Has anyone found a Bayesian filter that not only redirects spam into a spam folder but also sorts its history of redirected mail into a probability list, so that it's easy to check the mails that were close to being accepted?

    Of the 4 programs I just looked at, none mentioned this feature, but pretty much everyone complains about periodically having to scan their 'spam' folder for false positives, and a history sorted by probability would make that easier.

    Stemmo
  • Wrong solution (Score:1, Interesting)

    by Anonymous Coward on Tuesday January 21, 2003 @09:45PM (#5131851)
    This only hides the spam from the reader - it doesn't eliminate the cost to ISPs and end users to receive, store, and download it. It also doesn't hide the spam from the truly clueless who actually BUY the junk products that spam usually advertises.

    The right solution is to force ISPs to shut down spammers COLD when they begin getting complaints - many ISPs *STILL* do not do this; some even accept premium payments from spammers to let them continue using their service (whether it be access to actually send the spam, or hosting for websites or reply-boxes so they can collect their cash from the suckers).

    If anyone *really* wants to be educated on some ways that have been proven effective in getting (some) ISPs to clean up their act, here are some URLs:

    SPEWS - Spam Prevention and Early Warning System
    http://www.spews.org/

    ROKSO - Who's behind your spam.
    http://www.spamhaus.org/rokso/

    (Note - I am not affiliated with, nor do I represent, either of these sites - I just happen to agree with their goals and methods.)

    Any questions? See news://news.admin.net-abuse.email - lurk for a week first, then post. If you don't have usenet access or a decent newsreader, you can get there through Google Groups: http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&group=news.admin.net-abuse.email

    In either case, you might want to read the FAQ first: try

    http://www.samspade.org/d/nanaefaq.html
    or
    http://www.spamfaq.net/
