Response to Gordon Cormack's Study of Spam Detection
Nuclear Elephant writes "In light of Gordon Cormack's Study of Spam Detection recently posted on Slashdot, I felt compelled to architect an appropriate response to Cormack's technical errors in testing, which ultimately explain why one of the world's most accurate spam filters (CRM114) could possibly end up at the bottom of the list, underneath SpamAssassin. I spend some time explaining what a correct test process is, and keep my grievances about the shortcomings of Cormack's research simplified."
How I do (Score:5, Interesting)
So, whenever I get a mail that is more than 95% similar to a mail I know is spam, I dump it.
Combined with Apple's Mail.app Bayesian filter, this leaves only a few spams getting through.
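For what it's worth, a minimal sketch of that "95% similar" test, assuming Jaccard similarity over word sets (the measure and the tokenization are my assumptions; the poster doesn't say what their tool actually computes):

    def jaccard(a: str, b: str) -> float:
        """Jaccard similarity of two texts' word sets (0.0 to 1.0)."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if (ta or tb) else 1.0

    def is_near_duplicate_spam(message: str, known_spam: list[str],
                               threshold: float = 0.95) -> bool:
        """Dump a message at least 95% similar to some known spam."""
        return any(jaccard(message, s) >= threshold for s in known_spam)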
Re:How I do (Score:2)
slashdotspam@mydomain.com
If I get mail from there, I know how it came in. The combination of all these keeps my total spam to probably three or four a week.
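The tagged-address trick is trivial to automate, too. A hypothetical sketch, assuming the "site name plus 'spam'" naming convention in the parent's example:

    def leak_source(recipient: str) -> str:
        """Map a tagged address back to the site it was given to,
        e.g. 'slashdotspam@mydomain.com' -> 'slashdot'."""
        local = recipient.split("@", 1)[0]
        return local.removesuffix("spam") or local

    print(leak_source("slashdotspam@mydomain.com"))  # slashdot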
Re:How I do (Score:4, Informative)
Excellent review (Score:5, Informative)
DSPAM, IMHO, provides far better results than this report was leading to. A properly trained Bayes filter run by a somewhat intelligent person provides simply amazing results. I swear I can go weeks on end without a single spam getting through, no false positives -- and between 20 and 100 spams in my "spam" box per day!
DSPAM, using a Bayesian algorithm, is by far the best filtering method I've used. And I've used a lot! (From SpamAssassin to SpamProbe and all the in-betweens.) The only setback: DSPAM takes a couple of weeks to train...
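As an aside, here is a bare-bones sketch of the Bayesian idea these filters build on. Real filters like DSPAM add smarter tokenizers, token ageing, and much more, so treat this as an illustration only, with toy training data:

    import math
    from collections import Counter

    spam_counts, ham_counts = Counter(), Counter()
    n_spam = n_ham = 0

    def train(text: str, is_spam: bool) -> None:
        """Count which tokens appear in spam vs. ham messages."""
        global n_spam, n_ham
        counts = spam_counts if is_spam else ham_counts
        counts.update(set(text.lower().split()))
        if is_spam:
            n_spam += 1
        else:
            n_ham += 1

    def spam_probability(text: str) -> float:
        """Naive-Bayes P(spam | text); assumes both classes have examples."""
        log_spam = math.log(n_spam / (n_spam + n_ham))
        log_ham = math.log(n_ham / (n_spam + n_ham))
        for tok in set(text.lower().split()):
            # Laplace smoothing keeps unseen tokens from zeroing the product.
            log_spam += math.log((spam_counts[tok] + 1) / (n_spam + 2))
            log_ham += math.log((ham_counts[tok] + 1) / (n_ham + 2))
        return 1.0 / (1.0 + math.exp(log_ham - log_spam))

    train("buy cheap pills now", True)
    train("lunch meeting at noon", False)
    print(spam_probability("cheap pills"))  # 0.8 with this toy training set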
False positives. (Score:3, Informative)
This is what I don't get - in order to be sure you have no false positives, you have to comb through all of the spam by hand, which for the most part defeats the purpose of a spam filter. If you don't do so, then you can't claim zero false positives - you can only claim that you haven't _noticed_ any false positives.
I have a whitelist at work, and it works quite
Re:False positives. (Score:2, Insightful)
That is the main reason I don't use any spam filters.
Without a filter I can check emails as they come in, rather than creating homework for myself by having to check 50 messages at once...
Re:False positives. (Score:2)
Re:False positives. (Score:3, Insightful)
Re:False positives. (Score:2)
I had the foresight to save all my junk mail, about 3 years ago. I used it to train the filters when I switched to mozilla, and I hit the ground running with about an 80% rate (trained with about 5000 spam mails and about 800 real mails).
Since then, I have given my email address to any site that asks for it, because I figure the more spam, the better for my filter. This ha
Re:False positives. (Score:2)
It defaulted to something like 10/90 (I don't remember which score was more spamlike, so imagine less than 10 was almost certainly OK, and greater than 90 was almost certainly spam) - I set it much lower for a while (50) until I
Re:False positives. (Score:2)
Re:False positives. (Score:3, Informative)
This is (AFAIK) done against tokens in both the mail body and the headers, which pays dividends if the delivery paths are clustered (for example, if your whole family has accounts with MyISP.com, you'll probably get good filtering provided the spam is
Re:False positives. (Score:2)
I file spam in a spam box as I can easily scan across the contents in 10 seconds and hit delete before I go to bed, as opposed to the distraction when an email arrives and you go to check it i
Re:False positives. (Score:2)
True. But you can check if your false positive rate is low enough by statistical sampling.
So once every few days I scan through a thousand or so items marked as spam by procmail. As long as I continue to find 0 or 1 false positives (which I add to my whitelists), I consider my filters 99.9% good. That error rate is probably better than my own human error rate for misfiling and/or
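That sampling intuition can be made precise. A small sketch using the "rule of three" (a standard statistical approximation; the sample size is just the parent's number):

    def fp_rate_upper_bound(sample_size: int, fps_found: int) -> float:
        """Approximate 95% upper confidence bound on the false-positive rate."""
        if fps_found == 0:
            return 3.0 / sample_size        # "rule of three" for zero events
        p = fps_found / sample_size         # crude normal approximation otherwise
        return p + 1.96 * (p * (1.0 - p) / sample_size) ** 0.5

    # Zero FPs found in 1,000 sampled spams -> true FP rate likely under 0.3%.
    print(fp_rate_upper_bound(1000, 0))     # 0.003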
Re:Excellent review (Score:2, Insightful)
"Excellent counterattack" might be more fitting.
Studies create discussion (Score:5, Insightful)
The benefit of these studies, though, is that, fanatical crap aside, informed people will usually take the time to interpret results or suggest corrections/improvements that actually benefit developers and improve their knowledge base more than any information provided by the actual study.
Re:Studies create discussion (Score:2, Insightful)
For example, I am working on a GIS application. I looked at offerings from ArcView and MapInfo and found that while they do what I need to do out of the box, they are quite expensive and required a license for every seat of my application. So I looked to Open Source. There I found hundreds of tools, none of which did w
Re:Studies create discussion (Score:5, Insightful)
You miss an important point. This is not "our" problem, it's YOUR problem. I don't need a GIS program, and neither do millions of other people. YOU need one, and too bad for you they cost tens of thousands of dollars. You have no right to complain that somebody else hasn't taken the time and effort required to give you a free equivalent.
What you need to understand is that open source is nothing but scratching an itch. This is your itch and you need to scratch it.
OPEN SOURCE ONLY WORKS IF PEOPLE CONTRIBUTE. This very simple and obvious point seems to be lost on most people. You are not supposed to sit around till somebody else does the work and gives you something for nothing. You need to contribute.
You need to start an organization and start raising money to fund an open source development effort, or to accelerate an existing one. You need to get involved and contribute. BTW, bitching on Slashdot does not count as contributing.
"This is like blaming McDonalds for your big, fat ass, or blaming Microsoft because you got a virus when you didn't run the patch they released to prevent it."
Or blaming the open source community because they didn't give you something for free.
Re:Studies create discussion (Score:2)
Which is why Open Source will probably always be for developers by developers. Unless of course the non-developing users decide to contribute cash...
It's sort of like public television. You can sit around and watch it for free, or you can donate and help other people watch it for free. "Your generous donations will make this software free-beer for everyone!"
Hello? (Score:2)
This is the kind of response he was talking about that does no good. Rather, you should acknowledge that the area is weak and that more focus needs to be given there in the future.
(Incidentally, I'm interested in OSS in the GIS field. Any ideas/good pointers? Anyone?)
Re:Hello? (Score:3, Insightful)
You don't have to be a developer. As I said you can start a campaign to ask for donations, you can write letters to companies asking for sponsorship, you can donate some of your own money, you can try to get like minded individuals together to solve the problem.
OPEN SOURCE DOES NOT WORK UNLESS YOU CONTRIBUTE.
" Rather, you should acknowledge that the area is weak and that more focus needs to be given there in the future."
More focus needs to be give
You don't like my software so I'll flame you (Score:2, Insightful)
Information on his professional career was very hard to find on the site.
This just seems like a flame because his software(dspam) didn't perform well in the test.
Re:You don't like my software so I'll flame you (Score:2)
Re:You don't like my software so I'll flame you (Score:3, Insightful)
> Cormack's article.
Articles aren't 'successful' - they're either useful, or they're just fun to read. Perhaps his is the latter.
From the response:
---
It turned out that Cormack was using the wrong flags, didn't understand how to train correctly, and seemed very reluctant to fully read the documentation. I don't mean to ride on Cormack, but proper testing requires a significant amount of research, and research seems to be
Re:You don't like my software so I'll flame you (Score:5, Insightful)
Jonathan, next time:
Re:You don't like my software so I'll flame you (Score:3, Insightful)
I believe the author of the article would have two issues with that assertion.
First off, you can have science about how fast grass grows. You can have science about how many sexual partners a person has. You can have science about how to manipulate people with irrational arguments. Science can be applied to anything that you apply scientific principles to. Science, in a lot of ways, is merely a matter of measuring in a controlled manner and then
Re:You don't like my software so I'll flame you (Score:5, Insightful)
Let me explain why he's irritated, as somebody who has conducted spam filter statistical tests and made publications on the topic.
Yes, it is irritating when somebody demonstrates that his method is better than yours. However, most researchers are able to accept this, and continue improving their own work.
However, what is far more irritating (by an order of magnitude at least) is when somebody "demonstrates" the inferiority of your work, and they do so in a completely scientifically bogus way.
Let me give a concrete example. Suppose you were Galileo. You have just put forth the postulate that all objects fall at the same speed regardless of mass. A "debunker" attempts to demonstrate that this isn't true by dropping an iron ball and a feather. Obviously, the feather falls much more slowly.
"Ha ha, neener, neener!" cries the debunker. Of course, Galileo knows his method is flawed. If people actually listen to this supposed debunker, Galileo might become very, very irritated indeed.
Re:why can't we all just get along? (Score:2)
True, but neither is ignoring his points simply because he had some attitude.
I do think he handled the stress a little poorly.
Re:You don't like my software so I'll flame you (Score:4, Interesting)
This was the most important point, I think, and was buried 2/3rds of the way down:
The emails being 8 months old, heuristic rules were clearly updated during this time to detect spams from the past eight months. The tests perform no analysis of how well SpamAssassin would do up against emails received the next day, or the next eight months. Essentially, by the time the tests were performed, SpamAssassin had already been told (by a programmer) to watch for these spams. [...] What good is a test to detect spam filter accuracy when the filter has clearly been programmed to detect its test set?
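The objection boils down to temporal hygiene: a filter should only be scored on mail that postdates its ruleset. A sketch of that discipline, using the SpamAssassin 2.60 release date cited elsewhere in this discussion (the corpus representation is my assumption):

    from datetime import date

    RULESET_RELEASED = date(2003, 9, 22)   # SpamAssassin 2.60's release date

    def fair_test_set(corpus: list[tuple[date, str]]) -> list[tuple[date, str]]:
        """Keep only messages the ruleset's authors could not have seen."""
        return [(d, msg) for d, msg in corpus if d > RULESET_RELEASED]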
SpamAssassin is good but not that good... (Score:5, Informative)
Point being - I was darn surprised to see SA at the top of his charts.
Now - if only mimedefang would easily use another spam-checker....
Re:SpamAssassin is good but not that good... (Score:2)
Re:SpamAssassin is good but not that good... (Score:2)
We deployed SA on our own internal MX and we have had over 99% accuracy over the past 3 months. Although the Bayes filter is primitive compared to what other advanced filters are doing, with enough training and a bigger token DB, SA works very, very well. Couple that with network checks (i.e., Razor2, Pyzor, DCC) and the system is comparable to the best statistical filters.
Just read it - (Score:2, Informative)
1. Cormack is very inexperienced in the area of statistical filtering. Agreed!!!
2. Cormack went into the testing with many presuppositions. Also Agreed!!
And in case you're not familiar with the word presupposition:
1. To believe or suppose in advance.
2. To require or involve necessarily as an antecedent condition.
Overall, this is a very good articl
Re:Just read it - (Score:4, Informative)
Disagreed. Gordon Cormack has been doing information retrieval for 20 years. He is fairly well known in the area. See his publication history at DBLP [uni-trier.de].
A far more likely conclusion about what's going on here is that Zdiarski's ego has been hurt. Both he and Dr. Yerazunis engage in some very sketchy statistics in their papers and I think that it has caught up to them.
1. Yerazunis' study of "human classification performance" is fundamentally flawed. He did a "user study" where he sat down and re-classified a few thousand of his personal e-mails and wrote down how many mistakes he made. He repeats this experiment once and calls his results "conclusive." There are several reasons why this is not a sound methodology:
a) He has only one test subject (himself). You cannot infer much about the population from a sample size of 1.
b) He has already seen the messages before. We have very good associative memory. You will also notice that he makes fewer mistakes on the second run, which indicates that a human's classification accuracy (on the same messages) increases with experience. For this very reason, it is of the utmost importance to test classification performance on unseen data. After all, the problem tends towards "duplicate detection" when you've seen the data beforehand.
c) He evaluates his own performance. When someone's own ego is on the line, you would expect that it would be very difficult to remain objective.
2. Both Yerazunis and Zdziarski make use of "chained tokens" in their software. This is referred to in other circles as an "n-gram" model. As with many nonlinear models (the complexity of an n-gram model is exponential in n), it is very easy to over-fit the n-gram model to the training data. Natural language tends to follow the Pareto law (sometimes called the 80/20 rule), where the ranking of a term is inversely proportional to the frequency of occurrence of that term. The exponential complexity of the n-gram model contributes to the sparse distribution of text, leading to a database with noisy probability estimates.
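For concreteness, "chained tokens" amounts to emitting word pairs as features alongside single words. A sketch (plain word bigrams; not CRM114's or DSPAM's actual tokenizer): a vocabulary of V words yields up to V^2 possible chains, most seen only once or twice, which is the sparse, noisy-estimate problem described above.

    def chained_tokens(text: str) -> list[str]:
        """Emit single words plus adjacent-pair "chained" tokens."""
        words = text.lower().split()
        return words + [f"{a} {b}" for a, b in zip(words, words[1:])]

    print(chained_tokens("buy cheap pills"))
    # ['buy', 'cheap', 'pills', 'buy cheap', 'cheap pills']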
3. Zdziarski uses a "noise reduction algorithm" called Dobly to smooth out probability estimates in the messages. Aside from his unsubstantiated claim of increased accuracy, I have never seen anything to suggest that it actually works as advertised.
Considering these points, I was not surprised at all by the results of Dr. Cormack's study. While one may argue that his experimental configuration can use some improvement, his evaluation methods are logically and statistically sound. What I personally saw in the results of this paper was that two classifiers that use unproven technology did not perform as advertised. After all, every other Bayes-based spam filter performed acceptably well.
Lastly, I won't really touch his flawed arguments about how using domain knowledge about spam (i.e. SpamAssassin's heuristic) somehow hinders the classifier over time when you are also using a personalised classifier. You'll notice that SpamAssassin still did acceptably well when all of the rules were disabled.
Go read some more of Zdziarski's work and draw your own conclusions. Pay careful attention to his use of personal attacks when comparing his filter to those of others.
Re:And to that... (Score:4, Insightful)
And to that, I would say... Someone writing an article for publication in a peer-reviewed journal should become experienced in their area of research before attempting to publish their results!
For example, I'm sure you don't have much experience with Nuclear Magnetic Resonance imaging, and you might or might not have experience with X11 forwarding. But unless you are fluent with both of those topics, I would not expect you to attempt to publish a paper in a peer-reviewed journal discussing those topics!
(Like I did, last December [wisc.edu])
However, for the sake of presenting some evidence to back up what I'm saying here, I'll take your example of Consumer Reports.
From their site: CR has the most comprehensive auto-test program and reliability survey data of any U.S. publication; its auto experts have decades of experience in driving, testing, and reporting on cars.
I'm not saying we wouldn't get our hair mussed... (Score:3, Funny)
I wouldn't take this critique too seriously (Score:5, Interesting)
There are several warning signs in this article.
That said, he does raise a few valid points, such as the timeline:
Re:I wouldn't take this critique too seriously (Score:5, Interesting)
I also have to say that my experience was much more along the lines of Cormack's. I've tried DSPAM for a while on my server, starting from scratch, training on error with only new emails, on a small mail server with about 10 users of different types (geeks, businesses, moms, etc.).
- DSPAM took way too long to produce any kind of results
- 2,500 emails before advanced features kick in is *a lot* for the average soccer mom
- DSPAM produced way too many false positives early on
- The spam filtering accuracy leveled off at about 80% (number from DSPAM's web interface)
So this is not another overzealous CS student here, but real-world testing.
The DSPAM author does not address any of the real points and just rags on Cormack.
Not much of a "rebuttal" in my book.
Re:I wouldn't take this critique too seriously (Score:3, Funny)
But does it bode well or ill?
Re:I wouldn't take this critique too seriously (Score:2)
Thank you, that is all.
Re:I wouldn't take this critique too seriously (Score:2)
You ignore the change in relative accuracy.
Assume for example that SpamAssassin is in fact the best around, but it has a 10% false spam rate. Every other program is slightly worse, with an 11% false spam rate, always making the same mistake that Spa
What is typical (Score:4, Insightful)
I'm not happy about this: first he says that this account has an abnormally high spam ratio, and then says that a normal user can have 60%. Where do we get these figures from, I would like to know, as my average is pushing up against 100%. I don't think there is such a thing as an average user; some people seem to get nearly no spam, and the rest of us get almost nothing but spam.
Reviewing today's inbox reveals around 200 emails, of which 8 were legit. You do the maths; I would be making progress if it were only 81%.
To cut through the spam (Score:5, Insightful)
His main points (at least the ones I agreed with):
1. No training period: many features only turn on after lots of real emails have been processed. Fair enough.
2. No purge window: stale emails get purged over time (e.g. 4 months), but in a test everything is shoved through at once (in minutes), so nothing gets purged. Again fair.
The rest of it complains about the tester, or complains that the conditions and settings were less than ideal for the particular filter.
We call that 'the real world' here.
Sys admins are not experts in configuring filters.
Also, he should realise that any new filter gets a better rating than the dominant filter. Spammers try to defeat the most popular filter of the day, so a new filter might perform better than the existing one *initially* simply because the spammers are targeting the dominant filter, not the new one. Until it becomes dominant itself, and then the spammers adjust their spam to defeat it.
So in the real world the data set will always be unusual because the spammers make it that way.
Re:SA vs SA... SA Wins! (Score:2)
I read that bit, but in Cormack's words:
"The test sequence contained 49,086 messages. Our gold standard classified 9,038 (18.4%) as ham and 40,048 (81.6%) as spam. The gold standard was derived from X's initial judgements, amended to correct errors that were observed as the result of disagreements between these judgements and the various runs."
From this I am left with the impression that X was the judge, not Spam Ass
Main issue (Score:2)
If this is true then to me this is a critical flaw in Cormack's methodology.
Not saying there are, or aren't other flaws. But this to me is the main one to consider. Zdziarski should have just put this at the top of his response, instead of putting a lot of waffle about stuff that does "not appear to have been a problem with Cormack's tests".
But is it correct? (Score:2)
"The test sequence contained 49,086 messages. Our gold standard classified 9,038
(18.4%) as ham and 40,048 (81.6%) as spam.
The gold standard was derived from
X's initial judgements, amended to correct errors that were observed as the result
of disagreements between these judgements and the various runs."
From this I got that:
1. He had an initial set of Spam judged by person X. (e.g. 99.84% accurate).
2. That he ran it through each test filter
Why not... (Score:2)
Re:Why not... (Score:2)
Constructing arguments (Score:5, Informative)
This is something that is featured throughout the rebuttal - an argument that runs:
a) Such and such was done incorrectly
b) Therefore the system was inaccurate
c) Therefore CRM-114 is better than stated
The ultimate point where I lost patience was when he claimed that the results were invalid because they didn't conform to accepted, real-world knowledge. The study was empirical; it shows something, based on how it was set up; and what it shows is valuable. If you discarded results each time they contradicted agreed wisdom, we would still believe in a geocentric universe.
Re:Constructing arguments (Score:3, Insightful)
The ultimate point where I lost patience was when he claimed that the results were invalid because they didn't conform to accepted, real-world knowledge. The study was empirical; it shows something, based on how it was set up; and what it shows is valuable.
But without knowing how the test was set up, how can you trust the test's so-called empirical results?
In medicine, research results aren't generally trusted unless 1) the study was sound, e.g., double-blind and 2) a separate team has recreated equi
Anyone got Gordon's email addy? (Score:2, Funny)
POPFile OTOH (Score:4, Informative)
No need to read a study, or even the author's opinion. No wild claims made, just real data.
Here it is:
http://www.usethesource.com/popfile_stats.html
Shows that POPFile has an _average accuracy_ over all users, including the training period, of 95%. After it's seen 500 emails it has an accuracy of 97%. And the average POPFile user has 5 categories of classification.
John.
DSPAM (Score:2, Interesting)
If you want to talk about the results from a single filter in my current arsenal, I would give DSPAM the highest marks. I found it to catch more spams than a trained and customized SpamAssassin with no false positives. It's also very fast,
Obfuscated Hyperverbosity (Score:2)
I'd advise the author not to use the word "percept", because he doesn't know what it means.
I'd advise the author not to use the word "someodd", because dictionary.com doesn't know what it means.
As for "very unique"...
The problem w/ Bayes (Score:3, Informative)
Re: Response to Gordon Cormack's Study of Spam (Score:3, Funny)
What'd you say?
Cormack?
Nevermind...
SpamAssassin validation telling point (Score:2)
OBVIOUSLY, SpamAssassin is going to agree with SpamAssassin being the best.
What the test really did was determine how close to SpamAssassin the other spam detectors were, not how good they were at detecting spam.
Atypical, high volume of traffic? (Score:3, Informative)
Maybe I'm a hardcore geek, but I do do exactly what Gordon does -- have several accounts feeding a `master' mail account, using addresses I've owned for over a decade. I also post to Usenet and mailing lists with my unobfuscated mailing address -- I want people to be able to reach me, and I refuse to let the spammers take that away from me.
And I think I'm very sane, thank you.
I agree. That's an absurdly *small* amount. I personally receive over 1500 spams/day -- so I'd have 49,000 in under a month. Obviously the amount of spam I receive is because I set myself up as a target, but I'm hardly the only one. Even Jonathan's email address is clearly listed on his page, unobfuscated, so he's doing it too, at least to some degree. (As a piece of anecdotal evidence, SpamAssassin catches all but about 4/day of the spams I get, and false positives are extremely rare. Of course, I have spent a good deal of time tweaking SA to work best with my email, and it now works very well.)
That sounds fine in theory, but in practice it's hard to do. How many people from all non-geek walks of life save *all* their email, including spam, and are willing to give it to you so you can analyze it? And merely capturing all their email won't do it -- they need to categorize it for you, because they're the only ones who can reliably decide what's spam *for them* and what's not.
I do agree that the study had more than its share of issues, but this critique goes way over the top.
Crap writing (Score:3, Insightful)
English does evolve, and good writers sometimes repurpose words to great effect. Alas, judging by the rest of the reviews here, our hero is NOT a good writer -- having built a shoddy and ramshackle outhouse, he proudly crowns himself the architect of it.
As for all those people who shout "prescriptive grammarian!", I often suspect they're just too lazy to learn to write well, and have decided that claiming that rules are passe is an effective workaround.
an important consideration left out (Score:2)
Contrast this with the effectiveness of RBLs, which block spam based on the source and immediately cut off the huge resource requirement needed by these "filters".
By
RBL (black lists) do not help with zombie systems (Score:3, Insightful)
I have noticed that black lists are indeed effective. Many spammers now use "bullet proof" spam hosts, so they use static domain names. However, there has been a marked rise in zombie systems sending spam. These are systems that are infected by viruses and then used as spam hosts. Since these systems come on line rapidly (when they are infected) and then drop out (when they are cleared of the virus or booted off their ISP), it seems unlikely that black lists will help.
At least in the spam stream I s
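For reference, an RBL check is mechanically just a DNS lookup, which is why it is so cheap, and also why it can't keep up with short-lived zombies. A sketch (the zone name is illustrative only):

    import socket

    def is_listed(ip: str, zone: str = "sbl.spamhaus.org") -> bool:
        """True if `ip` appears in the given DNS blocklist zone."""
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # any A record means "listed"
            return True
        except socket.gaierror:
            return False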
Cormack and Lynam re Zdziarski's factual errors (Score:5, Informative)
We encourage interested parties to read our paper [uwaterloo.ca] and our points of fact re Zdziarski [uwaterloo.ca].
Thomas Lynam
Gordon Cormack
June 24, 2004
Re:Cormack and Lynam re Zdziarski's factual errors (Score:2)
Re:Cormack and Lynam re Zdziarski's factual errors (Score:2, Insightful)
"We shall not respond" -- huh? Pull the log out of your ass guys. Like it or not, he's got legitimate beefs with your study. What's more, he's got cred: dude puts SERIOUS effort into GPL'd software that helps people, so his input is relevant and valid. Get over it.
Besides, his questioning o
Collaborative filtering? (Score:2)
I am always confused by the omission from these tests of collaborative filters like Cloudmark's SpamNet [cloudmark.com], which I have used at work for a long time with a very high "catch" rate, no real processing time, and no false positives. Essentially, it hashes every email you get and checks with the server. If you get a spam, you right-click and report it as such. Then it pulls from your inbox any messages which enough credible people have marked before you. (A gross oversimplification, but close enough.)
I feel li
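Taking the parent's admitted oversimplification at face value, the scheme looks something like this sketch (not Cloudmark's actual protocol; their hashing is fuzzier than a straight SHA-1, and the threshold is invented):

    import hashlib
    from collections import Counter

    reports = Counter()        # stands in for the shared server-side database
    THRESHOLD = 3              # "enough credible people"

    def digest(body: str) -> str:
        return hashlib.sha1(body.encode("utf-8")).hexdigest()

    def report_spam(body: str) -> None:
        """A user right-clicks and reports a message as spam."""
        reports[digest(body)] += 1

    def is_reported_spam(body: str) -> bool:
        """Pull the message if enough users have already reported it."""
        return reports[digest(body)] >= THRESHOLD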
CRM114 is impossible to get installed (Score:3, Insightful)
the corpus was *not* classified by SA alone (Score:5, Informative)
My $.02. disclaimer: I'm one of the SA developers.
"The Corpus was Classified by SpamAssassin, for SpamAssassin", and "The Accuracy of the Test Subject's Corpus is Questionable":
No, this is incorrect. Firstly, he states that he used user feedback to reclassify FNs and FPs (p. 4).
The misunderstanding probably comes from p. 6, where he notes that he also ran SpamAssassin 2.63 over the "gold standard" corpus once it was complete, to verify his original classifications.
However, in addition to that, he states 'all subsequent disagreements between the gold standard and later runs were also manually adjudicated, and all runs were repeated with the updated gold standard. The results presented here are based on this revised standard, in which all cases of disagreement have been vetted manually.' So in other words, the "gold standard" should be as near as possible to 100% accurate, since all the tested filters and the human classification have "had a shot" at classifying every mail, and the human has had final say on every misclassification.
In other words, if any misclassifications remain in the "gold standard" corpus, every one of the tested filters agreed on that misclassification.
IMO, that's as good as a hand-classified corpus can get.
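In sketch form, my reading of the adjudication procedure described above (the filter and human-judge callables are hypothetical stand-ins):

    def build_gold_standard(corpus, filters, human_judge):
        """Iteratively hand-adjudicate every filter/gold-standard disagreement."""
        gold = {msg: human_judge(msg) for msg in corpus}   # X's initial judgements
        changed = True
        while changed:                 # repeat runs against the updated standard
            changed = False
            for f in filters:
                for msg in corpus:
                    if f(msg) != gold[msg]:
                        verdict = human_judge(msg)         # human has final say
                        if verdict != gold[msg]:
                            gold[msg] = verdict
                            changed = True
        return gold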
"old versions of software were used":
It's unrealistic to expect the author to use the most up-to-date versions of filters available by the time the paper is made available to the public. That's the difference between results and a paper -- it takes time to analyze results, write it up and come to valid conclusions, once the testing results are obtained. IMO, the author can't be faulted for spending some time on that end of things.
Given that, using 6-month old release versions of the software under test seems reasonable.
SpamAssassin 2.60, the last release in which new rules were added to a shipped ruleset, is 9 months old (released 2003-09-22); so logically, in testing against DSPAM 2.8 (released 2003-11-26), DSPAM should have had the edge. ;)
"test started with untrained filters":
IMO, that's the real world. People don't start with fully-trained filters.
In addition, the graphs on pp. 15-20 show accuracy over the course of the entire 8 month period, so "post-training" accuracy can be viewed there.
"spam in the test is as old as 14 months":
Nope, he states (p. 4) that the corpus uses mail between August 2003 and March 2004.
"it should purge old data":
SpamAssassin purges its Bayes databases automatically, based on the age of messages in the corpus. We call it "expiry".
In that test, the "SA-Standard" dataset would be using this, so stating "Cormack did not perform any purge simulation at all" is not accurate. However, that would not have increased SpamAssassin's accuracy figures: we have generally found that while expiry keeps Bayes database sizes and memory overhead down, it marginally reduces accuracy rather than increasing it (at the default settings).
(It's also worth noting that SpamAssassin can deal with being run as an en-masse check over a static corpus, since it uses the timestamp information in the Received headers rather than the current system time. So even if this test was run in the course of 4 hours, it'd still be an accurate simulation of what would happen in "real world" use over the course of 8 months.)
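A sketch of that Received-header trick (SpamAssassin itself is Perl; this just illustrates taking the message's own timestamp instead of the wall clock):

    from email import message_from_string
    from email.utils import parsedate_to_datetime

    def message_time(raw_message: str):
        """Date stamp from the topmost Received header, not the system clock."""
        received = message_from_string(raw_message).get("Received", "")
        # In a Received header, the timestamp follows the final semicolon.
        return parsedate_to_datetime(received.rsplit(";", 1)[-1].strip())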
And finally, what Henry said in comment 9520473 [slashdot.org].
--j.
DSPAM. (Score:2)
I use and recommend DSPAM. Many of the accounts that are aggregated in my inbox have been exposed on the web and in Usenet for several years, so my spam load is probably about as high as anyone else's. No comparison testing analysis can change the fact that my inbox
Re:Architect is not a verb. (Score:2, Insightful)
Haven't you ever Googled something? Haven't you ever input data into a computer? (The use of the word input as a verb is, of course, the result of verbing, and it's now considered acceptable usage.) In recent years it has become common in English to "verb" nouns. In fact, I just did it. English, like any other language, evolves over time.
To deny this fact makes you just another prescriptivist language maven, completely disconnected from reality and any sens
Re:Architect is not a verb. (Score:3, Insightful)
Re:Architect is not a verb. (Score:2)
Haven't you ever input data into a computer?
Why is the readability of that sentence poor?
Re:Architect is not a verb. (Score:2)
But "verb" isn't a verb, it's a noun! You can't "verb" something, or go around "verbing" things...check it out here [reference.com].
Re:Architect is not a verb. (Score:2)
Sure you can, but verbing weirds language.
Daniel
Re:Architect is not a verb. (Score:2)
Re:Architect is not a verb. (Score:2)
Then the Germans had better stop joining words together however they please. It creates these big, long words which are incomprehensible to non-native speakers. They seem to do it willy-nilly!
That's what you're saying. Right?
It's elitism, nothing less.
Prescriptivism is the only thing elitist here.
Re:Architect is not a verb. (Score:2)
Re:Architect is not a verb. (Score:2)
I wouldn't mind verbing so much if the right usage hadn't been drilled into me as a kid...but "verbing" the word "architect" is not a language advancement. It's a sloppy shortcut normally used in buzz-speak (that's why you almost never hear it in everyday English, but so often in computer- and business-related fields). It's ambiguous and makes English even more difficult to understand than it is already. The fact that enough people complained about it for this thread to occur shows that in fact, it is not "
Verbing is not a verb (Score:2)
The language evolves, but slowly as everyone needs to be able to keep up. This is the problem with Open Standards: creating a stable API can sometimes slow or stifle innovation
Phillip.
Re:Verbing is not a verb (Score:2)
How do you think those usages get placed in dictionaries? They don't fall from the sky. The noun "input" got verbed. And the use of "verb" as a verb will also eventually be accepted in the dictionary.
Acceptance in the dictionary and acceptance in actual usage are two distinct things, however.
Re:Verbing is not a verb (Score:2)
Nope; in fact, the verb "input" got nouned. The first known use of "input" as a verb in the context of computers was in 1946; the first known use of "input" as a noun in the same context was in 1948.
Outside of the specific case of computers, the difference is even more distinct, with the verb "ynputt" pre-dating the noun "input" by almost four hundred years.
No. (Score:2)
Re:Verbing is not a verb (Score:2)
Re:Architect is not a verb. (Score:2)
If you mean being proud of knowing that "architecting" was not even close to being the right word, then I'm proud, sure.
Language does evolve over time, and new words do come into usage. But how does that make picking words at random, and using them instead of already existing, perfectly adequate words, anything other than pointless, unclear, and pretentious?
To deny this fact makes you just another prescriptivist language maven, completely disconnected from reali
Re:Architect is not a verb. (Score:2)
Re:Architect is not a verb. (Score:2)
Alright, genius: did he mean "write" or "design"? And why would using one of those not have been an appropriate choice?
TWW
It could be worse... (Score:2)
Re:It could be worse... (Score:2)
Re:Is that what your mom worded (Score:2)
Secondly: your second posting is not only off-topic, but also insulting and pure flaming.
Re:Why use "architect" - why not "write" (Score:2)
You're being purposefully dense.
To architect a response would imply careful consideration, artistic presentation, and stunning aesthetics. I don't necessarily agree that that's what he's done here, but obviously that is what he meant to convey with his choice of words.
And if you disagree with verbing words, you had better stop "inputting" data into a computer, or "Googling" for answers, or "bookmarking" links, or "
definitely curious and also concerned (Score:2)
I feel the Plain English Campaign [plainenglish.co.uk] offers a useful guide: "We define plain English as something that the intended audience can read, understand and act upon the first time they read it." So, perhaps you are right for the majority of people. But I had to pause a while and think about what michael meant.
I
Re:Why use "architect" - why not "write" (Score:2)
Re:Why use "architect" - why not "write" (Score:2)
TWW
Re:Confirmed: Architect not a verb (Score:2, Offtopic)
The World-Wide Web search engine that indexes the greatest number of web pages - over two billion by December 2001 - and provides a free service that searches this index in less than a second.
The site's name is apparently derived from "googol", but note the difference in spelling.
The "Google" spelling is also used in "The Hitchhikers Guide to the Galaxy" by Douglas Adams, in which one of Deep Thought's designers asks, "And are you not," said Fook, leaning anxiously
Confirmed: Architect IS a verb (Score:5, Informative)
The use of "architect" as a verb isn't even recently invented: Keats wrote "This was architected thus By the great Oceanus" in 1818.
It's a decent paper, but take it with some salt... (Score:5, Interesting)
Re:????? Did you even... (Score:2, Insightful)
Read the article, then post!
There's really very little to be said in favor of Jonathan A. Zdziarski's "defence?"
Now, I could start posting how ignorant that statement is, but then I'd just be rewriting Zdziarski's article. Cormack's entire test was flawed - he used SpamAssassin (95% accuracy) to create his 'ham' corpus. He used software versions that were 6+ months old. Even the email address he used for testing is highly atypical! (He uses an address that he's had for 20+ years;
Re:????? Did you even... (Score:2)
Re:Special Pleading (Score:2)
You must have been skimming very badly. I read it, and this kind of argument was never used at all. Basically, he pointed out flaws in the way the test was set up that biased it towards SpamAssassin. Particularly that the test was started with untrained filters, and that the version of SpamAssassin's ruleset used was more recent than the messages
Re:architect (Score:3, Insightful)
Languages are dynamic, not static. If enough people begin to use 'architect' as a verb, then it is a verb. I have a strong hunch that 20 years from now, the verb form of architect will appear in Merriam-Webster...
Re:architect (Score:2, Funny)
Ya.
And for the love of Howard Phillips Lovecraft, "Cthulhu" is not spelled "Cthulu".
Duh.