Cornell Software Fingers Fake Online Reviews

Eric Smalley writes "If you're like most people, you give yourself high ratings when it comes to figuring out when someone's trying to con you. Problem is, most people aren't actually good at it — at least as far as detecting fake positive consumer reviews. Fortunately, technology is poised to make up for this all-too-human failing. Cornell University researchers have developed software that they say can detect fake reviews (PDF)."
This discussion has been archived. No new comments can be posted.

  • by kimvette ( 919543 ) on Tuesday July 26, 2011 @06:15PM (#36889438) Homepage Journal

    This topic is TOPS! It is a beautiful summary!!! I highly recommend it!!!

  • by Anonymous Coward

    They also gave hints on how to defeat it.

  • by jamesh ( 87723 )

    if (review == negative && product_made_by == someone_we_received_cash_from)
        review_fake = true
       

    • I'm confused. Shouldn't it be review == positive?
      • Re: (Score:2, Informative)

        by Dunbal ( 464142 ) *
        You want good reviews from people you received cash from to be declared fake by your algorithm? Don't quit your day job...
        • by Flush1 ( 2356344 )
          Umm, I believe that logic means: if someone else's review is negative, and that product is made by a company who paid the algorithm creator money, it's fake by default... duh. As in, it only flags it as a fake review so that the negative review is illegitimate? L2read :)
          • by Plombo ( 1914028 )
            The equation in the original post says that, but the post he replied to (incorrectly) says the equation should be different.
        • I see now. I misread "they" for "me", so I interpreted the joke from the perspective of lazy scientists instead of unscrupulous scientists. For instance, the following might be a lazy scientist's code:

          if (review == positive && product_made_by == someone_they_received_cash_from)
              review_fake = true
  • I picked out the fake review immediately.

    • I did too! It's not hard to spot fake reviews!! I'll be sure to spot more in the future!

      It's easy - fake reviews are overly enthusiastic, and some of them go as far as to slam other brands. Real positive reviews are usually more sedate, with little over-use of exclamation points!! and usually point out shortcomings of a product. Also, companies who astroturf usually submit multiple reviews, usually in the same wording and typing style, which makes the fake reviews stick out even more. That's why
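The heuristics this commenter lists (over-enthusiastic punctuation, the same wording recycled across reviews) are easy to automate in a crude way. A minimal sketch in Python; the thresholds and the similarity measure are arbitrary choices for illustration, not anything from the Cornell paper:

```python
import difflib

def exclamation_density(review: str) -> float:
    """Fraction of characters in the review that are exclamation points."""
    return review.count("!") / max(len(review), 1)

def near_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """Flag two reviews whose wording is suspiciously similar."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def red_flags(review: str, other_reviews: list[str]) -> list[str]:
    """Return the (purely heuristic) warning signs this review trips."""
    flags = []
    if exclamation_density(review) > 0.02:  # arbitrary cutoff for illustration
        flags.append("overly enthusiastic punctuation")
    if any(near_duplicate(review, o) for o in other_reviews):
        flags.append("wording duplicated across reviews")
    return flags

reviews = [
    "This hotel is AMAZING!!! Best stay EVER!!! Highly recommend!!!",
    "Decent room, friendly staff, but the wifi kept dropping.",
]
print(red_flags(reviews[0], reviews[1:]))  # the punctuation flag fires
print(red_flags(reviews[1], reviews[:1]))  # no flags
```

Checks like these catch only lazy astroturf; as the replies below this comment point out, a careful shill defeats them trivially.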

      • by v1 ( 525388 )

        fewer astroturfers invest the time in complete reviews

        I look for reviews that include both positives and negatives. Nothing is perfect, and most good reviewers will find at least one drawback or limitation in the object being reviewed. Reviews that are nothing but positives I don't give a lot of credit to - even if they are real, they're probably not being objective.

        I know anytime I write a review, I give a quick breakdown, hit the positives, hit the negatives, and close with a summary. (a lot of professio

        • by pipedwho ( 1174327 ) on Tuesday July 26, 2011 @07:35PM (#36890040)

          If I were being paid to astroturf a product, I'd prepare a number of generally positive and a few 'negative' reviews that pointed out a few flaws in the product. Of course those would only be flaws that the majority of people will either not care about, or will see as positive to themselves. The idea is to seed a small element of trust in the product, and take away the feeling of blind risk from the potential customer.

          In some of those reviews I'd also mention another product or two that I felt were 'better', again only in some specific way that most people wouldn't care about. This seeds an element of legitimacy to the product, especially when comparing it against known good competition. The idea is not to get 100% of people thinking your product is the best, but to catch a large number of customers that would have otherwise never even considered your product.

          The third sneaky thing that I'd do would be to review a competing product or two with a very minor mention of the product I'm being paid to astroturf. In that review I'd be generally positive, while at the same time throwing in a few questions that seeded some doubt and uncertainty about some seemingly important aspect of the competitor's product that my actual product reviews (and legitimate advertisements) had covered as being fully supported. In this way, the person 'researching' the general class of product is likely to do some further research into 'my' products - specifically searching for these 'missing' or 'doubtful' aspects that I've alluded to.

          The above tactics are readily seen across the board in general advertisements where a company will harp on about some new 'feature' that they have - especially a feature that competing products don't officially advertise or mention - thus implying that the feature does not (or may not) exist in those other products. In reality the feature is a straw man style argument that compares apples/oranges in a way that less than 10% of the potential market would see as an invalid or incomplete comparison.

          How does anyone know that there aren't people out there readily using any or all of the above tactics?

          • These are effective strategies that have been in use and working for some time. There have been a few refinements in the past decade. People often skip the middle of the review or zone out in the middle. So when you're 'turfing a product with fatal flaws like for example a phone that can't multitask, has no apps and can't even set your "I like big butts" MP3 as a ringtone for your amply so endowed love interest like other phones can, that's a good spot to pretend those slights don't exist and instead go on about s

            • So when you're 'turfing a product with fatal flaws like for example a phone that can't multitask, has no apps and can't even set your "I like big butts" MP3 as a ringtone for your amply so endowed love interest like other phones can, that's a good spot to pretend those slights don't exist and instead go on about some minor flaw that absolutely nobody could care about before dismissing it as a minor issue not worthy of subtracting a star.

              It's interesting you point out well-known issues in the original iPhone that you feel are 'fatal flaws' in one of the most successful single products on the market at the time. By extension of its success, I'd say those features were significantly less important to a huge majority of Apple's target audience, and were in fact the minor and not the major flaws.

              Now imagine that Apple's advertising agency had a number of astro-turfers out there specifically seeding those 'flaws' into general discussion alon

              • Congratulations - you've outed yourself. You're who we're talking about. That wasn't hard. For your sake I hope this is a throwaway account. I've friended you here and I hope all my friends will too, so we can see the nonsense you post. Now that you're stuck in it, do you see the trap? In hindsight it should be obvious. Those were features iPhone lacked about a billion phones ago, when they faced competition that didn't have those features, or lacked other critical features iPhone did have. They're al

                • Dude, feel free to hate Apple. And hate me too if you want.

                  But, I have no affiliations with Apple, Microsoft, Google, etc, and I don't work in marketing or advertising.

                  Yeah, WM7 phones had the same issues at launch, but I used the original iPhone as a counter-example because history has shown that at the time it was incredibly successful despite those things that you claim are major flaws. Those were all issues that were regularly echoed in this forum and on many other internet blogs and boards. Yet, people

          • Flawless astroturf is indistinguishable from hard work. It's certainly possible, but is it actually cheaper than not making a crap product in the first place? The makers of flawless astroturf are unlikely to be employed by Motel 69. I've heard it presently costs somewhere north of $500m to get a new drug approved by the FDA. The dreadnoughts of Amazonia are overstated.

            I've argued several times recently for the virtues of pseudonymous pluripotency and against the consolidated identity of Google+. But t

            • by epine ( 68316 )

              The Slashdot comic-box gods are on to me:

              How sharper than a serpent's tooth is a sister's "See?" -- Linus Van Pelt

              One instance of the word "sample" in my post above could perhaps have been rendered as "wind-up" instead, rife with puns and elisions ... and Charlie Brown's pitching elbow. Makes for a bad sentence. The sibilants of Spain are mainly the same. Is that a linguistic fact? I wonder.

          • I think that post pretty much encapsulates the political process in any democracy.
            Trash the apples in your opponent's plan and present the oranges of your own plan as apples.
    • by Dthief ( 1700318 )
      yes, because no one used the hotel name at the end of a sentence in the review... it was a bad example
    • FYI, the article is wrong, both are fake. It's in an article, duh. If it has too promotional a tone, if there are buzzwords. If the spelling and grammar are too perfect.

      Oh yeah... honest reviews of sufficient length always have unrelated details. Some promotional slob in an office somewhere isn't going to go off topic. Ya know, crap like "I treated myself to the Sopranos when I stayed there!" or "The maid was hot".

      So, if you're like me, and into writing fake reviews...(yeah yeah, I know). don't capit
    • On the other hand you had a 50-50 shot to start with and had the advantage of knowing that one was fake.
       

    • by Anonymous Coward

      Now they have to make it recognize fake profiles on dating sites (grin)

  • It's an "undecidable" problem. Think for a moment how you'd define "fake".
    • It includes reviews by anyone who has never used the product. For example, this Zenith watch [amazon.com]. I won't debate the hilarity of the reviews, but no doubt they are fake.

      I think the "undecidability" factor comes into play in any machine learning problem. For example, which email is spam and which isn't? But significant strides have been made to solve that problem and I don't see why online review system is any different.
      • This is Spam! Citi, on behalf of Sears, has decided to spam me daily with notifications of my balance due. This despite my "opt-in" to be notified monthly and the fact that my "opt-in" request was working "once a month" until last week. And, no, I am not overdue, late, in arrears or in any other negative Citi space! I did bring this spam to Citi's attention. They replied that my email was forwarded to "someone who doesn't give a flying fuck" although, not in so many words! And, the next "higher-up t
    • I think it incorrectly states "fake" where it should really say "automated".
      • I think it incorrectly states "fake" where it should really say "automated".

        I would say that it's more of a conflict of interest than fake or automated. The reviewer may be a real person and he or she may have actually owned/consumed/used the product being reviewed. The issue is that the reviewer is in the employ of the product manufacturer or marketing agency hired by the manufacturer.

        • Yes, but if you look at the reviews made by actual people employed by the product manufacturer, then it is a stupid project - there is no way a system would be able to differentiate between a real and a fake review written by a human. This only makes sense as a means of detecting automatically generated reviews.
    • Let's be thankful machines aren't good at this yet.
    • by arth1 ( 260657 )

      It's an "undecidable" problem. Think for a moment how you'd define "fake".

      Do you mean fake as in promises, or fake as in breasts?

    • How about untruthful? It's not decidable in the provable mathematical sense, but they appear to have a statistical classifier system.

      As for how they defined "fake", they went to Mechanical Turk and deliberately paid people for "fake" reviews. As this is one particular behavior we wish to detect and punish, I don't care about the ontological arguments. The real problem is whether this can cope once shills use it to tweak their bullshit until it "passes".
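The approach described here is standard supervised text classification: label a corpus (Turk-solicited fakes versus presumably real reviews) and train a classifier on word features. A toy version with a hand-rolled unigram Naive Bayes; the miniature training corpus is made up purely for illustration and stands in for the Turk-labeled data:

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class NaiveBayes:
    """Unigram Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.counts = {label: Counter() for label in set(labels)}
        self.priors = Counter(labels)
        for doc, label in zip(docs, labels):
            self.counts[label].update(tokenize(doc))
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, doc: str) -> str:
        def score(label):
            total = sum(self.counts[label].values())
            s = math.log(self.priors[label])
            for w in tokenize(doc):
                s += math.log((self.counts[label][w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.counts, key=score)

# Made-up miniature corpus; a real system would need thousands of examples.
docs = [
    "absolutely amazing perfect hotel best ever",
    "amazing wonderful perfect best stay ever",
    "room was clean but the elevator was slow",
    "decent stay though breakfast ended too early",
]
labels = ["fake", "fake", "real", "real"]
clf = NaiveBayes().fit(docs, labels)
print(clf.predict("best hotel ever amazing"))       # leans "fake"
print(clf.predict("the room was clean but small"))  # leans "real"
```

This also makes the poster's last worry concrete: since the classifier is just word statistics, a shill with query access can keep rewording until their text scores as "real".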

  • They tested it on Slashdot's Packt book reviews!
  • by kabloom ( 755503 ) on Tuesday July 26, 2011 @06:37PM (#36889580) Homepage

    I think you want the link for the paper [aclweb.org], rather than the slides.

    • If you are one of the millions suffering from subluxation a slide show would be the least of your concerns.
  • If this catches on it will just make the fake-reviewers work harder, but it won't, by itself, stop them.

    Reputation-based reviews are probably coming. Of course, it's possible to create a fake reputation if you plan ahead of time.

  • The bubbly-over-the-top sound of the review on the right screamed "this is not the opinion of a person exercising sober judgment."

    So, even if it wasn't a fake it wouldn't be useful.

    The one on the left is formulaic enough that it *could* be fake, so I would need to rely on the reputation of the author or publisher. If I saw it in a major newspaper's web site with a byline of a newspaper employee or a reputable wire service, I would assume it was legit. If I found it on some random blog or a web site where

  • This is why I look at the negative reviews first to see if there are any particularly frequent problems with the product.

    Of course, a few years ago when I was apartment shopping online, I found that this tactic doesn't always work so well. On one side, you have the management companies posting shill positive reviews for their apartments, and on the other side, you have disgruntled evicted tenants posting overblown negative reviews for those apartments.

  • I've seen enough hentai to know where this is going.
  • It's *ALL* fake. Either pissed-off people or paid supporters.

  • by joocemann ( 1273720 ) on Tuesday July 26, 2011 @08:54PM (#36890422)

    ... what we know is that this is just another form of information security.

    The people who produce false reviews will develop a tool that not only fakes the reviews, but then applies this exact software (from the article) to analyze them, and keeps making adjustments until this exact software cannot discern the difference between the adjusted output and real reviews.

    All these cold wars suck. A little honesty and integrity in the world would be great, but when capitalism pits us against each other to survive (not for wants, but for needs), this is exactly what you should expect. I prefer cooperation over competition, when it comes to survival, but if the social environment dictates that I must do bad to survive, I must survive.

    If you respond, please don't be the shallow minded bum that thinks there isn't a causal relationship between what I just said and how people fake reviews (for revenue).
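The arms-race loop this comment describes is easy to state in code. A sketch only: `looks_fake` stands in for the Cornell detector and `soften` for whatever rewrite step the spammer applies, and both are hypothetical toy versions here:

```python
def looks_fake(review: str) -> bool:
    """Stand-in for the detector: here, just a crude exclamation-mark check."""
    return review.count("!") > 2

def soften(review: str) -> str:
    """Stand-in for the spammer's rewrite step: tone the text down a notch."""
    return review.replace("!", "", 1)

def launder(review: str, max_rounds: int = 20) -> str:
    """Keep rewriting the review until the detector passes it (or give up)."""
    for _ in range(max_rounds):
        if not looks_fake(review):
            return review
        review = soften(review)
    return review

print(launder("Best product ever!!!!! Buy it now!!!"))
```

With query access to any fixed classifier, this loop terminates at text the classifier accepts, which is exactly the "tweak their bullshit until it passes" problem raised elsewhere in the thread.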

    • by wrook ( 134116 )

      when capitalism pits us against each other to survive (not for wants, but for needs)

      http://en.wikipedia.org/wiki/Fundamental_human_needs [wikipedia.org] puts this statement in context. I can sympathise with your feelings, but in a 1st world country, the items on this list are available in abundance. It is true that there are some people who are unable to take advantage of this abundance due to mental illness and the like, but I don't agree that competition for resources is a large factor.

      Rather, I think that perhaps rampant consumerism couches our wants in terms of needs. We follow a logical fallacy wh

      • I wish I had time to respond in full, but I don't right now (pretty busy with life, sorry about that because you deserve more from me).

        But what I can say is that the needs I'm talking about are absolutely realistic, encompassing the complexity of society as it exists today and the intricacies that induce participation in competitive roles to fulfill needs. I could brainstorm many points in the fundamental human needs you posted that are not available by social means, especially once one considers relative d

      • Most Americans would become homeless after a year of unemployment, or even lack of full-time employment, thanks to inflated rent and real estate prices. The way things are going, this is a very realistic scenario for many people, and it scares them shitless.

        In most other countries unemployment will make you uncomfortable, but in the US it means you are literally left to die on the street.

    • by Animats ( 122034 )

      The people who produce false reviews will develop a tool that not only fakes the reviews, but then applies this exact software (in the article) to analyze it, and then provides logical adjustments until this exact software cannot discern the difference between the adjusted outcome or real reviews.

      That's a very real problem. A few years ago, phishing sites could be distinguished from real sites by observing bad grammar, poor layout, and other indicators of low quality. Today, phishing sites look very much like real ones to humans.

      For a few years, until 2008, there was the Web Spam Challenge [lip6.fr]. A large number of web pages had been classified by humans as "spam" or "not spam", and people ran classifiers against them to try to match the human judgement. That used to have some effectiveness, but the qu

    • +1 - Especially because a sarcastic review is decidedly NOT positive. I guess it is wholly possible to modify the search criteria of Cornell's engine to look for review spam that trashes products. I am sure that practice is just as widespread as overly positive reviews.
    • Interesting. They are not real reviews so technically they should come up as deceptive, which I guess they are.
  • I like the idea very much. The article showed an example of two comments: one on the left and one on the right. This example is only easily spotted by folks familiar with SEO and web dev stuff. The review on the right hand side was clearly spam because the name of the product was used several times in the comment to pad a search engine. My guess is that, for a layman, it isn't that obvious. Signs to look for are: (1) excessive use of capital letters and punctuation marks, (2) excessive use of the prod
  • When I shop at newegg.ca the reviews can also be by verified owners, who actually bought the product. I tend to check that off when looking at reviews. Everything else is fluff.
  • Are fake as well. There are very few shows that don't get their products to review directly from the manufacturer. Most only get the products if they give a positive review. Did you ever watch a car show that didn't praise a car? How many shows actually compare products they bought in shops themselves and aren't afraid to fail the lot?
  • Apparently quite well – I’ve checked a few reviews on the product and all three people gave it 5 stars and called it AWESOME!!
  • Using the very same method, we could leverage moderation logs to predict which comments are informative, funny, deceptive...
    Please admins, where's the API?

"Hello again, Peabody here..." -- Mister Peabody

Working...