AI Businesses IT

Machine Learning Expert Michael Jordan On the Delusions of Big Data 145

First time accepted submitter agent elevator writes: In a wide-ranging interview at IEEE Spectrum, Michael I. Jordan skewers a number of sacred cows, basically saying that the overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges; that hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool's errand; and that, despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday October 23, 2014 @05:40AM (#48211043)

    A man of many talents.

  • Computer vision... (Score:5, Interesting)

    by Savage-Rabbit ( 308260 ) on Thursday October 23, 2014 @05:51AM (#48211077)

    ... and despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.

    That's true. I looked into object recognition for classifying images by content. Face recognition is progressing fairly nicely, but something like programmatically classifying/tagging images by whether they contain a car, airplane, house, tree, dog, mountain... without even trying to identify the type of airplane/dog/car, is pretty much undoable in any reasonable amount of time, with the human-level accuracy needed, on the garden-variety PCs and tablets that are the application I'd be interested in. The fastest and most accurate image classifier/tagger is still a human. I'm still looking forward to the day that changes, but I'm not sure it will be within my lifetime.
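
    For a rough sense of what off-the-shelf tooling can do here, below is a minimal sketch of content tagging with a pretrained ImageNet classifier. It assumes PyTorch and torchvision 0.13+ are installed and that downloading pretrained weights is acceptable; the filename is a placeholder, and this is generic demo code, not anything the poster actually used or benchmarked.

        # Minimal sketch: tag an image with a pretrained ImageNet classifier.
        # Assumptions: torch, torchvision (0.13+) and Pillow are installed;
        # "photo.jpg" is a placeholder path. Illustrative only.
        import torch
        from torchvision import models, transforms
        from PIL import Image

        weights = models.ResNet18_Weights.DEFAULT
        model = models.resnet18(weights=weights)
        model.eval()

        preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        img = Image.open("photo.jpg").convert("RGB")
        batch = preprocess(img).unsqueeze(0)      # shape (1, 3, 224, 224)

        with torch.no_grad():
            probs = model(batch).softmax(dim=1)

        top5 = probs.topk(5)
        for score, idx in zip(top5.values[0], top5.indices[0]):
            print(f"{weights.meta['categories'][idx.item()]}: {score.item():.3f}")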

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Are you kidding? We have frickin' self driving cars now! Those aren't mere claims - they're a practical application of computer vision.

      • by Lennie ( 16154 ) on Thursday October 23, 2014 @06:17AM (#48211143)

        Self-driving cars aren't based on looking at still images alone. They have LIDAR, which helps identify where objects are and how big they might be. They also have very detailed maps of the roads; all of these are taken into account when identifying objects.

        Have a good look at the limitations section on Wikipedia:
        "...that the lidar technology cannot spot potholes or humans, such as a police officer, signaling the car to stop."

        "The vehicles are unable to recognize temporary traffic signals. ... They are also unable to navigate through parking lots. Vehicles are unable to differentiate between pedestrian and policeman or between crumpled up paper and a rock."

        https://en.wikipedia.org/wiki/... [wikipedia.org]

        Does that seem like a system that has solved computer vision?

        • To be fair though, a lot of the time human drivers can't tell the difference between crumpled paper, a plastic bag, or some other innocuous road debris... and a rock, either.
        • by Ol Olsoc ( 1175323 ) on Thursday October 23, 2014 @08:56AM (#48211829)

          Vehicles are unable to differentiate between pedestrian and policeman or between crumpled up paper and a rock."

          Stupid damn things are always choosing scissors.

        • Have a good look at the limitations section on Wikipedia:
          "...that the lidar technology cannot spot potholes or humans, such as a police officer, signaling the car to stop."

          We already have technology that can handle potholes to some extent: Semi-active suspension management.
          It has been slowly trickling down from high end and commercial automobiles.

          There are two basic ways these systems work:
          1. Magnetorheological shock fluid whose viscosity can be changed with a magnetic field
          2. Actively adjusted shock valving

          With the right accelerometers, both systems allow for detection of potholes (actually the detection of rapid drops) and can almost instantly increase the shock damping to prevent your [imgur.com]

      • True...to a point. It's mainly limited by domain or function. Can drive on well known established paths. Or only when *all* the cars communicate with each other. Or at slow speeds.

        It'll be a long time before a car can drive on an expressway, through a construction zone, then past a parade in your home town and respond correctly to a police officer's hand waving instructions past an accident.

        I'm beginning to believe that we are still 20 years from fully autonomous self-driving cars.

        Peace,
        Randy -- althoug

      • by sjames ( 1099 )

        That's because the one thing it must correctly identify is the lane markers. Pretty much everything else is just 'object'. It doesn't matter what the object is; it just has to not hit it.
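
        A crude illustration of that framing (lane markings first, everything else just "obstacle") is the classic edge-plus-Hough approach. This is a toy sketch, not how Google's cars actually work; it assumes opencv-python and numpy are installed, and "dashcam_frame.jpg" is a placeholder filename.

            # Toy sketch: find candidate lane markings as long straight segments.
            # Assumes opencv-python and numpy; the filenames are placeholders.
            import cv2
            import numpy as np

            frame = cv2.imread("dashcam_frame.jpg")
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)

            # Probabilistic Hough transform: keep long, roughly straight segments.
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                    minLineLength=100, maxLineGap=20)

            if lines is not None:
                for x1, y1, x2, y2 in lines[:, 0]:
                    cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
            cv2.imwrite("lanes.jpg", frame)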

    • ... needed on garden variety PCs and tablets...

      This is where you're wrong, and people need to stop thinking this way. Nowadays, any even moderately heavy compute job is shipped off to the cloud, where it is done on massively parallel systems. Hell, even Google Now and Siri need the cloud for simple voice commands. Picasa uses it for face recognition. All of these things are done in near real time thanks to our much faster and more robust worldwide Internet. There is no reason to think that any of these jobs need to

      • Re:Cloud (Score:4, Insightful)

        by JaredOfEuropa ( 526365 ) on Thursday October 23, 2014 @07:15AM (#48211315) Journal
        There are plenty of reasons I can think of why I'd prefer image recognition on my phone rather than in the cloud. Privacy, for one. If you let FB tag your photos with the names of the people in them (after teaching it those names), what do you think happens to that data? You might not even want to share the photo or video stream with anyone... Another reason is that we still do not live in a world with ubiquitous and cheap mobile data. Travel abroad, and you'll find out quickly why cloud-based services like Waze aren't always a viable option.
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Of course there's a reason, and it's not privacy or latency, it's cost. If you want to provide a free app that does image recognition, who pays for the cloud servers? The user has horsepower on his phone that rivals an early Cray. Only an idiot would go to the trouble and expense of putting this into the cloud, only to end up paying for the cloud service while struggling to get any upstream revenue. I notice that the two examples you give are from companies that run cloud systems themselves. Nobody doe

      • I don't think there's actually any need for Siri to send the speech off for processing. A modern phone has plenty of processing power for that. It's for other practical reasons, not resource limitations. It allows for a very rapid update cycle without having to download new software to every phone weekly. It allows for the collection of a vast library of speech samples that can be mined by machine learning to further improve recognition. In the case of Google, personal data is their business, quite literall

        • Pay for Google Now, rather. I was focused on the tech, and got the companies confused.

          Siri pays for itself by promoting other Apple products.

      • Re: (Score:3, Insightful)

        That's selective quoting taken to the extreme. The GP was talking about the applications he'd be interested in. Do you know what he's interested in? I don't. But I do have a friend who escapes the Scottish winter every year to go searching for undiscovered orchid species in a Vietnamese rainforest. Now call me a pessimist, but I doubt he's going to get a 3G signal out there. What if he wants to check if a flower is a known species? He can do that within his area (the orchids) but he can't be expected to h

          • That's selective quoting taken to the extreme. The GP was talking about the applications he'd be interested in. Do you know what he's interested in? I don't. But I do have a friend who escapes the Scottish winter every year to go searching for undiscovered orchid species in a Vietnamese rainforest. Now call me a pessimist, but I doubt he's going to get a 3G signal out there. What if he wants to check if a flower is a known species? He can do that within his area (the orchids) but he can't be expected to have an encyclopedic knowledge of all extant plant-life. Wouldn't it be nice if his mobile phone could flag up a potentially unknown species that he stumbles across, giving him the opportunity to take a sample back for analysis?

          Yeah, that's pretty much what I had in mind. Take a photo of an orchid, compare it to a library of descriptors, and get a near-human-accurate classification without routing the image through a supercomputer in the cloud. My use case was more like: with all networking on your device switched off, snap a photo of a plane and the app tells you that's a Virgin Airlines Boeing 737. Or: search a library of images for snaps that show one or more South African Air Force P-51Ds (including partial shots showing only,

      • Yup, that's all we need, an internet outage causing multiple fatal accidents because cars traveling 80 miles an hour suddenly can't tell the difference between a police officer in the road and a tree beside it.

        • by brunes69 ( 86786 )

          A car does not need to tell the difference between those things. It only needs to know something is on the road that should not be there. What that thing is, is irrelevant.

          Do you think Google cars can differentiate between trees and police officers? Think again. You are over-complicating the vision problem.

  • by bouldin ( 828821 ) on Thursday October 23, 2014 @06:13AM (#48211135)
    This is why I don't take Ray Kurzweil's predictions seriously. People like Prof. Jordan, who would actually have to make the vision become reality, don't take Kurzweil's ideas seriously.
    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Thursday October 23, 2014 @06:27AM (#48211169)
      Comment removed based on user account deletion
      • Comment removed based on user account deletion
      • Re:zomg singularity! (Score:4, Interesting)

        by Mostly a lurker ( 634878 ) on Thursday October 23, 2014 @06:52AM (#48211255)

        I was disappointed that, in claiming a never-ending increase in the pace of technological advancement, Kurzweil never dealt with regulatory and consumer factors, or with how humans perceive time in general. The wheels of government can only move so fast, and so mankind's access to radical new technology outside the lab (e.g. self-driving cars, new medical tech) must slow down to match the speed of regulatory agencies.

        You make some good points. However, I believe the march towards the singularity will continue inexorably for one (highly undesirable) reason: the insatiable appetite of the leaders of nations for power. The populations of those countries will not even be allowed to know much of what is being developed with hundreds of billions of their tax dollars, but technologies that leaders perceive could enhance their ability to dominate the world will be financed. There will be no regulation. If you want to know the state of the art in visual recognition, you should look at military applications: robot soldiers and autonomous drones. For applications of big data (especially its usefulness in widespread blackmailing activities), in spite of some initial missteps, look at the pervasive collection of data by the world's "intelligence agencies".

        • by dcw3 ( 649211 )

          If you want to know the state of the art in visual recognition, you should look at military applications: robot soldiers and autonomous drones.

          Not so much visual recognition as remotely controlled, or using GPS and waypoints.

        • If you want to know the state of the art in visual recognition, you should look at military applications: robot soldiers and autonomous drones. For applications of big data (especially its usefulness in widespread blackmailing activities) then, in spite of some initial missteps, look at the pervasive collection of data by the world's "intelligence agencies".

          And yet, one of the most important tools in the visual recognition toolbox for military intelligence purposes is... the human.

          Seriously: since computers cannot yet tell the difference between various naturally-occurring geographical features and a human-built structure, when the US military were looking for potential Taliban complexes in the Afghan deserts, they used people as part of their image-recognition pipeline.

          The solution was pretty elegant, actually. First, the computer would process the satellite im

      • Yeah, that and the fact that Kurzweil is the biggest hack on the planet.
        • by gweihir ( 88907 )

          Actually, I believe he is not even a hack, in the sense that he has never done any real technology work connected to his ravings. He is a delusional loon who is completely disconnected from reality.

      • by bouldin ( 828821 )

        Kurzweil and academics like Jordan seem to have very different ideas about when we will solve the problems of intelligence.

        Kurzweil says things like the "design of the human brain, while not simple, is nonetheless a billion times simpler than it appears, due to massive redundancy". He has predicted (as I understand it) that by 2029, we will have completely reverse engineered the brain.

        In the interview, Jordan said, "but it's true that with neuroscience, it's going to require decades or even hundreds of yea

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          While it's true that the brain is amazingly complicated and malleable, it is not impossible to understand. I work in AI/ML, and have a doctorate in the subject (posting anon from work). There are a few things that give me hope that it can be replicated:

          1 - There are many brains which are functionally useful without having human-level intelligence. Example: Dogs can recognize 340 words, perform trained tricks, and identify objects. A robot which has "border collie" level intelligence, train-ability, and i

          • by gweihir ( 88907 )

            Your scaling is borked. You overlook that the connections in the brain are not "regular" in any way, and many are quite long. That adds several orders of magnitude. You also seem to be unaware that interconnect basically hit a wall some years back in chip design, and had been the most serious problem for at least the two decades before that. So on some abstract level (ignoring said non-regularity) we may not be that far away, but it seems doubtful that even on that level the increase can actually happen w

            • gweihir - the GP could actually be in their late 20s and worked straight through to their doctorate. 2086 - 2014 = 72 years. Rough estimate using average high school graduation at 18, bachelor's at 23 (5 year plan), doctorate at 29 (6 years). That puts him/her at age 101 in 2086 which would be well within the range of possibilities. Move any of those numbers down (graduated high school early, did bachelors in 4 years, doctorate in 4) and that puts him/her in their late 90s. Life expectancy in their

              • by gweihir ( 88907 )

                I know that. My point was that the estimate is way off for "average" life expectancy and shows a tendency toward wishful thinking. That in turn casts doubt on this person's other estimates.

              • Sounds like you chose the right parents. Your name wouldn't be Lazarus Long, would it?

                • Unfortunately, I don't qualify for the Long family - only one side is long lived, and only had one grandparent alive when I was married. If I had married at the age Maureen did, however I would have just made it. :)

          • "A robot which has "border collie" level intelligence, train-ability, and independent problem solving with robotic implements can/will be incredibly useful in many applications."

            and if we could "wet jack" actual border collies we could do a bunch of really cool stuff at about 5% of the cost (even if we supplied the border collie with body armour).

        • by gweihir ( 88907 )

          Well, that just shows that Kurzweil has no clue what he is talking about: redundancy does not make systems simpler, it makes them more complex. What redundancy does is make the interface behavior simpler, i.e. what you see from the outside, as error cases become less likely or negligible. But when you want to build the thing, redundancy makes everything more complex, possibly by a lot.

          Just think of a very simple situation: RAID vs. single disk. Of course, the RAID set-up is much more complicated. It is often ea

      • It's not just regulation and consumer acceptance that limits the pace of technological change: it's also the need to amortize development costs over shorter and shorter product lifecycles (before being leapfrogged by competition). Does this imply that technology-driven markets will increasingly become "natural monopolies"? Not because of patent laws as we all fear, but because a monopolistic company can set the pace of innovation in its market such that a desired minimum ROI is achieved.

      • by SuricouRaven ( 1897204 ) on Thursday October 23, 2014 @07:52AM (#48211415)

        I think he underestimated the power of stupidity.

        You can grant every reasonably well-off person in a country a device that gives them access to all scientific and engineering knowledge and a vast communications network - and half of them will use it to publish rambling arguments that the moon landing was fake, fossils are a hoax scientists made up to disprove the Bible, autism is caused by vaccines, and Obama is secretly a Kenyan Muslim Communist Atheist Black-Supremacist who hates America.

        • by gtall ( 79522 )

          Yep, and what the other half believe is truly weird.

        • autism is caused by vaccines

          And of course there's never been a valid reason to suspect that everything we've been told by Big Pharma might not be entirely true [morganverkamp.com]...

        • I think he underestimated the power of stupidity.

          You can grant every reasonably well-off person in a country a device that gives them access to all scientific and engineering knowledge and a vast communications network - and half of them will use it to publish rambling arguments that the moon landing was fake, fossils are a hoax scientists made up to disprove the bible, autism is caused by vaccines and Obama is secretly a Kenyan Muslim Communist Atheist Black-Supremecist who hates America.

          It wasn't until this message that I noticed something: I haven't heard a conspiracy theory involving "Jewish bankers" for years. Why have the conspiracy theorists dropped them from their theories? There is only one possible explanation: a plot... by the Jewish bankers.

          (I shall point out my sarcasm now, before anyone jumps on me!!!! ;-)

          • No, I think the conspiracy theorists just dropped the 'Jewish' part.

            Sometimes the conspiracies are even true. The Libor scandal comes to mind, and I'm sure there are many such plots going undetected.

            • The bankers were just trying to make as much profit as possible, in order to win bigger bonuses at the end of the year. "Jewish banker" conspiracy theories went a lot further than that, with them controlling the entire world. The World Bank neo-liberal agenda is admittedly worryingly close to the old paranoia (but without any particular racial or religious grouping behind it).
        • ...and the other half will post to slashdot.

      • by gweihir ( 88907 ) on Thursday October 23, 2014 @11:06AM (#48212861)

        The whole idea of "the Singularity" is nonsense. It is basically people seeking a surrogate "God" in technology, and the singularity is needed to create the "all knowing" aspect. There is, however, zero reason to believe it is even a remote possibility. Every practical way of connecting more hardware has delivered a parallel efficiency below 1 (i.e. use 2x the hardware and get less than 2x the computing power), often significantly and fundamentally so.
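
        One standard way to see why doubling the hardware gives less than double the throughput is Amdahl's law; the sketch below illustrates that general point only, and is not a claim about any specific system mentioned in this thread.

            # Amdahl's law: with parallel fraction p on n processors, the
            # speed-up is 1 / ((1 - p) + p / n), so efficiency stays below 1.
            def amdahl_speedup(p: float, n: int) -> float:
                return 1.0 / ((1.0 - p) + p / n)

            for n in (2, 4, 16, 1024):
                s = amdahl_speedup(0.95, n)
                print(f"n={n:5d}  speed-up={s:7.2f}  efficiency={s / n:.3f}")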

        The singularity is a child-like fantasy that ignores any and all facts that are known. Just like the idea of a religious "God", it touches something in many people that makes them want to believe against their better judgment.

        • by babymac ( 312364 )

          The whole idea of "the Singularity" is nonsense. It is basically people seeking a surrogate "God" in technology, and the singularity is needed to create the "all knowing" aspect.

          This all depends on one's individual interpretation of the word singularity. My interpretation of it means a point in history and technological development beyond which predictions become impossible. There is no "all knowing" aspect in my interpretation. There is certainly no "God" in my interpretation. Some people interpret the term to mean the point at which humanity and machines merge. Once again, that's not my interpretation. My idea of the singularity raises questions about what happens to human

          • by gweihir ( 88907 )

            In other words, if everybody would just be using _your_ definition, then the world would be a better place for _you_. Do you even realize how narcissistic that is? Do you want to communicate (and accept general definitions) or not? Well, guess what, your whole statement is of the same fine intellectual quality. It is fine to fantasize, but it is impolite to do it publicly and with any expectation to be taken seriously.

            • by babymac ( 312364 )
              There is no "general definition" of the term technological singularity. At least, none that I'm aware of.
              • by gweihir ( 88907 )

                Since the whole general idea is BS, there would not be. That does not mean you can simply grab it and pontificate about your personal vision without being called out on it.

        • Actually, one of the strong arguments the Singularity has going for it is that biological evolution has pretty much come to a standstill, especially relative to cultural and technological evolution.

          The conviction that humans are special runs extremely deep (see: pretty much all sci-fi), but the reality is that we are a very general purpose platform evolved to shove dead animals and plants into its face whilst not being killed by snakes and tigers. We need to exercise regularly, just to convince our body not

      • I think there is a significant turning point that is arriving and it's NOT the singularity that Kurzweil imagined with artificial intelligence. It's computers and robots that are "good enough" to replace most human jobs. There are still going to be jobs for the people who fix the robots, but we are well on our path to diminishing jobs in return for progress in technology. The shovel-ready jobs are going to go away. Checkout lines, fast food, security, transportation -- there are many fields where an automat

    • by tmosley ( 996283 )
      Jordan thinks incrementally, Kurzweil thinks in terms of hockey sticks. Both are valid, but the latter is more forward thinking when it comes to self-referencing systems.
      • by gweihir ( 88907 )

        The latter also has a tendency to be utterly wrong in most cases.

        • by tmosley ( 996283 )
          Most systems aren't self referencing. Economies are. The Agricultural and Industrial revolutions are good examples of economic hockey sticks. A new method made EVERYTHING better, almost all at once. A thinking computer will do much the same, but to an even greater degree. Imagine trillions of people just sitting around thinking about ways to solve various problems all day, with no need for sleep, and only needing a little electricity to eat. Then turn some of that thinking power toward improving their
          • by gweihir ( 88907 )

            So basically, you identify some "magic thing" that would make "everything better, almost at once"? And you completely ignore whether that magic thing can be created? Yes, that sounds like what is going on in AI.

    • by gweihir ( 88907 )

      Same here. The nonsense Ray Kurzweil is spouting to keep his marks giving him money is pure fantasy. Kurzweil has absolutely no clue about AI and related fields, but is good at making up grand visions that never deliver.

  • His response: "Come on and slam"
  • I disagree. (Score:5, Informative)

    by serviscope_minor ( 664417 ) on Thursday October 23, 2014 @07:27AM (#48211355) Journal

    As it happens, I am a computer vision expert.

    I do wonder how much useful stuff was done with the results from physics back then, as opposed to empirical hand-hacking of everything. I suspect not much.

    Computer vision has a long way to go. On the other hand, there are plenty of things which it does do, some of which are more or less impossible otherwise.

    OCR is very useful. It runs the mail system of many countries and has plenty of use when it comes to digitising old documents. This would be possible, but deeply tedious by hand.

    Structure from motion is used heavily in the film industry to work out 3D structure and motion for placing virtual objects. Almost impossible to do well without computer vision.

    Photo stitching for automatic panoramas. Classic CV system, and my phone comes with it built in.

    Number plate recognition. Apart from the rather unpleasant big brother potential, London's congestion charging system runs off this and it does very good things for London.

    Those cameras/phones with face detection built in. Not sure how useful it is but it works.

    Lego Fusion is a recently released game which appears to rely on computer vision.

    Oh those phone based barcode and QR scanners. Very useful.

    The pick and place machines which use vision for accurate placement.

    This machine which is really awesome: https://www.youtube.com/watch?... [youtube.com]

    Lots of other industrial things are controlled by CV.

    Certain types of super resolution microscopy are based on computer vision.

    And that's just a few off the top of my head.

    So yeah computer vision has a long way to go. On the other hand, it's out there doing real things right now. It might not be very advanced CV (the industrial stuff often is not because it needs to be reliable), but it's still CV and it's still being used.

    • Re:I disagree. (Score:5, Interesting)

      by ledow ( 319597 ) on Thursday October 23, 2014 @07:57AM (#48211423) Homepage

      The problem with computer vision is not that it's not useful, but that it's sold as a complete solution comparable to a human.

      In reality, it's only used where it doesn't really matter.

      OCR - mistakes are corrected by spellcheckers or humans afterwards.

      Mail systems - sure, there are postcode errors, but they result in a slight delay, not a catastrophe of the system.

      Structure from motion - fair enough, but it's not "accurate" and most of that kind of work isn't to do with CV as much as actual laser measurements etc.

      Photo stitching - I'd be hard pushed to see this as more than a toy. It's like a Photoshop filter. Sure, it's useful, but we could live without it or do it manually. Probably its biggest use is in mapping, where it's a time-saver and not much else. It doesn't work miracles.

      Number plate recognition - well-defined formats on tuned cameras aimed at the right point, and I guarantee there are still errors. The systems I've been sold in the past claim 95% accuracy at best. Like OCR, if the number plate is read slightly wrongly, there are fallbacks before you issue a fine to someone based on the image.

      Face detection is a joke in terms of accuracy. If we're talking about biometric logon, it's still a joke. If we're talking about working out if there's a face in-shot, still a joke. And, again, not put to serious use.

      QR scanners - that I'll give you. But it's more to do with old barcode technology that we had 20 years ago, and a very well defined (and very error-correcting) format.

      Pick-and-place rarely relies on vision only. There are much better ways of making sure something is aligned that don't come down to CV (and, again, they usually involve actually measuring rather than just looking).

      I'll give you medical imaging - things like MRI and microscopy are greatly enhanced with CV, and the only industry I know where a friend with a CV doctorate has been hired. Counting luminescent genes / cells is a task easily done by CV. Because, again, accuracy is not key. I can also refer you to my girlfriend who works in this field (not CV) and will show you how many times the most expensive CV-using machine in the hospital can get it catastrophically wrong and hence there's a human to double-check.

      CV is, hence, a tool. Used properly, you can save a human time. That's the extent of it. Used improperly, or relied upon to do the work all by itself, it's actually not so good.

      I'm sorry to attack your field of study; it's a difficult and complex area, as I know myself, being a mathematician who adores coding theory (i.e. I can tell you how/why a QR code works even if large portions of the image are broken, or how Voyager is able to keep communicating despite interference of an unbelievable magnitude).
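
      To make the coding-theory aside concrete: QR codes survive damage because of Reed-Solomon error correction. The sketch below uses the third-party reedsolo package (an assumption; its decode() return type has changed between versions, so the code handles both) and is purely illustrative.

          # Minimal sketch: Reed-Solomon error correction, the same family of
          # codes QR symbols use to survive damage. Assumes the third-party
          # "reedsolo" package is installed.
          from reedsolo import RSCodec

          rsc = RSCodec(10)                 # 10 ECC bytes -> corrects up to 5 byte errors
          encoded = rsc.encode(b"HELLO WORLD")

          corrupted = bytearray(encoded)
          corrupted[0] ^= 0xFF              # damage a couple of bytes,
          corrupted[5] ^= 0xFF              # as a scuffed QR code might be

          result = rsc.decode(bytes(corrupted))
          # Newer reedsolo versions return (message, message+ecc, errata positions).
          message = result[0] if isinstance(result, tuple) else result
          print(bytes(message))             # b'HELLO WORLD'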

      The problem is that, like AI, practical applications run into tool-time (saving a human having to do a laborious repetitive task, helping that task along, but not able to replace the human in the long run or operate entirely unsupervised). Meanwhile, the headlines are telling us that we've invented "yet-another-human-brain", which are so vastly untrue as to be truly laughable.

      What you have is an expertise in image manipulation. That's all CV is. You can manipulate the image so it is more easily read by a computer, which can then extract some of the information it's after. How the machine deals with that, or how your manipulations cope with different scenarios, requires either a constrained environment (QR codes, number plates) or constant human intervention.

      Yet it's sold as something that "thinks" or "sees" (and thus interprets the image) like we do. It's not.

      The CV expert I know has code in an ATM-like machine in one of the southern American counties. It recognises dollar bills, and things like that. Useful? Yes. Perfect? No. Intelligent? Far from it. From what I can tell, most of the system is things like edge detection (i.e. image manipulation via a matrix, not unlike every Photoshop-compatible filter going back 20 years), derived heuristics and error margins.
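
      For readers wondering what "image manipulation via a matrix" looks like in practice, here is a minimal Sobel-style edge-detection sketch. It assumes opencv-python and numpy, and the filename is a placeholder rather than anything from the actual system described above.

          # Minimal sketch: edge detection as convolution with small kernels,
          # the same mechanism behind many Photoshop-style filters.
          import cv2
          import numpy as np

          img = cv2.imread("bill.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

          # Sobel kernels approximating horizontal and vertical gradients.
          kx = np.array([[-1, 0, 1],
                         [-2, 0, 2],
                         [-1, 0, 1]], dtype=np.float32)
          ky = kx.T

          gx = cv2.filter2D(img, -1, kx)
          gy = cv2.filter2D(img, -1, ky)

          edges = np.sqrt(gx ** 2 + gy ** 2)            # gradient magnitude
          edges = (255 * edges / edges.max()).astype(np.uint8)
          cv2.imwrite("bill_edges.jpg", edges)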

      Hence, "computer vision" is really a misnomer, where "Photoshopping an image to make it easier to read" is probably closer.

      • Re:I disagree. (Score:5, Insightful)

        by serviscope_minor ( 664417 ) on Thursday October 23, 2014 @10:39AM (#48212655) Journal

        In reality, it's only used where it doesn't really matter.

        That's patently false. It's used for industrial process control and things like that too. See for example the video I posted. To the manufacturers who use such a machine, it matters an awful lot.

        OCR - mistakes are corrected by spellcheckers or humans afterwards.

        I don't know how much you count this as "mattering". The IEEE has scanned and OCR'd their back catalogue of papers. I don't think they've been human checked due to the sheer volume. It's very useful to be able to get these online now.

        Mail systems - sure, there are postcode errors, but they result in a slight delay, not a catastrophe of the system.

        Well, it's not like humans are error free either. This is something people often forget. A national postal system is a very important thing, and CV is used to massively reduce the costs of being able to ship vast quantities of mail. Sure it makes mistakes, so do hand sorters. By an astonishing coincidence, I actually got a letter through my letterbox for my neighbour only yesterday.

        Structure from motion - fair enough, but it's not "accurate" and most of that kind of work isn't to do with CV as much as actual laser measurements etc.

        I'm not sure what you mean by "not accurate". It has a scale ambiguity, for sure, and it drifts, but so does any relative measurement system, including lasers.

        Photo stitching - I'd be hard pushed to see this as more of a toy. It's like a photoshop filter. Sure, it's useful, but we could live without it or do it manually.

        Well, of course we could live without it. Turns out that humans can survive with nothing more than a pointed stick and a bit of animal fur. This means we could survive without almost everything around us.

        Anyhow, I doubt you'd get remotely comparable results by hand. You have things like vignetting, exposure changes, radial distortion, etc. to contend with. It's very, very hard to get a seam-free stitch.
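
        For what it's worth, that whole pipeline (feature matching, seam finding, exposure compensation) is wrapped in OpenCV's high-level stitcher. A minimal sketch, assuming opencv-python is installed and with placeholder filenames:

            # Minimal sketch: panorama stitching with OpenCV's Stitcher, which
            # handles matching, seam estimation and exposure compensation.
            import cv2

            images = [cv2.imread(name) for name in ("left.jpg", "middle.jpg", "right.jpg")]

            stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
            status, pano = stitcher.stitch(images)

            if status == cv2.Stitcher_OK:
                cv2.imwrite("panorama.jpg", pano)
            else:
                print(f"Stitching failed with status code {status}")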

        Number plate recognition - well-defined formats on tuned cameras aimed at the right point, and I guarantee there are still errors. The systems I've been sold in the past claim 95% accuracy at best. Like OCR, if the number plate is read slightly wrongly, there are fallbacks before you issue a fine to someone based on the image.

        But all systems have errors. Humans are quite error-prone, especially in really boring repetitive tasks. One thing I've noticed is that where humans are really, really good, they're held up as the gold standard; where they're not, perfection is held up as the gold standard.

        Face detection is a joke in terms of accuracy. If we're talking about biometric logon, it's still a joke. If we're talking about working out if there's a face in-shot, still a joke. And, again, not put to serious use.

        Face detection (not face recognition) works "pretty well", I reckon. You can download an old, non-state-of-the-art algorithm like Viola-Jones in OpenCV. It's pretty good on the whole. And anyway: define "serious". But yeah, biometrics is a joke. I would never claim otherwise.
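
        As a point of reference, that old Viola-Jones detector really is only a few lines with OpenCV's bundled Haar cascade. A minimal sketch (opencv-python assumed, placeholder filename), and note this is detection only, not recognition:

            # Minimal sketch: Viola-Jones face *detection* with OpenCV's bundled
            # Haar cascade. Assumes opencv-python; "group.jpg" is a placeholder.
            import cv2

            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

            img = cv2.imread("group.jpg")
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

            faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5, minSize=(30, 30))
            print(f"Detected {len(faces)} face(s)")
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imwrite("group_detected.jpg", img)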

        QR scanners - that I'll give you. But it's more to do with old barcode technology that we had 20 years ago, and a very well defined (and very error-correcting) format.

        No, the old tech was laser- or LED-based scanning. The current ones use computer vision to avoid those complex mechanical systems and do a pretty good job with ubiquitous off-the-shelf sensors. Also, a generic vision-based reader can handle pretty much all formats in a single device.

        Pick-and-place rarely relies on vision only. There's much better ways of making sure something is aligned that don't come down to CV (and, again, usually involve actually measuring rather than just looking).

        Sure they use servos and stuff for positioning, but those little crosshair marks over the board are what they use to get the high accuracy. The problem with the cheap-ass Chinese machines for a few gr

      • by gweihir ( 88907 )

        Excellent comment. The problem really is "Yet it's sold as something that "thinks" or "sees" (and thus interprets the image) like we do. It's not." From following the field for several decades now, I deduce that they are basically no closer to "thinks" than they were at any time before. Yes, the signal processing capabilities are impressive. They are still purely mechanical, with no intelligence in there anywhere. And they are useful, which is why the AI field keeps getting money, despite significant parts

        • The problem really is "Yet it's sold as something that "thinks" or "sees" (and thus interprets the image) like we do.

          Well, charlatans will sell anything as anything. And science journalists couldn't be trusted to find their gluteus maximus with both hands. I've never heard anyone in the field talk about it as something that "thinks".

          You might get that crap from companies, but I've never seen it in a grant proposal from academics.

          I deduce that they are basically nowhere closer to "thinks" than they were at a

          • by gweihir ( 88907 )

            You might get that crap from companies, but I've never seen it in a grant proposal from academics.

            Grant-proposal: no. But public statements by academics, yes, and rather often. Just look at what people like Marvin Minsky claim. That is just wrong and highly unethical.

            • Grant-proposal: no. But public statements by academics, yes, and rather often. Just look at what people like Marvin Minsky claim. That is just wrong and highly unethical.

              Well, that's one guy. Perhaps one of the worst (I don't know, I've not read much by him). None of the academics I know, which is quite a few since I spent 12 years in research, say things like that.

              There's nutjobs and charlatans in every area. If you judge any area by only the nutjobs and charlatans (assuming they don't actually make up mos

              • by gweihir ( 88907 )

                Well, none of the researchers in fields related to AI that I know personally are saying such things either. Yet the press always seems to find some academic that is willing to make statements like that. As a result, the public has a completely messed up picture of what computers can do today and about what is to come in the near future.

                • Yeah OK fair enough. We can certainly agree on that.

                  As I mentioned, I know a lot of scientists. I was an academic for quite a while and my SO still is. Most scientists still read the papers etc., so they tend to have a rather dim view of journalists. That's because journalists have a habit of misrepresenting things substantially to make a better "story".

                  • by gweihir ( 88907 )

                    Well, yes, that is part of the problem as well. I have some experience with what can come out when you talk to the press as a scientist. Fortunately it was not too bad and my name was not attached to it.

      • Re: (Score:3, Informative)

        by Sqreater ( 895148 )
        I work in the USPS as an Electronics Technician (with an engineering degree) and I'd like to point out that our OCR system is accurate, fast, and robust. Our read rate is up to 98-99% and most of our human REC centers (humans read the addresses the OCR system cannot and send the result back to the machine in real time) are now shut down. Our scanners read and our image computers interpret typed and handwritten addresses, bar codes, id tags, and indicia at up to 30,000 letters per hour per machine. And they
      • by lorinc ( 2470890 )

        You should probably attend some lectures on computer vision; it would change your view of it. It's either that, or you have a misconception about what a human does when he's learning.

        I'll take the Turing view on humans: a big and horribly complex machine running a big and horribly complex algorithm. A part of this algorithm and its dedicated hardware is something we call "vision". Of course, it's a big and clunky part, and we don't even know its exact boundaries.

        Now suppose you have a computer that does run an a

          • In the traffic-cop scenario, would it work to have pingers in the light wands? Basically you would have a LEFT and RIGHT wand with 2(?) sensors each that would send out an X/Y position to any AIs in range. I'm thinking only a small set of signals is needed, since you have LEFT, RIGHT, STOP, GO, BACKUP/TURN AROUND and YOU!

    • I guess he means the theoretical bases on which these systems operate have not improved much. It's the same old tricks hacked together at higher and higher complexity, and the best guidance is uninformed trial-and-error. Useful sometimes, and at times an engineering feat, but there's no interesting science in it.
      • It's the same old tricks hacked together at higher and higher complexity,

        Actually, there are quite a lot of new tricks hacked together at higher and higher complexity. Still a bunch of tricks, but the tricks are improving.

        Useful sometimes and at times an engineering feat, but there's no interesting science in it.

        I disagree there.

    • This machine which is really awesome: https://www.youtube.com/watch?... [youtube.com]

      Sorry, but this is not what I have in mind when I think of CV. This could be accomplished using hardware alone. All the pencils are very carefully lined up and running at a fixed rate past a sensor. The image is very small and all you have to look for is the bit pattern representing the specific color, then activate the solenoid for the puff of air.

      When I have thought of CV, and it comes around often, the biggest problem I see is the randomness of the perspective view of the object. Take bowling ball fo

        • Sorry, but this is not what I have in mind when I think of CV. This could be accomplished using hardware alone. All the pencils are very carefully lined up and running at a fixed rate past a sensor. The image is very small and all you have to look for is the bit pattern representing the specific color, then activate the solenoid for the puff of air.

        It's certainly CV. It's not the more fashionable end of CV, but CV it certainly is. It involves making inferences from an image. That's more or less the definit

  • by Anonymous Coward

    "[W]e are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree."

    On the other hand, we have 10 times the population today so there are 10 Isaacs working on the problem.

    Probably closer to 100.

    • by gweihir ( 88907 )

      Does not look like it. The number of fundamental break-throughs seems to have massively diminished, not accelerated. Of course, part of that is that Newton went after low-hanging fruit.

      • by Anonymous Coward

        I thought the low-hanging fruit went after him....

  • by Anonymous Coward on Thursday October 23, 2014 @07:52AM (#48211413)

    I am doing a postdoc in applied statistics/machine learning and I was very surprised by this interview, since it contradicts what Michael Jordan has himself expressed as an invited speaker at conferences, as well as what his most recent research projects are focused on. It appears that, according to Michael Jordan himself as expressed on his webpage, the article is a hack job in which the journalist completely misrepresents his view on big data. To quote:


    I’ve found myself engaged with the Media recently (...) for an interview that has been published in the IEEE Spectrum.

    That latter process was disillusioning. Well, perhaps a better way to say it is that I didn’t harbor that many illusions about science and technology journalism going in, and the process left me with even fewer.

    The interview is here: http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts

    Read the title and the first paragraph and attempt to infer what’s in the body of the interview. Now go read the interview and see what you think about the choice of title.

    The title contains the phrase “The Delusions of Big Data and Other Huge Engineering Efforts”. It took me a moment to realize that this was the title that had been placed (without my knowledge) on the interview I did a couple of weeks ago. Anyone who knows me, or who’s attended any of my recent talks, knows that I don’t feel that Big Data is a delusion at all; rather, it’s a transformative topic, one that is changing academia (e.g., for the first time in my 25-year career, a topic has emerged that almost everyone in academia feels is on the critical path for their sub-discipline), and is changing society (most notably, the micro-economies made possible by learning about individual preferences and then connecting suppliers and consumers directly are transformative). But most of all, from my point of view, it’s a *major engineering and mathematical challenge*, one that will not be solved by just gluing together a few existing ideas from statistics, optimization, databases and computer systems.

    Source: https://amplab.cs.berkeley.edu/2014/10/22/big-data-hype-the-media-and-other-provocative-words-to-put-in-a-title/

    • by Zordak ( 123132 )
      Dangit, I clicked on the comments hoping for some good "+5, Funnies" about "Michael Jordan," and all I got was a stupid on-topic, well-researched, and educational comment on what the real Michael Jordan thinks about the challenges of "big data." And the best we could do on the name is "A man of many talents"? That does it. Slashdot is dead. (Netcraft confirms it.)
  • Read the interview (Score:5, Informative)

    by Anonymous Coward on Thursday October 23, 2014 @08:11AM (#48211479)

    No, seriously. Here are some choice quotes:

    "I read all the time about engineers describing their new chip designs in what seems to me to be an incredible abuse of language. They talk about the “neurons” or the “synapses” on their chips. But that can’t possibly be the case; a neuron is a living, breathing cell of unbelievable complexity."

    "It’s always been my impression that when people in computer science describe how the brain works, they are making horribly reductionist statements that you would never hear from neuroscientists."

    "Lately there seems to be an epidemic of stories about how computers have tackled the vision problem, and that computers have become just as good as people at vision."

    "Even in facial recognition, my impression is that it still only works if you’ve got pretty clean images to begin with."

    "I have a hobby of searching for information about silly Kickstarter projects, mostly to see how preposterous they are, and I end up getting served ads from the same companies for many months."

    Here's the catch: all of these quotes are from the interviewer. Jordan has a lot of really nuanced claims here, but it's clear that the interviewer has an agenda of his own.

    • by Zalbik ( 308903 ) on Thursday October 23, 2014 @12:30PM (#48213753)

      Here's the catch: all of these quotes are from the interviewer. Jordan has a lot of really nuanced claims here, but it's clear that the interviewer has an agenda of his own.

      Yes, this is one of the more shameful examples of a reporter attempting to put words in the interviewee's mouth, and completely misrepresenting the result.

      Take a look at the first sentence:
      "The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges"

      Then read the interview. At no point does Jordan indicate that the misanalysis of big data will cause a catastrophe comparable to an epidemic of collapsing bridges. Never. What he does do (and apparently the reporter is either too stupid or too dishonest to represent it) is draw an analogy between building a bridge without scientific principles and not performing proper statistical analysis on big data.

      He never makes a comparison between the outcomes of these two events. He basically says: if you build a bridge without scientific principles, it will fall down. If you are not careful in your analysis of big data, your results will be wrong.

      The whole article goes on in a very similar manner. Science reporters used to have something called "journalistic integrity". Here we get a click-bait article where a "reporter" has predetermined a topic that will gain lots of hits and is desperately trying to fit the interviewee's words into his agenda.

      Shameful.

        • The whole article goes on in a very similar manner. Science reporters used to have something called "journalistic integrity". Here we get a click-bait article where a "reporter" has predetermined a topic that will gain lots of hits and is desperately trying to fit the interviewee's words into his agenda.

        So what you are saying is that it is a slightly better than average news story since it was the reporter rather than a direct government employee deciding the slant.

  • by plopez ( 54068 )

    Garbage in, garbage out. Mindlessly throwing analytics at data which is garbage will result in... garbage. I have worked at a number of places where we aggregated data from numerous sources. In most cases, when we QA'd those data we found missing data, stale data, and flat-out incorrect data. We had to spend a large sum of $$ scrubbing it. Once a data stream is polluted, cleaning it is almost impossible.
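
    The kind of QA pass being described can be sketched in a few lines. This assumes pandas is available, and the file and column names are made-up placeholders, not the poster's actual schema.

        # Minimal sketch of a data-QA pass: flag missing, stale and obviously
        # out-of-range records before they pollute downstream analytics.
        # Assumes pandas; file and column names are made-up placeholders.
        import pandas as pd

        df = pd.read_csv("aggregated_feed.csv", parse_dates=["last_updated"])

        missing = df[df["reading"].isna()]                           # missing data
        stale = df[df["last_updated"] < pd.Timestamp("2014-01-01")]  # stale data
        bad = df[(df["reading"] < 0) | (df["reading"] > 1e6)]        # flat-out wrong

        print(f"{len(missing)} missing, {len(stale)} stale, "
              f"{len(bad)} out of range, of {len(df)} rows")

        # Keep only rows that pass every check; the rest go to manual review.
        clean = df.drop(missing.index.union(stale.index).union(bad.index))
        clean.to_csv("aggregated_feed_clean.csv", index=False)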

    And the matter is made worse by poor DB design; anyone who designs a DB which allows nulls and does not m

    • by gweihir ( 88907 )

      Yes, I have seen that happen too. In addition, at least some of the big-data people seem to be the most arrogant, yet most clueless, when it comes to actually making anything work. The ones I get to see mess up regularly; they cannot even estimate a data volume when all the raw numbers are served to them on a platter.

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...