
IT Snake Oil — Six Tech Cure-Alls That Went Bunk

snydeq writes "InfoWorld's Dan Tynan surveys six 'transformational' tech-panacea sales pitches that have left egg on at least some IT department faces. Each of the six technologies — five old, one new — has earned the dubious distinction of being the hype king of its respective era, falling far short of its legendary promises. Consultant greed, analyst oversight, dirty vendor tricks — 'the one thing you can count on in the land of IT is a slick vendor presentation and a whole lot of hype. Eras shift, technologies change, but the sales pitch always sounds eerily familiar. In virtually every decade there's at least one transformational technology that promises to revolutionize the enterprise, slash operational costs, reduce capital expenditures, align your IT initiatives with your core business practices, boost employee productivity, and leave your breath clean and minty fresh.' Today, cloud computing, virtualization, and tablet PCs are vying for the hype crown." What other horrible hype stories do some of our seasoned vets have?

  • My Meta-assessment (Score:4, Interesting)

    by Anonymous Coward on Monday November 02, 2009 @02:25PM (#29952790)

    IT snake oil: Six tech cure-alls that went bunk
    By Dan Tynan
    Created 2009-11-02 03:00AM

    Today, cloud computing [4], virtualization [5], and tablet PCs [6] are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot.

    [...]

    1. Artificial intelligence
    2. Computer-aided software engineering (CASE)
    3. Thin clients
    4. ERP systems
    5. B-to-b marketplaces
    6. Enterprise social media

    1. AI: Has to have existed before it can be "bunk"
    2. CASE: Regarding Wikipedia [wikipedia.org], it seems to be alive and kicking.
    3. Thin clients: Tell the guys over at TiVo that their thin-client set-top boxes are bunk.
    4. ERP systems: For low-complexity companies, I don't see why ERP software can't work.
    5. Web B2B: He is right about this one.
    6. Social media: Big companies like IBM have been doing "social media" within their organizations for quite some time. It's just a new name for an old practice.

    And as far as his first comment,

    "Today, cloud computing [4], virtualization [5], and tablet PCs [6] are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot."

    [4] Google.
    [5] Data Servers.
    [6] eBooks and medical applications.

  • by Anonymous Coward on Monday November 02, 2009 @02:25PM (#29952794)

    OOP was hyped as a cure-all, but it turned out to help in only a few portions of apps, and it triggered a philosophical holy war between set fans (relational) and graph fans (OOP). As a new tool to add to the toolbox, fine. As a cure-all, NOT.

  • by John Whitley ( 6067 ) on Monday November 02, 2009 @02:28PM (#29952850) Homepage

    The people actually putting artificial intelligence into practice knew that AI, like so many other things, would benefit us in small steps.

    Actually, there was a period very early on ('50s) when it was naively thought that "we'll have thinking machines within five years!" That's a paraphrase from a now-hilarious film reel interview with an MIT prof from the early 1950s, a film reel that was shown as the first thing in my graduate-level AI class, I might add. Sadly, I no longer have the reference to this clip.

    One major lesson was that there's an error in thinking "surely solving hard problem X must mean we've achieved artificial intelligence." As each of these problems fell (a computer passing the freshman calc exam at MIT, a computer beating a chess grandmaster, and many others), we realized that the solutions were simply due to understanding the problem and designing appropriate algorithms and/or hardware.

    The other lesson from that first day of AI class was that the above properties made AI into the incredible shrinking discipline: none of its successes was recognized as "intelligence," but many of them spawned entire new disciplines of powerful problem solving that are used everywhere today. So "AI" research gets no credit, even though its researchers have made great strides for computing in general.

  • by Monkeedude1212 ( 1560403 ) on Monday November 02, 2009 @02:31PM (#29952868) Journal

    Today, cloud computing, virtualization, and tablet PCs are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot.

    I agree with your post (not the article) - these technologies have all had success in the experimental fields in which they've been applied, but ESPECIALLY virtualization, which is way past experimenting and is becoming so big in the workplace that I've started using it at home. No need to set up a dual boot through the BIOS (for those who are scared to venture there), and the risk of losing data is virtually removed (pun intended), because anytime the virtual machine gets infected you just overwrite it with yesterday's backup.
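
    For what it's worth, here's a minimal sketch of that "roll back to yesterday" move, assuming a VirtualBox guest managed with the VBoxManage CLI; the guest and snapshot names are hypothetical:

        # Roll a (hypothetical) VirtualBox guest back to a clean snapshot, e.g.
        # after it picks up malware. Assumes VBoxManage is on the PATH and a
        # snapshot named "clean" was taken while the guest was known-good.
        import subprocess

        VM = "win7-sandbox"      # hypothetical guest name
        SNAPSHOT = "clean"       # snapshot taken before the infection

        def restore(vm, snapshot):
            # Power the guest off hard; a snapshot can only be restored on a stopped VM.
            subprocess.run(["VBoxManage", "controlvm", vm, "poweroff"], check=False)
            # Revert disk and machine state to the saved snapshot.
            subprocess.run(["VBoxManage", "snapshot", vm, "restore", snapshot], check=True)
            # Boot the guest again, now back to its known-good state.
            subprocess.run(["VBoxManage", "startvm", vm, "--type", "headless"], check=True)

        if __name__ == "__main__":
            restore(VM, SNAPSHOT)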

    I have yet to find an application of Virtualization that has failed to do what it promised.

  • by harmonise ( 1484057 ) on Monday November 02, 2009 @02:36PM (#29952940)

    This is a bit OT but I wanted to say that snydeq deserves a cookie for linking to the print version. I can only imagine that the regular version is at least seven pages. I hope slashdot finds a way to reward considerate contributors such as him or her for making things easy for the rest of us.

  • by E. Edward Grey ( 815075 ) on Monday November 02, 2009 @02:39PM (#29952976)

    I don't know of a single IT department that hasn't been helped by virtualization of servers. It makes more efficient use of purchased hardware, keeps businesses from some of the manipulations to which their hardware and OS vendors can subject them, and is (in the long term) cheaper to operate than a traditional datacenter. IT departments have wondered for a long time: "if I have all this processing power, memory, and storage, why can't I use all of it?" Virtualization answers that question, and does it in an elegant way, so I don't consider it snake oil.

  • The crazy hottie (Score:5, Interesting)

    by GPLDAN ( 732269 ) on Monday November 02, 2009 @02:42PM (#29953024)
    I kind of miss the crazy hotties that used to pervade the network sales arena. I won't even name the worst offenders, although the worst started with the word cable. They would go to job fairs and hire the hottest birds, put them in the shortest skirts and low-cut blouses, usually white with black push-up bras - and send them in to sell you switches.

    It was like watching the cast of a porn film come visit. Complete with the sleazebag regional manager, some of them even had gold chains on. Pimps up, big daddy!

    They would laugh wildly at whatever the customer said, even if it wasn't really funny. The girls would bat their eyelashes and drop pencils. It was so ridiculous it was funny; it was like a real-life comedy show skit.

    I wonder how much skimming went on in those days. Bogus purchase orders, fake invoices. Slap and tickle. The WORST was if your company had no money to afford any of the infrastructure and the networking company would get their "capital finance" team involved. Some really seedy, slimy stuff went down in the dot-com boom. And not just down pantlegs, either.
  • by Mike Buddha ( 10734 ) on Monday November 02, 2009 @02:45PM (#29953060)

    2. CASE: Regarding Wikipedia [wikipedia.org], it seems to be alive and kicking.

    As a programmer, CASE sounds pretty neat. I think it probably won't obviate the need for programmers any time soon, but it has the potential to automate some of the more tedious aspects of programming. I'd personally rather spend more of my time designing applications and less time hammering out the plumbing. It's interesting that I'm familiar with a lot of the CASE tools in that Wikipedia article, although they were never referred to as CASE tools when I was learning how to use them. I think the CASE concept may have been too broad and gotten a bad name, even though some of its parts were/are useful.

  • AI done poorly (Score:2, Interesting)

    by jspenguin1 ( 883588 ) <jspenguin@gmail.com> on Monday November 02, 2009 @02:50PM (#29953110) Homepage
  • by Animats ( 122034 ) on Monday November 02, 2009 @02:58PM (#29953204) Homepage

    Having taken several courses on AI, I never found a contributor to the field that promised it to be the silver bullet -- or even remotely comparable to the human mind.

    Not today, after the "AI Winter". But when I went through Stanford CS in the 1980s, there were indeed faculty members proclaiming in print that strong AI was going to result from expert systems Real Soon Now. Feigenbaum was probably the worst offender. His 1984 book, The Fifth Generation [amazon.com] (available for $0.01 through Amazon.com) is particularly embarrassing. Expert systems don't really do all that much. They're basically a way to encode troubleshooting books in a machine-processable way. What you put in is what you get out.
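
    As an aside, here's a toy sketch (my own illustration, not from TFA or Feigenbaum) of what "encoding a troubleshooting book in a machine-processable way" amounts to: if/then rules plus a loop that fires them until nothing new can be concluded. The rules and facts are hypothetical.

        # A miniature forward-chaining "expert system": hand-written rules from a
        # hypothetical PC troubleshooting guide, applied until no new conclusions
        # appear. What you put in (the rules) is exactly what you get out.
        RULES = [
            # (facts that must all be present, conclusion to add)
            ({"no_power_light", "plugged_in"}, "psu_suspect"),
            ({"psu_suspect", "fan_not_spinning"}, "replace_psu"),
            ({"no_power_light"}, "check_wall_outlet"),
        ]

        def infer(facts):
            """Keep firing rules until a pass adds nothing new."""
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in RULES:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        print(infer({"no_power_light", "plugged_in", "fan_not_spinning"}))
        # -> includes 'psu_suspect' and 'replace_psu'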

    Machine learning, though, has made progress in recent years. There's now some decent theory underneath. Neural nets, simulated annealing, and similar ad-hoc algorithms have been subsumed into machine learning algorithms with solid statistics underneath. Strong AI remains a long way off.

    Compute power doesn't seem to be the problem. Moravec's classic chart [cmu.edu] indicates that today, enough compute power to do a brain should only cost about $1 million. There are plenty of server farms with more compute power and far more storage than the human brain. A terabyte drive is now only $199, after all.

  • Re:ERP? (Score:4, Interesting)

    by smooth wombat ( 796938 ) on Monday November 02, 2009 @03:01PM (#29953242) Journal
    you've got a straight road to expensive failure.

    Sing it brother (or sister)! As one who is currently helping to support an Oracle-based ERP project, expensive doesn't begin to describe how much it's costing us. Original estimated cost: $20 million. Last known official number I heard for current cost: $46 million. I'm sure that number is over $50 million by now.

    But wait, there's more. We bought an off-the-shelf portion of their product and of course have to shoe-horn it to do what we want. There are portions of our home-grown process that aren't yet implemented and probably won't be implemented for several more months even though those portions are a critical part of our operations.

    But hey, the people who are "managing" the project get to put it on their résumé and act like they know what they're doing, which is all that matters.

    an aggressive sales force that would sell ice to eskimos

    I see you've read my column [earthlink.net].
  • by pthreadunixman ( 1370403 ) on Monday November 02, 2009 @03:06PM (#29953346)
    Yes, it helps, but it really only helps with under-utilized hardware (and this is really only a problem in Microsoft shops). It doesn't help at all with OS creep; in fact, it makes it worse by making the upfront costs of allocating new "machines" very low. However, it has been and continues to be marketed as a cure-all, which is where the snake oil comes in. VMware's solution to OS creep: run tiny stripped-down VMs with an RPC-like management interface (that will naturally only work with vSphere) so that the VM instances essentially become just really heavyweight processes. We are basically coming full circle back to ESX just being yet another general-purpose operating system where applications are written specifically for it, thereby defeating the entire purpose of using "virtualization" in the first place.
  • by Chris Burke ( 6130 ) on Monday November 02, 2009 @03:10PM (#29953388) Homepage

    A film reel which was shown as the first thing in my graduate level AI class, I might add. Sadly, I no longer have the reference to this clip.

    Heh. Day 1 of my AI class, the lecture was titled: "It's 2001 -- where's HAL?"

    The other lesson from that first day of AI class was that the above properties made AI into the incredible shrinking discipline: each of its successes weren't recognized as "intelligence", but often did spawn entire new disciplines of powerful problem solving that are used everywhere today. So "AI" research gets no credit, even though its researchers have made great strides for computing in general.

    Yeah that's when the prof introduced the concept of "Strong AI" (HAL) and "Weak AI" (expert systems, computer learning, chess algorithms etc). "Strong" AI hasn't achieved its goals, but "Weak" AI has been amazingly successful, often due to the efforts of those trying to invent HAL.

    Of course the rest of the semester was devoted to "Weak AI". But it's quite useful stuff!

  • by digitalhermit ( 113459 ) on Monday November 02, 2009 @03:14PM (#29953442) Homepage

    I administer hundreds of virtual machines and virtualization has solved a few different problems while introducing others.

    Virtualization is often sold as a means to completely utilize servers. Rather than having two or three applications on two or three servers, virtualization would allow condensing of those environments into one large server, saving power, data center floor space, plus allowing all the other benefits (virtual console, ease of backup, ease of recovery, etc..).

    In one sense it did solve the under-utilization problem. Well, actually it worked around the problem. The actual problem was often that certain applications were buggy and did not play well with other applications. If the application crashed it could bring down the entire system. I'm not picking on Windows here, but in the past the Windows systems were notorious for this. Also, PCs were notoriously unreliable (but they were cheap, so we weighed the cost/reliability). To "solve" the problem, applications were segregated to separate servers. We used RAID, HA, clusters, etc., all to get around the problem of unreliability.

    Fast forward a few years and PCs are a lot more reliable (and more powerful) but we still have this mentality that we need to segregate applications. So rather than fixing the OS we work around it by virtualizing. The problem is that virtualization can have significant overhead. On Power/AIX systems, the hypervisor and management required can eat up 10% or more of RAM and processing power. Terabytes of disk space across each virtual machine is eaten up in multiple copies of the OS, swap space, etc.. Even with dynamic CPU and memory allocation, systems have significant wasted resources. It's getting better, but still only partially addresses the problem of under-utilization.

    So what's the solution? Maybe a big, highly reliable box with multiple applications running? Sound familiar?

  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Monday November 02, 2009 @03:19PM (#29953508) Homepage Journal

    because virtualization only works for large companies with many, many servers

    You're full of crap. At my company, a coworker and I are the only ones handling the virtualization for a single rackful of servers. He virtualizes Windows stuff because of stupid limitations in so much of the software. For example, we still use a lot of legacy FoxPro databases. Did you know that MS's own FoxPro client libraries are single-threaded and may only be loaded once per instance, so that a Windows box is only capable of executing one single query at a time? We got around that by deploying several virtualized instances and querying them round-robin. It's not perfect, but it works as well as anything could given that FoxPro is involved in the formula. None of those instances needs more than about 256MB of RAM or any CPU to speak of, but we need several of them. While that's an extreme example, it makes the point: sometimes with Windows you really want a specific application to be the only thing running on the machine, and virtualization gives that to us.
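
    If it helps, here's a minimal sketch of that round-robin dispatch, with hypothetical VM endpoints; the actual FoxPro/ODBC plumbing is elided:

        # Rotate queries across a pool of single-query-at-a-time VM instances.
        # Endpoint names are hypothetical; the real query transport is stubbed out.
        import itertools
        import threading

        FOXPRO_VMS = ["fox-vm-01:9000", "fox-vm-02:9000", "fox-vm-03:9000"]

        class RoundRobinPool:
            def __init__(self, endpoints):
                self._cycle = itertools.cycle(endpoints)
                self._lock = threading.Lock()   # callers may be multithreaded

            def next_endpoint(self):
                with self._lock:
                    return next(self._cycle)

        pool = RoundRobinPool(FOXPRO_VMS)

        def run_query(sql):
            endpoint = pool.next_endpoint()
            # A real version would open a connection to `endpoint` and execute
            # the query there; here we just show which instance gets the work.
            return "%s -> %s" % (sql, endpoint)

        for i in range(4):
            print(run_query("SELECT * FROM orders WHERE id = %d" % i))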

    I do the same thing on the Unix side. Suppose we're rolling out a new Internet-facing service. I don't really want to install it on the same system as other critical services, but I don't want to ask my boss for a new 1U rackmount that will sit with a load average of 0.01 for the next 5 years. Since we use FreeBSD, I find a lightly-loaded server and fire up a new jail instance. Since each jail only requires the disk space to hold software that's not part of the base system, I can do things like deploying a Jabber server in its own virtualized environment in only 100MB.

    I don't think our $2,000 Dell rackmounts count as "super-servers" by any definition. If we have a machine sitting there mostly idle, and we can virtualize a new OS instance with damn near zero resource waste that solves a very real business or security need, then why on earth not, other than because it doesn't appeal to the warped tastes of certain purists?

  • by afidel ( 530433 ) on Monday November 02, 2009 @03:26PM (#29953616)
    It saved us from having to do a $1M datacenter upgrade so yeah, I'd say it benefited us.
  • Worse list ever... (Score:1, Interesting)

    by Anonymous Coward on Monday November 02, 2009 @03:27PM (#29953634)

    umm... worse list ever...

    And does anyone remember all the shit Java was gonna solve for us? I'm still waiting for the OS to not matter. Computers got faster and Java got slower. Worst POS ever.

  • by Rei ( 128717 ) on Monday November 02, 2009 @03:31PM (#29953686) Homepage

    Actually, the funny thing is, real snake oil actually does what it was originally supposed to do. "Snake oil" comes from traditional Chinese medicine (as a cure for joint pain), and was made from the fat of the Chinese water snake, Enhydris chinensis. It is extremely high in omega-3 fatty acids (particularly EPA), and is very similar to what is sold today as fish oil. Omega-3 fatty acids (in particular, EPA) are now known to reduce the progression and symptoms of rheumatoid arthritis.

    Now, in the US, a variety of hucksters took fats from any old snake (if it even involved snake oil at all) and made all sorts of miraculous, unsubstantiated claims about what it would do. But in its original role in Chinese medicine, snake oil likely did exactly what it was claimed to do.

  • by rhsanborn ( 773855 ) on Monday November 02, 2009 @03:46PM (#29953920)
    I disagree. There are some real benefits for smaller companies who can afford to virtualize, more or less depending on the types of applications. Yes, I can buy one server to run any number of business critical applications, but I've seen, in most cases, that several applications are independently business critical and needed to be available at least for the full business day or some important aspect of the company was shut down. So while a single virtual server running everything sucks, you really can get a very close effect when only one of those servers you listed above fails. Add in a VM solution with two servers running VMWare with vmotion and you can get load balancing and fault tolerance. My experience is in the financial services industry. A small company doesn't have a ton of cash to throw around, but having new applications stop for a day while you try to get your single server back up and running costs a lot more than a couple more tens of thousands of dollars to buy a fault tolerant solution. Virtualization is perfect for that.
  • by Caesar Tjalbo ( 1010523 ) on Monday November 02, 2009 @04:09PM (#29954234)
    I tell them it's an artificially intelligent terminal.
  • by Jesus_666 ( 702802 ) on Monday November 02, 2009 @04:14PM (#29954302)

    For instance, your hand written mail is most likely read by a machine that uses optical character recognition to decide where it goes with a pretty good success rate and confidence factor to fail over to humans.

    In fact, the Deutsche Post (Germany's biggest mail company) uses a neural network to process hand-written zip codes. It works rather well, as far as I know. Classic AI, too.

    Plus, spam filters. Yes, they merely use a glorified Bayes classifier but, well... learning classifiers are a part of AI. Low-level AI, for sure, but AI.
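
    To make the "glorified Bayes classifier" point concrete, here's a toy naive Bayes spam filter trained on made-up messages (real filters add smarter tokenization and better smoothing, but the math really is this simple):

        import math
        from collections import Counter

        TRAIN = [
            ("buy cheap pills now", "spam"),
            ("limited offer buy now", "spam"),
            ("meeting notes attached", "ham"),
            ("lunch tomorrow with the team", "ham"),
        ]

        # Count words per class and messages per class from the toy training set.
        word_counts = {"spam": Counter(), "ham": Counter()}
        label_counts = Counter()
        for text, label in TRAIN:
            label_counts[label] += 1
            word_counts[label].update(text.split())

        def classify(text):
            vocab = len(set(word_counts["spam"]) | set(word_counts["ham"]))
            scores = {}
            for label in ("spam", "ham"):
                total = sum(word_counts[label].values())
                # log prior + sum of log likelihoods, with add-one smoothing
                score = math.log(label_counts[label] / sum(label_counts.values()))
                for word in text.split():
                    score += math.log((word_counts[label][word] + 1) / (total + vocab))
                scores[label] = score
            return max(scores, key=scores.get)

        print(classify("buy pills now"))          # -> spam
        print(classify("team meeting tomorrow"))  # -> ham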

    One thing about AI that confuses laypersons is that it's a term describing many things, from the lowliest classifier to SKYNET. Much like laypersons tend to associate chemistry with mixing colored liquids until something happens, they associate artificial intelligence with either artificial human-like brains or the behavior of bots in first-person shooters (which, amusingly, often doesn't involve any AI at all).

  • by HockeyPuck ( 141947 ) on Monday November 02, 2009 @04:20PM (#29954382)

    EMC, IBM, HDS and HP I'm looking at you.

    You've been pushing this Storage Virtualization on us storage admins for years now, and it's more trouble than it's worth. What is it? It's putting some sort of appliance (or in HDS's view a new disk array) in front of all of my other disk arrays, trying to commoditize my back end disk arrays, so that I can have capacity provided by any vendor I choose. You make claims like,

    1. "You'll never have vendor lock-in with Storage virtualization!" However, now that I'm using your appliance to provide the intelligence (snapshots, sync/async replication, migration etc) I'm now locked into your solution.
    2. "This will be easy to manage." How many of these fucking appliances do I need for my new 100TB disk array? When I've got over 300 storage ports on my various arrays, and my appliance has 4 (IBM SVC I'm looking at you), how many nodes do I need? I'm now spending as much time trying to scale up your appliance solution that for every large array I deploy, I need 4 racks worth of appliances.
    3. "This will be homogeneous!" Bull fucking shit. You claimed that this stuff will work with any vendor's disk arrays so that I can purchase the cheapest $/GB arrays out there. No more DMX, just clariion, no more DS8000 now fastT. What a load. You only support other vendor's disk arrays during the initial migration and then I'm pretty much stuck with your arrays until the end of time. So much for your utopian view of any vendor. So now that I've got to standardize on your back end disk arrays, it's not like you're saving me the trouble of only having one loadbalancing software solutions (DMP, Powerpath, HDLM, SDD etc..). If I have DMX on the backend, I'm using Powerpath whether I like it or not. This would have been nice if I was willing to have four different vendor's selling me backend capacity, but since I don't want to deal with service contracts from four different vendors, that idea is a goner.

    Besides, when I go to your large conferences down in Tampa, FL, even your own IT doesn't use it. Why? Because all you did was add another layer of complexity (troubleshooting, firmware updates, configuration) between my servers and their storage.

    You can take this appliance (or switch based in EMC's case) based storage virtualization and Shove It!

    btw: There's a reason why we connect mainframe channels directly to the control units. (OpenSystems translation: Connecting hba ports to storage array ports.) Answer: Cable doesn't need upgrading, doesn't need maintenance contracts and is 100% passive.

  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Monday November 02, 2009 @04:50PM (#29954784) Homepage Journal

    I don't agree with the guy who said its only for enterprises, but I think you would have been better off just not using foxpro.

    The codebase started back in the DOS days.

    Its not that difficult to transition from. just do it. You'll be happier.

    I wouldn't say that! We've moved a lot of data into PostgreSQL with the help of a tool I wrote that my boss let me release under the GPL [sourceforge.net]. There's still a lot of code in FP, though, and we're in the planning stages of a multi-year conversion process.
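
    For the curious, here's a minimal sketch of the general FoxPro-to-PostgreSQL idea (not the actual GPL'd tool mentioned above): read a .dbf table and copy its rows across. It assumes the third-party dbfread and psycopg2 packages, and the file, table, and column names are hypothetical.

        from dbfread import DBF
        import psycopg2

        conn = psycopg2.connect("dbname=legacy user=migrator")
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE IF NOT EXISTS customers (
                cust_id  integer,
                name     text,
                balance  numeric
            )
        """)

        # Stream records out of the FoxPro/dBase file and into PostgreSQL.
        for rec in DBF("CUSTOMER.DBF", encoding="cp850"):
            cur.execute(
                "INSERT INTO customers (cust_id, name, balance) VALUES (%s, %s, %s)",
                (rec["CUST_ID"], rec["NAME"], rec["BALANCE"]),
            )

        conn.commit()
        cur.close()
        conn.close()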

    Trust me: we've seen the light! Now it's just a matter of moving on with zero allowed downtime.

  • Re:ERP? (Score:3, Interesting)

    by q2k ( 67077 ) on Monday November 02, 2009 @05:23PM (#29955178) Homepage

    Wow, an actively maintained ~ (tilde) web site. I don't think I've seen one of those since about 2002 ;) Your column is spot on.

  • by daveime ( 1253762 ) on Monday November 02, 2009 @05:29PM (#29955294)

    In fact, this was an internal web based app for our office, which dealt with hotel reservations.

    When setting up a new hotel on the system, the users (our staff) had to find and supply the telephone number as part of the standard contact details we needed for every hotel.

    Do you know of any hotel that DOESN'T have a telephone, and if so, how would we call them to make a reservation ?

    There are sometimes instances where some fields MUST be filled in, otherwise the whole record becomes worthless.
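
    In code, that kind of rule is as blunt as it sounds; here's a minimal sketch with hypothetical field names:

        # Reject the record outright if a field the business can't live without
        # -- like the hotel's phone number -- is missing or blank.
        REQUIRED_FIELDS = ("name", "address", "phone")

        def validate_hotel(record):
            """Return a list of human-readable errors; an empty list means OK."""
            errors = []
            for field in REQUIRED_FIELDS:
                value = str(record.get(field, "")).strip()
                if not value:
                    errors.append("%s is required" % field)
            return errors

        print(validate_hotel({"name": "Hotel Example", "address": "Main St 1", "phone": ""}))
        # -> ['phone is required']; without a phone number the record is useless
        #    for making reservations, so it never gets saved.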

  • by Anonymous Coward on Monday November 02, 2009 @05:52PM (#29955572)

    One-guy IT shop here. Walked into an environment with aging servers and little to no redundancy/backup/business continuity/disaster recovery plan. I had a small budget worked out by my predecessor ($35K) to replace the email system, which included purchasing two servers. I took that budget and implemented a virtualization/iSCSI storage/email/AD update project that put over 75% of our 15 servers into a virtualized environment that tolerates machines going down and has redundancy.

    Next year I'll be adding another host for capacity and replicating data to a remote office for disaster recovery. I didn't use the top of the line storage and have all the bells and whistles from VMWare but it works and it works well for the money I've spent.

    There's a definite benefit to running virtualized servers with centralized storage even for businesses with 5-6 servers. You don't have to have a super-server to handle separate servers hosting AD/DNS/DHCP, email for 60 people, small business accounting systems, small databases, and a file server. It is nice that those environments don't live in the same OS space. Also, if you're building a one-host virtualized environment you're asking for trouble, but a two-host environment for a small business could easily support 8 to 10 machines with failover redundancy.

  • by lessermilton ( 863868 ) on Monday November 02, 2009 @05:53PM (#29955574) Homepage
    This reminds me of a story I read a while ago about decision making - they tossed some folks in an MRI and presented them with a series of choices. According to the study, the brain activity predicting a choice showed up as much as 8 seconds before people were aware of actually making the decision. The argument in the article was that this is proof that there's no free will. My first thought when I read that was 'how does that make any sense?' The only thing that's conclusive is that people are terrible at knowing when they've made a decision. I think people don't really understand most things, including stuff we think we understand. On the other hand, I also tend to wonder if there really is life inside the computer, and each time I push a key or something I'm killing a little electron.
  • by ckaminski ( 82854 ) <slashdot-nospam.darthcoder@com> on Monday November 02, 2009 @06:13PM (#29955864) Homepage
    Simple, Mr. Web Guy. I don't trust you with my fucking number. I barely trust you with my email, but getting spam there is sort of a solved problem for me (Thank you GMail). But getting called because you want to upsell me on some $4 widget? No thanks. Stop REQUIRING my phone number. Just because your marketing guy wants it doesn't make it useful to get.

    That's the problem now... marketing is so good at getting your message across, you try at all costs to get the upsell and get your value back. Meanwhile, you just create jaded customers.
  • by Phantasmagoria ( 1595 ) <loban.rahman+slashdot@NoSpAm.gmail.com> on Monday November 02, 2009 @09:22PM (#29958130)

    Hell, almost all the cases should be considered successes now. The problem was that they were all massively over hyped back in the day.

    Our massive move to web-applications and the newly-but-stupidly-coined "Cloud" is as much a thin client solution as it was back then.

    To many, Google can be considered an AI. After all, it helps answer your questions. With more and more NLP being built into it (and other web applications), it is getting closer to directly answering your questions.

    So what if ERP always went over budget and was successfully deployed only half the time? That is still a HUGE number of deployments. Do you know of any large companies that DON'T use some form of ERP?

  • by Zalbik ( 308903 ) on Tuesday November 03, 2009 @02:13AM (#29960470)

    The more you Idiot-Proof a system, the smarter the Idiots become. Not smarter at actually entering the correct data, just smarter at bypassing the protections you put in place.

    Sigh.

    This depresses me.

    The same old lazy "users are idiots" arguments.

    Did you bother finding out WHY users were going to such lengths to get around your validation routines? Maybe...just maybe, they had perfectly good reasons for not entering this piece of data. For the most part I have found that users have very good reasons for doing what they do. They might be the wrong reasons (i.e. in that what they are doing doesn't accomplish what they think it does), but they usually have a perfectly justifiable reason.

    It may be enjoyable to try to build a system with nothing more than your awesome psychic powers for determining requirements, but sometimes you need to crawl out of your mom's basement and actually talk to some users to find out what they want. You may find that they are actually smarter than you thought...
