IT Snake Oil — Six Tech Cure-Alls That Went Bunk
snydeq writes "InfoWorld's Dan Tynan surveys six 'transformational' tech-panacea sales pitches that have left egg on at least some IT department faces. Each of the six technologies — five old, one new — has earned the dubious distinction of being the hype king of its respective era, falling far short of its legendary promises. Consultant greed, analyst oversight, dirty vendor tricks — 'the one thing you can count on in the land of IT is a slick vendor presentation and a whole lot of hype. Eras shift, technologies change, but the sales pitch always sounds eerily familiar. In virtually every decade there's at least one transformational technology that promises to revolutionize the enterprise, slash operational costs, reduce capital expenditures, align your IT initiatives with your core business practices, boost employee productivity, and leave your breath clean and minty fresh.' Today, cloud computing, virtualization, and tablet PCs are vying for the hype crown." What other horrible hype stories do some of our seasoned vets have?
In Defense of Artificial Intelligence (Score:5, Insightful)
The bad news is that artificial intelligence has yet to fully deliver on its promises.
Only idiots, marketers, businessmen and outsiders ever thought we would be completely replaced by artificially intelligent machines. The people actually putting artificial intelligence into practice knew that AI, like so many other things, would benefit us in small steps. Many forms of automation are technically artificial intelligence; it's just very simple artificial intelligence. While you might want to argue that the things we benefit from are heuristics, statistics and messes of if/then decision trees, successful AI is nothing more than that. Everyone reading this enjoys the benefits of AI, though you probably don't realize it. For instance, your handwritten mail is most likely read by a machine that uses optical character recognition to decide where it goes, with a pretty good success rate and a confidence factor to fail over to humans. Recommendation systems are often based on AI algorithms. I mean, the article even says this:
The ability of your bank's financial software to detect potentially fraudulent activity on your accounts or alter your credit score when you miss a mortgage payment are just two of many common examples of AI at work, says Mow. Speech and handwriting recognition, business process management, data mining, and medical diagnostics -- they all owe a debt to AI.
Having taken several courses on AI, I never found a contributor to the field that promised it to be the silver bullet -- or even remotely comparable to the human mind. I don't ever recall reading anything other than fiction claiming that humans would soon be replaced completely by thinking machines.
In short, I don't think it's fair to put it in this list as it has had success. It's easy to dismiss AI if the only person you hear talking about it is the cult-like Ray Kurzweil but I assure you the field is a valid one [arxiv.org] (unlike CASE or ERP). In short, AI will never die because the list of applications -- though small -- slowly but surely grows. It has not gone 'bunk' (whatever the hell that means [wiktionary.org]). You can say expert systems have failed to keep their promises but not AI on the whole. The only thing that's left a sour taste in your mouth is salesmen and businessmen promising you something they simply cannot deliver on. And that's nothing new nor anything specific to AI.
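That mail-sorting example (make a guess, and kick the piece over to a human when the confidence is too low) is a pattern worth spelling out. A rough sketch in Python follows; the recognizer stub, the ZIP code, and the 0.95 cutoff are all made up for illustration:

    # Sketch of the "confidence factor to fail over to humans" idea above.
    # recognize_zip() is a stand-in; a real system would run an OCR model.

    CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff; tuned in practice

    def recognize_zip(image):
        """Pretend OCR: returns (best_guess, confidence between 0 and 1)."""
        return "90210", 0.87  # hypothetical output

    def route_mail(image):
        guess, confidence = recognize_zip(image)
        if confidence >= CONFIDENCE_THRESHOLD:
            return ("machine", guess)   # sorted automatically
        return ("human", None)          # falls through to a human operator

    print(route_mail(None))  # ('human', None) with the stub above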
Re: (Score:2)
The same defense applies to pretty much all of these (except maybe CASE).
Re:What? CASE was a success! (Score:5, Insightful)
That's not CASE. CASE was about translating requirements into code without human involvement. Your examples are all about abstraction and APIs.
Re: (Score:3, Funny)
Since when are PHP programmers human ?
<matrix-parody>
I'd like to share a revelation I've had with you, it came to me when I tried to classify your programmers. Every programmer on this planet forms a natural equilibrium with the software project, but you PHP programmers do not. You multiply and multiply script snippets until every semblance of readability and logic is removed. And then you simply spread to another project. There is another organism on this planet that follows the same pattern. Do you
Re:In Defense of Artificial Intelligence (Score:5, Interesting)
The people actually putting artificial intelligence into practice knew that AI, like so many other things, would benefit us in small steps.
Actually, there was a period very early on (the '50s) when it was naively thought that "we'll have thinking machines within five years!" That's a paraphrase from a now-hilarious film reel interview with an MIT prof from the early 1950s. A film reel that was shown as the first thing in my graduate-level AI class, I might add. Sadly, I no longer have the reference to this clip.
One major lesson was that there's an error in thinking "surely solving hard problem X must mean we've achieved artificial intelligence." As each of these problems fell (a computer passing the freshman calc exam at MIT, a computer beating a chess grandmaster, and many others), we realized that the solutions were simply due to understanding the problem and designing appropriate algorithms and/or hardware.
The other lesson from that first day of AI class was that the above properties made AI into the incredible shrinking discipline: each of its successes weren't recognized as "intelligence", but often did spawn entire new disciplines of powerful problem solving that are used everywhere today. So "AI" research gets no credit, even though its researchers have made great strides for computing in general.
Re:In Defense of Artificial Intelligence (Score:4, Interesting)
A film reel which was shown as the first thing in my graduate level AI class, I might add. Sadly, I no longer have the reference to this clip.
Heh. Day 1 of my AI class, the lecture was titled: "It's 2001 -- where's HAL?"
The other lesson from that first day of AI class was that the above properties made AI into the incredible shrinking discipline: each of its successes weren't recognized as "intelligence", but often did spawn entire new disciplines of powerful problem solving that are used everywhere today. So "AI" research gets no credit, even though its researchers have made great strides for computing in general.
Yeah that's when the prof introduced the concept of "Strong AI" (HAL) and "Weak AI" (expert systems, computer learning, chess algorithms etc). "Strong" AI hasn't achieved its goals, but "Weak" AI has been amazingly successful, often due to the efforts of those trying to invent HAL.
Of course the rest of the semester was devoted to "Weak AI". But it's quite useful stuff!
Re: (Score:2)
``Having taken several courses on AI, I never found a contributor to the field that promised it to be the silver bullet -- or even remotely comparable to the human mind.''
The problem is that, if it isn't that, then what is "artificial intelligence", rather than flashy marketing speak for just another bunch of algorithms?
Re: (Score:3, Insightful)
The problem is that, if it isn't that, then what is "artificial intelligence", rather than flashy marketing speak for just another bunch of algorithms?
Exactly. "Artificial intelligence" seems to serve various purposes—at best vacuous and at worst deceptive. How many millions of dollars have academicians raked in for various projects that involve research into "artificial intelligence"?
What makes all this silliness sustainable is the philosophical fog that surrounds words such as "intelligence" and "thinking". Such words easily slip their moorings in our common language, and acquire some very strange uses. Yet, because they are recognizably legitima
Re: (Score:3, Insightful)
When I hear about intelligent anything to do with computers, I just think of a system that learns. That, to me, is the key differentiator. On the other hand, my mom's friend was telling me one night at dinner that her son was taking a cla
Re:In Defense of Artificial Intelligence (Score:4, Insightful)
Do you also complain when airplanes don't flap their wings? (Sci-fi's ornithopters exempted, of course.)
Knowledge systems/rules engines and neural networks can deduce answers, that is sufficient to be labeled "intelligence" in my book.
Re: (Score:3, Insightful)
Re:In Defense of Artificial Intelligence (Score:5, Insightful)
Re:In Defense of Artificial Intelligence (Score:5, Insightful)
A very good example, that. That $20 DSP does nothing but a brute force search on certain sound patterns. This is not in any way similar to how humans process speech.
I am not in the camp that says humans have a certain ineffable something that computers can never replicate, but brute-force pattern matching is not the way to find out how human perception works, let alone to reimplement it in a machine. Chess, BTW, is an example of the opposite: even humans do a brute-force search down the decision tree. Sometimes they're trained enough to prune the tree quickly, but that is no different from the common algorithms currently in use.
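For what it's worth, "brute-force search plus pruning" in chess engines is essentially minimax with alpha-beta cutoffs. A bare-bones Python sketch (the game-specific evaluate, legal_moves and apply_move functions are placeholders, not any real engine's code):

    # Negamax formulation of minimax with alpha-beta pruning.
    # evaluate(state) scores a position for the player to move;
    # legal_moves(state) lists moves; apply_move(state, move) returns a new state.
    def negamax(state, depth, alpha, beta, evaluate, legal_moves, apply_move):
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state)
        best = float("-inf")
        for move in moves:
            score = -negamax(apply_move(state, move), depth - 1,
                             -beta, -alpha, evaluate, legal_moves, apply_move)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:   # prune: the opponent will never allow this line
                break
        return best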
As Douglas Hofstadter puts it, the most interesting things happen in those 100ms between seeing a picture of your mother and going 'Mom!', and we're nowhere near understanding that problem space enough to implement it in AI. At least, we weren't a couple of years back. I haven't kept up with current developments though.
Mart
Re:In Defense of Artificial Intelligence (Score:4, Insightful)
How do humans do it?
Re:In Defense of Artificial Intelligence (Score:5, Insightful)
How do humans do it?
It's a fascinatingly complex process. Seriously, read up a bit on Wikipedia and perhaps take a few foreign languages. There are many, many points of failure. I think it's interesting to consider Orwell's argument about language in 1984. When thinking of Orwell, I'm glad that I've had the opportunity to be exposed to as many languages as I have. The more languages I learn, even if only a few words and concepts, the more modes of thinking I open myself up to. A new language to me can sometimes introduce a whole new viewpoint on the world, simply through the specific connotations and denotations. Usually denotations are easy to translate; connotations, however, can pose such a problem that sometimes we prefer to just outright borrow a word from another language to express precisely our meaning. Language can evoke all five senses.
Personally, I'm fascinated by language, written and spoken. There are words I learned in Germany that I still use today even though I'm no longer anywhere near fluent (use it or lose it). For example, in English we have a "shortcut," but I can't readily think of the opposite unless I use the German word "Umweg." Another example: as I was looking at art in a store today I came across some Japanese characters (because we know that hanging up symbols you have no idea about is so cool), and I noticed that the kanji for woman was one of the radicals in a kanji that was translated as "tranquility." It made me wonder who, thousands of years ago, thought about the concept of tranquility and decided that the lower radical should be the symbol for "woman." I could go on like this. Suffice it to say, language is perhaps the single tool we use to define our consciousness as humans.
I'd further pontificate that unless we were to create an AI for whom language is as pervasive as it is in the human mind, chasing strong AI will always result in failure.
Brute force is how humans do it (Score:4, Informative)
Huh? In the ear, "thousands of "hair cells" are set in motion, and convert that motion to electrical signals that are communicated via neurotransmitters to many thousands of nerve cells" [wikipedia.org]. Wouldn't you say the joint work of "thousands of nerve cells" is exactly what "brute force" is about?
The reason why artificial intelligence still seems so distant is that no artificial computer has the brute force of the human brain. The average brain has tens of billions of neurons, each of which can process thousands of inputs a few hundred times per second.
Although computers have been able to simulate smaller assemblages of neurons very precisely, simulating the full scope of a human brain is still out of reach, even for Google.
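Just to put rough numbers on that claim (all three figures are loose assumptions taken from the sentence above, not measurements):

    # Back-of-envelope estimate; every constant is an assumption from the post.
    neurons = 2e10            # "tens of billions of neurons"
    inputs_per_neuron = 1e3   # "thousands of inputs"
    updates_per_second = 2e2  # "a few hundred times per second"

    events_per_second = neurons * inputs_per_neuron * updates_per_second
    print(f"{events_per_second:.1e} synaptic events per second")  # ~4.0e+15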
Re:In Defense of Artificial Intelligence (Score:5, Insightful)
ERP could work if the vendors would realistically deal with GIGO.
Unless you lock down the permissions so tightly that the system is unusable, your users will enter bad data. They'll add new entries for objects that already exist, they'll misspell the name of an object and then create a new object instead of editing the one they just created. They'll make every possible data entry error you can imagine, and plenty that you can't.
We'd see a lot more progress in business software applications if all vendors would follow two rules:
Re:In Defense of Artificial Intelligence (Score:5, Funny)
I don't think so but the possibility can't be ruled out without further investigation. Have you ever tried to expose a database application to users and subsequently lost all faith in humanity?
Re:In Defense of Artificial Intelligence (Score:5, Funny)
I'm still getting therapy.
We had a simple field on a form to "Supply a Telephone Number". The users didn't, so we used JS to validate they had filled it in.
Then they filled in garbage, so we enforced numerals only. The users entered "1111111" everywhere.
Then we enforced standard number formats based on a Country selector, with correct International Dialling Codes and pattern / format matching. The users entered "0044 (1)1111111" everywhere.
Finally we checked that the numbers didn't look like "0044 (1)1111111" i.e. too many repeated characters, after extensive testing to avoid false-positives. The users now enter "0044 (1)2121212" everywhere.
The more you Idiot-Proof a system, the smarter the Idiots become. Not smarter at actually entering the correct data, just smarter at bypassing the protections you put in place.
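As an aside, that last escalation (catching low-entropy junk like "0044 (1)2121212") can be approximated without chasing specific patterns by just counting distinct digits. This is only a sketch, not the actual validation code, and the cutoff of four distinct digits is an arbitrary assumption that will still flag some legitimate numbers:

    import re

    def looks_like_junk(phone):
        """Heuristic: strip non-digits, then flag strings that are too short
        or built from too few distinct digits (1111111, 2121212, ...)."""
        digits = re.sub(r"\D", "", phone)
        return len(digits) < 7 or len(set(digits)) <= 4  # cutoff is a guess

    print(looks_like_junk("0044 (1)2121212"))  # True
    print(looks_like_junk("1111111"))          # True
    print(looks_like_junk("+1 415 867 5309"))  # False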
Re:In Defense of Artificial Intelligence (Score:5, Insightful)
Just out of curiosity... did you ever try to find out WHY people were making entries with invalid phone numbers? Is it at all possible that instead of your users being idiots, they HAD to make an entry, but the phone number was one piece of data that simply wasn't available?
If I've learned anything over a lot of years of programming, it's that when your users absolutely insist on doing something contrary to what your program wants them to do, it's time to sit down and listen.
Re:In Defense of Artificial Intelligence (Score:5, Funny)
Well, we tried ringing them...
Re:In Defense of Artificial Intelligence (Score:5, Insightful)
"We had a simple field on a form to "Supply a Telephone Number". The users didn't, so we used JS to validate they had filled it in."
So instead of validation server-side you rely on validation client-side?
"The more you Idiot-Proof a system, the smarter the Idiots become. Not smarter at actually entering the correct data, just smarter at bypassing the protections you put in place."
Hummm... Why are your users entering such telephone numbers as 1111111? Are you *sure* they do it by mistake? Or might it be that they don't *want* to give their telephone number to you for their own valid reasons and you still didn't add the option "I don't have or don't want to share my telephone number with you"?
I'm not sure which keyboard end is the idiot one in this case.
Re:In Defense of Artificial Intelligence (Score:5, Interesting)
In fact, this was an internal web based app for our office, which dealt with hotel reservations.
When setting up a new hotel on the system, the users (our staff), had to find and supply the telephone number as part of the standard contact details we needed for every hotel.
Do you know of any hotel that DOESN'T have a telephone, and if so, how would we call them to make a reservation ?
There are sometimes instances where some fields MUST be filled in, otherwise the whole record becomes worthless.
Re:In Defense of Artificial Intelligence (Score:4, Insightful)
Re: (Score:3, Interesting)
That's the problem now... marketing is so good at getting your message across, you try at all costs to get the upsell and get your value back. Meanw
Re: (Score:3, Interesting)
Sigh.
This depresses me.
The same old lazy "users are idiots" arguments.
Did you bother finding out WHY users were going to such lengths to get around your validation routines? Maybe...just maybe, they had perfectly good reasons for not entering this piece of data. For the most part I have found that users have very good reasons for do
Re:In Defense of Artificial Intelligence (Score:5, Insightful)
I've never worked for a software company but as the "computer guy" I got to help move people from the "emailing spreadsheets around" workflow to basic MS Access database applications (I know just enough about databases to be horrified about the idea of using Access for critical business functions but it's better than Excel).
As the maintenance manager of a factory I got to help the plant manager make software purchasing decisions. I've come to the conclusion that mid-sized to large corporations should just bite the bullet and hire their own programmers. If it makes sense to design your own product, your own assembly lines, and your own tooling, jigs and fixtures, then it makes sense to design your own software. Any cost savings you can achieve by outsourcing to a more specialized company never seem to materialize.
Re: (Score:3, Insightful)
I have to agree with this...
When a corporation passes a certain size, having a packaged ERP is a good idea (for legal compliance).
The problem is, generic accounting packages, ERP packages, etc. work best when you don't have a bunch of exceptional processes (our PO process had 17 variations, from as little as 2 lines on the form to a full form plus multiple attachments; people used the same form area for many different meanings, so the 1-page form was really about a 4-page form).
The solution was to cut down
GUIs, games, compilers used to be called AI (Score:2)
Some researchers divided this into "soft" and "hard" AI. The latter would be something like a conversational, human-like mentality. The former is any software technology along the way.
Re: (Score:2)
Having taken several courses on AI, I never found a contributor to the field that promised it to be the silver bullet -- or even remotely comparable to the human mind.
Douglas Lenat [wikipedia.org], perhaps?
Re:In Defense of Artificial Intelligence (Score:5, Interesting)
Having taken several courses on AI, I never found a contributor to the field that promised it to be the silver bullet -- or even remotely comparable to the human mind.
Not today, after the "AI Winter". But when I went through Stanford CS in the 1980s, there were indeed faculty members proclaiming in print that strong AI was going to result from expert systems Real Soon Now. Feigenbaum was probably the worst offender. His 1984 book, The Fifth Generation [amazon.com] (available for $0.01 through Amazon.com) is particularly embarrassing. Expert systems don't really do all that much. They're basically a way to encode troubleshooting books in a machine-processable way. What you put in is what you get out.
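"A way to encode troubleshooting books in a machine-processable way" is about right: most classic expert systems reduce to forward chaining over if-then rules. A toy illustration (the rules and facts here are invented, not from any real product):

    # Tiny forward-chaining rule engine; rules and facts are invented examples.
    RULES = [
        ({"no_power_light", "plugged_in"}, "suspect_power_supply"),
        ({"power_light_on", "no_video"}, "check_monitor_cable"),
        ({"suspect_power_supply"}, "replace_power_supply"),
    ]

    def infer(facts):
        """Keep applying rules until no new conclusions appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"no_power_light", "plugged_in"}))
    # adds 'suspect_power_supply' and then 'replace_power_supply'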
Machine learning, though, has made progress in recent years. There's now some decent theory underneath. Neural nets, simulated annealing, and similar ad-hoc algorithms have been subsumed into machine learning algorithms with solid statistics underneath. Strong AI remains a long way off.
Compute power doesn't seem to be the problem. Moravec's classic chart [cmu.edu] indicates that today, enough compute power to do a brain should only cost about $1 million. There are plenty of server farms with more compute power and far more storage than the human brain. A terabyte drive is now only $199, after all.
Re:In Defense of Artificial Intelligence (Score:4, Informative)
"CASE" isn't entirely bunk either. CASE as CASE might be, but computer aided software design isn't. Perhaps most here are now too young to remember when, if you wanted a GUI, you had to design it by hand, positioning all the elements manually in code and then linking things up manually, in code.
Now almost nobody designs a GUI without a RAD tool of some kind. You drop your GUI elements on the window and the tool generates code stubs for the interaction. That's way, way nicer, and way, way faster than, for example, setting up transfer records for a Windows 3.1 form.
Re: (Score:3, Funny)
You repeat yourself, Mr. eldavojohn.
AI already succeeded (Score:5, Insightful)
AI already has successes. But, as an AI researcher friend of mine points out, once they succeed it's no longer 'AI'. Things like packet routing used to be AI. Path-finding, as in games, or route-finding, as with GPS: solved. So yes, AI will never arrive, because AI is always 'other than the AI we already have.'
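Route-finding is a good example of that shrinkage: what used to be an AI research problem is now a stock shortest-path search. A minimal Dijkstra in Python over an invented toy road network:

    import heapq

    def shortest_path_cost(graph, start, goal):
        """Dijkstra over an adjacency dict: graph[node] = [(neighbor, cost), ...]."""
        queue, seen = [(0, start)], set()
        while queue:
            cost, node = heapq.heappop(queue)
            if node == goal:
                return cost
            if node in seen:
                continue
            seen.add(node)
            for neighbor, step in graph.get(node, []):
                if neighbor not in seen:
                    heapq.heappush(queue, (cost + step, neighbor))
        return None  # unreachable

    # Made-up road network, purely for illustration.
    roads = {"A": [("B", 5), ("C", 2)], "C": [("B", 1), ("D", 7)], "B": [("D", 3)]}
    print(shortest_path_cost(roads, "A", "D"))  # 6, via A -> C -> B -> D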
Re: (Score:3, Interesting)
In fact, the Deutsche Post (Germany's biggest mail company) uses a neural network to process hand-written zip codes. It works rather well, as far as I know. Classic AI, too.
Plus, spam filters. Yes, they merely use a glorified Bayes classifier but, well... learning classifiers are a part of AI. Low-
Virtualization has worked (Score:5, Insightful)
Not sure why virtualization made it into the potential snake-oil of the future. It's demonstrating real benefits today...practically all of the companies I deal with have virtualized big chunks of their infrastructure.
I'd vote for cloud computing, previously known as utility computing. It's a lot more work than expected to offload processing outside your organization.
Re:Virtualization has worked (Score:5, Insightful)
Yup, even for "just" development, virtualization has been a great gift. With one or two beefy machines, each developer can have an exact mirror of a production environment, and not cause issues on the production side or even for other developers while testing code and such.
Re:Virtualization has worked (Score:4, Insightful)
because virtualization only works for large companies with many, many servers, yet contractors and vendors sell it to any company with a couple of servers. You should virtualize your email ($2,000 by itself, give or take a little), web server ($2,000 by itself, give or take a little), source control ($1,000 by itself, give or take a little), and a couple of others. So you have maybe $10,000 in 5 to 6 servers needed to run a small to mid-size company, and then spend tens of thousands to put them on one super-server running a complex setup of virtualized servers... oh no, the motherboard died and the entire biz is offline.
Virtualization has its place, but only at the larger companies.
Re: (Score:3, Insightful)
You could replace "virtualization" with "mainframe" or "big unix server" and still have the same issue.
You would also end up with similar approaches to the problem. With some of these (mainframe), virtualization has been mundane/commonplace for decades.
Re:Virtualization has worked (Score:5, Insightful)
Re: (Score:3, Funny)
Umm, welcome to reality.
Re:Virtualization has worked (Score:5, Insightful)
Re:Virtualization has worked (Score:4, Insightful)
Spoken like someone who investigated the technology five years ago, and hasn't updated their information since.
1. If a small business is running more than two servers, then it's likely it'll be cheaper, over the next five years, to virtualize those servers.
2. If a small business needs any sort of guaranteed uptime, it's cheaper to virtualize - two machines and high availability with VMWare, and you are good to go.
3. Setting up VMWare, for example, is relatively simple, and actually makes remote management easier, since I have CONSOLE access from remote sites to my machine. Need to change the network connection or segment for a machine remotely? You can't do it safely without virtualization.
There is more, but I recommend you check this out again, before continuing to spout this stuff. It's just not true anymore.
Re:Virtualization has worked (Score:4, Interesting)
because virtualization only works for large companies with many, many servers
You're full of crap. At my company, a coworker and I are the only ones handling the virtualization for a single rackful of servers. He virtualizes Windows stuff because of stupid limitations in so much of the software. For example, we still use a lot of legacy FoxPro databases. Did you know that MS's own FoxPro client libraries are single-threaded and may only be loaded once per instance, so that a Windows box is only capable of executing one single query at a time? We got around that by deploying several virtualized instances and querying them round-robin. It's not perfect, but works as well as anything could given that FoxPro is involved in the formula. None of those instances need to have more than about 256MB of RAM or any CPU to speak of, but we need several of them. While that's an extreme example, it illustrates the point: sometimes with Windows you really want a specific application to be the only thing running on the machine, and virtualization gives that to us.
I do the same thing on the Unix side. Suppose we're rolling out a new Internet-facing service. I don't really want to install it on the same system as other critical services, but I don't want to ask my boss for a new 1U rackmount that will sit with a load average of 0.01 for the next 5 years. Since we use FreeBSD, I find a lightly-loaded server and fire up a new jail instance. Since each jail only requires the disk space to hold software that's not part of the base system, I can do things like deploying a Jabber server in its own virtualized environment in only 100MB.
I don't think our $2,000 Dell rackmounts count as "super-servers" by any definition. If we have a machine sitting there mostly idle, and can virtualize a new OS instance with damn near zero resource waste that solves a very real business or security need, then why on earth not, other than because it doesn't appeal to the warped tastes of certain purists?
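The round-robin trick mentioned above is nothing exotic: just rotate through a pool of single-query-at-a-time instances. A sketch (hostnames and the execute callback are placeholders, not the real setup):

    import itertools

    # Hypothetical pool of virtualized instances, each limited to one query
    # at a time by the FoxPro client library described above.
    INSTANCES = ["fox-vm-1", "fox-vm-2", "fox-vm-3"]
    _rotation = itertools.cycle(INSTANCES)

    def run_query(sql, execute):
        """Send each query to the next instance in rotation.
        execute(host, sql) stands in for whatever client call is really used."""
        host = next(_rotation)
        return execute(host, sql)

    # Dummy executor, just to show the call shape.
    print(run_query("SELECT 1", lambda host, sql: f"{sql} ran on {host}"))
    print(run_query("SELECT 2", lambda host, sql: f"{sql} ran on {host}"))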
Re: (Score:3, Interesting)
I don't agree with the guy who said it's only for enterprises, but I think you would have been better off just not using FoxPro.
The codebase started back in the DOS days.
It's not that difficult to transition from. Just do it. You'll be happier.
I wouldn't say that! We've moved a lot of data into PostgreSQL with the help of a tool I wrote that my boss let me release under the GPL [sourceforge.net]. There's still a lot of code in FP, though, and we're in the planning stages of a multi-year conversion process.
Trust me: we've seen the light! Now it's just a matter of moving on with zero allowed downtime.
Re: (Score:3, Interesting)
Re:Virtualization has worked (Score:5, Interesting)
Actually, the funny thing is, real snake oil actually does what it was originally supposed to do. "Snake oil" comes from traditional Chinese medicine (as a cure for joint pain), and was made from the fat of the Chinese water snake, Enhydris chinensis. It is extremely high in omega-3 fatty acids (particularly EPA), and is very similar to what is sold today as fish oil. Omega-3 fatty acids (in particular, EPA) are now known to reduce the progression and symptoms of rheumatoid arthritis.
Now, in the US, a variety of hucksters took fats from any old snake (if it even involved snakes at all) and made all sorts of miraculous, unsubstantiated claims about what it would do. But in its original role in Chinese medicine, snake oil likely did exactly what it was claimed to do.
Re: (Score:2)
Yeah, I don't think this stuff can simply be called "snake oil". ERP systems are in use. They're not a cure-all, but failing to fix every problem doesn't make a thing useless. The current usefulness of "artificial intelligence" depends on how you define it. There are some fairly complex statistical analysis systems that are already pretty useful. Full on AI just doesn't exist yet, and we can't even quite agree on what it would be, but it would likely have some use if we ever made it.
Virtualization is
Re: (Score:3, Interesting)
Today, cloud computing, virtualization, and tablet PCs are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot.
I agree with your post (not the article) - these technologies have all had success in the experimental fields in which they've been applied, but ESPECIALLY virtualization, which is way past experimenting and is starting to become so big in the workplace that I've started using it at home. No need to set up a dual boot with virtualization, and the risk of losing data is virtually removed (pun intended) because anytime the virtual machine gets infected you just overwrite it with yesterday's backup. No need to s
Re: (Score:2)
``Not sure why virtualization made it into the potential snake-oil of the future. It's demonstrating real benefits today...practically all of the companies I deal with have virtualized big chunks of their infrastructure.''
I am sure they have, but does it actually benefit them? In many cases, it seems to me, it's just people trying their best to come up with problems, just so they can apply virtualization as a solution.
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
I think the issue I have with both virtualization and cloud computing is a lack of concrete assessment. They are touted as wunder-technologies, and while they have their place and their use, a lot of folks are leaping into them with little thought as to how they integrate into existing technologies and the kind of overhead (hardware, software, wetware) that will go with it.
Virtualization certainly has some great uses, but I've seen an increasing number of organizations thinking they can turf their server r
Re:Virtualization has worked (Score:5, Interesting)
I administer hundreds of virtual machines and virtualization has solved a few different problems while introducing others.
Virtualization is often sold as a means to completely utilize servers. Rather than having two or three applications on two or three servers, virtualization would allow condensing of those environments into one large server, saving power, data center floor space, plus allowing all the other benefits (virtual console, ease of backup, ease of recovery, etc..).
In one sense it did solve the under-utilization problem. Well, actually it worked around the problem. The actual problem was often that certain applications were buggy and did not play well with other applications. If the application crashed it could bring down the entire system. I'm not picking on Windows here, but in the past the Windows systems were notorious for this. Also, PCs were notoriously unreliable (but they were cheap, so we weighed the cost/reliability). To "solve" the problem, applications were segregated to separate servers. We used RAID, HA, clusters, etc., all to get around the problem of unreliability.
Fast forward a few years and PCs are a lot more reliable (and more powerful) but we still have this mentality that we need to segregate applications. So rather than fixing the OS we work around it by virtualizing. The problem is that virtualization can have significant overhead. On Power/AIX systems, the hypervisor and management required can eat up 10% or more of RAM and processing power. Terabytes of disk space across each virtual machine is eaten up in multiple copies of the OS, swap space, etc.. Even with dynamic CPU and memory allocation, systems have significant wasted resources. It's getting better, but still only partially addresses the problem of under-utilization.
So what's the solution? Maybe a big, highly reliable box with multiple applications running? Sound familiar?
Re: (Score:3, Interesting)
disappointing... (Score:5, Funny)
Re: (Score:2)
I don't see anything wrong with this list... (Score:5, Funny)
We need to bring about a paradigm shift, to think outside the box, and produce a clear synergy between cloud computing and virtualization.
Re:I don't see anything wrong with this list... (Score:5, Funny)
Damn it all, man. You don't produce synergy! You leverage synergy. Please get it right, will you? The sooner you do, the sooner you can return to your core competency and synthesize some maximum value for your investors. M'kay?
That would be a win-win-win (Score:3, Funny)
When it's all said and done, that's a good day.
Re: (Score:2)
Having not read the article, I figured they discussed Jem's hologram-inducing supercomputer in the AI section.
My Meta-assessment (Score:4, Interesting)
IT snake oil: Six tech cure-alls that went bunk
By Dan Tynan
Created 2009-11-02 03:00AM
Today, cloud computing [4], virtualization [5], and tablet PCs [6] are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot.
[...]
1. Artificial intelligence
2. Computer-aided software engineering (CASE)
3. Thin clients
4. ERP systems
5. B-to-b marketplaces
6. Enterprise social media
1. AI: Has to have existed before it can be "bunk"
2. CASE: Regarding Wikipedia [wikipedia.org], it seems to be alive and kicking.
3. Thin Clients: Tell that to the guys over at TiVo that thin-client set-top-boxes are bunk.
4. ERP Systems: For low complexity companies, I don't see why ERP software isn't possible.
5. Web B2B: He is right about this one.
6. Social media: Big companies like IBM have been doing "social media" within their organization for quite some time. It's just a new name for an old practice.
And as far as his first comment,
"Today, cloud computing [4], virtualization [5], and tablet PCs [6] are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot."
[4] Google.
[5] Data Servers.
[6] eBooks and medical applications.
Re:My Meta-assessment (Score:5, Insightful)
There's a pattern here. Many of the hyped technologies eventually find a nice little niche. It's good to experiment with new things to find out where they might fit in or teach us new options. The problem comes when they are touted as a general solution to most IT ills. Treat them like the religious dudes who knock on your door: go ahead and talk to them for a while on the porch, but don't let them into the house.
Re: (Score:2)
> 3. Thin Clients: Tell that to the guys over at TiVo that thin-client set-top-boxes are bunk.
Nevermind the Tivo. Web based "thin client computing" has been on the rise in corporate computing for over 10 years now. There are a lot of corporate Windows users that use what is essentially a Windows based dumb terminal. Larger companies even go out of their way to make sure that changing the setup on your desktop office PC is about as hard as doing the same to a Tivo.
Client based computing (java or .net) is
Re: (Score:2)
> 3. Thin Clients: Tell that to the guys over at TiVo that thin-client set-top-boxes are bunk.
Nevermind the Tivo. Web based "thin client computing" has been on the rise in corporate computing for over 10 years now. There are a lot of corporate Windows users that use what is essentially a Windows based dumb terminal. Larger companies even go out of their way to make sure that changing the setup on your desktop office PC is about as hard as doing the same to a Tivo.
Client based computing (java or .net) is in fact "all the rage".
They've been doing that for years. Strangely, even when your desktop PCs are locked down so tight they may as well be dumb terminals, a lot of people will still scream blue murder if it really is a dumb terminal being put on their desk.
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
2. CASE: Regarding Wikipedia [wikipedia.org], it seems to be alive and kicking.
As a programmer, CASE sounds pretty neat. I think it probably won't obviate the need for programmers any time soon, but it has the potential to automate some of the more tedious aspects of programming. I'd personally rather spend more of my time designing applications and less time hammering out the plumbing. It's interesting to note that a lot of the CASE tools in that wikipedia article I'm familiar with, although they were never referred to as CASE tools when I was learning how to use them. I think the CA
The Cloud (Score:5, Funny)
It has vaporware all over it.
Re: (Score:2, Funny)
Clouds are actually water vapors. So it literally is vaporware.
Re: (Score:2)
Clouds are actually water vapors. So it literally is vaporware.
...and since it's water vapour it's no surprise that letting it anywhere near your computer hardware will instantly make that hardware go on the fritz.
There is just one Myth. (Score:5, Insightful)
Bonus points if the salesman admits that he doesn't need to know your problems before selling it to you.
Machine translation replacing human translation (Score:4, Insightful)
Let's just say the technology is not quite there yet.
Re: (Score:3, Funny)
"Let's just say the technology is not quite there yet"
aka
"Pertaining to the acceptability, us, speaking of the mechanical acumen almost has arrived, still"
ERP? (Score:2)
I was surprised to find ERP on this list. Sure, it's a huge effort and always oversold, but there's hardly a large manufacturing company out there that could survive without some sort of basic ERP implementation.
Re: (Score:3, Insightful)
Re:ERP? (Score:4, Interesting)
Sing it brother (or sister)! As one who is currently helping to support an Oracle-based ERP project, expensive doesn't begin to describe how much it's costing us. Original estimated cost: $20 million. Last known official number I heard for current cost: $46 million. I'm sure that number is over $50 million by now.
But wait, there's more. We bought an off-the-shelf portion of their product and of course have to shoe-horn it to do what we want. There are portions of our home-grown process that aren't yet implemented and probably won't be implemented for several more months even though those portions are a critical part of our operations.
But hey, the people who are "managing" the project get to put it on their résumé and act like they know what they're doing, which is all that matters.
an aggressive sales force that would sell ice to eskimos
I see you've read my column [earthlink.net].
Re: (Score:3, Interesting)
Wow, an actively maintained ~ (tilde) web site. I don't think I've seen one of those since about 2002 ;) Your column is spot on.
Microsoft silverlight (Score:5, Insightful)
Expert systems (Score:4, Insightful)
Thin Clients? (Score:2)
Worse, users resented giving up control over their machines, adds Mike Slavin, partner and managing director responsible for leading TPI's Innovation Center. "The technology underestimated the value users place upon having their own 'personal' computer, rather than a device analogous -- stretching to make a point here -- to the days of dumb terminals," he says.
So why does it look good now? Oh right, different people heard the setup and a new generation gets suckered by it.
Re: (Score:3, Insightful)
Because cloud computing doesn't require a thin client? The two things aren't related at all. Offloading processing and data makes perfect sense for many applications.
Thanks for linking to the print version (Score:5, Interesting)
This is a bit OT but I wanted to say that snydeq deserves a cookie for linking to the print version. I can only imagine that the regular version is at least seven pages. I hope slashdot finds a way to reward considerate contributors such as him or her for making things easy for the rest of us.
Virtualization is not bunk. (Score:3, Interesting)
I don't know of a single IT department that hasn't been helped by virtualization of servers. It makes more efficient use of purchased hardware, keeps businesses from some of the manipulations to which their hardware and OS vendors can subject them, and is (in the long term) cheaper to operate than a traditional datacenter. IT departments have wondered for a long time: "if I have all this processing power, memory, and storage, why can't I use all of it?" Virtualization answers that question, and does it in an elegant way, so I don't consider it snake oil.
The crazy hottie (Score:5, Interesting)
It was like watching the cast of a porn film come visit. Complete with the sleazebag regional manager, some of them even had gold chains on. Pimps up, big daddy!
They would laugh at whatever the customer said wildly, even if it wasn't really funny. The girls would bat their eyelashes and drop pencils. It was so ridiculous it was funny, it was like a real life comedy show skit.
I wonder how much skimming went on in those days. Bogus purchase orders, fake invoices. Slap and tickle. The WORST was if your company had no money to afford any of the infrastructure and the networking company would get their "capital finance" team involved. Some really seedy, slimy stuff went down in the dot-com boom. And not just down pantlegs, either.
Re: (Score:3, Insightful)
Booth babes are the best thing about trade shows.
It's always the hype problem. (Score:5, Insightful)
For example, AI works and is a very strong technology, but only the SF authors and idiots expect their computer to have a conversation with them. Expert systems (a better name) or technologies that are part of them are in place in thousands of back-office systems.
But, if you're looking for HAL, you have another 2001 years to wait. Nobody seriously is working toward that, except as a dream goal. Everybody wants a better prediction model for the stock market first.
Overhyped, but not quite snake oil (Score:2, Informative)
Same with thin clients. Just today I put together a proposal for three 100 seat thin client (Sunray) labs. VDI allows us to use Solaris, multiple Lin
AI done poorly (Score:2, Interesting)
Tech cure-all missing option: emacs (Score:5, Funny)
Apparently it cures everything but RSI.
Those aren't all (Score:5, Insightful)
We used to play buzzword bingo when vendors would come in for a show. Some of my personal favorites:
IT Best Practices - Has anyone seen my big book of best practices? I seem to have misplaced it. But that never stopped vendors from pretending there was an IT bible out there that spelled out the procedures for running an IT shop. And always it was their product at the core of IT best practices.
Agile Computing - I never did figure that one out. This is your PC, this is your PC in spin class.
Lean IT - Cut half your staff and spend 3x what you were paying them to pay us for doing the exact same thing only with worse service.
Web 2.0 - Javascript by any other name is still var rose.
SOA - What a gold mine that one was. Calling it "web services" didn't command a very high premium. But tack on a great acronym like SOA and you can charge lots more!
All those are just ways for vendors and contractors to make management feel stupid and out of touch. Many management teams don't need any help in that arena; most of them are already out of touch before the vendor walks in. Which is exactly why they're not running back to their internal IT people to ask why installing Siebel is a really BAD idea. You can't fix bad business practices with technology. Fix your business practices first, then find the solution that best fits what you're already doing.
And whoever has my IT Best Practices book, please bring it back. Thanks.
I call BS on this story (Score:3, Insightful)
"Artificial intelligence" - what's keeping the spam out of YOUR inbox? How does Netflix decide what to recommend to you? Ever gotten directions from Google Maps?
"Computer-aided software engineering" - tools like valgrind, findbugs, fuzzing tools for finding security problems.
"Thin clients" - ever heard of a "Web Browser"?
"Enterprise social media" - That really describes most of the Internet
As soon as I saw an opinion from "Rob Enderle" I knew this story would be BS.
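On the spam point above: the workhorse there is usually a naive Bayes text classifier, which fits in a couple dozen lines. A toy sketch (the training data is invented and far too small to be useful):

    import math
    from collections import Counter

    # Invented toy training data.
    spam_docs = ["buy cheap pills now", "free money click now"]
    ham_docs = ["meeting moved to noon", "lunch tomorrow with the team"]

    def count_words(docs):
        return Counter(word for doc in docs for word in doc.split())

    spam_counts, ham_counts = count_words(spam_docs), count_words(ham_docs)
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
    vocab_size = len(set(spam_counts) | set(ham_counts))

    def log_score(text, counts, total):
        # Log-likelihood with add-one smoothing; equal priors assumed.
        return sum(math.log((counts[w] + 1) / (total + vocab_size))
                   for w in text.split())

    def is_spam(text):
        return log_score(text, spam_counts, spam_total) > \
               log_score(text, ham_counts, ham_total)

    print(is_spam("free pills now"))         # True with this toy data
    print(is_spam("team meeting tomorrow"))  # False with this toy data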
ERP is snake oil? (Score:3, Funny)
I doubt validity of TFA (Score:3, Insightful)
I don't think the author has a clue. Secrets that could be accidentally spilled are not worth keeping. If it's that short, it's bound to be trivial; really essential results are megabytes and megabytes of data, code, or know-how. Treat your researchers as prisoners and you get prison science in return.
Java (Score:3, Insightful)
It was not too long ago that Java was going to:
Give us applets to do what Browsers can never do: Bring animated and reactive interfaces to the web browsing experience!
Take over the desktop. Write once, run anywhere and render the dominance of Intel/MS moot by creating a neutral development platform!
Yes, perhaps it's found a niche somewhere. But it's fair to say it fell short of the hype.
Why Artificial Intelligence may never exist (Score:5, Insightful)
The most obvious counterexample to the "AI" nonsense is to consider that, back around 1800 or any time earlier, it was obvious to anyone that the ability to count and do arithmetic was a sign of intelligence. Not even smart animals like dogs or monkeys could add or subtract; only we smart humans could do that. Then those engineer types invented the adding machine. Were people amazed by the advent of intelligent machines? No; they simply reclassified adding and subtracting as "mechanical" actions that required no intelligence at all.
Fast forward to the computer age, and you see the same process over and over. As soon as something becomes routinely doable by a computer, it is no longer considered a sign of intelligence; it's a mere mechanical activity. Back in the 1960s, when the widely-used programming languages were Fortran and Cobol, the AI researchers were developing languages like LISP that could actually process free-form, variable-length lists. This promised to be the start of truly intelligent computers. By the early 1970s, however, list processing was taught in low-level programming courses and had become a routine part of the software developer's toolkit. So it was just a "software engineering" tool, a mechanical activity that didn't require any machine intelligence.
Meanwhile, the AI researchers were developing more sophisticated "intelligent" data structures, such as tables that could associate arbitrary strings with each other. Did these lead to development of intelligent software? Well, now some of our common programming languages (perl, prolog, etc.) include such tables as basic data types, and the programmers use them routinely. But nobody considers the resulting software "intelligent"; it's merely more complex computer software, but basically still just as mechanical and unintelligent as the first adding machines.
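Those "intelligent" associative tables are, of course, what we now just call associative arrays or hash maps, and the classic AI-flavored use was storing symbolic knowledge such as is-a chains. A toy example (the facts are invented):

    # Toy "semantic network" built on a plain dict; the facts are invented.
    IS_A = {"canary": "bird", "penguin": "bird", "bird": "animal"}

    def is_a(thing, category):
        """Walk the is-a chain the way old semantic-network programs did."""
        while thing in IS_A:
            thing = IS_A[thing]
            if thing == category:
                return True
        return False

    print(is_a("canary", "animal"))  # True
    print(is_a("canary", "fish"))    # False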
So my prediction is that we'll never have Artificial Intelligence. Every new advance in that direction will always be reclassified from "intelligent" to "merely mechanical". When we have computer software composing best-selling music and writing best-selling novels or creating entire computer-generated movies from scratch, it will be obvious that such things are merely mechanical activities, requiring no actual intelligence.
Whether there will still be things that humans are intelligent enough to do, I can't predict.
Re:Why Artificial Intelligence may never exist (Score:4, Informative)
The most obvious counterexample to the "AI" nonsense is to consider that, back around 1800 or any time earlier, it was obvious to anyone that the ability to count and do arithmetic was a sign of intelligence. Not even smart animals like dogs or monkeys could add or subtract; only we smart humans could do that.
Interestingly, in recent years, many animals have been found to be able to perform simple mathematical tasks.
Dolphins:
http://www.apa.org/monitor/sep05/marine.html [apa.org]
Monkeys:
http://www.foxnews.com/story/0,2933,317526,00.html [foxnews.com]
Dogs can do calculus:
http://www.sciencenewsforkids.org/articles/20031008/Feature1.asp [sciencenewsforkids.org]
Where's Cloud Computing on that list?? (Score:3, Insightful)
I definitely agree with a lot of the items on that list. This time around, however, thin clients are definitely in the running because of all the amazing VDI, virtual app stuff and fast cheap networks. However, anyone who tells you that you can replace every single PC or laptop in your company needs to calm down a little. Same goes for the people who explain thin clients in a way that makes it sound like client problems go away magically. They don't - you just roll them all up into the data center, where you had better have a crack operations staff who can keep everything going. Why? Because if the network fails, your users have a useless paperweight on their desk until you fix it.
I'm definitely surprised to not see cloud computing on that list. This is another rehashed technology, this time with the fast cheap network connectivity thrown in. The design principles are great -- build your app so it's abstracted from physical hardware, etc. but I've seen way too many cloud vendors downplay the whole data ownership and vendor lock-in problems. In my opinion, this makes sense for people's Facebook photos, not a company's annual budget numbers.
Storage Virtualization (Score:3, Interesting)
EMC, IBM, HDS and HP I'm looking at you.
You've been pushing this Storage Virtualization on us storage admins for years now, and it's more trouble than it's worth. What is it? It's putting some sort of appliance (or in HDS's view a new disk array) in front of all of my other disk arrays, trying to commoditize my back end disk arrays, so that I can have capacity provided by any vendor I choose. You make claims like,
1. "You'll never have vendor lock-in with Storage virtualization!" However, now that I'm using your appliance to provide the intelligence (snapshots, sync/async replication, migration etc) I'm now locked into your solution.
2. "This will be easy to manage." How many of these fucking appliances do I need for my new 100TB disk array? When I've got over 300 storage ports on my various arrays, and my appliance has 4 (IBM SVC I'm looking at you), how many nodes do I need? I'm now spending as much time trying to scale up your appliance solution that for every large array I deploy, I need 4 racks worth of appliances.
3. "This will be homogeneous!" Bull fucking shit. You claimed that this stuff will work with any vendor's disk arrays so that I can purchase the cheapest $/GB arrays out there. No more DMX, just clariion, no more DS8000 now fastT. What a load. You only support other vendor's disk arrays during the initial migration and then I'm pretty much stuck with your arrays until the end of time. So much for your utopian view of any vendor. So now that I've got to standardize on your back end disk arrays, it's not like you're saving me the trouble of only having one loadbalancing software solutions (DMP, Powerpath, HDLM, SDD etc..). If I have DMX on the backend, I'm using Powerpath whether I like it or not. This would have been nice if I was willing to have four different vendor's selling me backend capacity, but since I don't want to deal with service contracts from four different vendors, that idea is a goner.
Besides, when I go to your large conferences down in Tampa, FL; even your own IT doesn't use it. Why? Because all you did is add another layer of complexity (troubleshooting, firmware updates, configuration) between my servers and their storage.
You can take this appliance (or switch based in EMC's case) based storage virtualization and Shove It!
btw: There's a reason why we connect mainframe channels directly to the control units. (OpenSystems translation: Connecting hba ports to storage array ports.) Answer: Cable doesn't need upgrading, doesn't need maintenance contracts and is 100% passive.
Advanced Technology saying (Score:3, Insightful)
Re: (Score:3, Interesting)
Hell, almost all the cases should be considered successes now. The problem was that they were all massively over hyped back in the day.
Our massive move to web-applications and the newly-but-stupidly-coined "Cloud" is as much a thin client solution as it was back then.
To many, Google can be considered an AI. After all, it helps answer your questions. With more and more NLP being built into it (and other web applications), it is getting closer to directly answering your questions.
So what if ERP always went
Re: (Score:3, Insightful)
Another modern and heavily used AI: vehicle control systems (especially fighter jets and race cars).