Why Your IT Spending Is About To Hit the Wall
CowboyRobot writes "For decades, rapid increases in storage, processor speed, and bandwidth have kept up with the enormous increases in computer usage. That could change, however, as consumption finally outpaces the supply of these resources. It is instructive to review the 19th-century economics theory known as the Jevons Paradox. Common sense suggests that as efficiency in the use of a resource rises, consumption goes down. The Jevons Paradox posits that efficiencies actually drive up usage, and we're already seeing examples of this: our computers are faster than ever and we have more bandwidth than ever, yet our machines are often slow and have trouble connecting. The more we have, the more we use."
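The rebound mechanism behind the Jevons Paradox can be sketched with constant-elasticity arithmetic: if demand for computing is elastic enough (elasticity greater than 1), halving the effective cost of a unit of work more than doubles total consumption. A minimal toy model, with entirely made-up numbers chosen only to illustrate the shape of the argument:

```java
// Toy illustration of the Jevons Paradox: with elastic demand
// (elasticity > 1), an efficiency gain that lowers the effective
// cost per unit of work raises TOTAL resource use.
// All numbers are illustrative assumptions, not data.
public class JevonsToy {
    // usage scales as (cost per unit of work)^(-elasticity)
    static double usage(double baseUsage, double costFactor, double elasticity) {
        return baseUsage * Math.pow(costFactor, -elasticity);
    }

    public static void main(String[] args) {
        double before = usage(100.0, 1.0, 1.5); // baseline: 100 units consumed
        // a 2x efficiency gain halves the effective cost of each unit of work
        double after = usage(100.0, 0.5, 1.5);
        System.out.printf("before=%.1f after=%.1f%n", before, after);
        // consumption rises to roughly 282.8 units: up, not down
    }
}
```

With inelastic demand (elasticity below 1) the same formula shows consumption falling, which is the "common sense" case the summary contrasts against.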
Re:There are two schools of thought (Score:3, Interesting)
not really. seems the former are a bunch of fucking idiots constantly tampering with shit that doesn't need tampering with (gnome 3, anyone? unity?), while the latter are either refreshingly pragmatic, or, as you say, tedious old farts resisting progress for the sake of pretending to be smart (windows xp, anyone?)
my view is fuck the both of them, and there must be a third school of thought. however, i'm too tired and drunk to think what it might be.
Two, no, three major flaws with this. (Score:3, Interesting)
1. Computer hardware is not a finite resource like coal or any other natural resource. Prices go up; somebody builds a plant to make more. Econ 101.
2. This assumes that computer hardware will be used the same way it has been in the past. We are already seeing major changes: less individual storage and more online storage; devices that are less hardware-intensive; and computing being used differently, with less desktop, more handheld, and all the differences down the chain from that.
3. No mention of significant technology changes. Who's to say we'll still be using the current architectures, or even silicon, in the future? This assumes the same old same old going forward.
Peak Computing? (Score:5, Interesting)
If I gather what this article is speculating on, it's a phenomenon similar to peak oil.
Peak oil doesn't necessarily mean that you run out of oil; it just means that the marginal cost of producing more oil reaches a point at which the rate of oil production starts to decrease. Against a backdrop of increasing demand and limited supply, this implies a sharp downturn in the availability of oil at historical prices.
If applied to computing, it would imply a limit to computing resources. I don't think we are there yet (although computing takes lots of electrical power, and there seems to be enough semiconductor manufacturing capacity for the moment), but we may be at a point where demand increases beyond the rate at which technology can sustain its historical MIPS/$ trend. If the MIPS/$ trend flattens out, it may become difficult to find funding for new technological advances, which would fundamentally change the market for computing.
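The crossover the comment describes can be sketched numerically: if MIPS/$ doubles yearly and then flattens to modest growth while demand keeps compounding, demand overtakes the supply trend a few years after the flattening. The growth rates below are illustrative assumptions, not measured data:

```java
// Toy sketch of the "peak computing" argument: supply (MIPS/$) doubles
// yearly until it flattens, then grows only 5%/yr; demand compounds at
// 50%/yr throughout. Both rates are made-up for illustration.
public class PeakComputing {
    // cumulative MIPS/$ growth after `year` years, flattening at `flattenYear`
    static double supply(int year, int flattenYear) {
        double s = 1.0;
        for (int y = 0; y < year; y++) s *= (y < flattenYear) ? 2.0 : 1.05;
        return s;
    }

    // cumulative demand growth after `year` years
    static double demand(int year) {
        return Math.pow(1.5, year);
    }

    public static void main(String[] args) {
        for (int year = 0; year <= 10; year++) {
            System.out.printf("year %2d  MIPS/$ %8.2f  demand %8.2f%n",
                              year, supply(year, 5), demand(year));
        }
        // while supply doubles it stays comfortably ahead; once it
        // flattens, compounding demand eventually crosses over
    }
}
```

The exact crossover year depends entirely on the assumed rates; the point is only that a flattening supply curve against compounding demand produces one.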
hello self licking ice cream cone (Score:4, Interesting)
I read the headline for this story and laughed. It doesn't matter how much faster my computers or networks get: our IT department just installs more and more virus scanners, software maintenance tools, firewalls, monitoring tools, etc. Each computer I get has more CPU cores, more memory, and faster graphics, yet it is able to do less and less and takes longer and longer to boot. I figure before too long I'll have to go back to my old TI-30 calculator and some engineering graph paper, and I'll be equal in efficiency to my computer once I factor in all the time I spend waiting for it to get around to sparing 0.5% of its 12 CPU cores to run the actual software I need to use.
Re:Bloated apps. (Score:2, Interesting)
It's time this myth was debunked.
Java itself is not slow. Properly written and optimized Java code runs almost as fast as equivalent C/C++. (I know; I write such code, and I measure timings for operations in nanoseconds in Java.) The JIT compilers built into modern JVMs generate highly optimized machine code. (I know; I've looked at the assembler output.)
Unfortunately, Java has a tendency to magnify poor programming decisions, and it's easy to be an idiot and still write Java "code." The usual sources of poor Java performance are instantiating way too many objects per second, resulting in frequent garbage collection, and poor choices of algorithms and data structures for the underlying problem. (The same could be said of C/C++, although with manual memory management in C/C++ you're far less likely to end up allocating and freeing too much memory per second.)
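A classic instance of the object-churn problem described above is string concatenation with `+` in a loop, which allocates a fresh `String` (and backing array) every iteration, versus reusing a single `StringBuilder`. A minimal sketch (the class and method names are mine, for illustration):

```java
// Demonstrates GC pressure from object churn: naive `+` concatenation
// allocates a new String per iteration; a reused StringBuilder does not.
// Both produce the same result, with very different allocation behavior.
public class ChurnDemo {
    static String concatNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i % 10; // new String each pass
        return s;
    }

    static String concatReuse(int n) {
        StringBuilder sb = new StringBuilder(n); // one buffer, grown once
        for (int i = 0; i < n; i++) sb.append(i % 10);
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 20_000;
        long t0 = System.nanoTime();
        String a = concatNaive(n);
        long t1 = System.nanoTime();
        String b = concatReuse(n);
        long t2 = System.nanoTime();
        System.out.printf("naive %,d ns, reuse %,d ns, equal=%b%n",
                          t1 - t0, t2 - t1, a.equals(b));
    }
}
```

The naive version is quadratic in both time and bytes allocated, which is exactly the kind of per-second allocation rate that keeps the garbage collector busy.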
Personally, I'm fine with this as there will always be career opportunities in rewriting/optimizing someone else's crappy code.
Re:There are two schools of thought (Score:4, Interesting)
1) reduced latency (someone else has mentioned it) - it really helps some applications a lot;
2) less time loading roaming profiles / less time spent refreshing network shares;
3) increased bandwidth (even at 100Mbit) - Gigabit gear is usually more error-resistant and implements smarter, faster error correction;
4) inter-department high-speed synchronization - good for replicating storage, machine snapshotting/CDP, distributed filesystems, and such;
5) an instant 10x speed upgrade on recent infrastructure, since 1000BASE-T runs over Cat5 (nothing to scrap except the switches)
My internet connection alone has 120Mbps downstream. And yes, I use it.
Re:I know what you're talking about (Score:4, Interesting)
No, it had plenty of bugs before the Ajax rewrite. Like the longstanding pagination bugs that they eventually just gave up on.