
Why Your IT Spending Is About To Hit the Wall

CowboyRobot writes "For decades, rapid increases in storage, processor speed, and bandwidth have kept up with the enormous growth in computer usage. That could change, however, as consumption finally outpaces the supply of these resources. It is instructive to review the 19th-century economics theory known as Jevons Paradox. Common sense suggests that as efficiency in the use of a resource rises, consumption goes down. Jevons Paradox posits that efficiencies actually drive up usage, and we're already seeing examples of this: our computers are faster than ever and we have more bandwidth than ever, yet our machines are often slow and have trouble connecting. The more we have, the more we use."
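A toy numeric illustration of the Jevons Paradox claim in the summary (a sketch with an invented demand curve and elasticity, not anything from TFA):

```python
# Toy model of Jevons Paradox; the demand curve and elasticity are invented.
# Efficiency makes each unit of "useful work" cheaper in resource terms, but
# if demand for useful work is elastic enough (elasticity > 1), total
# resource consumption rises anyway.

def resource_used(efficiency: float, base_demand: float = 100.0,
                  elasticity: float = 1.5) -> float:
    useful_work = base_demand * efficiency ** elasticity  # demand grows with efficiency
    return useful_work / efficiency                       # resource actually consumed

for eff in (1.0, 2.0, 4.0):
    print(f"{eff:.0f}x efficiency -> {resource_used(eff):.0f} units consumed")
# 1x -> 100, 2x -> 141, 4x -> 200: greater efficiency, greater total consumption.
```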
  • by SpaceCadetTrav ( 641261 ) on Friday April 13, 2012 @06:26PM (#39680957) Homepage
    Despite technological advancements, it takes forever for Slashdot to load on my phone.
    • by MachDelta ( 704883 ) on Friday April 13, 2012 @06:37PM (#39681065)

      Yeah, they broke^h^h^h^h^h^h improved the comment system a while ago. In the name of progress, of course.

    • by Anonymous Coward on Friday April 13, 2012 @06:49PM (#39681159)
      In light of technological advancements, better bottlenecks are being implemented.
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Friday April 13, 2012 @06:54PM (#39681207)
      Comment removed based on user account deletion
      • by Anonymous Coward on Friday April 13, 2012 @07:03PM (#39681267)

        No, it's not the networks, it's the morons in charge of the content.

        Slashdot should load 10X faster than it does, but the uneducated developers and designers put in a lot of unneeded crap to add "pretty" that contributes nothing to the content at all.

        So slashdot now takes over 10X the bandwidth and processing power to deliver the same content it did 8 years ago. All so I can have some web 2.0 crap that does nothing at all.

        But it's not just slashdot. ALL websites are bum-rushing the add-more-crap idea. Facebook takes 10X longer to load than 5 years ago; CNN, ESPN, etc. -- all of them have gone from hiring competent people who understand that sending more data to the viewer is bad, to a bunch of morons who use every JS toolkit known to man, so I download 40MB of libraries before the page loads. Some JS is useful. Good programmers put in only the parts of the library that are needed; posers put in the whole damn library. The same trend is on desktops and phones. Android and iOS suffer from this as well.

        It is about to hit the wall because companies hire low-paid, low-skill developers instead of highly skilled people who will do it right.

        • by jmorris42 ( 1458 ) * <jmorris@[ ]u.org ['bea' in gap]> on Friday April 13, 2012 @07:47PM (#39681567)

          > But it's not just slashdot.

          No, it isn't. If the average visitor isn't impacted, the devels don't care. But if the average user were impacted, they would be. Which is the problem with the concept under discussion: the belief that bloat MUST exist, that therefore nothing can be done, and that we are all doomed to spend ourselves into poverty fighting a problem that will never exist.

          Because as soon as it becomes a problem, the average pageview will suddenly be able to shrink by half without impacting usability at all, and if that doesn't do it, it can be cut in half again with minimal impact. And it isn't just webpages; most everything suffers the same bloating. Does a simple little game that was a 50K download on Palm OS really need to be a 1MB app on Android or iOS? Nope. But because users don't care, the developers don't care either. And again, if the first part of that statement changes, you can bet yer butt the second one will too.

          Short version: This is a self correcting non-problem.

          • by dgatwood ( 11270 ) on Friday April 13, 2012 @08:22PM (#39681815) Homepage Journal

            Does a simple little game that was a 50K download on Palm OS really need to be a 1MB app on Android or iOS?

            Depends on the app. If we're talking about an all-text game, that's a little extreme. On the other hand, if it contains any image assets at all, that is probably not unreasonable.

            Remember that the original Palm hardware had 240 x 160 resolution in black and white. A current-generation iPhone has 960 x 640 resolution in 24-bit color, and it is usually bundled as dual-platform for iPad, which is 2048 x 1536 in 24-bit color. So if that 50k app on Palm were nothing but uncompressed image data, you would expect the iPad/iPhone version of the app to be a whopping 96 megabytes.

            Obviously image compression helps with that, and obviously an app contains content other than image assets, both of which contribute to that being something of an overestimate. That said, using that as an upper bound, a mere one megabyte doesn't sound bad at all.
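            For what it's worth, here is that scaling argument as a quick Python back-of-envelope (the screen specs are the ones quoted above; treating the whole 50 KB Palm app as uncompressed 1-bit image data is the comment's own simplifying assumption):

```python
# Back-of-envelope for the Palm-to-iPad scaling argument above.

palm_pixels = 240 * 160      # original Palm screen
palm_bpp = 1                 # black and white, 1 bit per pixel

ipad_pixels = 2048 * 1536    # "retina" iPad
ipad_bpp = 24                # 24-bit color

scale = (ipad_pixels * ipad_bpp) / (palm_pixels * palm_bpp)
palm_app_kb = 50

print(f"scale factor: {scale:.0f}x")                            # ~1966x
print(f"iPad-class size: {palm_app_kb * scale / 1024:.0f} MB")  # 96 MB
```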

        • by Jane Q. Public ( 1010737 ) on Friday April 13, 2012 @08:09PM (#39681721)

          "No it's not the networks, it's the morons in charge of the content."

          In many ways I must agree. I have strenuously protested many of the changes made to Slashdot over the last couple of years, which have seemed to add nothing substantial to usability, and instead have added overhead and time, and actually made it MORE difficult to use.

        • While you're absolutely correct, the solution is not to re-train developers to "do it right"; that's really not feasible, and it isn't a complete solution by itself anyway.

          Just as has been predicted for years, IPv4 is running out of addresses. We *could* force the class A block owners to give up their IPs, and we DID kludge in NAT, but the proper solution was to increase the available resources. For bandwidth/processing power issues, we need a multi-pronged approach, including increased resources (infrastructure upgra
          • Comment removed (Score:4, Insightful)

            by account_deleted ( 4530225 ) on Friday April 13, 2012 @08:40PM (#39681959)
            Comment removed based on user account deletion
            • by gmack ( 197796 )

              It just isn't possible to build a road that really lasts as long as the contractors know that if the road breaks next year they will get paid to fix it. The system needs changing to one where the contractor guarantees the work for X number of years, so they aren't motivated to pull crap like overheating the asphalt to save trucking costs (which makes the road last a much shorter time).

            • by Tadu ( 141809 )

              We have also seen there IS a way to build a road so it will really last, just look at the Autobahn, but you have to lay a really solid foundation and build up.

              Hate to burst your bubble, but the Autobahn doesn't last forever either, even if it might be better made than the roads in the USA. And in Switzerland they're experimenting with special asphalt with even longer durability (IIRC with nanoparticles in it), since closing the highways crossing the Alps has an extreme economic impact... Also, there are lo

        • >>>Facebook takes 10X longer to load than 5 years ago

          Yes. Facebook is just about unusable on my Kindle G3. It wasn't like that before they switched to the Timeline layout that loads a ton of junk & makes the poor 500 MHz processor go into la-la land. I wish there were a simpler version of Facebook.

        • I'm currently working on an engineering compliance app in PHP, and I've found that it's very easy to increase the page size with the simplest things (like JS event handlers on items in every data row). Not so bad for us, as it's only a LAN app, so performance isn't really a problem.

          I think it's just the nature of HTML.

          As a static language (requiring JS etc. for interactivity without page or iframe reloads), everything must be fully described from the get-go. If you have hidden sections that appear with JS, you
        • I agree, and here's an anecdote for what it's worth. One PC was working perfectly for all work related tasks - and then the user started mucking about on Facebook and the web browser brought the system to a crawl. That was a while back but still it's insane to hit hardware limits and get to 100% CPU for a few seconds just to put a single page of text and a few pictures on a screen. Even if it's got a good reason to take a while at least give the user something to look at in the meantime.
        • Re: (Score:3, Insightful)

          Is it windy on your high horse? Fact is, computers exist to be used - and if the programmer isn't using the hardware to facilitate a pleasurable experience (for the developer or the user, actually, since happy devs eq. bug-free programs), they might as well be wasting the consumer's time/computer. In short, cycles spent on abstraction are the best-spent cycles, because that's what computers are for.

        • But it's not just slashdot. ALL websites are bum rushing the add more crap idea.

          Correct. But as a geek site, slashdot should know better and lead by example.

          And yes, other companies do look towards (perceived) geek sites such as slashdot, gnu.org and redhat.com in order to justify their own inadequacies. A while back, our company was putting a new website online, which had huge horse blinkers. When I pointed this out to the webmasters, their response was: yeah, but just look at your geek friends at gnu.org (which indeed had small blinkers at the time) and redhat.com (which is just fugly).

      • Buffer problem?

        President Obama did hatch a plan to get high-speed internet. Unfortunately, his plan was to turn off free TV (all channels 25 and up) and turn it over to wireless companies. That's not a solution... at least not as good as fiber to every home.

        >>>Which shows it isn't the OS or the hardware, it's the networks

        No, it's the programmers. How else do you explain being able to run WordPerfect on a 1/2 megabyte machine (referring to my 68000 Commodore Amiga, Atari ST, Apple Mac), or Word on m

        • by dgatwood ( 11270 ) on Friday April 13, 2012 @08:36PM (#39681925) Homepage Journal

          Unfortunately his plan was to turn-off free TV (all channels 25 and up) and turn it over to wireless companies. That's not a solution... at least not as good as Fiber to every home.

          It's a fundamentally unworkable solution to the problem. The reason we don't have enough capacity is not because we need more bandwidth. The reason we don't have enough capacity is that we're trying to use one tower every 15-30 miles to provide service to hundreds of thousands of people. If those folks are mostly using it occasionally (as they do with cell phones), it works reasonably well. When they're sitting there for hours on end surfing the Internet at home or work, it breaks down very badly. We're orders of magnitude away from being able to handle that.

          Wireless works really well at short distances where each cell is talking to dozens of people (e.g. Wi-Fi). The larger the number of people per cell, the more infeasible it becomes, due to interference from other devices, not to mention all the multipath problems inherent in wireless delivery over long distances. Even if you could make the bandwidth ten thousand times wider, we still wouldn't have enough to service every man, woman, and child's home Internet needs somewhere like New York or San Francisco using cell towers. It's entirely the wrong solution to the problem.
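          A rough per-user share sketch of the argument above (the tower capacity and subscriber counts are invented round numbers, not real carrier figures):

```python
# Invented numbers to illustrate the per-user share problem with wide-area towers.

tower_capacity_mbps = 1_000    # assume a generous total for one wide-area tower
people_served = 200_000        # "hundreds of thousands" per tower

casual = tower_capacity_mbps / (people_served * 0.01)    # ~1% active at once
all_day = tower_capacity_mbps / (people_served * 0.30)   # ~30% active for hours

print(f"occasional use: {casual:.2f} Mbps per active user")   # ~0.50 Mbps
print(f"all-day use:    {all_day:.3f} Mbps per active user")  # ~0.017 Mbps
```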

          Instead, we should be focusing on making VoIP and VPNs roam transparently between cellular services and Wi-Fi, roam transparently between multiple Wi-Fi hot spots, etc. And we should be moving more towards providing free public Wi-Fi services at high densities so that only the last few feet are wireless.

      • Personally I think it is high time we use an old solution to fix a new problem: bring back the WPA. A lot of our bandwidth problems would disappear if we had nationwide FTTH, or at least fiber to the neighborhood. It seems like a great way to put all those sitting at home on unemployment to work, and if you build it right, then just as many bridges built by the WPA in rural areas still work fine today, so too could a well-built fiber network last us for ages.

        I think there is plenty of old-school WPA-type work that those people could be doing. It won't happen because it means "Big Government" giving opportunity to poor people, and that is somehow un-American.

      • This is what's happening with the National Broadband Network in Australia. Fibre to the home, for at least 90% of the population. Hopefully it will gain enough momentum before it gets cancelled by the next change of government.

      • Fiber networks are fine and dandy, but you need to make it an all-optical passive network. CWDM is cheap and works well, and you can always upgrade to DWDM if you need more to a single drop. I'm talking about at least one fiber to each house, with the municipality or some other owner being responsible for the glass only. Anybody that wants to can lease space in the CO or fiber to the CO (I would expect a big market for CO-to-CO fiber would spring up). Let them deliver anything they want: internet, phone, cable,

    • by tqk ( 413719 )

      Despite technological advancements, it takes forever for Slashdot to load on my phone.

      It's close to instantaneous on my laptop. I don't believe surfing the web on a cellphone qualifies as a "technological advancement." More like, "Hey look. I can stick my left big toe in my right ear!"

      "Yeah, but who wants to?" Thx to "BC" (Johnny Hart?).

  • by bigredradio ( 631970 ) on Friday April 13, 2012 @06:28PM (#39680971) Homepage Journal

    From my own observations, there are two schools of thought.

    1. People who think anything older than 6 months is ancient and obsolete.
    2. People who say, "if it ain't broke don't fix it!"

    Seems the former spend their time fixing things and the latter spend time bitching about "damn kids" and their lawns.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      not really. seems the former are a bunch of fucking idiots constantly tampering with shit that doesn't need tampering with (gnome 3, anyone? unity?), while the latter are either refreshingly pragmatic, or, as you say, tedious old farts resisting progress for the sake of pretending to be smart (windows xp, anyone?)

      my view is fuck the both of them, and there must be a third school of thought. however, i'm too tired and drunk to think what it might be.

    • by Lumpy ( 12016 ) on Friday April 13, 2012 @07:05PM (#39681279) Homepage

      And you forget those of us in the middle. We buy the 6-month-old gear for $0.10 on the dollar off of eBay and get to use higher-end gear from the used market for a lower price.

      No company needs 1000bt for the accounting and sales department. But there is always some moron IT guy out there who thinks they do, so they scrap all their perfectly good 100bt gear. And I snap it up for nothing and sell it to small businesses for a profit.

      • by Eristone ( 146133 ) * <slashdot@casaichiban.com> on Friday April 13, 2012 @07:52PM (#39681607) Homepage

        No company needs 1000bt for the accounting and sales department. But there is always some moron IT guy out there who thinks they do, so they scrap all their perfectly good 100bt gear. And I snap it up for nothing and sell it to small businesses for a profit.

        I see you aren't using more recent accounting and CRM/ERP packages and don't have people pushing multi-megabyte PowerPoint and video presentations around (or, in my case, Sales pushing around VM images of a couple gigs). Or people moving between desks from other parts of the company. That moron IT guy who replaces everything with 1000bt gear is sitting there going, "There. Now I don't have to worry which switch the conference rooms are plugged into, or whether the head of HR and the CEO snag someone's office so that person goes to an empty desk to do something..."

        • by Lumpy ( 12016 )

          Nope. We wasted a lot of cash on CRM/ERP. It's a waste of money in a small/medium-sized business. As for multi-megabyte PPT files, that is not a problem with 100bt. It never has been, unless the network engineer needs to be fired.

          Heck, we even do HD videoconferencing over 100bt. If you think you need 1000bt in a general office, you need to hire better networking guys; yours seem unable to do their job right.

        • I see you aren't using more recent accounting and CRM/ERP packages and don't have people pushing multi-megabyte PowerPoint and video presentations around.

          Doesn't matter, because once it hits the VoIP phone with PoE it will be knocked down to 100Mb/s anyway.

          gig-switch ---- VoIP-phone-with-PoE --- computer
          Means that the computer is only going to get 100Mb/s.

          You want to run 2x as many lines as you need to so some people can get gig to the desktop? As long as someone above you is willing to sign off

      • 1000bT isn't necessarily for the bandwidth. For many environments, it's used for the reduced latency. In one particular case, we had 100bT from desktop to switch, 100bT switch-to-switch, and 100bT to the server. Replacing the core with a single 24x1Gb switch, with 1Gb uplinks to edge switches (2x1Gb + 48x100Mb) and 100Mb to the desktop, more than doubled the performance of one critical app. In most instances bandwidth wasn't a factor, but the reduced latency can be of tremendous benefit for some apps, including many accounting apps.

        But
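        A quick sketch of why latency rather than bandwidth dominates chatty apps (all the RTT and payload numbers here are made up for illustration):

```python
# Made-up numbers showing round-trip latency dominating small transactions.

def transaction_ms(payload_kb: float, rtt_ms: float, link_mbps: float) -> float:
    serialize_ms = payload_kb * 8 / link_mbps  # KB -> kilobits, over Mb/s, in ms
    return rtt_ms + serialize_ms

# A chatty accounting app doing 500 sequential 4 KB request/responses:
for label, rtt_ms, mbps in [("multi-hop 100Mb path, 1.5 ms RTT", 1.5, 100),
                            ("flat gigabit fabric, 0.3 ms RTT", 0.3, 1000)]:
    total = 500 * transaction_ms(4, rtt_ms, mbps)
    print(f"{label}: {total:.0f} ms")   # ~910 ms vs ~166 ms
```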

      • by rev0lt ( 1950662 ) on Friday April 13, 2012 @09:15PM (#39682191)
        Most "modern" (3 year old and newer) machines do have Gigabit connectors, so why not use them? On local networks, there are several advantages:

        1) reduced latency (someone else has mentioned it) - it really helps a lot some applications;
        2) less time loading roaming profiles / less time spent refreshing network shares;
        3) increased bandwidth (even at 100Mbit) - Gigabit gear is usually more error-resistant, and implement smarter and faster error correction;
        4) inter-departament high-speed sychronization - good for replicating storage, machine snapshotting/CDP, distributed filesystems and such;
        5) instant 10x speed upgrade on recent infrastructure, since 1000T is Cat5-based (no scrapping except the switches)

        My internet connection alone has 120Mbps downstream. And yes, I use it.
    • >>>2. People who say, "if it ain't broke don't fix it!"

      Time is finite and I don't want to waste it on things that don't matter (like the color of my desktop, or installing a new OS and trying to fix the broken parts that failed to work). I've used the same Windows XP desktop for 10 years now, and it's done everything I asked it to do. And I saved a LOT of days in my life by not having to relearn a new OS or new arrangement (think Office ribbon). Also cash.

  • There was a time when my 486/25 with the 120 megabyte hard drive and the 14.4 modem was "all you'll ever need". That didn't last long . . .
    • by DarkOx ( 621550 ) on Friday April 13, 2012 @06:36PM (#39681051) Journal

      It's still all you ever needed; it's just not all you'd ever want.

      I remember 110 baud. And soldering your own circuit boards for S-100 computers and tuning your drives with an oscilloscope.

      Not to mention slide rules. Not the plastic kind - a fine-grained wood one.

      • I remember my Apple II with 110, 300, and 1200 baud serial. The 1200 baud would not work on the 110/300 baud modems of the day. But I figured out how to get the serial port working at 440 baud by crossing some flags for 110 with some flags for 1200 in the serial port device register. Amazingly, that actually worked on a 300 baud modem calling another setup done exactly the same way.

    • I did development on a 486/80 with 12MB-16MB (yes, MB) RAM, running NT4. Don't remember the HD size.

    • Had a roommate who spent $300 on a 300MB hard drive so that he could have many games installed at once. We told him he was a moron for wasting all that money because there was no way in hell he'd ever use all that space.

      Oops.
  • by WillAffleckUW ( 858324 ) on Friday April 13, 2012 @06:34PM (#39681031) Homepage Journal

    As we in the military, research university, and government spheres move to IPv6 and Internet So Fast It Makes Your Ears Bleed (tm), have you ever considered that perhaps it might be slow for you but not for us?

    I mean 1000 Gbps is considered normal here, and some of us are running on faster connections, using less energy total to do the same thing.

    We rarely print things anymore, and just because you have slower access to resources, you have to realize it could be because, in the war between Urban America and the rest of the country, Urban America - with its more efficient energy usage and shorter distances traveled - basically won the war.

    • by ColdWetDog ( 752185 ) on Friday April 13, 2012 @06:46PM (#39681129) Homepage

      ... you have to realize it could be because, in the war between Urban America and the rest of the country, Urban America with its more efficient energy usage and lower distances traveled - basically won the war.

      Good. Then you can eat all the Internet you want. We'll keep the food.

      Yours Truly,

      Rural America

      (I'd expand this comment but it takes a long time to get stuff uploaded on our 300 baud lines.)

    • I have to ask: at 1000 Gbps, are your hard drives even able to write that fast? That's 125 gigabytes per second; 500 MB/s is pretty good for an SSD. Also, what are you doing that requires that kind of speed?
    • Comment removed based on user account deletion
      • Which are you supposed to be for the purpose of this thread, my karma-whoring, Microsoft-astroturfing friend hairyfeet?

    • by Lumpy ( 12016 ) on Friday April 13, 2012 @07:09PM (#39681305) Homepage

      "Urban America with its more efficient energy usage and lower distances traveled - basically won the war."

      Until the power goes out. Then I own you, with my farm and its source of food you don't have.

      Rural America will always rule urban America. You can't raise cows in Central Park.

      • by Sir_Sri ( 199544 )

        Have you ever been to Delhi (as in the capital of India, and not the new, cow-free half of the city)? Right. Go there. Then think long and hard about whether or not you can have cows in Central Park, or Times Square for all it matters.

  • slow where (Score:5, Insightful)

    by magarity ( 164372 ) on Friday April 13, 2012 @06:35PM (#39681037)

    My work PC is slow and has trouble connecting because of the n layers of corporate security whatnot. My home PC is reasonably fast and always connects quickly.

    • Re:slow where (Score:4, Interesting)

      by jrminter ( 1123885 ) on Friday April 13, 2012 @07:33PM (#39681479)
      Ding ding ding - we have a winner. Our IT folks put so much crapware on our corporate image that I had to take all my lab computers out of the domain and run vanilla installs with minimal antivirus plus our imaging hardware/software. Makes a BIG difference.
  • by mlts ( 1038732 ) * on Friday April 13, 2012 @06:36PM (#39681047)

    IT is a lot more than just CPU and the number of little switches on a die. Yes, those get better and continue to do so, but there are a lot of bottlenecks that are not going away anytime soon. Until these are dealt with, things will stay almost the same in the IT world.

    A couple of examples:

    1: Wireless bandwidth fees. These have gotten worse as time progresses. Two years ago, my T-Mobile CLIQ had unlimited tethering. Now, if I want to transfer 500 gigs of data, I'd have to pay my provider over five digits for that month (see the cost sketch at the end of this comment).

    2: Regular bandwidth. A year ago, bandwidth might be throttled on P2P downloads. Now it is metered as well on most ISPs.

    3: Backups. The enterprise has the advantage that once they pay for the LTO-5 tape drives, individual cartridges are cheap, rugged, and have a lifetime guarantee. Individuals usually don't have the cash for the drive, so they have to deal with hard disks, which usually have a one-year warranty, and there is no consumer-level software to handle backups that knows on which volume a specific revision of a file lives, be it a primary volume or a copy saved in a safe deposit box somewhere. The enterprise has NetBackup, TSM, Networker, and other items. So there is a major issue with making sure data is saved safely for anyone who can't afford to stick an EMC VNX array in their garage.

    In the past, tape drives were not just affordable for consumers and kept up with hard disks; they usually came with decent software that could help find media in case of a disaster. These days, there are no good consumer-level backup utilities, especially ones that can do a bare-metal restore.

    4: Encryption. As storage grows, so grows the need to protect the data from everything from tapes falling off the pickup truck to hard disk drives getting yanked out of arrays.

    Just raw CPU power may help things, but that is more incremental than anything else. Right now, IT is more affected by the BYOD trend than it would be by any CPU revolution. What would stir the pot would be bandwidth increases that don't have corresponding fee hikes. Having the ability to have fiber-channel bandwidth over the WAN fabric on the cheap would revolutionize things.
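    The cost sketch referenced in point 1, assuming a hypothetical $0.05/MB pay-per-use rate (actual carrier overage pricing varies; the rate is invented only to show how fast per-MB billing reaches five digits):

```python
# Hypothetical $0.05/MB pay-per-use rate; real overage pricing varies.
transfer_gb = 500
rate_per_mb = 0.05

cost = transfer_gb * 1024 * rate_per_mb
print(f"${cost:,.0f} for {transfer_gb} GB")   # $25,600 - five digits
```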

    • >>> if I want to transfer 500 gigs of data, I'd have to pay my provider over five digits for that month.

      You must have a darn fast connection to get 500 GB per month. Or you could buy the Sprint(?) plan that costs ~$90 and gives unlimited cellular data.

      P2P is metered on your ISP? Wow. Verizon DSL has not done that to me.

    • Backups. The enterprise has the advantage that once they pay for the LTO-5 tape drives, individual cartridges are cheap, rugged, and have a lifetime guarantee.

      Who modded this moron up? Obviously, he's never had to buy LTO-5 media in bulk. I don't mean a few boxes totaling 25 tapes, but hundreds and THOUSANDS of cartridges. LTO-5 isn't cheap. The enterprises may be upgrading their tape drives, but the cartridges often bought are LTO-4, because they are so much cheaper. Plus they can still be used in the LTO-4 drives, where LTO-5 media won't work, so buying LTO-5 for those is a waste of $$.

      This is why backup to disk is moving in. Media is expensive and rest

  • by Anonymous Coward on Friday April 13, 2012 @06:49PM (#39681165)

    1. Computer hardware is not a finite resource like coal or any other natural resource. Prices go up; somebody builds a plant to make more. Econ 101.

    2. This assumes that computer hardware will be used the same way it has been in the past. We are already seeing major changes: less individual storage and more online storage; different devices that are less hardware-intensive; and computing being used differently - less desktop, more handheld, and all the differences down the chain from that.

    3. No mention of significant technology changes. Who's to say we'll still be using the current architectures, or even silicon tech, in the future? This assumes the same old same old going forward.

    • by Sir_Sri ( 199544 )

      I expect we're going to transition to a home-server-with-online-backup model, where people may have fast or slow (depending on needs) heterogeneous terminals with local, disposable fast storage (SSDs, saving only things that can be reacquired easily from disk or the web). The real data will be on a more specialized networking device that will share files for everyone in the household.

      The other issue that I'm not sure they quite get is that people have certain tolerances for how technology behaves. As m

  • Don't we hear this same story every so often? Before it was trace width or storage density or whatever. Perhaps some day we'll run out of tricks to making better cheaper hardware but there seems to be a long way to go yet. I mean, we don't even have tenth generation AI hologrammatic computers with IQs of 6,000 yet!
  • by gmuslera ( 3436 ) * on Friday April 13, 2012 @06:51PM (#39681191) Homepage Journal
    IT spending will rise a lot when people have to move to local servers and local companies to avoid the intrusion on their private data mandated by the US government.
  • by Brannon ( 221550 ) on Friday April 13, 2012 @06:52PM (#39681193)

    I'm not even sure where to start other than to say--technology is only ever adopted broadly if it is cost-effective to do so. The printing press wasn't successful because of some incontrovertible march of progress--it was successful because it was cheaper to make books that way than by having monks transcribe them by hand. Yes, that caused more people to read which drove up the demand for books. And I'm sure some jackass back then wrote an article saying that demand for books was accelerating at a rate that we weren't going to be able to afford enough printing presses anymore.

    • And I'm sure some jackass back then wrote an article saying that demand for books was accelerating at a rate that we weren't going to be able to afford enough printing presses anymore.

      Oh yeah, I remember that guy! He talked to the King about it, and the King said "ok, well, to cope with that we'll introduce copyright. That way, anyone who can't get the books they want becomes a criminal, and no books for crims. Problem solved!"

      What a bastard that guy was! But a couple of years later he tried to sel

  • The problem is that we allow bloat to continue. We should be *demanding* efficiency in code.

    There is really no excuse for the sorry state of affairs we are in. My Atari ST from a good 20 years ago boots and runs faster than a current PC, and does just as much.

    • by Dwedit ( 232252 )

      Sure, let's try to decode MP3 audio in real time on an 8MHz 68000 processor.
      Nope, it doesn't quite do "just as much".

    • What really slows things down is the volume of data we are asking our machines to handle. Your Atari ST doesn't play HD video, for example.

    • and does just as much

      That's a goofy thing to claim. My desktop workstation is powering a display area 2400x1920 in size while running three virtualized OSes on top of a fourth.
      I'll agree that the Atari ST did more with the hardware that it had than this machine does now, but to claim that they can do just as much is laughable. I've never understood people that could make that sort of claim with a straight face.

  • Human perception (Score:5, Insightful)

    by Dan East ( 318230 ) on Friday April 13, 2012 @06:58PM (#39681235) Journal

    There are limits to what will be demanded, and we have reached them in some areas already. Audio is a good example of this. The storage and bandwidth requirements for good (as in good enough for 99% of the population) audio are now a very small drop in the bucket. How many songs can you fit on a 16 GB micro SD card the size of your fingernail? How many songs can you stream in real time at once on a typical broadband connection? We have surpassed the technical requirements for audio by such a massive margin that it isn't even a consideration when purchasing hardware or bandwidth.

    There are limits to video too. These so-called "retina" displays are a good example of the resolution limit of the human eye (we passed the color-depth perception limit a good decade ago). The eye cannot discern individual pixels within the normal focal range (by the time you bring a display close enough to the eye to make out individual pixels, the eye can no longer keep it in focus). We have a long way to go to be able to store and stream video at such high resolution. However, we will reach it before too long. Then it's a matter of how many hours or days of video you need to store on how small a device, and how many video streams you need at one time over your internet connection.

    One day we'll be moving and storing movie-length retina-resolution video with the same flippant ease as MP3s today. When we've reached that point, what would we need more bandwidth and storage for? Not for anything bounded by human consumption - and that is the key factor.
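    Back-of-envelope numbers for both claims above (the per-song size, frame rate, and ~100:1 codec ratio are illustrative assumptions, not measurements):

```python
# Illustrative assumptions: ~4 MB per song (roughly 4 min at 128 kbps MP3),
# retina-class 2048x1536 video at 24-bit color and 30 fps, ~100:1 codec ratio.

card_gb, song_mb = 16, 4
print(f"~{card_gb * 1024 // song_mb} songs on a {card_gb} GB card")  # ~4096

w, h, bytes_per_px, fps = 2048, 1536, 3, 30
raw_mbps = w * h * bytes_per_px * fps * 8 / 1e6      # uncompressed bitrate
compressed_mbps = raw_mbps / 100                     # assumed codec ratio
movie_gb = compressed_mbps / 8 * 2 * 3600 / 1000     # a two-hour film

print(f"raw: {raw_mbps:,.0f} Mb/s; compressed: {compressed_mbps:.0f} Mb/s; "
      f"2-hour movie: ~{movie_gb:.0f} GB")
```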

    • We just haven't invented the motor/sensory-fold interface necessary to consume higher bandwidths yet.

      Hence we still procreate.

    • Hmm. I get what you're saying, and I agree that we'll eventually get to the "flippant ease" state for video. On the other hand, there's always going to be something new. Maybe it'll be holographic, adding another dimension to the data/processing required. Maybe it'll be the ability to zoom, rotate, or otherwise manipulate any part of movie. Maybe it'll be the ability to map/record/manipulate all objects in a 3d virtual environment that becomes the new "video". Who knows? The point is, every new revol
  • by DogDude ( 805747 ) on Friday April 13, 2012 @06:58PM (#39681237)
    In our company, IT spending is actually dropping, even as we expand. The cost of used hardware is insanely low because of all of the individuals and companies who still feel the need to buy "new" equipment so rapidly. We have no problems running Pentium 4s and Windows XP throughout our business, and will do so for the foreseeable future. We've moved our email/backup/web hosting services out to providers, and all of that is still insanely cheap. Tech has actually exceeded our needs, so our IT spending has dropped significantly. Keep buying new machines every few years, people! We're loving buying your completely functional equipment at yard-sale prices!
    • I agree. I was recently pulled into a project to develop some software that was going to run on a system with a highly-customized real-time Linux kernel built from scratch from the 2009 version of Ubuntu (Karmic Koala.) I needed to make sure my code ran on that platform, so I grabbed an old (2007 vintage) laptop and installed Karmic. I was surprised how peppy it was. I suspect that it would do 99% of what most students and office workers would need. The problem is that designers keep putting out content tha
    • You can buy very, very reasonable used Intel Core 2 Duo / 3-year-old IBM / HP / Dell workstations, add a $100 SSD and $50 of RAM, and have a PC which performs as fast as or faster than the $800 new boxes, for literally half or less of the price.

      I wouldn't recommend it for a large company, but for a mid-size one it seems quite reasonable to me.

    • The security holes in XP start to really worry me. You can put an up-to-date Linux on an old box and be quite safe.

      I picked up an older Dell Xeon box from eBay about a year ago. As time goes on, I have upgraded with more memory, faster CPU (with quad-core chips), larger hard drive, 64-bit OS. But it is still insanely fast for what I am doing with it.

  • by msobkow ( 48369 ) on Friday April 13, 2012 @07:01PM (#39681253) Homepage Journal

    When you lash together the disparate clouds of application, compute, and storage facilities from the various vendors in that space, and truly begin to tie them together as distributed applications, an amazing thing happens.

    The work load distributes. The storage requirements distribute. The compute requirements distribute.

    And the more distributed they become, the closer we approach a true peer-to-peer architecture.

    Now take it one step further, with each person having their own "data server" nodes in their home or leased from such cloud providers. Your device is no longer used for storage, but just presentation. It caches the data from your server(s), but it doesn't need to keep the data unless you expect to use it again in the near future. Your whole SSD/HDD system in the device becomes a cache, similar to the Andrew File System, but using different communications technologies including torrents that map into a virtual file space, and private downloads directly from your data servers for content that you own personally.

    Suddenly you realize the problem is not that we need infinite capacity, but that we need to break the mindset that industries like banks "own" the data. They don't. It's OUR data, and it should be on OUR servers, with them needing OUR permission to access or modify it.

    Problem solved.

  • Peak Computing? (Score:5, Interesting)

    by slew ( 2918 ) on Friday April 13, 2012 @07:02PM (#39681257)

    If I gather what this article is speculating on, it's a phenomenon similar to peak oil.

    Peak oil doesn't necessarily mean that you run out of oil; it just means that the marginal cost of producing more oil reaches a point which causes the rate of oil production to decrease. Against a backdrop of increasing demand and limited supply, this implies a sharp downturn in the availability of oil at historical prices.

    If applied to computing, it would imply a limit to computing resources. I don't think we are there (although computing takes lots of electrical power, and there seems to be enough semiconductor manufacturing capacity for the moment), but we may be at a point where demand increases beyond the rate at which technology can keep to its historical increasing MIPS/$ trend. If the MIPS/$ trend flattens out, it may be difficult to find funding for new technological advances, fundamentally changing the market for computing.

    • Nice explanation. At the moment I think the actual physical computing growth is fairly easily covered, since it is a fixed cost and quite cheap as capital expenditure. What is very expensive and doesn't scale well is software licensing. I've been on plenty of projects where all resources were available apart from the money for expensive licensing (try getting LPARs off a third-party provider for dev, integration and production environments, then get enough for Internet scale; or pay for Oracle or Enterpri
  • Bloated apps. (Score:4, Insightful)

    by toonces33 ( 841696 ) on Friday April 13, 2012 @07:14PM (#39681335)

    It isn't so much that users are expecting more from the apps, but that application vendors bloat their software as time goes on so that newer versions really only run on newer and faster hardware. I won't point fingers too much - there are many offenders here.

    And on top of that, the industry is using more Java, which is as slow as snot. The attitude seems to be that if it runs slow, throw some more iron at it.

    I remember my first Linux box - i486 at about 90MHz. Those were the days..

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      the industry is using more Java which is as slow as snot. The attitude seems to be that if it runs slow, then throw some more iron at it.

      It's time this myth was debunked.

      Java itself is not slow. Properly written and optimized Java code runs almost as fast as equivalent C/C++. (I know, I write such code, and I measure timings for operations in nanoseconds in Java.) The JIT compilers built into modern JVMs generate very optimized machine code. (I know, I've looked at the assembler output.)

      Unfortunately, J

  • by Dolphinzilla ( 199489 ) on Friday April 13, 2012 @07:23PM (#39681407) Journal

    I read the headline for this story and laughed - it doesn't matter how much faster my computers or networks get, our IT department just installs more and more virus scanners, software maintenance tools, firewalls, monitoring tools, etc. Each computer I get has more CPU cores and memory and faster graphics, and they are able to do less and less and take longer and longer to boot. I figure before too long I'll have to go back to my old TI-30 calculator and some engineering graph paper, and I'll be equal in efficiency to my computer once I factor in all the time I spend waiting for it to get around to sparing 0.5% of the 12 CPU cores to run the actual software I need to use...

    • This!
    • I read the headline for this story and laughed - it doesn't matter how much faster my computers or networks get, our IT department just installs more and more virus scanners, software maintenance tools, firewalls, monitoring tools, etc. Each computer I get has more CPU cores and memory and faster graphics, and they are able to do less and less and take longer and longer to boot. I figure before too long I'll have to go back to my old TI-30 calculator and some engineering graph paper, and I'll be equal in efficiency to my computer once I factor in all the time I spend waiting for it to get around to sparing 0.5% of the 12 CPU cores to run the actual software I need to use...

      My work computer is a new Core i5, 4GB RAM, running a 10-year-old OS (XP). The thing should fly. Yet it's much slower than my home PC, which is a 4-year-old AMD dual-core, 2GB RAM, running Windows 7.

      A few years ago I bought a surplus PIII from work. At work with XP, these machines would crawl (5+ minutes till usable). When I got it home and loaded a clean XP install, the machine flew (relatively speaking).

      I'm amazed at how much crap IT departments manage to put on computers to slow them down. And how all softw

  • We used to say Andy [Grove] giveth but Bill [Gates] taketh away.

    These days though it's more the result of hard drive capacity growing faster than CPU power.

    Which is good for me professionally because I like to work on algorithms for web scale data handling.

  • by brillow ( 917507 ) on Friday April 13, 2012 @07:47PM (#39681569)

    It's all FUD. There is no reason to believe any limit is being approached. If we need more network capacity, it will be built.

    1. The Thailand monsoon is NOT helping matters. It's put us a year behind in hard drive capacity.
    2. CPU clock speeds hit a brick wall half a decade ago. So they switched to spending the Moore's law transistors on extra cores instead. Now that's reached a limit on memory bandwidth. There now has to be a major CPU architecture change -- probably to MIMD, loosely connected CPU & memory modules.

    Certainly the business and scientific servers will need to be faster and have more storage, as will the home gaming

  • ...and we're already seeing examples of this: our computers are faster than ever and we have more bandwidth than ever, yet our machines are often slow and have trouble connecting. The more we have, the more we use.

    That's because it's not just me trying to use my machine any more. Now it's me, some guy in Shanghai trying to log my keystrokes, the software I installed to keep him out, plus all the various companies who paid my hardware vendor to put pointless "dashboards" on my machine.

  • Rest Assured (Score:4, Insightful)

    by tunapez ( 1161697 ) on Friday April 13, 2012 @09:30PM (#39682261)

    What Intel giveth, Microsoft taketh away.
