Sun Microsystems IT

Sun Grid Utility Goes Live for Employees

museumpeace writes "CNET is reporting that Sun Microsystems turned on its Grid computing utility, hosting large ERP applications for its employees to test out the server infrastructure and user acceptance of the computing-as-metered-utility model. General availability is scheduled for October. The rates? 'Sun is offering processing and storage in a pay-as-you-go arrangement of $1 per CPU per hour, delivered via an Internet connection.' Sun is still retooling its thin-client interfaces and support software. Experts quoted in the article wonder if Sun can make any money this way." Slashdot also covered the original announcement back in February.
  • by Seumas ( 6865 ) * on Wednesday August 24, 2005 @04:33PM (#13392724)
    I wonder how they plan to compete with the distributed/remote computing power provided by all of the unpatched and unprotected Windows based systems in the world that are freely available to anyone with a couple scripts and an internet connection?
  • by Anonymous Coward on Wednesday August 24, 2005 @04:33PM (#13392727)
    This is just the reincarnation of the mainframe era. Everyone (Sun, Microsoft, et al.) wants to put us back in the days where the storage/CPU and, most importantly, the applications themselves are in their "capable" hands.

    I'm not even going to entertain the idea of having MY data stored on another (Microsoft/Sun/etc.) server, and paying for the rights to access/modify it.

    There is a reason it's called the PC, and not a dumb terminal.
    • by oldmanmtn ( 33675 ) on Wednesday August 24, 2005 @05:36PM (#13393153)
      There is a reason it's called the PC, and not a dumb terminal.

      There are no dumb terminals - only dumb users.

      This isn't targeted at PC users. This is for (for example) the hedge fund that needs 50 machines for 8 hours, once a week, to run a complex model. This gives them the power they need for a fraction of the price of the raw hardware, and they don't have to pay anybody to maintain it.

      I've had projects where I really wanted 1000 CPUs for a week, just so I could do scalability testing. There's no way we could afford $1,000,000 to buy 1000 machines just for that one test, but we could probably have swung $50,000 to get them for five 10 hour days or ten 5 hour days.
    • This is just the reincarnation of the mainframe era.

      And what is wrong with mainframes? Putting all the computing power in one place is about sharing that equipment among all users, making optimal use of the hardware. As opposed to everyone having their own box that comes with everything and the kitchen sink, but sits around doing nothing 98% of the time. Talk about waste...

      I'm not even going to entertain the idea of having MY data stored on another (Microsoft/Sun/etc.) server, and paying for the rights

    • Hardly "insightful." I give the parent a -1 Shortsighted.

      This is just the reincarnation of the mainframe era. Everyone (Sun, Microsoft, et al.) wants to put us back in the days where the storage/CPU and, most importantly, the applications themselves are in their "capable" hands.

      The application is in the hands of the person creating it. You are purchasing scalable processing power.

      I'm not even going to entertain the idea of having MY data stored on another (Microsoft/Sun/etc.) server, and paying for the rights t
    • Yeah...but I'd sure love to use this instead of losing my home pc for a week when I want to render out one and a half minutes of 3D.
  • OpenOffice (Score:2, Funny)

    by gkozlyk ( 247448 )
    Maybe it can be used to improve OpenOffice load times.
  • Yah! Now I can get my stats back up. /me cracks knuckles

    Ouch that hurt... Sun's spirit will always be amongst us.
  • But CloudNet, AirNet and UpperAtmosphereNet are currently leading contenders. Analysts feel however it may be something in a similar vein which is finally chosen.
  • With a Microsoft partnership, we now understand how we plan to have the oomph to run Windows Vista when it comes out.

    $1 per CPU per hour...the true money-making scheme here is that if you run Linux, they'll charge you the $699 for each processor on behalf of SCO.

    So for 50 bucks an hour, you can run a Swing application almost without a performance drop.

    With the licensing model, you can run apps with it, but you can't alter any data that passes through without our permission. Want to see the results of your
  • Hey, look. Sun has a new idea. Hm, if they get 9 more, maybe their stock price will reach the price I paid for a nanosecond, so I can dump it :-)

    Seriously, I like the sound of this. One can argue about costs, etc., but at least they have something other than inertia that might encourage a scientific user to choose Sun.

  • Considering that a "CPU" can be had for $400 (a 2.8GHz Celeron D [gateway.com], found without even trying, just a search on Google).

    So at 24 hours a day, $400 buys about 16 days of work on the grid. Let's add in 25% for "stuff" (electricity costs, etc., being generous...) and you're still saying that for a problem that takes 20 days or more, you're better off buying a throwaway PC and running Linux on it.

    So, it must be aimed at the smaller problems. Like what?

    Simon
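For reference, the break-even arithmetic in this comment can be sketched in Python. The $400 box, 25% overhead, and $1/CPU-hour figures are taken from the comment; everything else is illustrative:

```python
# Back-of-envelope comparison of buying a cheap box vs. renting grid time.
BOX_COST = 400.00   # one throwaway single-CPU PC
OVERHEAD = 0.25     # electricity and other running costs, as a fraction
GRID_RATE = 1.00    # dollars per CPU-hour (Sun's advertised rate)

def grid_cost(cpu_hours):
    """Cost of renting the same work from the grid."""
    return cpu_hours * GRID_RATE

def box_breakeven_days():
    """Days of 24x7 use after which the owned box beats the grid."""
    total = BOX_COST * (1 + OVERHEAD)
    return total / (24 * GRID_RATE)

print(f"grid cost for 400 CPU-hours: ${grid_cost(400):.2f}")
print(f"own box breaks even after {box_breakeven_days():.1f} days")
```

This reproduces the poster's "20 days or more" figure (500 dollars of box-plus-overhead divided by $24/day of grid time is about 20.8 days), but says nothing about turnaround time, which the replies below take up.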
    • by Keeper ( 56691 ) on Wednesday August 24, 2005 @04:46PM (#13392821)
      Actually, I would imagine this would be more useful for solving larger problems that are run infrequently, where you want to do a distributed task once a month that takes 1000 machine-hours and get back a result in 1 hour.
      • for example (Score:5, Insightful)

        by taniwha ( 70410 ) on Wednesday August 24, 2005 @05:10PM (#13392949) Homepage Journal
        think of chip design - you want about a gazillion machines for QA simulation in the 2 months prior to tapeout, and they're going to be idle for the rest of the year

        Mind you, the cost of chip design software is the limiting factor here, not the cost of hardware to run it on

    • by Paul Jakma ( 2677 ) on Wednesday August 24, 2005 @04:50PM (#13392845) Homepage Journal
      For the same $400 you could get 100 CPUs for four hours. If your problem divides up reasonably well, then instead of spending your $400 CPU and waiting 16 days, you could instead get your answer in hours.

      Maybe you could do it cheaper by buying your own CPUs, but you could be waiting two weeks for your Dells to arrive. How much is it worth to you to get an answer within hours or days versus a few weeks or months of waiting?

      --paulj
    • Your costs are way off; try adding 250% for other stuff (especially management).

      Also consider the case where your problem needs 20 CPU-days -- in 8 hours.
      • You know, I've got several linux boxes. I don't believe any of them cost $100 for every 20 days runtime! As for 250%, Oh boy! I have a bridge to sell you!

        And, if it's my computer, what management are we talking about? It's a program running on a computer. I start it. I wait. I analyse the results. What's to manage?

        If I buy 100 of these things, you use a simple batch script (I wrote one at college in about 2 days). Typing 'batch ' was all that was required to start something. Typing 'batch list' gave y
        • by Paul Jakma ( 2677 ) on Wednesday August 24, 2005 @06:53PM (#13393658) Homepage Journal
          I don't believe any of them cost $100 for every 20 days runtime! As for 250%, Oh boy! I have a bridge to sell you!

          You're not paying for, and hence you do not have:

          - data centre floor space with:
              - heavy duty UPS
              - generator backup
              - climate control
              - security
              - redundant networking
              - multiply redundant storage
              - tape backup silos / HSM
          - The 24x7 staff to:
              - monitor security
              - test the generator weekly
              - monitor the backup processes
              - monitor and maintain the network
              - monitor and maintain the hardware

          etc. etc. If you think your costs as "Joe Bloggs the guy who runs a few Linux PCs at home" are comparable to a corporate affair then you're simply kidding yourself, particularly when you're not billing yourself for your own time ;), and your SLA with yourself is pretty flexible and forgiving ;).

          A lot of corporates have thought what you're thinking: "Ah sure, it can't be expensive to run a few servers in our own 'data centre'", and they typically either underestimate the costs, or they end up with very shoddy server facilities. Then they'll have reliability problems due to:

          - servers overheating because they're stuffed into a cupboard (seen this)
          - lack of staff expertise (all too common)
          - utilities failures (they couldn't afford the large UPS + diesel generators + cut-over switches + electricians' expenses)
          - the gradually increasing burden of maintaining installed plant, which if not planned professionally slowly but surely turns into a huge sprawl of unmarked cables, till it gets to the point where even simple rewiring tasks are a massive (and error-prone) undertaking.

          Eventually, to a lot of these types of small corporations, locating in a managed data centre and letting someone else take care of the details becomes very, very attractive (particularly for corporates whose primary business is *not* computing).

          You are almost certainly underestimating the costs.

          --paulj
          • Look, what part of this is unclear? I am not trying to compete with Sun!

            In the university environment I mentioned above, they already have all that. They were paying for it regardless of whether I push 10 boxes onto the shelves in the spare rack space and wire them up to the switch. Which is exactly what I did. With their blessing. "It'll be lost in the noise, go right ahead"...

            So. Zero extra cost apart from electricity.

            Most corporates have an IT dept as well, and I would expect the same to apply.

            In the personal
            • Look, what part of this is unclear? I am not trying to compete with Sun!

              Didn't say you were, you were however trying to extrapolate costs from what you think your home costs are.

              They were paying for it regardless of if I push 10 boxes on the shelves in the spare rack-space and wire them up to the switch.

              And that slack capacity, in terms of floor and rackspace and network, has a definite cost. So does planning ahead in order to balance cost against sufficient slack (people's time has a cost).

              So. Zero extr
              • I try to avoid using sarcasm in posts, but for some reason, the "pixies" irritated me...

                It takes tremendous resource-planning - this 20-day project. I mean, there's absolutely nowhere that any multi-national organisation, university, home, garage whatever could possibly put another computer. Christ no. Everywhere is *completely* budgeted for, for that 20 day period. For the space of a desktop PC. Of course it is.

                And (to make sure those 20 days are properly accounted for) I'll have to employ at least a dozen
                • I try to avoid using sarcasm in posts, but for some reason, the "pixies" irritated me...

                  Ah, sorry, I should have put in a smiley. It was intended as good-humoured banter. My posts generally should be taken as such.

                  Everywhere is *completely* budgeted for, for that 20 day period. For the space of a desktop PC. Of course it is.

                  I did not say "budgeted for", I said it had a cost. Whether you track that cost and budget for it or not is a different thing. You may feel it well-worth it to simply provide for lots of
    • by Glonoinha ( 587375 ) on Wednesday August 24, 2005 @05:10PM (#13392948) Journal
      If you have a problem that takes 400 CPU hours to run, your answer is either inanely worthless, or mind-bendingly valuable (I needed to throw one of those in there for the SETI group, but I won't say which.)
      Well that or you need to optimize your code, or get a faster machine.

      That said, it probably isn't worthwhile to the guy with a $400 problem - more likely they are looking to appeal to the kinds of guys that want to crack 128-bit encrypted data streams in real-time, or run two neural networks against each other in a zillion games of chess in order to teach (evolve) their neural network, or crunch two terabytes of data picked up by an Indy race team over three days at the track. Brute forcing 1024-bit encryption is totally possible, but the data isn't generally valuable a thousand years after you start decrypting it. Throw enough horsepower to decrypt 1024-bit RSA in real-time and you will find yourself rich (or dead.)

      Knowing the winning numbers to the lottery thirty minutes after they are announced is pretty worthless.
      Knowing the winning numbers to the lottery thirty minutes before they are picked is worth a hundred million dollars.
      Amazing difference having the answers an hour earlier makes - I'm not saying that these computers will give you that much of an advantage, but I'm still saying ... I currently work on problems where an hour difference in processing time can make a single data-crunching run cost about an additional $100,000.
      • 400 CPU-hour problems can fit into lots of other categories. Weather prediction, where the results have to be out in hours but could easily take in excess of 400 CPU-hours, is one example. Another good example is chip routing; at a previous job the chip routing could take days or weeks on multi-CPU boxes costing tens of thousands of dollars. The results were valuable, but not mind-bendingly so. Another good example might be special effects for a mid budget film, buying the hardware to get results in a short en
    • economics and scalability are not that simple to mix. Your $400 machine simply can't handle the dataset or the array sizes or the threads or almost any metric you'd get when you size up any number of huge but very practical programs that businesses sometimes run. To name but one example: the monster linear systems that model supply chains and what-if a huge number of variables so the company can find the right product mix... the list of such programs is bigger than you may be aware.
  • Seems like the main categories of potential users fall into two camps:

    The high-energy physics folks, who generally get government and university subsidies for their high-performance computing needs, and so certainly get computation much cheaper than $1/cpu-hour.

    Commercial folks, maybe in the financial services sector, who are (rightfully) paranoid about security and just aren't going to send their sensitive data from Wall Street to California, no matter how much SSL-this and triple-DES-that happens on

    • What about 3D rendering? There are lots of people renting time on render farms right now to make deadlines. If I can run RenderMan on these CPUs, and the price includes storage for my rendered frames, it might be price-competitive to buying a lot of processing speed that I'm not going to be using 24/7.

  • What if you lease time from Sun, and the computer your data is being crunched on is your competitor's computer? What about someone wanting to see what you are doing, and what is going on? This is not breaking encryption or finding aliens; this could be used by insurance, health, or governmental agencies for crunching large numbers or searching databases. What have they done to address this?
  • A good compute cluster can be had for $2,500 a dual-CPU node. Assuming another $500/node/year for operating costs/upkeep, that's still

    $1,250 for a CPU-year. Compared with ~$8,000/CPU/year for Sun's solution. So you'd better need BURSTS of CPU but not sustained CPU. And you'd better not be able to smooth out the burst demands with a batch-job system.
    • I think your prices are a bit low for a good compute cluster - don't forget to factor in things like the high-speed, low-latency interconnect, etc. Also, don't forget to account for technician time to keep it running (ours has roughly 40% of a technician's time allocated to it). And don't forget things like electricity and air conditioning (clusters need a lot of aircon). Even then, it works out around a factor of 5 cheaper to buy than to rent (offsetting the costs over three years).

      Of course, that's

    • by Glonoinha ( 587375 ) on Wednesday August 24, 2005 @05:14PM (#13392980) Journal
      You forgot the most expensive (and often overlooked) part of infrastructure: the infrastructure staff.

      Add a few $65,000 / year staffers in there to install / support those $2,500 machines and you are looking at $13,500 per year (every year) per machine. I know, that's what my company bills my department for each server I have on the network.
  • Wrong scale (Score:2, Interesting)

    by jd ( 1658 )
    CPU charging has been done within Universities for mainframes and super-computers for a long time. Usually, though, it'll be in terms of clock cycles used and priority. The former can be metered exactly, as the OS will have that information whenever it swaps the process in or out. The latter can be fixed at the start.

    (For real-time processes, you can even fix the clock cycles in advance.)

    The advantages of doing things on this scale are that most heavy tasks will take in the order of seconds - at worst, minu

    • Most batch nodes are space-shared, not time-shared. That means only one job is running on a particular node at a particular time. This does not create any perverse incentives; developers are still incented to make their code as efficient as possible. If the job is blocking on I/O, that's the user's fault, not Sun's.
    • Presumably they mean an hour of CPU-time rather than an hour of wall-time on a single CPU.
    • As other replies have alluded to, there's a difference between CPU time and wall time.


      $ man 3 clock
      clock - determine processor time used

      LIBRARY
      Standard C Library (libc, -lc)

      SYNOPSIS
      #include <time.h>

      clock_t
      clock(void);

      DESCRIPTION
      The clock() function determines the amount of processor time used since
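The same CPU-time vs. wall-time distinction can be seen in Python, where `time.process_time()` tracks processor time much as clock() does and `time.perf_counter()` tracks wall-clock time (a sketch for illustration):

```python
import time

# Wall time advances while sleeping; processor time barely does.
wall_start = time.perf_counter()
cpu_start = time.process_time()
time.sleep(0.5)  # idle: consumes wall-clock time but almost no CPU
wall_used = time.perf_counter() - wall_start
cpu_used = time.process_time() - cpu_start
print(f"wall: {wall_used:.2f}s, cpu: {cpu_used:.4f}s")
```

An hour of billed "CPU time" on a mostly idle process would therefore cost far less than an hour of rented wall time, which is why the distinction matters for a metered grid.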
  • Sun's main problem is that they are not able to stick to one strategy for rescuing themselves from the mess. Last year the Java Desktop System was all over the press at $100 per developer... I don't hear of it anymore - now it is $1 per CPU. They are going to have a hard time getting the trust of enterprises to use a CPU in their servers. Good luck Sun, and more importantly, good luck Java
  • bang for the buck (Score:4, Interesting)

    by cybergrunt69 ( 730228 ) <cybergrunt69@@@yahoo...com> on Wednesday August 24, 2005 @05:04PM (#13392919) Journal
    OK, so for one year of CPU time, maybe it is initially cheaper to buy a whitebox and install linux than it is to use this Sun solution.

    However, if you got a Linux whitebox to run this, not only would you have to worry about power costs, but also every other detail that comes with making sure your machine is running. What about patches, upgrades, network, bad hardware, runaway processes, general administration, backups, storage, etc? Most of the people here would be able to do the standard stuff that's needed, but I'm sure a business that needs "xyz" computed would gladly pay the 2x price. Not only would it do away with all the minor details, but they'd also have their results back in a significantly shorter amount of time! I'm too lazy to do the math right now, but I'd say a year of CPU time could easily be done in less than a month. That alone could be _the_ deciding factor and the justification for the expense.
    • I think power costs are so small you can safely ignore them. I don't know how much electricity costs in the US, but based on UK prices, you would be looking at about $0.003 per hour to power a computer.
      • Wrong. In Chicago, electricity is $0.08275 per kilowatt-hour. Let's just say 8 cents to make the math easier. My Linux box uses about 200W of power (it's a 420W PSU, but I'll assume I'm not maxing it out). That's 0.2 kWh per hour, or 144 kWh per month. Multiplied by that rate, it's $11.52 a month. Over a year, that's $138.24. For a college student like me, that's not pocket change!

        Needless to say, I've decommissioned that machine and only turn it on when I need to use it. My fileserver / e-mail / web serve
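The electricity figures in this comment work out as follows (a sketch using the ~200 W draw and 8 cents/kWh quoted above):

```python
# Rough home-server electricity cost from the Chicago figures above.
watts = 200                                  # continuous draw
rate_per_kwh = 0.08                          # dollars per kWh, rounded
kwh_per_month = watts / 1000 * 24 * 30       # 144 kWh in a 30-day month
monthly = kwh_per_month * rate_per_kwh       # ~$11.52 per month
yearly = monthly * 12                        # ~$138.24 per year
print(f"{kwh_per_month:.0f} kWh/month -> ${monthly:.2f}/month, ${yearly:.2f}/year")
```

So power alone is roughly a cent and a half per hour for one box, small next to Sun's $1/CPU-hour but not zero, as the parent comment assumed.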
  • 1. gridhost% setenv DISPLAY crappy_host.mydomain

    2. gridhost % while 1
       ?  /usr/openwin/bin/xinit /usr/dt/bin/Xsession &
       ?  ???
       ?  echo PROFIT!!!
       ?  end
  • Some pretty interesting things might come of just letting it be used by employees for now. I'm sure a few of them have had ideas that would need oodles of power to flesh out, that nobody would pay for the big iron to run. I, of course, would recompile Quake 3 and get myself some geek cred with a stupidly high FPS.
  • emerge update -distcc -sungrid
  • big margins (Score:5, Insightful)

    by rnd() ( 118781 ) on Wednesday August 24, 2005 @05:10PM (#13392953) Homepage
    Sun has some fairly substantial profit margins built in at the $1/CPU/hour price.

    Consider that if you have a moderately large data set that you need to crunch, it's not at all uncommon for it to take 3 hours on a 300-node cluster. That's $1,800 if each machine is a dual-proc machine.

    So suppose Sun has a 300-node cluster, for example, where each machine cost $1,800. Every 3 hours, one of the machines is paid for. In other words, a 300-node cluster is paid for in about 38 days. Well, the hardware is, anyway.

    I really don't know who the main clients would be of this kind of service, however. I'm guessing that if your company can't afford a 10-20 node cluster (fairly cheap) and still needs to do large scale computing, renting CPU cycles from Sun would make sense, though it would very quickly cost more than the 10-20 node cluster would have. So it's really going to benefit customers who need large scale number crunching results more quickly than they can obtain them simply by building a smaller cluster and waiting for the results, or customers whose problems involve data sets that are large enough that they need to be distributed over 100+ machines in order to be solved.

    Who has large data sets like that and no cluster access? Not university researchers, not government agencies, and probably not most firms doing significant number crunching.

    So I see the niche as firms with large data sets and someone who can write the MPI code, but who lack the willingness or finances to invest in a cluster of their own.

    In a year or two when the same service is selling for $0.25 per cpu hour it will be a much more compelling proposition.
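The payback arithmetic in this comment can be sketched as follows (300 dual-CPU nodes at $1,800 per machine and $1/CPU-hour, all figures from the comment; full utilization is assumed):

```python
# Sketch of how fast Sun's hardware pays for itself at $1/CPU-hour.
nodes = 300
cpus_per_node = 2
machine_cost = 1800.0      # dollars per machine
rate = 1.0                 # dollars per CPU-hour

revenue_per_hour = nodes * cpus_per_node * rate       # $600/hour at full load
hours_to_pay_off = nodes * machine_cost / revenue_per_hour
print(f"hardware paid off after {hours_to_pay_off:.0f} busy hours "
      f"(~{hours_to_pay_off / 24:.1f} days)")
```

That is 900 busy hours, or about 38 days of continuous full utilization, matching the comment; real utilization would be lower, and staff, power, and facilities costs are excluded.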
    • Re:big margins (Score:5, Insightful)

      by Com2Kid ( 142006 ) <com2kidSPAMLESS@gmail.com> on Wednesday August 24, 2005 @05:27PM (#13393083) Homepage Journal
      In response, if you read one of the articles linked to from the main article:


      Sun Chief Operating Officer Jonathan Schwartz submitted a project to the Sun Grid--graphically rendering data from a protein folding experiment. It took only a few seconds, but cost $12. The 12 hours of CPU time for which Schwartz was billed was consumed by hundreds of machines simultaneously clicking away at the rendering problem for a few seconds each.


      Results in seconds.

      Doing a quick read-through I did not see exactly how large the cluster was, but there is no reason that Sun could not scale it to a few thousand nodes, providing results in a time frame that would be cost-prohibitive for most companies to set up clusters to compute.

      Come on, if you are running a huge simulation a few times a year, maintaining a 1000+ node cluster year-round is just not cost-efficient.
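The Schwartz example quoted above is easy to sanity-check: a $12 bill at $1/CPU-hour is 12 CPU-hours, consumed in a few wall-clock seconds. The per-machine runtime below is an assumption, used only to illustrate the scale:

```python
# Schwartz's $12 job: 12 CPU-hours of work done in seconds of wall time.
cpu_hours_billed = 12
cpu_seconds_billed = cpu_hours_billed * 3600   # 43,200 CPU-seconds
seconds_per_machine = 10                       # assumed, per "a few seconds each"
machines = cpu_seconds_billed / seconds_per_machine
print(f"~{machines:.0f} machines running ~{seconds_per_machine}s each")
```

Under that assumption the job touched on the order of thousands of machines, which is the point: the customer rents a cluster of that size for seconds, not a year.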
      • Sun's pricing is per CPU hour so the more machines you use the more it costs. There would be no price improvement to be had by adding more machines to the cluster. If anything, the value would be slightly worse due to the additional communications overhead introduced by MPI having to talk to the additional machines.

        You are correct that a company that needs a 1000+ node cluster a few times a year is a perfect client for the service. I don't know how many such companies there are.
        • Sun's pricing is per CPU hour so the more machines you use the more it costs. There would be no price improvement to be had by adding more machines to the cluster.

          But there is a value improvement to the customer of getting the job done faster.
  • by WillAffleckUW ( 858324 ) on Wednesday August 24, 2005 @05:13PM (#13392974) Homepage Journal
    1. cost for a CPU
    2. cost for the box for the CPU
    3. cost for data storage
    4. cost for monitors
    5. cost for cooling for above 1-4
    6. cost for power for above 1-5

    Now, ask yourself, will the price of power go down if oil costs $100 (current median bet in the Oil Futures stock simulation using real dollars, as per the WSJ)?

    Yes, there are ways to reduce those costs. But not everyone can.
    • I'd just like to clarify a few of your items.

      4. cost for monitors

      I guess monitorS plural is technically correct. You probably want a minimum of two interface terminals. :)

      However you certainly do not need or want hundreds or thousands of monitors for the hundreds or thousands of CPUs.

      2. cost for the box for the CPU

      Sort of. Just taking a wild guess here, let's say motherboards with 4 dual processors. So figure one eighth of the cost of a motherboard per "CPU". Figure 16 motherboards in a rack? Ok a rack is go
  • Computing@home (Score:4, Interesting)

    by Saiyine ( 689367 ) on Wednesday August 24, 2005 @05:15PM (#13392984) Homepage

    What about releasing a grid client so everyone could earn some bucks by letting their CPUs work for others?

    You know, like the spam bots in windows, but getting money!

    • One problem is most of us don't have much bandwidth at home, so this grid would only be useful for extremely compute-intensive jobs which do a lot of processing on a tiny amount of data without much communication among nodes.

      Another problem is you'd have to do every computation on at least 2 different machines to have any confidence in the result.
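The redundancy idea in this comment can be sketched simply: run each work unit on two untrusted machines and accept only results that agree. Everything here (`flaky_compute`, the squaring task, the cheat rate) is an illustrative stand-in, not any real grid protocol:

```python
import random

random.seed(0)  # deterministic demo

# A made-up stand-in for an untrusted volunteer node that sometimes
# returns garbage instead of the honest computation (here, squaring).
def flaky_compute(x, cheat_rate=0.1):
    if random.random() < cheat_rate:
        return x * x + random.randint(1, 10**6)  # bogus answer
    return x * x                                 # honest answer

def verified(x):
    # Run the unit on two "machines"; accept only matching results.
    a, b = flaky_compute(x), flaky_compute(x)
    return a if a == b else None  # disagreement: reschedule elsewhere

results = [verified(n) for n in range(100)]
accepted = [r for r in results if r is not None]
print(f"accepted {len(accepted)} of {len(results)} work units")
```

Every accepted result is (almost surely) correct, at the cost of doing each unit's work twice; disagreements simply get redone, which is roughly what volunteer-computing projects of the era did.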

  • Sun wants us to run an app on their powerful grid, but what do software vendors think about us running their single license across, say, 100 CPUs?

    So now, on top of the $1/CPU/hr, we have to buy a license for each of those CPUs.

    Or else this will be very good for open source.
  • by IEEEmember ( 610961 ) on Wednesday August 24, 2005 @05:25PM (#13393066) Journal
    The whole idea of resource sharing via networks is based on the fundamental concept of
    aggregation of demand. The expectation is, that given an operational network, relatively small demands from around a large geographic area can be aggregated and thus be satisfied with one or a few specialized centers. By aggregating specialized service production into fewer centers, the required services can be provided at lower costs than would result from provision of the equivalent services by means of a large number of smaller machines. This principle of demand aggregation, to obtain the advantages of scale, applies to hardware, to software, and to operational costs.
    From IEEE Computer Society magazine Computer, August 1973
    reprinted in August 2005 in the 32 & 16 Years Ago column.
  • Yeah, they metered us this way back at college. It was supposed to simulate a "real" computing environment. We got a quota each semester. An hour logged in to a Sun was the same "price" as 1000 pages off the laser printer. I printed a lot of manuals. A LOT of manuals.
  • Sun is offering processing and storage in a pay-as-you-go arrangement of $1 per CPU per hour

    Sounds exactly like the timesharing model I used on a Burroughs B5500 system circa 1970.

  • "I've tried A! I've tried B! I've tried C! All the while the aircraft is hurtling flaming ball plummeting out of the sky like a - I've tried D! I've tried E!"

  • What CPU will it be? A 4MHz 8080? Or a Pentium II? Or a 1.0GHz UltraSPARC V? Or is it only one kind of CPU? If so, which kind, what speed? And will the $1/hour/CPU increase as they upgrade to new processors, or will it stay constant?

    Wouldn't it be better to offer, say, $1 for each million MIPS? Would be a lot more straightforward.
  • To really take advantage of a platform like this, you need to write code tailored for it. Who is going to risk spending large sums of money and lots of time developing for such a platform when it is controlled by a single-vendor and could disappear at any time?
  • As expected, most comments are about "who's gonna pay for this" and "it's cheaper to run your own server".

    But think about business models where the grid provider sells not only CPU cycles, but also trust.

    Scalable web hosting: Your PHP code is replicated on demand to as many grid servers as needed to handle your peak loads. The grid provider guarantees that server-side code and data remain confidential. End of the Slashdot effect as we know it.

    MMORPG: Small startups can deploy worldwide networks of game s

    • by RelliK ( 4466 ) on Wednesday August 24, 2005 @07:29PM (#13393864)
      Rendering farm: Your CG movie is due to premiere next month, and your 10,000-node rendering farm can't complete the job in time. Wouldn't you pay extra $$ to anyone who can save the day and guarantee that screenshots won't be leaked to the Net?

      And how the fuck are you going to transfer *hundreds of gigabytes* of data required to render a frame over the internet? How are you going to receive the data back? (2MB - 12MB per layer per frame).

      Does that thing even have Renderman installed? (at $5k/CPU I highly doubt it). Does it have Shake? Does it have Houdini? Does it have Maya?

      Besides that, how the fuck are you going to get approval to send _anything_ out of the studio? You obviously have never worked in the industry.

      I'm also skeptical as to whether there is any use for this. What sort of environment do they run it on? Solaris/SPARC? Solaris/x86? Linux? Windows? What sort of software does it have installed? Would it ever be possible to replicate the in-house environment on this "grid"? (you know, with all the custom software, directory structure, environment variables, aliases, etc.) I know for a fact that there is no way we could outsource our rendering to Sun even if we tried.

      The whole "CPU-hour" thing is a very nebulous concept. Environments differ wildly from one company to another, so you can never have a universal "CPU grid" in the same sense as you can have an electric grid.

      • "...Besides that, how the fuck are you going to get approval to send _anything_ out of the studio? You obviously have never worked in the industry..."

        That's a very, very shortsighted POV. Right now, your Tax Returns have a good chance of being done in India. Companies are hiring temporary consultants *daily* to do highly confidential coding and other development. These things are handled quite easily with appropriate controls and disclosure agreements (are there problems and do things fall through the cra

      • Uh... why not use the distributed rendering client that Maya and Houdini have standard? Shit, RenderMan loves to work that way... I live in a flat and have lots of buddies help me out when I'm rendering something... all over TCP/IP (over a LAN, true, but ANY IP could take/use/send the data).

        "Does that thing even have Renderman installed? (at $5k/CPU I highly doubt it). Does it have Shake? Does it have Houdini? Does it have Maya?"

        It doesn't need to. It just needs the distributed rendering client. And your studio...
  • yay for slowlaris (Score:3, Insightful)

    by Yonder Way ( 603108 ) on Wednesday August 24, 2005 @06:57PM (#13393684)
    Not all CPU hours are the same. An hour on a moderately fast SPARC processor is not worth the same as an hour on a moderately fast Intel Xeon, AMD Opteron, or PowerPC.
  • Flexibility (Score:3, Insightful)

    by tez_h ( 263659 ) on Wednesday August 24, 2005 @06:58PM (#13393688) Homepage Journal
    Others have noted that the cost of a bespoke system is lower. They slightly ignore the fact that this cost is aggregated over some time period, like a few months or a year, during most of which the computer may sit idle if there aren't jobs to fill the time. But Sun's solution offers granular computing time for any number of CPUs at a linear rate, meaning you don't pay for time you don't use, and you can pay for lots of CPUs without penalty. I would imagine this convenience, plus the additional conveniences of infrastructure, maintenance, security, updates, etc., are what you're paying $1/CPU/hour for.
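    The break-even is easy to sketch. With illustrative figures (not Sun's or any vendor's real numbers), owning only wins if you keep the box busy:

```python
# Assumed: a 4-CPU server costing $20,000 over its life (purchase
# plus admin), versus renting the same 4 CPUs at $1/CPU/hour.
server_lifetime_cost = 20_000   # dollars, hypothetical
cpus = 4
grid_rate = 1.0                 # dollars per CPU-hour
breakeven_hours = server_lifetime_cost / (cpus * grid_rate)
print(breakeven_hours)  # 5000.0 hours of flat-out 4-CPU use
```

    Below roughly 5,000 hours of full utilization (about seven months running nonstop), renting comes out cheaper under these assumptions.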

    Since Sun is attempting to utilise economies of scale, prices would presumably become more and more accessible as this gets more popular.

    -Tez

  • Missing the point (Score:3, Interesting)

    by tsotha ( 720379 ) on Wednesday August 24, 2005 @09:21PM (#13394548)
    I don't see why so many people think this is aimed at people doing massively parallel processing. It's not.

    I'm thinking of my own employer, who's got hardware up the wazoo that mostly just sits around heating the building. Take payroll, for instance. They probably run some payroll batch job a couple times a month, then the rest of the time the computer that does payroll is just sitting around. Sure, you could run other stuff on it, but then when it came time to do the payroll everything would run slow and people would be upset.

    The way I see it, this is perfect for all kinds of periodic batch-type business applications where you really want to have a dedicated machine but it won't be utilized all the time.

    Also, machines that mostly sit around have about the same maintenance requirements as heavily-used hardware. They still need OS patches, security patches, backups, etc. And they need to get modified when the new head of IT decides logs should go in /var/tmp/messages instead of /var/adm/messages or whatever. I know my employer could probably get rid of 2/3 of its data center and a corresponding fraction of the high-priced administrators currently making everything run smoothly.

    You could argue companies can do the same thing by balancing existing hardware, i.e. have one box host multiple "bursty" applications so the CPU doesn't have much idle time. But that takes lots of effort to manage, and it doesn't leave you any spare capacity when you need it. This way you don't need any spare capacity, and when your business grows you never have to worry about running out of rack space or lead times for new hardware.

    The downside, of course, is you're trusting Sun to have the capacity available when you need it. In a way, it really is like a utility - Sun will be hoping its customers' "bursts" will all average out to some manageable load. I wonder if it's true.
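    The averaging-out hope can be sketched with a toy simulation (all numbers are made-up assumptions, nothing to do with Sun's actual customer base): independent bursty customers rarely peak at the same time, so aggregate demand stays far below the theoretical worst case.

```python
import random

# Toy model: 100 customers, each bursting to 50 CPUs with 5%
# probability in any given hour, idle otherwise. Track the peak
# aggregate demand over many simulated hours and compare it to
# the 100 * 50 = 5000-CPU worst case.
random.seed(1)
peak = 0
for _hour in range(10_000):
    demand = sum(50 for _ in range(100) if random.random() < 0.05)
    peak = max(peak, demand)
print(peak)  # well under the 5000-CPU worst case
```

    Under these assumptions the provider can provision a small fraction of worst-case capacity and still almost never run short - the same bet electric utilities make.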

  • ...and (Score:2, Funny)

    by thepawn ( 910023 )
    At 6:18 pm on August 24th, skynet became self-aware.
  • I understand the economics of renting CPUs for big infrequent calculations, but how do you transfer the GB's of data required? For most business and research apps, aren't massive amounts of data typically attached to massive processing jobs?

    Won't the average internet connection take more time uploading the datafiles than it takes to process the data? Is it really practical to ship hard drives full of data to Sun just to run a few calculations?

    • yeah, good question. big deal they have the huge computer... it's poor you who has the huge database, possibly with company-confidential data. You not only have bandwidth issues, you probably have security issues. Like the article said, Sun is still ironing out the wrinkles in the Thin Client software. Once you get the data uploaded, you can store it there, but that may not help much if the data all change a bit in the 6 months that elapse between runs.
