Sun Grid Utility Goes Live for Employees 227
museumpeace writes "CNET is reporting that Sun Microsystems turned on its Grid computing utility, hosting large ERP applications for its employees to test out the server infrastructure and user acceptance of the Computing-as-metered-utility model. General availability is scheduled for October. The rates? "Sun is offering processing and storage in a pay-as-you-go arrangement of $1 per CPU per hour, delivered via an Internet connection". Sun is still retooling its Thin Client interfaces and support SW. Experts quoted in the article wonder if Sun can make any money this way." Slashdot also covered the original announcement back in February.
Competition. (Score:5, Funny)
granted, there's more windows boxes than solaris.. (Score:2)
SunOS: the reason firewalls were invented.
Everything Old is New Again (Score:5, Insightful)
I'm not even going to entertain the idea of having MY data stored on another (Microsoft/Sun/etc.) server, and paying for the rights to access/modify it.
There is a reason it's called the PC, and not a dumb terminal.
Re:Everything Old is New Again (Score:5, Informative)
There are no dumb terminals - only dumb users.
This isn't targeted at PC users. This is for (for example) the hedge fund that needs 50 machines for 8 hours, once a week, to run a complex model. This gives them the power they need for a fraction of the price of the raw hardware, and they don't have to pay anybody to maintain it.
I've had projects where I really wanted 1,000 CPUs for a week, just so I could do scalability testing. There's no way we could afford $1,000,000 to buy 1,000 machines just for that one test, but we could probably have swung $50,000 to get them for five 10-hour days or ten 5-hour days.
Re:Everything Old is New Again (Score:2, Insightful)
And what is wrong with mainframes? Putting all the computing power in one place, is about sharing that equipment between all users, making optimal use of the hardware. As opposed to everyone having their own box that comes with everything and the kitchen sink, but sits around doing nothing 98% of the time. Talk about waste...
I'm not even going to entertain the idea of having MY data stored on another (Microsoft/Sun/etc.) server, and paying for the rights to access/modify it.
Re:Everything Old is New Again (Score:2, Interesting)
I work producing a software product that I personally wouldn't use. Does that mean nobody would use it? Apparently not since my company is profitable and I get paid.
Also, 'backwards' is a matter of perspective. For ordinary home users who have trouble keeping their desktops clean and their start menus in order, don't keep backups of their data, and throw away their PCs every couple years because of viruses
Re:Everything Old is New Again (Score:2)
This is just the reincarnation of the mainframe era. Everyone (Sun, Microsoft, et al.) wants to put us back in the days where the storage/CPU and, most importantly, the applications themselves are in their "capable" hands.
The application is in the hands of the person creating it. You are purchasing scalable processing power.
I'm not even going to entertain the idea of having MY data stored on another (Microsoft/Sun/etc.) server, and paying for the rights to access/modify it.
Re:Everything Old is New Again (Score:2)
Re:Everything Old is New Again (Score:2)
Re:Everything Old is New Again (Score:2, Insightful)
OpenOffice (Score:2, Funny)
Seti and Folding @ Home^h^h^h^h Datacenter (Score:2, Funny)
Ouch that hurt... Sun's spirit will always be amongst us.
Decision still to be made on name (Score:2)
Just to get it done away with... (Score:2, Funny)
$1 per CPU per hour...the true money-making scheme here is that if you run Linux, they'll charge you the $699 for each processor on behalf of SCO.
So for 50 bucks an hour, you can run a Swing application almost without a performance drop.
With the licensing model, you can run apps with it, but you can't alter any data that passes through without our permission. Want to see the results of your
sun's shining (Score:2)
Seriously, I like the sound of this. One can argue about costs, etc., but at least they have something other than inertia that might encourage a scientific user to choose Sun.
Not for big problems, then (Score:2, Insightful)
So at 24 hours a day, $400 buys about 16 days of work. Add in 25% for "stuff" (electricity costs, etc., being generous...) and you're still saying that for any problem that takes 20 days or more, you're better off buying a throw-away PC and running Linux on it.
So it must be aimed at the smaller problems. Like what?
Simon
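Simon's break-even arithmetic above can be sketched in a few lines. This is a rough model, not a quote from Sun's pricing: the $400 throw-away PC and the 25% overhead allowance are the figures assumed in the comment, and the $1/CPU-hour rate comes from the article.

```python
GRID_RATE = 1.0   # dollars per CPU per hour (Sun's quoted price)
PC_COST = 400.0   # dollars for one cheap Linux box (assumed in the comment)
OVERHEAD = 1.25   # +25% allowance for electricity etc., per the comment

def break_even_days(pc_cost=PC_COST, overhead=OVERHEAD, rate=GRID_RATE):
    """Days of 24x7 single-CPU use at which owning the box becomes cheaper
    than renting one grid CPU at the given hourly rate."""
    return pc_cost * overhead / (rate * 24)

print(round(break_even_days(), 1))  # about 20.8, matching the "20 days or more" figure
```

Anything shorter than roughly three weeks of sustained single-CPU work favours renting; anything longer favours buying, before you count your own time.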
Re:Not for big problems, then (Score:5, Insightful)
for example (Score:5, Insightful)
Mind you the cost of chip design software is the limiting factor here, not the cost of hardware to run it on
Re:Not for big problems, then (Score:4, Insightful)
Maybe you could do it cheaper by buying your own CPUs, but you could be waiting two weeks for your Dells to arrive. How much is it worth to you to get an answer within hours or days versus a few weeks or months of waiting?
--paulj
Re:Not for big problems, then (Score:2)
Also consider the case where your problem needs 20 CPU-days -- in 8 hours.
Re:Not for big problems, then (Score:2, Interesting)
And if it's my computer, what management are we talking about? It's a program running on a computer. I start it. I wait. I analyse the results. What's to manage?
If I buy 100 of these things, you use a simple batch script (I wrote one at college in about 2 days). Typing 'batch ' was all that was required to start something. Typing 'batch list' gave y
Re:Not for big problems, then (Score:5, Insightful)
You're not paying for, and hence you do not have:
- data centre floor space with:
- heavy duty UPS
- generator backup
- climate control
- security
- redundant networking
- multiply redundant storage
- tape backup silos / HSM
- The 24x7 staff to:
- monitor security
- test the generator weekly
- monitor the backup processes
- monitor and maintain the network
- monitor and maintain the hardware
etc., etc. If you think your costs as "Joe Bloggs, the guy who runs a few Linux PCs at home" are comparable to a corporate affair, then you're simply kidding yourself, particularly when you're not billing yourself for your own time.
A lot of corporates have thought "Ah sure, it can't be expensive to run a few servers in our own 'data centre'", and they typically either under-estimate the costs or end up with very shoddy server facilities. Then they'll have reliability problems due to:
- servers overheating because they're stuffed into a cupboard (seen this)
- lack of staff expertise (all too common)
- utilities failures (they couldn't afford the large UPS + diesel generators + cut-over switches + electricians' expenses)
- the gradually increasing burden of maintaining installed plant, which, if not planned professionally, slowly but surely turns into a huge sprawl of unmarked cables, until it gets to the point where even simple rewiring tasks are a massive (and error-prone) undertaking.
Eventually, to a lot of these small corporations, locating in a managed data centre and letting someone else take care of the details becomes very, very attractive (particularly for corporates whose primary business is *not* computing).
You are almost certainly underestimating the costs.
--paulj
Re:Not for big problems, then (Score:2)
In the university environment I mentioned above, they already have all that. They were paying for it regardless of whether I pushed 10 boxes onto the shelves in the spare rack space and wired them up to the switch. Which is exactly what I did. With their blessing. "It'll be lost in the noise, go right ahead"...
So. Zero extra cost apart from electricity.
Most corporates have an IT dept as well, and I would expect the same to apply.
In the personal
Re:Not for big problems, then (Score:2)
Didn't say you were; you were, however, trying to extrapolate costs from what you think your home costs are.
They were paying for it regardless of if I push 10 boxes on the shelves in the spare rack-space and wire them up to the switch.
And that slack capacity, in terms of floor and rackspace and network, has a definite cost. So does planning ahead in order to balance cost against sufficient slack (people's time has a cost).
So. Zero extr
Re:Not for big problems, then (Score:2)
It takes tremendous resource-planning - this 20-day project. I mean, there's absolutely nowhere that any multi-national organisation, university, home, garage whatever could possibly put another computer. Christ no. Everywhere is *completely* budgeted for, for that 20 day period. For the space of a desktop PC. Of course it is.
And (to make sure those 20 days are properly accounted for) I'll have to employ at least a dozen
Re:Not for big problems, then (Score:2)
Ah, sorry, I should have put in a smiley. It was intended as good-humoured banter. My posts generally should be taken as such.
Everywhere is *completely* budgeted for, for that 20 day period. For the space of a desktop PC. Of course it is.
I did not say "budgeted for", I said it had a cost. Whether you track that cost and budget for it or not is a different thing. You may feel it well-worth it to simply provide for lots of
Re:Not for big problems, then (Score:5, Insightful)
Well that or you need to optimize your code, or get a faster machine.
That said, it probably isn't worthwhile to the guy with a $400 problem - more likely they are looking to appeal to the kinds of guys that want to crack 128-bit encrypted data streams in real-time, or run two neural networks against each other in a zillion games of chess in order to teach (evolve) their neural network, or crunch two terabytes of data picked up by an Indy race team over three days at the track. Brute forcing 1024-bit encryption is totally possible, but the data isn't generally valuable a thousand years after you start decrypting it. Throw enough horsepower to decrypt 1024-bit RSA in real-time and you will find yourself rich (or dead.)
Knowing the winning numbers to the lottery thirty minutes after they are announced is pretty worthless.
Knowing the winning numbers to the lottery thirty minutes before they are picked is worth a hundred million dollars.
Amazing difference having the answers an hour earlier makes - I'm not saying that these computers will give you that much of an advantage, but I'm still saying
Re:Not for big problems, then (Score:2)
Re:Not for big problems, then (Score:3, Interesting)
Who is going to use it? (Score:2)
The high-energy physics folks, who generally get government and university subsidies for their high-performance computing needs, and so certainly get computation much cheaper than $1/cpu-hour.
Commercial folks, maybe in the financial services sector, who are (rightfully) paranoid about security, and just aren't going to send their sensitive data from Wall Street to California, no matter how much SSL-this and triple-DES-that happens on
Re:Who is going to use it? (Score:3, Interesting)
What about 3D rendering? There are lots of people renting time on render farms right now to make deadlines. If I can run RenderMan on these CPUs, and the price includes storage for my rendered frames, it might be price-competitive with buying a lot of processing speed that I'm not going to be using 24/7.
Corporate Espionage issues? (Score:2)
Re:Corporate Espionage issues? (Score:2)
$1/CPU/hour is damn expensive... (Score:2)
$1,250 for a CPU-year, compared with $8,000/CPU/year for Sun's solution. So you'd better need BURSTS of CPU, not sustained CPU. And you'd better not be able to smooth out the burst demands with a batch-job system.
Re:$1/CPU/hour is damn expensive... (Score:2)
Of course, that's
Re:$1/CPU/hour is damn expensive... (Score:4, Insightful)
Add a few $65,000 / year staffers in there to install / support those $2,500 machines and you are looking at $13,500 per year (every year) per machine. I know, that's what my company bills my department for each server I have on the network.
Wrong scale (Score:2, Interesting)
(For real-time processes, you can even fix the clock cycles in advance.)
The advantages of doing things on this scale are that most heavy tasks will take in the order of seconds - at worst, minu
Wrong model (Score:2)
Re:Wrong scale (Score:2)
RIGHT scale (Score:2)
$ man 3 clock
clock - determine processor time used
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <time.h>
clock_t
clock(void);
DESCRIPTION
The clock() function determines the amount of processor time used since the invocation of the calling process.
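As a sketch of the same idea in Python: `time.process_time()` reports the processor time used by the current process, which is the Python analogue of the `clock()/CLOCKS_PER_SEC` calculation the man page describes. The helper name `cpu_seconds` is made up for illustration.

```python
import time

def cpu_seconds(fn, *args):
    """Run fn(*args) and return (result, processor seconds consumed) --
    the equivalent of bracketing a call with C's clock()."""
    start = time.process_time()
    result = fn(*args)
    return result, time.process_time() - start

total, used = cpu_seconds(sum, range(1_000_000))
print(total)  # 499999500000
print(used)   # a small non-negative number of CPU seconds
```

This measures CPU time actually consumed, not wall-clock time, which is exactly the quantity a per-CPU-hour billing model would meter.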
Flip Flop (Score:2, Insightful)
bang for the buck (Score:4, Interesting)
However, if you got a Linux whitebox to run this, not only would you have to worry about power costs, but also every other detail that comes with keeping your machine running. What about patches, upgrades, network, bad hardware, runaway processes, general administration, backups, storage, etc.? Most of the people here would be able to do the standard stuff that's needed, but I'm sure a business that needs "xyz" computed would gladly pay the 2x price. Not only would it do away with all the minor details, but they'd also have their results back in a significantly shorter amount of time. I'm too lazy to do the math right now, but I'd say a year of CPU time could easily be done in less than a month. That alone could be _the_ deciding factor and the justification for the expense.
Re:bang for the buck (Score:2)
Re:bang for the buck (Score:2)
Needless to say, I've decommissioned that machine and only turn it on when I need to use it. My fileserver / e-mail / web serve
Re:bang for the buck (Score:2)
Note to self: Engage brain before posting, please
'display over IP' infrastructure (Score:2)
2. gridhost % while 1
?
? ???
? echo PROFIT!!!
? end
Free for employees? (Score:2)
How long for a portage plug-in? (Score:2)
big margins (Score:5, Insightful)
Consider that if you have a moderately large data set to crunch, it's not at all uncommon for it to take 3 hours on a 300-node cluster. That's $1,800 if each machine is a dual-proc machine.
So suppose Sun has a 300-node cluster where each machine cost $1,800. Every 3 hours, one of the machines is paid for. In other words, a 300-node cluster is paid for in 38 days. Well, the hardware is, anyway.
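The payback arithmetic in that comment works out like this (all figures are the comment's assumptions, not Sun's published numbers, and it assumes 100% utilization):

```python
NODES = 300          # cluster size assumed in the comment
CPUS_PER_NODE = 2    # dual-proc machines
RATE = 1.0           # dollars per CPU-hour (Sun's quoted price)
NODE_COST = 1800.0   # dollars per machine (assumed)

# At full load the whole cluster bills this much per hour:
revenue_per_hour = NODES * CPUS_PER_NODE * RATE       # $600/hour

# Hours of full-load billing needed to cover the hardware cost:
hours_to_pay_off = NODES * NODE_COST / revenue_per_hour

print(hours_to_pay_off)       # 900.0 hours
print(hours_to_pay_off / 24)  # 37.5 days, i.e. the "38 days" above
```

The catch, of course, is the 100% utilization assumption; at 50% load the hardware payback period doubles, and none of this counts power, staff, or floor space.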
I really don't know who the main clients would be of this kind of service, however. I'm guessing that if your company can't afford a 10-20 node cluster (fairly cheap) and still needs to do large scale computing, renting CPU cycles from Sun would make sense, though it would very quickly cost more than the 10-20 node cluster would have. So it's really going to benefit customers who need large scale number crunching results more quickly than they can obtain them simply by building a smaller cluster and waiting for the results, or customers whose problems involve data sets that are large enough that they need to be distributed over 100+ machines in order to be solved.
Who has large data sets like that and no cluster access? Not university researchers, not government agencies, and probably not most firms doing significant number crunching.
So I see the niche as firms with large data sets and someone who can write the MPI code, but who lack the willingness or finances to invest in a cluster of their own.
In a year or two when the same service is selling for $0.25 per cpu hour it will be a much more compelling proposition.
Re:big margins (Score:5, Insightful)
Results in seconds.
Doing a quick read-through I did not see exactly how large the cluster is, but there is no reason Sun could not scale it to a few thousand nodes, providing results in a time frame that would be cost-prohibitive for most companies to match with their own clusters.
Come on, if you are running a huge simulation a few times a year, maintaining a 1000+ node cluster year-round is just not cost-efficient.
Re:big margins (Score:2)
You are correct that a company who needs a 1000+ node cluster a few times a year is a perfect client for the service. I don't know how many such companies there are.
Re:big margins (Score:2)
But there is a value improvement to the customer of getting the job done faster.
Economics of Sun Grid Computing (Score:4, Interesting)
2. cost for the box for the CPU
3. cost for data storage
4. cost for monitors
5. cost for cooling for above 1-4
6. cost for power for above 1-5
Now, ask yourself: will the price of power go down if oil hits $100 (the current median bet in the WSJ's oil-futures simulation, played with real dollars)?
Yes, there are ways to reduce those costs, but not everyone can.
Re:Economics of Sun Grid Computing (Score:2)
4. cost for monitors
I guess monitorS plural is technically correct. You probably want a minimum of two interface terminals.
However you certainly do not need or want hundreds or thousands of monitors for the hundreds or thousands of CPUs.
2. cost for the box for the CPU
Sort of. Just taking a wild guess here, let's say motherboards with 4 dual processors. So figure one eighth of the cost of a motherboard per "CPU". Figure 16 motherboards in a rack? OK, a rack is go
Computing@home (Score:4, Interesting)
What about releasing a grid client so everyone could earn some bucks by letting their CPUs work for others?
You know, like the spam bots on Windows, but getting money!
Re:Computing@home (Score:2)
Another problem is you'd have to do every computation on at least 2 different machines to have any confidence in the result.
Cool but what about licensing? (Score:2, Interesting)
So now, on top of the $1/CPU/hr, we have to buy a license for each of those CPUs.
Or else this will be very good for open source.
Aggregation of Demand (Score:3, Informative)
reprinted in August 2005 in the 32 & 16 Years Ago column.
Re:Aggregation of Demand (Score:2)
"I think there may be a world demand for about 5 computers" - Thomas Watson, founder of IBM
-
College (Score:2)
Re:College (Score:2)
Back in the days when we walked 6 miles i
Re:College (Score:2)
Old is new again (Score:2)
Sounds exactly like the timesharing model I used on a Burroughs B5500 system circa 1970.
To Paraphrase "The Right Stuff" (Score:2)
Lot of unanswered questions. (Score:2)
Wouldn't it be better to offer, say, $1 for each million MIPS? Would be a lot more straightforward.
Re:Lot of unanswered questions. (Score:2)
Given that processors get cheaper over time, you'd hope that the price never increases.
Pricing according to a Meaningless Indicator of Processor Speed doesn't sound like a great idea to me. Also, if the grid is space-shared, your job is consuming the same resources regardless of its IPC.
Re:Lot of unanswered questions. (Score:2)
Can't really see this being successful. (Score:2)
Applications that make sense (Score:2, Insightful)
But think about business models where the grid provider sells not only CPU cycles, but also trust.
Scalable web hosting: Your PHP code is replicated on demand to as many grid servers as needed to handle your peak loads. The grid provider guarantees that server-side code and data remain confidential. End of the Slashdot effect as we know it.
MMORPG: Small startups can deploy worldwide networks of game s
I can shoot down one of these (Score:5, Insightful)
And how the fuck are you going to transfer *hundreds of gigabytes* of data required to render a frame over the internet? How are you going to receive the data back? (2MB - 12MB per layer per frame).
Does that thing even have Renderman installed? (at $5k/CPU I highly doubt it). Does it have Shake? Does it have Houdini? Does it have Maya?
Besides that, how the fuck are you going to get approval to send _anything_ out of the studio? You obviously have never worked in the industry.
I'm also skeptical as to whether there is any use for this. What sort of environment do they run it on? Solaris/SPARC? Solaris/x86? Linux? Windows? What sort of software does it have installed? Would it ever be possible to replicate the in-house environment on this "grid"? (you know, with all the custom software, directory structure, environment variables, aliases, etc.) I know for a fact that there is no way we could outsource our rendering to Sun even if we tried.
The whole "CPU-hour" thing is a very nebulous concept. Environments differ wildly from one company to another, so you can never have a universal "CPU grid" in the same sense as you can have an electric grid.
Re:I can shoot down one of these (Score:2)
That's a very, very shortsighted POV. Right now, your Tax Returns have a good chance of being done in India. Companies are hiring temporary consultants *daily* to do highly confidential coding and other development. These things are handled quite easily with appropriate controls and disclosure agreements (are there problems and do things fall through the cra
Re:I can shoot down one of these (Score:2)
"Does that thing even have Renderman installed? (at $5k/CPU I highly doubt it). Does it have Shake? Does it have Houdini? Does it have Maya?"
It doesn't need to. It just needs the distributed rendering client. And your stud
yay for slowlaris (Score:3, Insightful)
Flexibility (Score:3, Insightful)
Since Sun is attempting to utilise economies of scale, as this gets more popular, presumably prices would become more and more accessible.
-Tez
Missing the point (Score:3, Interesting)
I'm thinking of my own employer, who's got hardware up the wazoo that mostly just sits around heating the building. Take payroll, for instance. They probably run some payroll batch job a couple times a month, then the rest of the time the computer that does payroll is just sitting around. Sure, you could run other stuff on it, but then when it came time to do the payroll everything would run slow and people would be upset.
The way I see it, this is perfect for all kinds of periodic batch-type business applications where you really want to have a dedicated machine but it won't be utilized all the time.
Also, machines that mostly sit around have about the same maintenance requirements as heavily-used hardware. They still need OS patches, security patches, backups, etc. And they need to get modified when the new head of IT decides logs should go in /var/tmp/messages instead of /var/adm/messages or whatever. I know my employer could probably get rid of 2/3 of its data center and a corresponding fraction of the high-priced administrators currently making everything run smoothly.
You could argue companies can do the same thing by balancing existing hardware, i.e. have one box host multiple "bursty" applications so the CPU doesn't have much idle time. But that takes lots of effort to manage, and it doesn't leave you any spare capacity when you need it. This way you don't need any spare capacity, and when your business grows you never have to worry about running out of rack space or lead times for new hardware.
The downside, of course, is you're trusting Sun to have the capacity available when you need it. In a way, it really is like a utility - Sun will be hoping its customers' "bursts" all average out to some manageable load. I wonder if it's true.
...and (Score:2, Funny)
How do you upload the datafiles? (Score:2)
Won't the average internet connection take more time uploading the datafiles than it takes to process the data? Is it really practical to ship hard drives full of data to Sun just to run a few calculations?
Re:How do you upload the datafiles? (Score:2)
Re:$1/CPU/hour? (Score:3, Insightful)
Re:$1/CPU/hour? (Score:3, Informative)
Re:$1/CPU/hour? (Score:2, Insightful)
Rather like virtual servers (Score:2)
You could apportion a similar costing structure to a normal e.g. unixshell/linode virtual server to charge by CPU hours.
Re:$1/CPU/hour? (Score:4, Insightful)
Actually I think this might be very appealing to research groups at universities.
As part of your grant proposal you include a flat cost for computer time rather than costing out hardware purchases. Not only that, but you can also start your project as soon as the money is approved, you don't have to go through all the hoops to buy, ship, house, and administer the hardware.
There may be peak periods.. (Score:2)
Chip companies need to run extensive simulations of the functionality of their chips before committing to actual silicon (which is a very expensive step). Although they tend to have large server farms to do this, these might already be busy.
When a (possibly even very small) change is made to the design of a chip you need to run a lot of
Re:$1/CPU/hour? (Score:5, Insightful)
But say you wanted to run the job ten times faster. You'd split it across ten CPUs. Each CPU would perform 1/10th the work, but in parallel, so the job gets done in 1/10th the time. But the total number of CPU-hours you've used remains the same. So you pay the same price but get the job done ten times faster.
If you wanted to do that yourself, you'd have to buy 10 CPUs and once the job was done you'd have a bunch of CPUs you didn't need.
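The point that the bill is the same however you split the job can be sketched directly (an idealized model: it assumes perfect parallel speedup and no communication overhead, which real parallel jobs rarely achieve):

```python
def job_cost(cpu_hours, rate=1.0):
    """Total bill depends only on CPU-hours consumed, not on how
    they are spread across machines."""
    return cpu_hours * rate

def ideal_wall_hours(cpu_hours, n_cpus):
    """Wall-clock time under perfect parallelism."""
    return cpu_hours / n_cpus

JOB = 100.0  # hypothetical 100 CPU-hour job
for n in (1, 10):
    # same $100 bill either way; only the elapsed time changes
    print(n, job_cost(JOB), ideal_wall_hours(JOB, n))
```

In practice communication overhead means the 10-CPU run consumes somewhat more than 100 CPU-hours, so the faster answer costs a bit extra rather than exactly the same.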
Re:$1/CPU/hour? (Score:2)
Without a barrier to entry, this market will race to the bottom, and the price will settle out at around the amortized cost of the CPU + power and HVAC.
I think Sun has priced themselves out of the market before it
Re:$1/CPU/hour? (Score:3, Informative)
You assume that all parallel jobs have a negligible communication overhead. Many parallel algorithms are communication bound, and these simply won't work when distributed across the Internet. Are you seriously compa
Re:$1/CPU/hour? (Score:2)
Doesn't have to be ragtag at all -- if I was faced with a supercomputing task and had the option of buying a 1024 node system or leasing the time from Sun, I might consider buying the system if I knew I could recoup investment by leasing the time. Every prospective customer is therefore a prospective competitor.
When that happens the market clea
Re:$1/CPU/hour? (Score:4, Interesting)
Sun is betting that there are many people/businesses that fall into the latter category.
Re:$1/CPU/hour? (Score:3, Informative)
Re:$1/CPU/hour? (Score:5, Insightful)
You buy time on it when you need a LOT of CPUs worth of stuff done NOW.
Imagine you have some projection software package that you need to run once a quarter for your company. You need the data within a week of the beginning of the quarter. You require 10,000 CPU hours to get the numbers all crunched. It's the only "big-computing" job you have.
On one computer the task would take you a little over a year (there are 8,760 hours in a year). That won't quite be up to the task; remember, you need the job done in a week. That's 10,000 CPU-hours to fit into 168 hours of real time. You'd need 60 processors chugging away for those 168 hours to get it done.
How much is a 60 CPU cluster going to cost you to build? It's not insanely expensive, but it's not cheap. It looks a lot better to you to build that cluster than to spend $40,000 a year though! Right?
Wait. Clusters take up space. A 70-CPU cluster (better add in a few for redundancy, since this job has to be done in time) is not going to fit in the broom closet. That floor space is going to cost you.
Hmm, those 60 CPUs throw off a lot of heat when they run. Better add some more cooling to the building. Another decent expense.
Damn, look at that electric bill from the extra 70 CPUs and the cooling for them. This nickel-and-dime stuff is starting to add up.
And now for the killer. You've got a new 70-CPU cluster. You're going to need someone to manage it. Cluster work is a bit different from what your IT staff is used to, and they're already busy with their current workloads. It's time to hire a guy to manage the cluster. BZZZZZZZZZT. That hire alone makes the $40,000 a year for grid CPU time a deal.
Work the numbers yourself. It's not really a bad deal if you only occasionally need massive computing.
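Working those numbers looks like this (the 10,000 CPU-hour quarterly job and one-week deadline are the scenario from the comment, not a real workload):

```python
import math

CPU_HOURS = 10_000    # quarterly job size from the comment
DEADLINE_H = 168      # one week of wall-clock time
RATE = 1.0            # dollars per CPU-hour
RUNS_PER_YEAR = 4

# Processors needed to finish 10,000 CPU-hours inside one week:
cpus_needed = math.ceil(CPU_HOURS / DEADLINE_H)

# Annual grid bill for four such runs:
annual_grid_bill = CPU_HOURS * RATE * RUNS_PER_YEAR

print(cpus_needed)       # 60 processors
print(annual_grid_bill)  # $40,000 per year
```

Against that $40,000, the owned-cluster alternative has to absorb hardware, floor space, cooling, power, and at least one dedicated admin, for a machine that sits idle roughly 48 weeks a year.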
Re:$1/CPU/hour? (Score:5, Interesting)
With Sun's service, you'll probably get the result within a few hours, not a week. If there's a problem the tests can be re-run with plenty of time before the presentation.
Of course, your bosses may be even more displeased about the extra $10,000 cost of the run than they would've been about another week's delay. Hope you talk fast!
Re:$1/CPU/hour? (Score:2)
Save money: procrastinate (Score:3, Interesting)
If you have computing job that will take 4 years on today's processors, a
Re:Save money: procrastinate (Score:2)
Re:$1/CPU/hour? (Score:2)
If you don't pay your 'hire' all year, you are going to need to find a contractor willing to deal with large gaps of time in between his use.
Now imagine you are the contractor. If one company that is only going to use you for 1 week every three months, and
Re:$1/CPU/hour? (Score:2)
Re:Photo from inside the facility: (Score:2)
Re:So I could use the internet (Score:2)
Re:No market there (Score:4, Interesting)
http://www.betanews.com/article/Microsoft_Heats_Grid_Iron_with_Bigtop/1104374194 [betanews.com]
http://www.microsoft-watch.com/article2/0,2180,1746291,00.asp [microsoft-watch.com]
It may be on the back burner at MS for now, but as we've seen many times if they perceive a market they're missing out on they can throw enormous resources at a project to get it to market.