Microsoft IT

Microsoft Finally up for Distributed Computing? 307

Posted by CowboyNeal
from the drinking-philosophers dept.
ReeprFlame writes "eWeek has reported overhearing Microsoft's plans to finally get into the distributed computing market. Considering that the Windows platform has never had the ability to parallel compute in the past, it leaves great potential to the company's operating system development. From current *nix systems we have today, such a grid proves very useful, especially in the serving arena. However, we are unsure of Microsoft's target for the software. Would it be an addition to home users computers as well as the server versions of Windows? As of now it is unclear, but Microsoft probably will bring this situation to life in the near future since it does hold alot of power for them over other platforms."
This discussion has been archived. No new comments can be posted.

  • Oh great... (Score:4, Funny)

    by Anonymous Coward on Saturday January 01, 2005 @10:02AM (#11233428)
    now we have to worry about the blue wall of death.
  • I know... (Score:3, Funny)

    by frickenhell (643246) on Saturday January 01, 2005 @10:04AM (#11233434)
    They'll be secretly using your CPU cycles to compile their latest version of Windows.
  • From current *nix systems we have today, such a grid proves very useful, especially in the serving arena.

    Keep in mind though that Windows clusters are existing. Of course this is not the same, but it's not like all servers are single-machines.
    • by goombah99 (560566) on Saturday January 01, 2005 @10:23AM (#11233500)
      the article poster seems to confuse parallel processing on a single machine with distributed computing. The difference is that in distributed computing each machine runs its own OS and does not share physical memory.

      distributed computing happens at the application layer. Thus if you can run something like an MPI library on Windows, you have the basis for efficient distributed computing. All you need is a scheduler and launcher to be able to launch a distributed application across the net. But virtually all of these are daemons, not strictly part of the OS. That level of system-independent abstraction already exists, so this should not be too difficult.
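      The scheduler/launcher-daemon idea above can be sketched with nothing but a stock Python install. This is an illustrative toy, not MPI: the queue names, port, and authkey are all made up, and real grid middleware adds scheduling, security, and fault tolerance on top.

```python
# Sketch of application-layer work distribution using only Python's
# standard library (multiprocessing.managers) -- illustrative, not MPI.
# A "scheduler" daemon serves two queues over TCP; workers anywhere on
# the network connect, pull tasks, and push results back.
import queue
import threading
from multiprocessing.managers import BaseManager

task_q, result_q = queue.Queue(), queue.Queue()

class QueueManager(BaseManager):
    pass

# Expose the two queues to remote callers.
QueueManager.register('get_tasks', callable=lambda: task_q)
QueueManager.register('get_results', callable=lambda: result_q)

def main():
    # Scheduler side: serve the queues (port 0 = pick a free port).
    mgr = QueueManager(address=('127.0.0.1', 0), authkey=b'secret')
    server = mgr.get_server()
    threading.Thread(target=server.serve_forever, daemon=True).start()

    for n in range(4):          # enqueue some work
        task_q.put(n)

    # Worker side: on a real grid this runs on a different machine,
    # pointed at the scheduler's host:port.
    worker = QueueManager(address=server.address, authkey=b'secret')
    worker.connect()
    tasks, results = worker.get_tasks(), worker.get_results()
    while not tasks.empty():
        results.put(tasks.get() ** 2)   # "compute" a task

    return sorted(result_q.get() for _ in range(4))

if __name__ == '__main__':
    print(main())  # → [0, 1, 4, 9]
```

      In real use the worker half runs on other machines; that separation is exactly the application-layer, OS-independent property described above.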

    • by Savage650 (654684) on Saturday January 01, 2005 @11:01AM (#11233628)
      Keep in mind though that Windows clusters are existing.

      Looking at the MSFT definition of clustering [microsoft.com], they describe two kinds of clusters:

      • network load balancing clusters ("[the type ..] that distributes and load balances network connections among servers, providing high availability and scalability for stateless TCP/IP applications and services.").
        Note the explicit restriction to "stateless".

      • server clusters ("[the type..] that the Cluster service implements. Server clusters are characterized by high availability.")
        Note they mention availability but not performance.
      ObJoke: MSFT renamed "Wolfpack" to "Server Cluster API", probably because they were sick of people describing it as "two dogs fucking" (As in: two beasts stuck together, pulling in opposite directions and howling in pain).
  • by Anonymous Coward on Saturday January 01, 2005 @10:08AM (#11233443)
    Don't the spammers and virus writers have this technology already in their botnets?

    I guess Microsoft is imagining a Be-- stop! put down that bat!

  • Windows is an overly-bloated OS which is very GUI-oriented and is not modular or flexible for cluster node usage. Processing nodes usually don't even have a monitor or keyboard, much less a GUI and a mouse. Windows isn't much use there. Nor can you strip out the parts you don't need, or customize the kernel for performance. Plus, Microsoft's incredibly expensive and anal licensing makes a Windows cluster not worth the effort or money. I mean, Linux's licensing cost is 0, and 0 scales infinitely ;)

    Say what
    • In other words: Windows is not ready for the cluster!
    • While you're probably aware of grid computing from a perspective of "huge server farm at research organization X", I think the more practical use is corporations that often have tens of thousands of extremely powerful workstations. These PCs are extraordinarily underused, and if there was some secure, reliable method of distributing processing across them (transaction calculations, actuarial processing, whatever) then that would be extremely valuable.
      • ... if there was some secure, reliable method of ...

        Do you know any single piece of software made by MS that you would
        consider secure or reliable?

        By your definition of workstation-clustering Windows already has the feature anyways.
        We're reading about its RPC capabilities twice a week aren't we?
        Just remotely inject your trojan of choice (sth like back orifice?) across the office and do whatever it is you want to do. Cluster computing needs application-support anyways (unless they come up with a MOSIX which
        • Solitaire? very secure and reliable.
          Pinball is pretty good, also.
          I like the VPN client/server system built into NT-Win2k. it's very reliable, and the security problems it has are fixable with a little work.
          Exchange 5.5 mail server is robust and pretty secure.

    • by Anonymous Coward on Saturday January 01, 2005 @10:30AM (#11233527)
      Processing nodes usually don't even have a monitor or keyboard, much less a GUI and a mouse. Windows isn't much use there.
      We have some 2500 Windows servers where I work. None of them have monitors, keyboards, or mice. If we need a KVM it's typically to get into the BIOS, not the operating system.

      Nor can you strip out the parts you don't need, or customize the kernel for performance.
      You most certainly can do both. It costs money, of course, but remember that we're not talking about trivial tweaks like compiling the kernel for your particular processor family. We're talking about hiring a team of programmers to extensively customize the kernel so it runs your specific application and nothing else. That costs a bucket of money, and compared to that the cost of a Windows source code license is not going to be a whole lot.

      I still feel that Linux would be a good bit cheaper, but we're talking big bucks both ways. And it's also worth mentioning that Microsoft's licensing model for "corner cases" like this is extremely flexible: they may give the source away at a significant discount just for the publicity. They've done it plenty of times before. Some of those 2500 servers at work run a custom-built NT kernel and we sure aren't a huge international company.

      • Does that require Windows Server? I was bred on Unixes of various sorts but spend a lot of my time on Windows. There are a number of operations I don't know how to do without a mouse, even on my own machine, much less on a remote machine.

        Even if I had a SSH/telnet-driven command prompt, I don't think I could kill a process on a remote machine, for example; I can do it only via the GUI. Is it just because I have a lot to learn, or is it a feature I don't have?
        • Even if I had a SSH/telnet-driven command prompt, I don't think I could kill a process on a remote machine, for example; I can do it only via the GUI. Is it just because I have a lot to learn, or is it a feature I don't have?

          rkill, but I think it's an installable service that only comes with Resource Kit.
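          (Aside for the parent's question: XP Pro and Server 2003 also ship tasklist/taskkill, which take a remote system name on the command line. The host and account names below are made up, and flag details are from memory — check taskkill /? on your box.)

```shell
rem List processes on a remote machine, then kill one by image name.
rem /S names the remote system, /U an account with rights on it.
tasklist /S fileserver01 /U DOMAIN\admin
taskkill /S fileserver01 /U DOMAIN\admin /IM notepad.exe
```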

      • yet each of those servers still runs the GUI. Just because you can't see it doesn't mean it's not in memory.

        At least with Linux you get two options: a low-memory command shell that shuts down when you log out, or X, which loads the application on the local processor and uses the remote machine only for the actual display. And when you're done it turns off, restoring memory to the system.

        Windows GUI never shuts off. It's always there.
      • I still feel that Linux would be a good bit cheaper, but we're talking big bucks both ways.

        Perhaps the lack of detail in your argument has given me the wrong impression, but it sure looks to me like you are bordering on soft FUD. If I misrepresent your argument then I apologize.

        You are likely referring to customization as rewriting kernel code or writing new code for inclusion in the kernel, which would be expensive either way (I suspect much more expensive for Windows because you would be paying for

    • by Bulln-Bulln (659072) <bulln-bulln@netscape.net> on Saturday January 01, 2005 @10:36AM (#11233549)
      Huh? Mac OS X is also very GUI-oriented, but that doesn't make it bad for clusters. I've only read positive feedback about Apple's Xgrid. http://www.apple.com/acg/xgrid/ [apple.com]
      So that's not really a reason why a Windows cluster won't make sense.
      Licensing costs are also not the biggest concern for big corporations.
      • by gstoddart (321705) on Saturday January 01, 2005 @11:09AM (#11233665) Homepage
        Huh? Mac OS X is also very GUI-oriented, but that doesn't make it bad for clusters. I've only read positive feedback about Apple's Xgrid.


        In this case, Mac OS X is sitting on top of a UNIX kernel -- a modified FreeBSD. Which means all of those parts aren't GUI oriented, and you get all of the same benefits of a UNIX with all of the eye candy that Apple knows how to make work well.

        Windows seems to have been built with a model that expects everything to want to be GUI based and it includes a lot of stuff geared towards that. As has been pointed out elsewhere, Windows seems to be taking networking and other stuff as add-ons without having been accounted for in the first place. Though that's probably changing somewhat over time.

        In the case of OS X, it will happily do both functions without saddling the non-GUI stuff with extra baggage.

        • Mac OS X is sitting on top of a UNIX kernel -- a modified FreeBSD.

          Wrong. OSX' kernel is XNU - a modified version of Mach. OSX (or better Darwin) includes a lot of FreeBSD code, but it's not just a modified FreeBSD.

          There's also a difference between Windows as a whole and just the NT kernel. The NT kernel isn't that bad. Most problems with Windows result from problems in the higher levels of the system - eg. IE.
          Problems with higher levels of the Windows OS are not necessarily a reason against c

          • Wrong. OSX' kernel is XNU - a modified version of Mach. OSX (or better Darwin) includes a lot of FreeBSD code, but it's not just a modified FreeBSD.

            Well, according to wikipedia [wikipedia.org] it's more of a hybrid. But, yes, I concede it's not just a re-worked FreeBSD, but to a process it offers an almost identical interface.

            There's also a difference between Windows as a whole and just the NT kernel. The NT kernel isn't that bad. Most problems with Windows result from problems in the higher levels of the system - eg

    • they do make an "embedded" version where you can cut crap away. Not that I think this is going to fly though... I like the poster: Would it be an addition to home users computers as well as the server versions of Windows? Considering all non-server versions only accept one telnet client at a time, keep on dreaming buddy. This is probably gonna get marketed as a distributed server thingy for large web or exchange clusters.
    • This has to be some sort of world record in bullshitting, the NT kernel is easily as modular and flexible as the Linux kernel. Microsoft can easily strip and optimize the kernel (the xbox is an excellent already-existing example, great use of the NT kernel).

      A number of years ago it was possible to actually listen to technical arguments on Slashdot, but it seems that all technical considerations have been deemed less important than slamming Microsoft at every turn.

      NT will work great in such a setting, if an

    • You don't need to have a GUI to run NT. It doesn't sound like something hard to implement - just code a "dummy driver" which does nothing.

      BTW, Windows Embedded allows customers to "customize" what parts of the kernel they want. They could do the same with a "cluster oriented" Windows version.

      And remember, even if Windows sucks for clusters, it doesn't mean they won't have success. Windows 9x was a crappy base to build an OS on; despite that, everyone bought Windows 95.
    • I dislike Windows, but most of what you wrote is wrong.

      The Windows GUI can be turned off, along with many of the other services that you won't need in a cluster. It is not even that hard. The MS knowledge base is a mess, but the information is there. There are many performance tweaks for the NT kernel that don't require a recompile. It should be noted that most Linux clusters use unmodified, or lightly modified kernels. Most admins feel that the slight performance gain (if any) is not worth the maintananc
  • could be good (Score:5, Interesting)

    by Cheeze (12756) on Saturday January 01, 2005 @10:12AM (#11233457) Homepage
    i always wondered why there's not an easy way to utilize all of the computers in a network to perform a task. Most of the computers on corporate networks are windows machines, and most of those are sitting idle 99% of the time. If there was a way to harness that power for something useful, like an oracle database, web hosting, mail hosting, etc, the whole network would not be bottlenecked by one overloaded server. Mosix kinda solves that problem, but on the linux-side only.

    If someone wanted to make millions of dollars, build something like that for windows and charge minimally for it. Better do it before Microsoft does.
    • Re:could be good (Score:2, Insightful)

      by LiquidCoooled (634315)
      Spyware and trojans have been doing exactly this for years now.
    • "i always wondered why there's not an easy way to utilize all of the computers in a network to perform a task. Most of the computers on corporate networks are windows machines, and most of those are sitting idle 99% of the time."

      Specifically for Oracle there is Oracle 10g.

      For various other classes of computation there are the following (plus others) on windows (and some are cross platform):

      * Condor
      * Entropia
      * United Devices
      * BOINC
      * IBM community grid
      * Vita Nuova's Inferno
      * Sun Grid Engine (coming soon)

      I
    • Re:could be good (Score:3, Interesting)

      by Just Some Guy (3352)
      I'd like to see distributed applications on a much wider scale. Right now my office is full of extremely underutilized 2.4GHz P4s. If it were possible to share the office's processing and storage conveniently, we could get by with an office of P-II 333s running at 50% capacity instead.

      Most of the machines are doing word processing, email, and other light-load activities. Very slow individual machines aren't useful because sometimes users want to run applications that require large amounts of

  • by John Harrison (223649) <johnharrison AT gmail DOT com> on Saturday January 01, 2005 @10:13AM (#11233458) Homepage Journal
    As of now it is unclear, but Microsoft probably will bring this situation to life in the near future since it does hold alot of power for them over other platforms.

    Does this make any sense? The rest of the summary is equally nonsensical.

    • by Scott7477 (785439) on Saturday January 01, 2005 @10:29AM (#11233523) Homepage Journal
      Here is the actual article; note that MS doesn't plan to have this ready to release until "near the end of the decade."

      A Peek Under Microsoft's Secret 'Bigtop'
      By Mary Jo Foley, Microsoft Watch
      December 29, 2004

      Microsoft officials have said little about the company's intentions in the grid-computing space. But that doesn't mean Microsoft is ignoring the evolving arena of grid/distributed computing.

      Microsoft is working on a skunk-works project that is code-named Bigtop, which is designed to allow developers to create a set of loosely coupled, distributed operating-systems components in a relatively rapid way, according to sources close to the company, who requested anonymity.

      Rather than attempting to tightly couple a few high-performance systems together, Microsoft is looking at the consequences of loosely coupling a larger number of moderately powerful computers to achieve a similar result.

      Bigtop's first commercial manifestation will likely be as some kind of large-scale project, most likely a distributed grid-computing operating system, the sources added.

      Bigtop is one of Microsoft's incubation projects. It falls under the domain of Craig Mundie, the Microsoft senior vice president and chief technical officer in charge of advanced strategies and policy, sources said.

      Bigtop consists of three components, all written in C#, according to developers who said they were briefed by Microsoft. These are:

      Highwire: Highwire is a technology designed to automate the development of highly parallel applications that distribute work over distributed resources, the aforementioned sources said. Highwire is a programming language/model that will aim to make the testing and compiling of such parallel programs much simpler and more reliable.

      Bigparts: Bigparts is code designed to turn inexpensive PC devices into special-purpose servers, according to the sources. Bigparts will enable real-time, device-specific software to be moved off a PC, and instead be managed centrally via some Web services-like model.

      Bigwin: According to sources close to Microsoft, Bigwin sounds like the ultimate manifestation of Microsoft's "software as a service" mantra. In a Bigwin world, applications are just collections of OS services that adhere to certain "behavioral contracts." These OS services can be provided directly by the core OS or even obtained from libraries outside of the core OS.

      Sources said Microsoft will likely make some sort of preview version of the Bigtop code available to the company's software-development partners by 2006. If and when the final version debuts, it won't be much before the end of the decade, sources added.

      It's not clear whether the Bigtop components will run on top of Windows when they are completed. But sources say that is what they are expecting at this point. End of Article

      I like their use of a circus term as a name for this project. It gives the impression of a bunch of clowns running around into each other and falling down. Kind of like MS systems on the web now.

  • Sun GridEngine (Score:2, Informative)

    by Anonymous Coward
    Gridengine just added Windows support:

    - Windows XP and 2000 (December 2004 availability)

    http://www.sun.com/software/gridware/ [sun.com]

    Gridengine's source can be downloaded from:

    http://gridengine.sunsource.net/ [sunsource.net]

  • Just plug an unpatched XP box into the internet. It will be part of the world's largest grid computer in less than 2 minutes.

    It will also hum the tune Zombie Rock!


  • Already done (Score:5, Insightful)

    by Progman3K (515744) on Saturday January 01, 2005 @10:17AM (#11233475)
    There are millions of Windows machines out there participating in a distributed SPAM relaying network.

    I imagine if Microsoft 'enhances' Windows to make this even easier, it'll make it even easier for spammers to write the next-generation spamming/joe-jobbing apps.

    Kudos, Microsoft!
  • hardware is the cost (Score:3, Informative)

    by mtenhagen (450608) on Saturday January 01, 2005 @10:20AM (#11233491) Homepage
    When you want to do large computations the biggest cost is the hardware, so you want to make optimal use of it by running software optimized for that hardware. Rewriting network-card software can give you 10-20% improvements for your specific application.

    On linux you can remove interrupts from the kernel if your app only needs polling. Stuff like that will never be possible with a closed source solution.

    Lots of people stopped using Solaris because of this.
  • No ETs yet... (Score:4, Informative)

    by ScentCone (795499) on Saturday January 01, 2005 @10:23AM (#11233501)
    ...but my SETI@home screen saver is one of the most stable apps on my XP machine. It certainly doesn't qualify as "grid" computing, but it feels awfully big some days.

    Of note: I've got some Win2K web servers running in a native WLBS load balanced rig, and those machines have been doing swell for four years now. They talk to a cluster of SQL servers, but that clustering really doesn't count... it's more like hot fail-over. The native load balancing of the web servers, though, has been pretty tight and has scaled very easily, at least within my mid-market universe.

    I know, I'm just asking for it with this post. Just wanted folks to know that it's possible to push a couple $million of holiday e-commerce through some pretty cheap white boxes running MS's stuff. And yes, my cheap admin help is glad there's a GUI for some of the chores they don't do every day. All right, flame me now. But you have to do it from a command prompt.
    • I suspect MS is looking for a way to squeeze some extra money out of this, perhaps implement some subscription services which they've been trying to do for a while. There would be a lot of smirks though, if IBM and Apple enter an unholy alliance and corner the low-end clustering market.

  • Microsoft are wonderful. Trying to sell things they don't have in order to make it look as if they are ahead of the pack.

    So if its announced in 2004/5 it will be "scheduled" for launch in 2007, but actually arrive in 2009.
    • The article said they plan to have it out by the end of the decade ... they didn't say WHICH decade, so we're probably looking at 2019, or maybe 2038.

      It's (the announcement) a trial balloon, along the usual Microsoft marketing lines (throw enough shit, some of it will stick). They did the same thing for years with the original versions of Windows, to try to keep the market from adopting competitors.

      Of course, they're too late - the free OSes have them beat already, and by the time Microsoft comes out with

  • by skinfitz (564041) on Saturday January 01, 2005 @10:44AM (#11233572) Journal

    Q. What do you call a cluster of Windows machines?

    A. A botnet.
  • by fmobus (831767)
    wow imagine a beowulf cluster of these... oh wait...
  • A giant Windows machine cluster that gets a virus.
  • by Ancient_Hacker (751168) on Saturday January 01, 2005 @11:08AM (#11233657)
    Last time I checked, Windows in all its multifarious versions has no way to run a program in a sandbox, such that the program is incapable of DoS'ing the PC by opening tons of windows, file handles, memory blocks, processes, etc. If the system isn't designed from the ground up to be compartmentalized, stable, and secure, IMHO there's little chance of grafting all these qualities on a decade down the road.
  • by Anonymous Coward on Saturday January 01, 2005 @11:09AM (#11233662)
    Summary of every post in this topic:

    This is bad. M$ is evil evil. *Cough* . Bloated, FUD, GUI, copied MAC, FUD, [nonsensical, nonsensical] bloated, *Cough*, I'm waisting my life ^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H. I can't believe people are so stupid to belive such M$$ lame FUD, propoganda [ nonsensical... ] Blue screen, Blue Screen!. Linux good. Why are M$$$ so stupid? Ha Ha, I'm so much smarter. *Cough* Blue Screen! this is like Clippy! [nonsensical, nonsensical], really crap. Mac good. Bad idea, unstable. Blue Screen! Open Source, Open Source! [ nonsensical... ]. M$ Bob. Zombie. Blue Screen, Blue Screen! Security ^H^H^H^H^H^H *cough*. IE, ahhh! ahhh! Blue screen. Stupid.
  • Does that mean that when (if?) Windows Longhorn boots up for the first time, the user will be offered a list of available botnets?

    That would be a major advance on the current behaviour of just selecting a botnet at random, a system that has annoyed some users.
  • DCOM, COM+ anyone? (Score:4, Insightful)

    by Otis_INF (130595) on Saturday January 01, 2005 @11:26AM (#11233726) Homepage
    Windows already has distributed computing built in, with transaction support that controls cross-machine/process transactions; it's available in every Windows box (2000/XP/2003). Furthermore it has object-level security settings, based on roles, integrated with for example Active Directory, so you can control which user can access/run which object.

    'Grid computing!!!111'... it's a buzzword. The technology has been available for many years; however, not a lot of software uses it, if you look at the many, many applications available.

    Considering that the Windows platform has never had the ability to parallel compute in the past, it leaves great potential to the company's operating system development.
    I don't know how much 'ReeprFlame' knows about windows, but it can't be a lot. :-/
  • Repeat After Me (Score:2, Informative)

    by codepunk (167897)
    All the MS fan boys on here need to repeat after me.

    Windows does not have clustering!

    Although they may have the capability of real clustering some day they do not have that capability today no matter how much your resident MCSE talks about his great exchange clusters etc. Windows can load balance and it can provide failover and it can run some distributed processing software but it cannot natively cluster.

    Linux on the other hand has the tools available to run a true cluster, failover, load balancing and
  • http://bofh.be/clusterknoppix/
  • A lot is TWO WORDS. Why is it so hard for you to do the jobs you are paid to do?
  • I think MS has finally admitted that until windoze can match *nix (LINUX, UNIX, OS X) in the distributed computing sphere, research is not going to touch their stuff.

    Fact of the matter is they have a pretty hard uphill battle ahead of them. The research computing community is as pro-linux and UNIX as any zealot here on slashdot.

    Nearly the entire U.S. government uses UNIX (mostly LINUX actually) within the supercomputing realm. DOE and NSF's supercomputing centers all run LINUX.

    We'll know how serious they
  • Ahem, (tap, tap, tap):

    Clustering Solutions for Windows NT:

    http://www.windowsitpro.com/Windows/Article/ArticleID/228/228.html

    http://www.amazon.com/exec/obidos/tg/detail/-/0130960195/102-3088378-9911361?v=glance

    http://research.microsoft.com/users/gray/Wolfpack_Compcon.doc

    I can't be the only one that had this book.
  • I don't think, given their current virus propagation, I'd want a grid of Windows boxes. I'd guess one gets a virus and it spreads to the grid. They need to resolve the security issues first, and then they can do grid computing. I'd say we won't see this working effectively for at least 5 years. Yes, they may have it "working" now, but I'd rather run a grid of *BSDs / *NIX than Windows. Just my 2 cents.
  • Wheredo peopleget the ideathat alot is aword?

    Could they change that idea, please?
