Encryption Security

Wired on the 'Breakup' of 24

binarydreams writes "Wired News has a pretty good article about Adam Beberg leaving distributed.net. It provides more details about the disparate goals of Beberg and distributed.net, and much of the background lacking in the two emails sent to the distributed.net announce list."
  • by Anonymous Coward
    SETI is crap. Period. Read, say, Ernst Mayr's "The Probability of Extraterrestrial Intelligent Life."

    I'm all for using spare cycles on your computer, but at least use them for something useful. SETI ain't it. My recommendation would be projects like the Mersenne Prime search at [].
  • I do not believe that the current split is the cause of the slow restart of stats. Nugget has been working very hard to get stats back online, and he's the one writing the new stats engine. They have the hardware; Nugget is just tying in the software. And new features are coming out every day. You just have to be patient and let the group of volunteers do the work.

  • Their next step is to take on OGR [], with Project Kangaroo.

    Alex Bischoff

  • Maybe OGR will help reverse the trend, but as things stand now, distributed.net is, if not dying, at least not growing nearly as fast as it used to, and a lot of older members are dropping out.

    My own team, for example, despite my best efforts, is down to about 10 active members, from a peak of 40 or so. Some of those have been people who cracked for a few days and then quit, which wouldn't worry me. Now, however, we're starting to lose people that have been cracking for many months, because they simply don't care any more.

    We've been working on RC5-64 for more than a year and a half now, with nothing more to show for it than a "percent checked" bar still in the single digits. At this rate, the best we're going to prove is that 64-bit encryption is pretty darned strong, which isn't exactly a goal to generate excitement in the masses.

    I've been doing distributed.net since the last couple months of RC5-56 - approaching two years now. I'm running the client on every box I've got (with the exception of my Novell server, which I gave up on after the first two times the client crashed it). My machines have been responsible for some 100,000 RC5 blocks. I'm currently cranking out 1.5 - 2Mk/s, which is enough to keep my ranking in the top quarter of active participants.

    And I'll drop like a rock if SETI@Home manages to produce a client that uses less than 20 fscking meg of RAM. They're at least doing something potentially useful. RC5-64 isn't.
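A back-of-the-envelope sketch shows why that progress bar barely moves (the rates below are illustrative guesses, not official distributed.net figures):

```python
# Why RC5-64 progress crawls: back-of-the-envelope only.
# All rates here are illustrative assumptions, not project statistics.
KEYSPACE = 2 ** 64                 # total RC5-64 keys
SECONDS_PER_YEAR = 365.25 * 24 * 3600

my_rate = 2e6                      # one participant at ~2 Mkeys/s
project_rate = 50e9                # hypothetical aggregate, ~50 Gkeys/s

years_solo = KEYSPACE / my_rate / SECONDS_PER_YEAR
years_project = KEYSPACE / project_rate / SECONDS_PER_YEAR

print(f"one 2 Mkeys/s box alone: ~{years_solo:,.0f} years to sweep the keyspace")
print(f"whole project at 50 Gkeys/s: ~{years_project:.0f} years worst case")
```

Even under these generous assumptions, the worst case is on the order of a decade, which is consistent with a "percent checked" bar stuck in the single digits after eighteen months.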
  • That's wonderful and all, but where is the expandable core we were all waiting for? V3 had the promise to expand distributed computing to nearly any problem one could think of, not hard-coded rules in clients to help compute a few things here and there. How about an autoupdate capability for the clients to upgrade their own cores online? It's just, well, getting very boring, really..
  • The project itself is slowly dying.. Are they going to continue cracking RC5 with nothing new for ANOTHER 5 years? I joined with the promise that the project would take on more problems, to be expandable with the arrival of the V3 clients.. What are they intending on doing in the future?
  • by Nugget94M ( 3631 ) on Sunday April 25, 1999 @02:18PM (#1917461) Homepage
    I apologize for any perceived vagueness in the original announcement from distributed.net. We all basically felt that a broad and more general treatment of the issue was the most appropriate tack to take in the initial announcement. It was very important to us to ensure that people realize that this divergence was amicable, friendly, and desired by all parties. To dwell too much on the specifics of the difference of opinion might be misinterpreted as drawing lines in the sand or bashing.

    While the actual issues are simple, they are fundamental; the difference of opinion between Adam and distributed.net is more related to development philosophies than it is to our respective visions. We are both still striving towards a next-generation, general purpose distributed computing protocol and implementation.

    Adam sees Cosm as very much his personal invention, and he wants to see his vision implemented. We are more interested in exploring in the direction of a more open development environment. Trying to co-mingle those two philosophies was difficult and ultimately damaging to the organization.

    Open-source is the holy grail of distributed computing and is arguably the single greatest task lying ahead of us. It also makes sense for this task to be the first we tackle as we move forward.
    I would say that it is by far the most compelling and desirable goal we've laid out.

    The move from our sub-optimal "security through obscurity" model (which was never intended to last as long as it has) to an open source model is not really an issue of just slapping on some "extra security", however. The concept of trusting work performed by untrusted code is the sticky-wicket of distributed computing. Zero-Knowledge Proofs, as treated to date, don't entirely address the issue in a compelling and aesthetic manner.

    I'm not sure anyone knows quite the best way to approach this problem, and it is our hope that by encouraging discourse and open development we can, as a group, home in on the most appropriate choice for our various applications.

    Believe me, though, when I tell you not to read anything at all into the fact that we have been closed source to date. This does not imply any loyalty to closed-source or closed development. We are all very committed to solving this dilemma and we always have been.

    It has been very difficult for distributed.net, as an entity, to agree upon and convey a common and compelling focus when internally this was not the case. Unfortunately, much of our energy lately has been spent trying to reconcile two distinct and at-odds design philosophies.

    Ultimately we all decided that it was no longer prudent to try to come to an agreement, and thus the decision followed that Adam and distributed.net should each proceed in their own desired direction.

    On a technology front, distributed.net's goals are to utilize a truly open development environment to develop the next generation of distributed computing client and server. We are committed to moving our codebase beyond the ultimately indefensible closed-source model and to an open source codebase. Not open implementation, but truly open development. distributed.net needs to begin living up to its name and distributing not only our client base but our brain trust as well.

    On an organizational front, our goals are unchanged. We seek to be the central standard for distributed computing. To continue to grow exponentially and expose as many people as possible to the concept of distributed computing and encourage them to become involved in the group. We wish to be the bar against which all distributed computing efforts are
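The "trusting work performed by untrusted code" problem Nugget raises has at least one blunt, well-known mitigation: issue the same work unit to several independently chosen clients and accept a result only on majority agreement. A minimal sketch, illustrative only — this is not distributed.net's actual protocol, and all names here are invented:

```python
import random
from collections import Counter

def verify_by_redundancy(work_unit, clients, replicas=3):
    """Dispatch the same work unit to several independently chosen
    clients and accept a result only when a majority agree.
    Sketch under stated assumptions, not a real protocol."""
    chosen = random.sample(clients, replicas)
    results = [client(work_unit) for client in chosen]
    winner, votes = Counter(results).most_common(1)[0]
    if votes > replicas // 2:
        return winner
    raise ValueError("no majority -- reissue the work unit")

# Toy demo: two honest clients, one that always lies.
honest = lambda wu: wu * 2
cheat = lambda wu: 0
print(verify_by_redundancy(21, [honest, honest, cheat]))  # majority says 42
```

The obvious cost is that every block gets computed `replicas` times, so effective throughput drops by the same factor — one reason the problem is still considered open rather than solved.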
  • But it doesn't really tell you much at all. "It's just that how we aim to do that is different." Okay... how? Why is it that he has to leave? Is it because he's going to do it for commercial purposes? Yes, we understand that the goals are different now, but why?

    Anything's better than the letters, which were written in the "lengthy but tell you nothing" style. I like the guys, I crack a lot of blocks, but sometimes they frustrate the hell out of me...
  • by hime ( 5963 )
    Well, you're welcome to download it, but the initial reactions have been pretty negative. Apparently it's rather large, requiring around 20 megs of RAM to run, and the amount of data it has to transfer is also quite large. I don't run Linux, so I haven't had the chance to play with it.

  • by hime ( 5963 )
    SETI@home in the past rejected an offer of help from distributed.net - they were offering to make a core for SETI so that DNET supporters could participate. SETI was not interested for whatever reason, so somehow I kind of doubt that SETI will be interested in Cosm.

    Besides, it seems to me SETI's more interested in riding the Paramount/Sun/etc money train so they can keep their funding. Kinda sad, really.
  • While distributed computing protocols would probably be useful to those who own many machines, or have access to them (in a university, say), I highly doubt that people would be willing to allow remote delegation of their spare cycles for someone else's pet project.

    Tasks like breaking encryption (especially when prizes are involved) is general enough that people would want to participate, and I think that using distributed computing for SETI searches is an excellent idea. But I'm not sure I'd want some 'group' somewhere delegating my spare cycles without some real compensation.

    Speaking of SETI, has there been any progress on getting this going? Such a better idea than silly encryption breaking lotteries.

  • "Cosm [is a set of] protocols designed to allow really large-scale distributed computing over the Internet," said Beberg, who will run the project. "Basically, the goal is to get distributed computing big."

    Anyone have any tech details on this? This could quite possibly lead to larger-scale rendering farms, and even Beowulf apps via the Internet. Don't know if conventional net connections could handle it just yet, though. distributed.net [] is doing something similar to the SETI@home project []. SETI@home implements a screensaver on computers connected to the net, and when the computer is idle, the screensaver uses the processing power of the computer to crunch numbers and look for signals from possible extra-terrestrial life. It's supposed to launch this month. (I signed up back in November. heh) The point of my madness is that whatever this new protocol may consist of, SETI might get use out of it.

    SETI@home is available for UNIX [] platforms RIGHT NOW! (Windows and Mac users have to wait. :)

    -- Give him Head? Be a Beacon?

  • Interesting. I wasn't aware of that.

    Nevertheless, I'm still going to download this thing anyway. I've always wanted to study the possibilities of other life in the universe. (Plus, it runs under Linux. YAY!!@%#)

    -- Give him Head? Be a Beacon?

  • Has anyone thought about building into the protocols a method of charging for your services? The distributed.net challenge has an uncertain payout - most of the reason to put more computers on the case seems to be just to score higher than everyone else - but it's also a 'charity' thing. We're donating time to a research project.

    For commercial applications, I think the intangible benefit of putting your computer on the net for someone else to use is almost nonexistent. The company is making money out of using your computer - why shouldn't you make some money in return? I think that it shouldn't be too difficult to arrange some scheme where you can get some monetary benefit (credits at internet stores, payments to an internet bank, or even money banked to a normal bank).

    There are lots of issues here - maybe I'll write them up on my own pages. Email me [mailto] (after taking out the anti-spam part of my address) with your thoughts!

  • by Duncan3 ( 10537 ) on Sunday April 25, 1999 @06:28PM (#1917469) Homepage

    Unlike what the Wired article was looking for, there was no internal conflict that led to the breakup.

    It was simply a matter that Cosm and myself have been headed one way, and others in distributed.net want to head another. So we parted ways. It was the best option for both Cosm and distributed.net.

    My goals since long ago have been to build a general purpose architecture for large scale distributed computing. A system that will allow projects all over the world to get done, while maintaining security and data integrity. That is what Cosm is.

    All the Cosm information that's available is at [] and more is going up every day.

    As has always been the plan, Cosm source will be public, both for development reasons, and security/trust issues. This is due to happen in a few more days (May 1), but could not happen until a design and framework were available to guide the code. A lot of very tough problems have had to be solved during that process. And now it's time to make Phase 1 happen.

  • 2 pts:

    Ernst Mayr's belief that SETI is pointless is questionable. There are hundreds of BILLIONS of stars in a galaxy, and hundreds of BILLIONS of galaxies (probably more galaxies in the universe than stars in a galaxy). So there are ~10^22 stars. Even if only a tiny fraction of these have habitable planets, and only a tiny fraction of those have life, an empty universe is a far-fetched thing. Besides, the sun is rather young compared to other stars, so there are many star systems with huge (3 billion year) head starts on getting intelligent life.

    Also, SETI isn't necessarily looking for a message. It may be more likely that we pick up the equivalent of a TV broadcast. Receiving a purposely sent signal may be unlikely, but it ONLY took us ~6000 yrs. to go from recording our history (civ. in Japan) to radio. Ever since the first TV broadcast we've been spewing out carrier waves. Coherent signals. So, it's pretty darn likely that there is a signal out there waiting for us to hear.

    There isn't much you can do at this time with free processor time. There's cracking codes, looking for primes, and SETI. Rendering is out because nobody has set up the proper organization yet (I think).
    It would be a better use of tax dollars to give vaccinations or something than to look for alien life. But free computer cycles can't be used for much in the way of socially useful things. So SETI@home is a good idea. Anyway, alien life is interesting (to the average luser). Mersenne primes are not. SETI@home will get the masses used to the idea of distributed computing. Until that happens, it won't be practically useful.

    Besides, I'd give processor time to both.

    (Sorry for the LONG comment)
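For what it's worth, the orders of magnitude in that star-count estimate can be checked in a few lines (the fractions below are arbitrary placeholders, not measured values):

```python
# Rough numbers behind the estimate above (orders of magnitude only).
stars_per_galaxy = 2e11     # "hundreds of billions"
galaxies = 2e11             # likewise
stars = stars_per_galaxy * galaxies          # ~4e22 stars overall

habitable_frac = 1e-6       # deliberately tiny, pessimistic guesses
life_frac = 1e-6
candidates = stars * habitable_frac * life_frac

print(f"{stars:.0e} stars -> {candidates:.0e} candidate planets with life")
```

Even after multiplying by two one-in-a-million fractions, tens of billions of candidates remain, which is the commenter's point.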
  • I think a large global distributed machine network would be an excellent idea. It's basically a giant computer time-share. :) This concept worked for mainframes across North America, so as long as abuse of the system can be prevented, I think this would be a great asset to people around the world.

    I would love to be able to render a Povray scene at 2000x2000 A0.8 in less than half a second. ;)

  • I always wondered why the stats engine had never been completely finished.


  • Oh well... so much for sarcasm.

  • by Otto ( 17870 )
    Sounds to me like it wants to be more of an open project type of distributed computing, with the protocol being the backend to handle the passing out of code/data..

    Sounds feasible, but with problems.

    Say we want to make a program which can work like distributed.net, but with an arbitrary program. The program would still have to be highly parallel, but most compute-intensive programs are. We would need:
    a) a protocol that can pass the bits of code out to the clients
    b) a protocol that can pass data back and forth between the clients
    c) a client that can run on several platforms, and execute the code given to it
    d) a whole bunch of clients

    This is surely possible, but several issues come up. The first one being, why the hell would I run a program on my computer that works for someone else? Where's my compensation?
    The easy answer to this is that you set up a pay scale. Your main computing center accepts jobs to be put into the system. They get paid for this. They subtract a small (or not so small) fee for their overhead, and the rest gets distributed to the people whose computers are doing all this work. Of course, not every computer is equal, so you have to pay according to work done. This is best accomplished on the servers by tracking, for example, the number of completed blocks received.

    Security also becomes a big issue. If this server is sending my computer code to execute, what if that server gets hacked, and the code is replaced by malicious code? People executing random code on others' machines need to take this into account. Perhaps on the client side, you execute the code inside a very well protected environment. Heck, use a stripped-down Java VM, or something similar.

    There's also security on the pay-scale side. What's to keep someone from hacking your protocol to send back bad completed blocks, and then get paid for them? If you could check that each block was correct, you wouldn't have to send out each block to be done, would you? So some form of authentication is needed, along with a system of tracking which blocks were done by whom.

    Anyway, it's a good idea. I'd participate in it, if it's done correctly.
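The requirements (a)-(d) plus the per-block pay accounting described above can be stubbed out in a few lines. This is a toy sketch of the server-side bookkeeping only; the class and method names are invented for illustration, and `check` stands in for whatever result-verification scheme is actually used:

```python
# Minimal sketch of server-side work accounting: hand out blocks,
# collect results, and credit whoever returned a verified result.
class WorkServer:
    def __init__(self, blocks):
        self.pending = list(blocks)      # blocks not yet handed out
        self.assigned = {}               # block -> client it went to
        self.credit = {}                 # client -> completed-block count

    def request_block(self, client):
        block = self.pending.pop()
        self.assigned[block] = client
        return block

    def submit_result(self, client, block, result, check):
        # Only pay for a block we actually assigned to this client,
        # and only if the result passes verification.
        if self.assigned.get(block) == client and check(block, result):
            self.credit[client] = self.credit.get(client, 0) + 1
            del self.assigned[block]
            return True
        return False

srv = WorkServer(range(4))
b = srv.request_block("otto")
ok = srv.submit_result("otto", b, b * 2, check=lambda blk, r: r == blk * 2)
print(ok, srv.credit)   # True {'otto': 1}
```

Note that rejecting resubmissions of an already-credited block (the `assigned.get` test) is exactly the double-payment concern raised above; a real system would also need the authentication layer the comment asks for.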
