Encryption Security

distributed.net Contest Setback 92

meisenst writes "distributed.net is reporting that there was a problem earlier on in the CSC project (a few weeks ago) and that about 25% of the blocks will have to be re-done. Their announcement is here." We've gotten more than a few submissions about distributed.net showing more than 100% of all packets processed, but held off posting it until we had the official word. And here it is.
This discussion has been archived. No new comments can be posted.

  • Well, the bad part is that the CSC project was also affected. As most people know, the CSC project was supposed to be a very short key-cracking project. The RC5-64 project isn't affected as badly since it is a long-term project.

    Does anyone know what specifically caused the problems in the key server?
  • by CvD ( 94050 )
    This is really annoying... Not the fact that we have to do 25% over again, but the fact that it took so long for them to say anything at all, when, as far as I can tell from the article, the problem was already known before we reached the 100% mark?

    I will still continue to support them, however.

    Cheers!

    Costyn.
  • by cetan ( 61150 ) on Sunday January 09, 2000 @04:27AM (#1389577) Journal
    distributed.net is made up of humans, people make mistakes. It's not the end of the world, but it is a little annoying.

    On the plus side, with the current key rates the remaining 25% will be cracked within a week or two.

    The other plus is that the key may not be at the very end of the remaining 25% of the keyspace and therefore we will finish even quicker!
  • It's only about a week of additional effort... probably the most hit will be the RC-64 project, which has been diminishing its rate slightly recently because of people flocking to CSC... hopefully, just another week is needed before getting all the people working back on RC-64.

    Go distributed!
  • It's VERY important to note that "official" word about the CSC keyspace being over 100% was ALREADY given!

    It works like this: D.Net rechecks blocks from suspicious clients by handing out those blocks again. D.Net has said repeatedly that it gives full credit for EVERY block handed out in this way by the keyservers. AS SUCH it's easy for the total to go over 100% - this had NOTHING to do with the server problem. The article posted does not make this clear, and should be revised!


    Hey Rob, howabout that tarball!
    Oops... Another 24 hours now...
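The over-100% effect this comment describes is plain arithmetic: if every handed-out block earns full credit, including deliberately re-issued verification blocks, the credited total can exceed the keyspace. A minimal sketch (all block counts here are made up for illustration):

```python
TOTAL_BLOCKS = 1_000_000           # size of the keyspace (invented figure)

unique_done = 1_000_000            # every unique block completed once
verification_redone = 120_000      # ~12% re-issued to cross-check suspicious clients

credited = unique_done + verification_redone   # full credit for every hand-out
print(f"{100 * credited / TOTAL_BLOCKS:.0f}% complete")  # -> "112% complete"
```

Nothing is wrong with the per-user credit here; only the project-wide percentage is misleading, which is exactly the distinction the announcement failed to make.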
  • I wonder how many barrels of oil that wasted.
  • by Anonymous Coward
    I heard on IRC that it was due to the cipher text and plain text data getting corrupted and having the wrong data go out due to an unnoticed transient code bug (that was later fixed by other code changes).
  • Something was mixed up in the article. The more-than-100% of the keyspace checked was due to reissuing some blocks on purpose, to catch people submitting fake blocks and such to pump up their keyrate. So even without the error dnet made, they would have gone over 100% anyway.

    Also, RC5 was not affected -- not because it is a long-term project, but because the error was somewhere in the CSC scripts.

    And apparently they only discovered the bug just yesterday or so, not some time ago.
  • I agree with you. They reported this on the 3rd, here [distributed.net]. While it's a bit of a "setback," it's still fun to crack keys at this rate. I am glad they finally adjusted the visible keyrate as well. Go D.Net!
  • You know, I'm really getting tired of hearing bullshit comments like this any time distributed computing projects are discussed. HELLO? Some machines have to be up 24/7, mine included. And when things get slow, or even when they're not, D.Net is a perfectly good way to NOT waste energy resources on mere fan noise.
  • by D. Taylor ( 53947 ) on Sunday January 09, 2000 @04:47AM (#1389591) Homepage
    There is actually more than one problem with CSC, which is causing it to go over 100%

    First: dbaker (Daniel Baker) released an official announcement [distributed.net] explaining that the same blocks were being issued to multiple clients, to attempt to detect cheaters.
    Then dbaker released another announcement [distributed.net] in his .plan [distributed.net], stating that 9-12% of the keyspace was being duplicated.

    Second: nugget (David McNett), released an announcement [distributed.net] stating that there had been a problem with the keymaster generating invalid blocks, resulting in 25% of the keyspace being duplicated.

    So, one remaining question is, are they still sending out ~10% 'verification blocks'? Or have they abandoned that to allow us to complete the project faster?

    We have reached 112% due to verification blocks and could reach 140% due to 25% of the keyspace being corrupt. However, if 12% of the 25% new blocks are duplicated, then we could reach about 155%...
    --
    David Taylor
    davidt-sd@xfiles.nildram.spam.co.uk
    [To e-mail me: s/\.spam//]
  • Well, distributed.net participation should in theory be only idle cycles from machines that would normally be left on anyways, so the relative difference in power consumption between a running-but-idle machine and a computationally-intensive state should be relatively insignificant (much less significant than from a completely powered-off state).
  • The invalid keyblocks have already begun redistribution, so yes they will be reprocessed (correctly this time). The person who correctly resubmits a validly re-computed version of that block will also receive credit. It's not possible for the old previously distributed version of the block to capture the second-pass credit for the block because the invalidly computed blocks are now being screened and discarded as they arrive at the keymaster.
  • We've attempted to scale back verification blocks slightly now, however to help ensure the validity of the network, we're continuing to do re-verification work as frequently as deemed necessary.
  • Many people leave their machines on overnight for the sole purpose of distributed.net. I leave mine on overnight because I don't want to wait for a boot in the morning. heh. I also don't have to use my heater in the winter..

    Seriously, I'd like to see a calculation on how many barrels of oil per hour/day go to distributed.net.
  • While most of dnet's client software is open source, the keymaster and proxy software is not. The problem was found in the closed-source keymaster software. This bug might have been fixed sooner if the source were available. I don't understand why the source needs to be closed for the keymaster software. Proxy and networking, I understand the arguments for (though I do not necessarily agree with them).

    TomG
  • It is in effect a stats error. The problem is:

    To be fair to everyone, everyone is given credit for the block, if it is a 'virgin' one, or a reissued one (however, not if it is a duplicate block, they are filtered by the keymaster before reaching the stats).

    The way the stats server currently counts the percentage complete is simply to count all the blocks it is told have been completed, and divide that by the number of blocks in the keyspace.

    Because people are being credited individually for duplicate blocks, the total no. of blocks done includes these duplicate blocks.

    To fix it, the stats need to know if a block has been reissued, and if so, only give credit to the participant -- but not the whole effort, as doing the same block twice *doesn't* increase our keyrate.
    --
    David Taylor
    davidt-sd@xfiles.nildram.spam.co.uk
    [To e-mail me: s/\.spam//]
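The fix described above amounts to keeping two separate counters: per-participant credit (which reissued blocks should still earn) and project progress (which only unique blocks should advance). A toy sketch, with invented user and block names:

```python
participant_credit = {}   # per-user credit: reissued blocks still count for the user
seen = set()              # unique block ids actually completed
project_done = 0          # blocks that count toward the keyspace percentage

def submit(user, block_id):
    global project_done
    participant_credit[user] = participant_credit.get(user, 0) + 1
    if block_id not in seen:          # a reissued duplicate doesn't advance the total
        seen.add(block_id)
        project_done += 1

submit("alice", 1)
submit("bob", 1)     # verification reissue of block 1
submit("alice", 2)
print(participant_credit, project_done)  # {'alice': 2, 'bob': 1} 2
```

Both users keep their credit, but the project total only reflects the two unique blocks, so the percentage can no longer exceed 100%.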
  • I think that what they say in the announcement is that the RC5-64 project wasn't affected at all. The problem is only with the CSC code.

  • While the main article does do a good job of saying that we as D.net'ers have to re-do about 25% of the keyspace, it does not cover the editor's closing comment as to why we reached 100%. The real reason that D.net reached 100% is that we re-test somewhere around 10% of the blocks. This can be found in one of the portions of this document, dbaker's .plan [distributed.net]. I hope this clears up any confusion, as the two topics are related but not the same.

    J. Marvin
  • The percentage reported by statsbox has always been a "rough" percentage that was computed from the sum(creditedblocks) for all users. (The total number of blocks was easily assumed, based on the fact that it is a 56-bit contest.)

    That calculation was reasonable for contests where we did not have the potential of giving credit for a block more than once. However, with block re-verification (and now with the 25% of re-issued keyspace) there has been a lot of duplicate credit for the same blocks, so it's easy to see how the total number of credited blocks can exceed the number that were originally planned to be uniquely distributed.

  • by dattaway ( 3088 ) on Sunday January 09, 2000 @05:34AM (#1389609) Homepage Journal
    Ah, power consumption! My utility company provides the 125 volt standard; right now it's at 123.2 RMS volts at the outlet. Currently my two computers (desktop and laptop) on the UPS main at full CPU load and my 17" monitor at half brightness are using 1754mA [attaway.org], which makes for 216.1 watts. At 8 cents per kilowatt hour, that will cost me $12.44 a month.

    My monitor consumes 1055mA, or 130 watts, or $7.49 a month. Turning it off could be a big savings.

    The main computer required to host web pages, etc., consumes 600mA, or 73.9 watts, or $4.26 a month. That's under full load, cracking CSC, serving MP3's, and providing limited remote control functions for the house.

    My laptop with the screen off, consumes the remainder of the power.

    Let's see how much power is saved by turning off CSC cracking, MP3's, and the remote control functions... 1658mA, or 204 watts total. 12 watts saved? Well, there's too much going on in this setup, so the savings are insignificant.
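The arithmetic in this comment is easy to reproduce (figures copied from the post; a month is taken as 720 hours, and this naively assumes P = V × I, a simplification another reply below corrects for AC loads):

```python
def monthly_cost(milliamps, volts=123.2, cents_per_kwh=8.0, hours=720):
    """Naive P = V*I estimate of wattage and monthly electricity cost."""
    watts = volts * milliamps / 1000.0
    dollars = watts / 1000.0 * hours * cents_per_kwh / 100.0
    return watts, dollars

print(monthly_cost(1754))   # whole setup: ~216 W, ~$12.45/month
print(monthly_cost(1055))   # monitor alone: ~130 W, ~$7.49/month
```

The numbers match the poster's figures to within a cent of rounding.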
  • Unfortunately, we've reviewed the proxy codebase and there is very little useful code in it that can be usefully disseminated without weakening the [non]-security of our data protocol. These are the same problems that prevent us from releasing source to all parts of the client (we've already released most of the non-network parts of the client as source [distributed.net]).

    Quite truthfully, releasing binary-only clients still does not completely eliminate the possibility of sabotage, since it is relatively easy for any knowledgeable person to disassemble or patch binaries. This is actually quite a trivial task, so we urge you not to try. Indeed, security through obscurity is actually not secure at all, and we do not claim it to be such.

    We understand the problems that are preventing us from becoming fully open source and are working to solve them. We have already done a lot of research work in this area and are working towards an eventual solution that can be fully open source. The data re-verification introduced in CSC is part of this! You can check out some of our work in this area at opcodeauth [distributed.net]

  • As Tom stated, he/I understand the reason for not releasing the source to the proxies, which have to attempt to communicate securely. However, there must be a large amount of non-communications related code in the keymaster, which *could* be reviewed by other people if it were open source.

    What reasons are there for not releasing the source to the keymaster? (Excluding proxy communication code)
    --
    David Taylor
    davidt-sd@xfiles.nildram.spam.co.uk
    [To e-mail me: s/\.spam//]
  • From the announcement page:

    Participants will still receive full credit for their completed blocks, regardless of their validity. In the near future we will be supplying the stats server with the information it needs to properly discount the effects of any reverification work in the reported percentage.

    --

  • This problem probably would have been caught sooner had the keymaster source code been released. How about it, d.net guys?
  • ... to nugget. In all honesty, it can take a lot to admit that you fucked up, and that's what this is. As he said, they're human too, and mistakes happen. We should just be glad that this got caught before we were at like 200% and wondering what the hell was going on :)

    Now let's get cracking and finish this thing, and my applause again to nugget for getting it all fixed and admitting it.

    ---
    Tim Wilde
    Gimme 42 daemons!
  • The keymaster codebase is the same as the fullserver/personal proxy codebase, only compiled with different compilation directives to enable/disable different parts of the codebase. The only portions of the codebase that would not be necessarily restricted would be the low-level network socket wrappers that do nothing more than providing an abstraction around the platform-specific network i/o calls.
  • If they were to release the keymaster source, or the client source as written somewhere above, there could be rampant abuse of the source in order to produce what some people would think of as desirable results.

    We've seen this before, actually, and it's still in the process of being dealt with: the release of the Quake source to the hungry open source world. Immediately, those few (it's always the few) with suspicious morals began using the Quake source to cheat, effectively giving themselves impressive advantages.

    The same thing could most certainly be done with distributed.net source, especially if one knows how to trick the keymaster into thinking certain things (a block is done without really being done, a client submitted 100 packets instead of 10, who knows). It is for this reason that I not only thank distributed.net for -not- releasing their source, but applaud their decision to not do so.

    Sure, I'd be more than happy to know how they do what they're doing so well, and yes, the bug would probably have been noticed/squashed more quickly had there been a few thousand code monkeys jabbering away at it, but to be quite honest, the distributed.net team responded as quickly as they could, and in a very professional manner. So, there's really nothing to complain about, IMHO.

    meisenst
  • Something seriously wrong indeed. Seeing this note about the block corruption in the CSC client, I checked my client. I noticed that the new client had core-dumped on my Linux machine this week, while other programs are OK -- implying some corruption of a major kind.

    Has anyone else encountered this behavior?

    My client version is the v2.8003-448 client for Linux. My system is SuSE 6.3 (i.e. libc6). I run 2.3.x kernels -- but they haven't ever crashed -- so methinks this is a problem with the client binary.

    Yes, I have enabled csc and rc5 on the client.
    -ak
  • Interesting that you say that... as I recall, slashdot flamed the #$@! out of a group called SETI@home for making a similar mistake.
  • It is not human error that is being challenged here, but the way d.net does things, using closed-source code for "security" reasons; i.e., they should have opened their code long ago. Why do they think a good attacker would not be able to reverse engineer their code? It's written in pretty tight C, and the main crypto routines are in asm, which makes the program extremely easy to analyze (provided no special efforts were made to thwart disassembly/debugging, which is highly platform dependent and thus unlikely to be well done).

    This is a call to d.net: stop fooling around and open your client! Even if it takes shutting down all submissions for a month or however long, we need TRUST in what we are doing. Who shall we blame when no key is found at the end of RC5-64? Not nugget, but the closed source approach they use.
  • Hmm... running code on untrusted machines is still a big problem, and there will probably be no easy solution. Have you guys thought of downloading something like an applet-type, server-compiled binary at client run time, which plugs into the untrusted code and executes in a sandbox? That may be more difficult to hack than the usual "download from our server and have time to hack it with a debugger" situation. You could also change the crack algorithm in the plug-in for different contests without updating the old codebase.
  • I think perhaps you missed a few lessons in your electronics class. Power is not equal to Voltage times Current for AC systems unless the power factor is 1. For the switching power supplies in computers the power factor is usually in the 0.6-0.7 range so the actual amount of power you are using is 60%-70% of what you are indicating.
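The correction being made here: for AC loads, real power is P = V × I × PF, not V × I. Applying the power-factor range from this comment to the figures in the earlier measurement post:

```python
volts, amps = 123.2, 1.754            # figures from the measurement post above
apparent_va = volts * amps            # V*I gives apparent power, in volt-amperes

for pf in (0.6, 0.7):                 # typical switching-supply power factor
    print(f"PF={pf}: {apparent_va * pf:.0f} W real power")
```

So the "216.1 watts" reading would correspond to roughly 130-151 W of real power actually billed.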
  • > We've seen this before, actually, and it's still in the process of being dealt with; the release of the Quake source to the hungry open source world.

    I know this is slightly offtopic, but a week ago when I read Jeff "Bovine" Lawson's .plan of Jan 6 [distributed.net] and then his treatise on operational code authentication [distributed.net], I thought I was reading a Slashdot thread on how to correct the problem of cheating in an open-source Quake.

    It seems that both distributed key cracking and distributed Quake playing :) face many of the same cheating issues. It's clear Jeff and co. have thought a lot about what problems would have to be "solved" before they can go open source (which they hope to in the future). I highly recommend that anyone interested in the whole Quake cheating problem read Jeff's thoughts first. (He even mentions why Netrek-style blessed binaries won't work in general.)

  • I used to have 4 different email accounts of mine cracking keys for them on boxes at home, school, work, and I even tried to recruit friends to help me work on it. We had a decent team, and we were doing several thousand keys a day. It was fun.

    But we've had problems with bugs in the clients, (like this extremely annoying transparency bug on windows) also the fact that there isn't a way to control your team efficiently - and to top it all off, they recently had a problem over there that DROPPED ALL OF MY MEMBERS FROM MY TEAM.

    That was the last straw. That's a serious problem since 2 of the email addresses on my team aren't valid anymore so we can't rejoin that email address to the team. The boxes that are running using that email address are in a location we can't go to to change the setting there.

    So, after all of the hassle, it stopped being fun. Particularly when they dumped so many people from their teams. I stopped participating, and distributed.net lost about 20 clients. Maybe I'll try SETI or some other CPU donation project that has their shit together.

  • dnetc v2.8003-448-CTR-99121202 client for Linux (Linux 2.2.13). Running on SuSE 6.3 with a re-compiled but stock 2.2.13 kernel, no problems here.
  • I understood from reading the d.net documentation that there is an issue of blocks that were never returned to the keyserver. These lost blocks were counted in the percentage, but once the project was "complete", the non-returned blocks would have to be redistributed, assuming that the key had not already been found.


    That could account for at least part of the 25%. Especially since the Windows client tends to lose the packet that it is working on when the machine crashes.

    Yes, that client release was OK. The last two new clients (the 2000 releases?) are problematic.

    I just downloaded current client:
    x86/ELF/glibc2.1/MT v2.8005.453 released on 2000-01-09.

    and this one also has a problem with SIG_SEGV core dumps.

    The gnu libc being used is libc-2.1.2-31

    -ak

  • My machine is a K6-2 -- was your machine a K6-II? These could be problems specific to the computation cores.
  • Why they think that a good attacker would not be able to reverse engineer their code?

    First of all, the problem appeared on the _server_ side, not on the client. Opening the source of the client wouldn't have made any difference.

    Second, this has been discussed several times before. distributed.net is well aware that their code can be reverse engineered. But it does raise the bar. And they'd rather have a few attempts to disrupt the contest than many. And it isn't that distributed.net hasn't tried a fully open source client. They did. And the script kiddies had their fun, so now the source is partially closed.

    It's written in pretty tight C, and main crypto routines are in Asm, which make the program extremely easy to analyze

    Yeah, especially since you can download the source of the crypto routines. Isn't it remarkable that the whiners about open source/closed source don't even know most of the code can be downloaded?

    -- Abigail

  • probably the most hit will be the RC-64 project

    Uhm, no. It only affects CSC.

    -- Abigail

  • Oh please. DNET is a dying cow for various reasons. Their biggest problem is their lack of speed in getting anything done. They had a CSC core long before they deployed it. They've been working on OGR for two years (I was the first person to work on it). They've had very simple solutions to prevent "cheating" for years, yet they've done nothing. It's taken how long to get updated clients, and some still haven't been updated? They want their name plastered all over everything, but they fail to give credit to those who have contributed to their work. I did some work to retool the stats processing, but I guess it made nugget look too bad, as no one ever looked at it -- a full day's run can be done in less than 15 minutes and not require gigs of "temp" space; in fact, sans ranking, it could be near real-time. And to make matters worse, they "secretly" harass their founder and former president for several months over a domain name they never used and never will. I bet you didn't know DCTI filed three suits against Adam L. Beberg [iit.edu], Mithral Communications and Design, Inc. [mithral.com], and "those in concert therewith" a few weeks ago -- right before Christmas.

    Bovine's explanation of netrek only shows his lack of understanding of how "blessed binaries" work(ed). There is no "date" in the key itself. Expiration is completely artificially added by the key service agent along with other bits of data -- it was not to prevent people from recovering the secret key but more to push people to upgrade their client(s). Netrek uses (used) 128-bit RSA numbers ("secret key", "private key", and "public key"). In the modern world, factoring a 128-bit RSA number is not difficult. Recovering the secret key embedded in the client is _NOT_ easy. Unlike Xing, the key is not in one place; it's randomly scattered throughout a very large static binary. (The key is generated and destroyed by the build.) In ten years of working with netrek, I've never known anyone to recover the key from the client. I have personally regenerated the secret key by factoring the public data, but that was (at the time) a very non-trivial task.

    I must admit, I'm starting to truly dislike DCTI and their current attitude. The CSC clients are duplicating work. They didn't tell anyone this until it was obvious -- over 100% of the blocks have been checked in. They continue to have trouble with stats -- two years and counting now.
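The claim above that factoring small RSA moduli is "not difficult" in the modern world holds because classic methods tear through small composites. As a toy illustration only (this modulus is far smaller than even 128 bits, and the primes are arbitrary well-known ones), Pollard's rho recovers the factors instantly:

```python
import math
import random

def pollard_rho(n):
    """Find a nontrivial factor of a composite n via Pollard's rho."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means bad luck; retry with new params
            return d

random.seed(0)
n = 104729 * 1299709                     # toy "RSA modulus" built from two known primes
p = pollard_rho(n)
print(sorted((p, n // p)))               # [104729, 1299709]
```

Scaling this idea up (with stronger algorithms like the quadratic sieve) is what made 128-bit RSA moduli practical to factor by 2000, while extracting a key scattered through a binary remains a separate, harder problem, as the comment notes.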
  • Yes, "people" make mistakes. However, most people fix their mistakes and don't hide behind them. DCTI has had numerous events of this nature (tho' most aren't this severe). It's become an almost de facto standard to cover everything up. How many remember the bug found in the RC5 ultra core? I cannot find any mention of it at all in the .plans, despite Remi talking about it. [It was converted from the alpha core and there was a slight error that caused it to increment keys incorrectly, with the net result that part of the key block was never checked. Every block submitted by a client that _could_ have been affected got reprocessed. Unfortunately, that meant throwing away the work of _every_ sparc of that version -- even those that didn't use that core, as there was no way to tell if it was an ultra or non-ultra client.]

    So, how much other shit have they not told anyone about?
  • C++ actually.

    Their "security" doesn't stand up to even the simplest analysis. No debugger or disassembly is required to figure out how the blocks are being scrambled. Unfortunately, DCTI's existence depends on that scrambling remaining a secret. Once the world knows how it works, DCTI is doomed. There's no way they could prevent cheating. And there's no way they could replace the technology fast enough to keep a user base. (Plus, that would necessitate replacing every client in existence.)

    The more code they release, the easier it gets to recover the block scrambler technology. That not withstanding, there's a lot of code they are sitting on that could be released. The only file(s) that must be protected are those linked with block scrambling.

    PS: The "scrambler technology" was written by Duncan (Adam Beberg) years ago -- long before DCTI existed. Yet DCTI claims ownership of those few dozen lines of code.
  • Sure, if you do not feel that it is fun any more, take your ball and go home. But now that you have had your little fit, maybe you want to act like a grown-up for a little while and take a closer look at the situation.

    It is not distributed.net's problem that you cannot access your systems to update information that is no longer valid. Yes, they have had some issues as of late, but they have made a real effort to correct them. Maybe you should send an e-mail and demand your money back. Oh, that's right, you didn't send them any money.

    Good luck finding ET.

    all persons, living and dead, are purely coincidental. - Kurt Vonnegut
  • Their biggest problem is their lack of speed in getting anything done.

    Yeah, I've been bitten by that. CSC for the Mac and Tru64 Unix clients took a while to arrive compared to the Wintel/Linux/etc CSC clients. Of course, dcypher only has clients for the x86 architecture on 3 OSes...guess they don't want my cycles.

    As for seti@Home, they haven't exactly avoided similar problems either. Complaints about duplicated blocks and un-optimized client code for their project have been posted on /. before.

  • I was wondering if anyone had any insight into whether or not the same problem could crop up with RC5. I've (http://stats.distributed.net/rc5-64/psummary.php3?id=74973) been working on this longer-term project for nearly 600 days, and a similar mishap could be devastating to our time to completion.
  • [smack]

    ...goes your hand on your forehead, as you re-read what you replied to.

    RC-64 is losing CPU time while some dnetc clients are working on CSC. Another week (or more???) of lost CPU time to CSC means fewer cycles for RC-64. Once CSC is over, the clients which continue to operate will be working on RC-64 again. Everyone thought that would happen last week, but nope.

    Therefore, RC-64 is hurt by the extension of the CSC project.

  • Since when do I speak for all of slashdot?

    Since when is my opinion gospel truth?

    I wish someone would have told me, I'd have used that to my benefit a long time ago.

    I'm sorry your feelings were hurt by comments made towards Seti@home....maybe you should talk to a grief counselor about it. I know that I never made any such comments.
  • Comment removed based on user account deletion
  • Duplicates are ignored and logged separately, that is how you were noticed. The purpose of the "cease and desist" email is to notify the person of a couple of things:

    1) If you were trying to cheat, we've noticed you and your efforts are ineffectual.

    2) If you somehow have a broken client, a read-only disk, or have run out of disk space and are inadvertently submitting duplicates, you might want to check on it. You might be wasting CPU or network bandwidth that you are not aware of.

  • Blocks that are requested but not returned are eventually recycled and redistributed to other clients. The percentage completion that is noted on the stats page is based off of the number of blocks that have been successfully credited to people. The extra "slack" between the actual keyspace completed and the old percentage displayed on the stats page came from two factors: blocks that had been completed by two distinct people because of re-verification, and blocks that had been invalid and had been redistributed to someone else for proper computation.
  • cute, but we have only made clients available for processors with 32-bit registers, which means 386 and higher. :)
  • distributed.net is made up of humans ...

    What the hell are you talking about? distributed.net is controlled by robots. Well, at least that's what gammatron [editthispage.com] would say.

    -y.

  • That was the last straw. That's a serious problem since 2 of the email addresses on my team aren't valid anymore so we can't rejoin that email address to the team.
    Tried mailing their help-service [mailto] to ask for the passwords for those addresses? I had a problem with two old inaccessible emails and they helped me out very quickly. I think they're doing a great job, considering the number of people out there who try and screw with the stats and break things.
    Keep up the good work... Moo!
  • I wonder if my true-RMS voltmeter was telling the truth...
  • Why don't you just fix the stats by adjusting the number of total blocks in the keyspace?

    If you set the number of total blocks to the number of blocks in the keyspace plus the number of blocks handed out twice, wouldn't the total percentage be correct again?

  • I'm not putting my idle cpu cycles in the hands of a place that runs their stats page off a cable modem...

    Why not? What's wrong with cable modems?

    This statement is IMHO nothing more than a flame if you give no reasons.

  • Comment removed based on user account deletion
  • A lot of people misunderstand why running old keys is worse than running new keys. When given a choice, always process a new keyblock. Throwing away a block of keys you failed to run is never a bad thing. The longer you hang on to keys and don't process them, the higher the chance that someone else has crunched them for you, and your results will be useless to the project. This can happen two ways: (1) all remaining keys have been assigned to people so the keyserver begins re-assigning old blocks that were never returned; (2) people running the client in off-line mode to crunch random keyblocks have already processed your block.

    In fact, the closer the project gets to 100% completion, the more quickly unreturned blocks are re-assigned (and re-re-assigned) just to try to get somebody (anybody!) to crunch them and return the results. When a project begins, a block of 10-day-old keys is still mostly useful to the project. But near the end, there's a very high chance most of those keys have been handled already.

    Your computers have a finite amount of compute-power to offer the Distributed project. The best thing you can do with that power is make sure that 100% of the blocks you work on are useful to the project. The closest you can get to guaranteeing this is to process the newest blocks you can get from the keyserver.

    "But what if the keys I throw away had the one!" Keys are assigned in random order from the keyserver. Nobody knows if the one is in the beginning, middle, or end of the entire sequence of keys to check. The winning key has an equal chance of being in the old block you're considering throwing away as it does in any new block you could download from the keyserver, so what the heck - just take the new keys. (In fact, there's a slightly higher chance that a new block will have the winning key - some parts of your old block may have already been processed by others, right? And nobody's won yet, right?)
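The probability argument above can be sanity-checked with a toy simulation (every number here is invented): model a project that is 90% complete and compare how many keys in an old, long-held block are still useful versus a freshly issued block.

```python
import random

random.seed(1)
KEYSPACE = 10_000                                    # toy keyspace of individual keys
done = set(random.sample(range(KEYSPACE), 9_000))    # project is ~90% complete

old_block = random.sample(range(KEYSPACE), 100)      # block fetched at the start: by now
                                                     # others have likely covered much of it
new_block = [k for k in range(KEYSPACE) if k not in done][:100]  # freshly issued keys

useful_old = sum(k not in done for k in old_block)
useful_new = sum(k not in done for k in new_block)
print(useful_old, useful_new)  # near project end, the old block is mostly redundant work
```

The new block is useful by construction (the keyserver hands out unissued keyspace first), while roughly 90% of the old block duplicates work already done, matching the "just take the new keys" advice.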

  • And criticism from ACs isn't?
