Coral P2P Cache Enters Public Beta 254

Eloquence writes "infoAnarchy reports that Coral, a peer-to-peer web-caching system, has gone into public beta. Currently the Coral node network is hosted on PlanetLab, a large-scale distributed research network of 400 servers. You can use Coral right now by appending ".nyud.net:8090" to a hostname. View Slashdot through Coral. Is this the end of the Slashdot effect?"
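[Editor's note: in practice, "Coralizing" a URL is just string surgery on the hostname. A minimal sketch; the ".nyud.net" domain matches the hostnames in the DNS logs quoted later in this thread, while the exact :8090 port should be treated as an assumption:]

```javascript
// Minimal sketch of "Coralizing" a URL: splice the Coral cache suffix
// into the hostname. The ".nyud.net" domain matches the names in the
// DNS logs in this thread; the :8090 port is an assumption.
function coralize(url) {
  const u = new URL(url);
  // Leave already-Coralized URLs alone.
  if (u.hostname.endsWith(".nyud.net")) return url;
  u.host = u.hostname + ".nyud.net:8090";
  return u.toString();
}

console.log(coralize("http://slashdot.org/"));
// -> "http://slashdot.org.nyud.net:8090/"
```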
This discussion has been archived. No new comments can be posted.

Coral P2P Cache Enters Public Beta

Comments Filter:
  • by Rexz ( 724700 ) on Saturday August 28, 2004 @09:00PM (#10099843)
    Just kidding.
    • > Just kidding.
      yeah, but it's true :)
    • by Anonymous Coward
      No, it really is down, at least from here:


      Ping request could not find host Please check the name and try again.


      ping: unknown host

      ; <<>> DiG 9.2.1 <<>>
      ;; global options: printcmd
      ;; connection timed out; no servers could be reached

      Seems their nameservers have some kind of problem. I am in the Midwest, going t

    • by Anonymous Coward on Saturday August 28, 2004 @09:46PM (#10100090)
      It really DOESN'T work for a lot of people.

      The problem is that it doesn't seem to be compatible with Microsoft DNS servers. Below is a copy of the DNS log when I issue a query here, on my LAN, which has a Microsoft DNS server running on Windows 2000, which then forwards through the University of Wisconsin. You can see that at the end it says "The DNS server encountered an invalid domain name." Perhaps someone who knows more about DNS can tell where the problem is?

      Rcv 0004 Q [0001 D NOERROR] (8)slashdot(3)org(4)nyud(3)net(0)
      UDP question info at 014D5A0C
      Socket = 384
      Remote addr, port 1263
      Time Query=4338128, Queued=0, Expire=0
      Buf length = 0x0200 (512)
      Msg length = 0x0027 (39)
      XID 0x0004
      Flags 0x0100 QR 0 (question) OPCODE 0 (QUERY) AA 0 TC 0 RD 1 RA 0 Z 0 RCODE 0 (NOERROR)
      Offset = 0x000c, RR count = 0
      Name "(8)slashdot(3)org(4)nyud(3)net(0)"
      QTYPE A (1)
      QCLASS 1

      Snd 39b0 Q [0001 D NOERROR] (8)slashdot(3)org(4)nyud(3)net(0)
      UDP question info at 0109200C
      Socket = 408
      Remote addr, port 53
      Time Query=0, Queued=0, Expire=0
      Buf length = 0x0200 (512)
      Msg length = 0x0027 (39)
      XID 0x39b0
      Flags 0x0100 QR 0 (question) OPCODE 0 (QUERY) AA 0 TC 0 RD 1 RA 0 Z 0 RCODE 0 (NOERROR)
      Offset = 0x000c, RR count = 0
      Name "(8)slashdot(3)org(4)nyud(3)net(0)"
      QTYPE A (1)
      QCLASS 1

      Rcv 39b0 R Q [8081 DR NOERROR] (8)slashdot(3)org(4)nyud(3)net(0)
      UDP response info at 012DB8AC
      Socket = 408
      Remote addr, port 53
      Time Query=4338128, Queued=0, Expire=0
      Buf length = 0x0200 (512)
      Msg length = 0x00e0 (224)
      XID 0x39b0
      Flags 0x8180 QR 1 (response) OPCODE 0 (QUERY) AA 0 TC 0 RD 1 RA 1 Z 0 RCODE 0 (NOERROR)
      Offset = 0x000c, RR count = 0
      Name "(8)slashdot(3)org(4)nyud(3)net(0)"
      QTYPE A (1)
      QCLASS 1
      Offset = 0x0027, RR count = 0
      Name "[C019](4)nyud(3)net(0)"
      TYPE 39 (39) CLASS 1 TTL 1333 DLEN 25
      DATA Unknown resource record type 39 at 012DBC41.
      Offset = 0x004c, RR count = 1
      Name "[C00C](8)slashdot(3)org(4)nyud(3)net(0)"
      TYPE CNAME (5)
      CLASS 1 TTL 0 DLEN 15
      DATA (8)slashdot(3)org[C033](4)http(2)l2(2)l1(2)l0(5)nyucd(3)net(0)
      Offset = 0x0067, RR count = 2
      Name "[C058](8)slashdot(3)org[C033](4)http(2)l2(2)l1(2)l0(5)nyucd(3)net(0)"
      TYPE CNAME (5)
      CLASS 1 TTL 1335 DLEN 2
      DATA [C033](4)http(2)l2(2)l1(2)l0(5)nyucd(3)net(0)
      Offset = 0x0075, RR count = 3
      Name "[C033](4)http(2)l2(2)l1(2)l0(5)nyucd(3)net(0)"
      TYPE A (1)
      CLASS 1 TTL 60 DLEN 4
      Offset = 0x0085, RR count = 0
      Name "[C038](2)l2(2)l1(2)l0(5)nyucd(3)net(0)"
      TYPE NS (2)
      CLASS 1 TTL 1991 DLEN 19
      DATA (3)139(2)91(2)70(2)71(3)ip4[C041](5)nyucd(3)net(0)
      Offset = 0x00a4, RR count = 1
      Name "[C038](2)l2(2)l1(2)l0(5)nyucd(3)net(0)"
      TYPE NS (2)
      CLASS 1 TTL 1991 DLEN 16
      DATA (3)141(3)213(1)4(3)202[C09E](3)ip4[C041](5)nyucd(3)net(0)
      Offset = 0x00c0, RR count = 0
      Name "[C091](3)139(2)91(2)70(2)71(3)ip4[C041](5)nyucd(3)net(0)"
      TYPE A (1)
      CLASS 1 TTL 603196 DLEN 4

      • by mfreed ( 217310 ) on Saturday August 28, 2004 @10:20PM (#10100249) Homepage
        It appears that the Windows 2000 DNS server you are using is not aware of DNAME records (RFC 2672 []):

        Name "[C019](4)nyud(3)net(0)"
        TYPE 39 (39) CLASS 1 TTL 1333 DLEN 25
        DATA Unknown resource record type 39 at 012DBC41.
        We use these types of records to aid in redirecting resolvers to nearby Coral proxies (by mapping to a "hierarchical" name such as http.l2.l1.l0.nyucd.net). The goal is that once you find a "nearby" server, you should remain nearby.

        Given that the DNAME RFC is from 1999, it appears that some old DNS servers do not handle this record type well. We'll look into some alternatives or work-arounds. (Perhaps you can contact me directly to see if subsequent changes can fix your problem.)

        Thanks for the detailed report!
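        [Editor's note: for anyone decoding the log above, a DNAME at nyud.net rewrites every name under it by suffix substitution, and the resolver synthesizes the CNAME seen in the trace. A toy illustration of the RFC 2672 substitution rule, with names copied from the log; this is not Coral's actual code:]

```javascript
// Toy illustration of RFC 2672 DNAME substitution: a query name under
// the DNAME owner gets that suffix replaced by the DNAME target.
// Owner and target names are copied from the DNS log; this is an
// illustration, not Coral's implementation.
function dnameSubstitute(qname, owner, target) {
  if (!qname.endsWith("." + owner)) return null; // not under the owner
  const prefix = qname.slice(0, qname.length - owner.length - 1);
  return prefix + "." + target;
}

console.log(dnameSubstitute(
  "slashdot.org.nyud.net",   // original query name
  "nyud.net",                // DNAME owner (the type-39 record in the log)
  "http.l2.l1.l0.nyucd.net"  // DNAME target
));
// -> "slashdot.org.http.l2.l1.l0.nyucd.net"
```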

    • Check out their logs...

      Coral Statistics []

      ...note the recent blip?
  • Google (Score:3, Informative)

    by asd-Strom ( 792539 ) on Saturday August 28, 2004 @09:01PM (#10099851)
    Google cache has been a good helper to me for some time.
    So this is not so new to me regarding slashdot effects.
    • Re:Google (Score:2, Insightful)

      by Dreadlord ( 671979 )
      Google doesn't convert links in the cached page; you need to dig out the cache of every page you want to visit.

      And you can't be sure that Google has cached your page in the first place.
    • Google needs to start using this technology.

    • Google doesn't cache images. Those are often the largest parts of the page. Also some browsers might not display the page at all if they can't load some images.

      Plus as others have said Google doesn't convert links.

    • Re:Google (Score:5, Informative)

      by bogie ( 31020 ) on Saturday August 28, 2004 @11:48PM (#10100637) Journal
      Google cache tip for you. There is a bookmarklet for Firefox where you simply click the bookmarklet and Google's cache of the page opens up. It's a nice feature to have at your fingertips. You can get the code at the very bottom of the following page, just drag it to your personal toolbar.

      If the page won't load at all, thus negating the above, just use the following example to load a page.
      • Re:Google (Score:5, Informative)

        by doofsmack ( 537722 ) * on Sunday August 29, 2004 @02:46AM (#10101159)
        Talking about bookmarklets, I just wrote a quick little bookmarklet to redirect you to the Coral cache of the current page. Here it is:

        javascript:location.href=location.href.replace(/http\:\/\/([a-zA-Z\.]+)\/(.*)/, "http://$1.nyud.net:8090/$2");void(0)

        And if slashdot's tendency to insert spaces in long strings screws that up, try grabbing it from here []
  • by bigberk ( 547360 ) <> on Saturday August 28, 2004 @09:01PM (#10099858)
    Of, well, slashdotting the solution to slashdotting? Really cool idea, though. Nice!
  • Dear Lord (Score:4, Funny)

    by over_exposed ( 623791 ) on Saturday August 28, 2004 @09:02PM (#10099864) Homepage
    I hope this isn't the end of the /. effect! What would we do w/o webservers crashing under tremendous loads?!? WE NEED the /. effect! I hope this technology crashes and burns...

    Then again it might not be so bad....
  • by Shaheen ( 313 ) on Saturday August 28, 2004 @09:02PM (#10099866) Homepage
    so it's like this... people click on a link on slashdot, which gets farmed out to the p2p network to get the cached copy, but there's so many people clicking the link to get the cached copy that they are only slashdotting their own computers since they are all part of the p2p network too! now we can all collectively feel the slashdot effect!

    oh, first post?
  • files (Score:3, Interesting)

    by Coneasfast ( 690509 ) on Saturday August 28, 2004 @09:02PM (#10099868)
    you can ensure that your readers can still access a certain web page or files, when the multitude of readers would otherwise overload the website and make the content unavailable.

    well apparently all html content, including files, will be cached. this is a great way to get around downloading from snail-paced sites (although i will be checking md5sums)
  • if we do /. it...
  • by chrispyman ( 710460 ) on Saturday August 28, 2004 @09:04PM (#10099881)
    While their system would be pretty good (supposing it can withstand a slashdotting) for caching large files, it's not very useful for websites. Websites usually have lots of additional images, links, and whatnot, and as it currently stands, the system doesn't rewrite URLs.
  • by Rushuru ( 135939 ) on Saturday August 28, 2004 @09:05PM (#10099885)
    In case Coral gets slashdotted, use this mirror [] to view slashdot
  • "Is this the end of the Slashdot effect?"

    haha no - only the lateral shifting of the slashdot effect to your local LAN as some dope sets up a cache server in your office. I'm sure the /. colo guys at Exodus would love for you to run one :).
    • This is what you imagine:

      Is this the end of the Slashdot effect?" haha no - only the lateral shifting of the slashdot effect to your local lan as some dope sets up a cache server in your office.

      This is what coral says:

      One of Coral's key goals is to avoid ever creating hot spots that might dissuade volunteers from running the software for fear of load spikes. It achieves this through a novel indexing abstraction we introduce called a distributed sloppy hash table (DSHT), and it creates self-organizing c

  • by rsilvergun ( 571051 ) on Saturday August 28, 2004 @09:09PM (#10099907)
    as will ISPs if it takes off. Right now with bandwidth usage centralized it's pretty easy to bill for it. If you decentralize it with p2p via millions of always on unmetered clients/servers it gets hard, if not impossible. I kinda hope it doesn't take off, since if it does it could end unmetered Internet access...

    • What? Whether data comes from one server or a p2p net, it still travels the me <-> ISP pipe, and whether data goes to a client or a net, it still goes through the server <-> world pipe. How do you think this'll change that?
      • Upload bandwidth (Score:3, Insightful)

        by rsilvergun ( 571051 )
        imagine if we all used our max upload bandwidth 24/hrs a day. ISP would need to modify their networks to work around this. At least I assume they would. As it is, many 'unmetered' isps will start sending you nastygrams if you make heavy use of your upload bandwidth, but otherwise look the other way when you run a server. Keep in mind that all these p2p apps violate most IPS' TOS (mine doesn't let you run a server of any kind, and while there are places where enforcement of that would be silly, there's still
        • I asked my cable company the other day if I had a download limit, and they said, "Yes, 3 MB/s or so." I said "No, I mean, a monthly limit?" She said no, download as much as you can.

          I'm pretty happy with my cable company. ;-) The only negative is the upload cap, so sometimes torrents are like wicked slow.

    • I think perhaps instead it might end metered Internet access. If all the clients are unmetered, and they're now the ones doing most of the communicating, a server doesn't need to be on a backbone: it can be another one of the clients.
  • Also a proxy... (Score:4, Interesting)

    by jelevy01 ( 574941 ) on Saturday August 28, 2004 @09:09PM (#10099908)
    This would also bypass any restricted sites your company may be blocking...
    • I don't know if it exists in current corporate usage-policy software systems, but surely it can't be difficult for software to create rules to block both:
      etc. ... while only having to provide the original AND a list of known p2p caching URLs in such a product's interface.
    • Re:Also a proxy... (Score:5, Informative)

      by interiot ( 50685 ) on Saturday August 28, 2004 @10:35PM (#10100326) Homepage
      There are actually a lot of sites out there that will let you access arbitrary content from elsewhere. Most corporate restricting proxies will block at least some of them (but it's impossible to get all of them). So something as high-profile as Coral is less useful compared to some of the more obscure of these:
      • google cache (this has been periodically blocked at my company)
      • the internet archive []
      • online translation sites (eg. if it's an english site, have the translator go from japanese to english... none of the words will be recognized as japanese, so it will pass them all as-is)
      • several others I'm forgetting at the moment...
  • No, after the FBI has a gander at the servers, and puts them in a truck and drives off, the Slashdot effect will be alive and well.
  • me thinks not P2P (Score:2, Interesting)

    by rob101 ( 809157 )
    This is a content distribution network of cooperating servers collaborating to exchange information and 'level out' excess demand by distributing requests among n servers, like Akamai's EdgeSuite, based on a quick read of the front page. The providers of content in their network are never the consumers of content, thus I don't know why they call it peer-to-peer? Anyone?
  • Some friends and I have an approx 10 MBytes application we want to distribute over the Internet, looking into hosting costs we see that it would cost us a bundle. So does coral let us serve our file to a slashdot-like crowd without breaking the bank?
    • So does coral let us serve our file to a slashdot-like crowd without breaking the bank?

      An interesting question. It would seem feasible to serve up the full page only when it is requested by a cache server, in all other cases just returning a redirect.

      If this actually proves possible, and no way of blocking it is found, it may kill the project stone dead.

    • BitTorrent is your friend. It's as common as AIM or IRC these days; instead of pulling the whole file from a central server, only the first few need to use a server host, and everyone else shares with each other. Most big linux distros do it with 650 MB files, or for large video files. No reason it wouldn't work for you.

      Here, I'll even link you to a good client that will give you a nice GUI for starting out. Another BitTorrent client [] for all OSes.

  • []

    It isn't a P2P web proxy, it's just a "big pipe"-based distributed one. Supposedly a great way to prevent slashdotting (just use the cache URL instead of the original and everything goes from the cache, the tiny site receiving only header requests to check if the document hasn't changed in the meantime), but it's hardly known, way too quiet for a project that useful. P2P may be faster and cheaper but certainly less reliable...
    • there's a reason it hasn't been released yet.
    • Er... no... from the site:

      "Please note that you cannot submit a whole site to FreeCache as in This will not work as only index.html will be cached. You have to prefix every item that you want to have cached separately."

      As I understand it, FreeCache refuses to cache small files anyway; I think the minimum was 5MB.

    • Not a good solution (Score:3, Informative)

      by pyrrhonist ( 701154 )
      From the FAQ:
      What files are being served by FreeCache?

      FreeCache can only serve files that are on a web site. If the link to a file on that web site goes away, so will the file in the FreeCaches. Also, there is a minimum size requirement. We don't bother with files smaller than 5MB, as the saved bandwidth does not outweigh the protocol overhead in those cases.

  • Only the top page? (Score:5, Interesting)

    by News for nerds ( 448130 ) on Saturday August 28, 2004 @09:24PM (#10099977) Homepage caches only the /. homepage. Doesn't it analyze hyperlinks?
  • by Danathar ( 267989 ) on Saturday August 28, 2004 @09:31PM (#10100007) Journal
    Many times it seems a BitTorrent tracker is down due to bandwidth issues. If I "Coralized" it...could this alleviate the problem?
  • I was playing around with this the other day. I tried it out with my page, which has the main html, but all images are loaded off a second server. Coral will get the main server's files that match the origination URL that you passed it, but will skip all files that are sitting off another server. So basically it'll handle all relative links, but no hard links to other sites. It would be killer if they somehow made it configurable in your browser's proxy settings.
  • Go here: []

    Notice the random question at the bottom of the page.

    Then go here: []

    The question is no longer randomly generated. They should have some checking for this such that, if the data varies by the second, it does not cache the page, or invisibly frames the HTML and filters the content it can cache.


    It will work in certain cases, but generally I am not that happy.
  • by Danathar ( 267989 ) on Saturday August 28, 2004 @09:34PM (#10100024) Journal
  • Work for CmdrTaco (Score:5, Interesting)

    by Dreadlord ( 671979 ) on Saturday August 28, 2004 @09:43PM (#10100076) Journal
    Goatse-links trolls will be back, with slashcode showing the same domain for [] every [] link [], I think CmdrTaco has some work to do now.
  • by Jugalator ( 259273 ) on Saturday August 28, 2004 @09:46PM (#10100086) Journal
    To save their bandwidth, you should've linked to their mirror! []
  • This system fails because most commercial sites, and many others, will lose the ability to track web usage for site tuning and marketing response. Sites will be built -- if need be -- with specific settings or configurations to confound the coralling of their pages.

    It's a noble goal, but ultimately will go the way of the video phone -- which, apart from conferences planned in advance, remains a novelty despite perfectly adequate technology -- nobody wants a surprise video call because nobody wants to be a 50
  • by shish ( 588640 ) on Saturday August 28, 2004 @09:50PM (#10100115) Homepage
    Pretty picture :)

    Doesn't give a usable time scale though; it has "HTTP requests", but not "per second" / "per minute" or anything :(
  • [subject]

    Although I agree with others, it doesn't really compare to FreeCache. I still wonder why that never got much attention. It's an insanely great idea. Ah well. Between that, Coral, and BitTorrent, you never have to worry about /.'ing again when you submit your tiny personal site.

    In other news (for the morons who continue posting and whining), you can still remove the "it." prefix from the /. URL, removing the fugly colour scheme. And there was much rejoicing in the land.


  • of the /. effect?

    Safari can't open the page "" because it could not connect to the server "".

    I believe there is a term for this.


  • by digidave ( 259925 ) on Saturday August 28, 2004 @10:17PM (#10100238)
    I haven't checked the terms of use to see if I'm allowed to use this for my work web site, though maybe with a cash or hardware donation, or by running a high-bandwidth node, I can get permission.

    What I'm thinking is that at work I run a multi-server site that gets massively bogged down for short periods when it tries to handle upwards of 35,000 concurrent sessions. Bandwidth is not the problem, the application is, and it can't be rewritten for reasons that piss me off; I have no budget for more servers and no management support to run a static cached version of the site.

    So I was wondering if it was possible to have the site automatically direct visitors to the Coralized URL when the site load gets too high. Either a manual change or an automatic one would be ok. I have some ideas on how this could be done using a failover server config on our ServerIron. Possibly a router config can also do this, though we don't run our own router since it's at a colocation facility. Worst case scenario is I can edit the home page to redirect to Coral when the load gets high.

    Are there any other Slashdotters looking to use Coral in similar ways? If you have any ideas to share I'd be all ears.
  • Although I can browse both slashdot and SomethingAwful forums through Coral, I cannot do so while logged in. I wonder if the "user is logged in" logic is confused.
  • Is this the end of the Slashdot effect?"

    Not yet! :-)
  • Could people in China use it? Or could I use it to watch the olympics online if I live in the US?
  • So the best way to test it is for Slashdot to add a user preference to append to every inline href, so that if you turn that on you don't have to do it manually.
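    [Editor's note: short of a Slashdot preference, the same rewrite can be done client-side. A sketch of the href rewriting, with the suffix and port as assumptions; non-standard ports are simply dropped here:]

```javascript
// Sketch of Coralizing every inline link, the kind of rewrite the
// suggested preference would do. Non-http and already-Coralized
// links are left alone; ports are simply dropped in this sketch.
function coralizeHref(href) {
  const m = href.match(/^http:\/\/([^\/:]+)(:\d+)?(\/.*)?$/);
  if (!m || m[1].endsWith(".nyud.net")) return href;
  return "http://" + m[1] + ".nyud.net:8090" + (m[3] || "/");
}

// In a browser this could be applied to a whole page:
//   document.querySelectorAll("a[href^='http://']")
//     .forEach(a => { a.href = coralizeHref(a.href); });

console.log(coralizeHref("http://example.com/story.html"));
// -> "http://example.com.nyud.net:8090/story.html"
```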
  • If you browse through the p2p cache, it also hides your IP address.

    Check this out: Normal browsing [] and Coral browsing [].
