
Content-Centric Networking & the Next Internet

waderoush writes "PARC research fellow Van Jacobson argues that the Internet was never designed to carry exabytes of video, voice, and image data to consumers' homes and mobile devices, and that it will never be possible to increase bandwidth fast enough to keep up with demand. In fact, he thinks that the Internet has outgrown its original underpinnings as a network built on physical addresses, and that it's time to put aside TCP/IP and start over with a completely novel approach to naming, storing, and moving data. The fundamental idea behind Jacobson's alternative proposal — Content Centric Networking — is that to retrieve a piece of data, you should only have to care about what you want, not where it's stored. If implemented, the idea might undermine many current business models in the software and digital content industries — while at the same time creating new ones. In other words, it's exactly the kind of revolutionary idea that has remade Silicon Valley at least four times since the 1960s."

  • Magnet links? (Score:5, Insightful)

    by Hatta ( 162192 ) on Tuesday August 07, 2012 @01:18PM (#40907177) Journal

    Did he just reinvent magnet links?

    • Re:Magnet links? (Score:5, Informative)

      by vlm ( 69642 ) on Tuesday August 07, 2012 @01:21PM (#40907213)

      Did he just reinvent magnet links?

      Closer to a reinvention of freenet.
      Or maybe reinventing mdns
      Or maybe reinventing AFS

      It's been a pretty popular idea for a couple of decades now.

    • It looks that way, and of course, it raises the obvious question "What transport layers do you propose to move this data around with?"

      • by u38cg ( 607297 )
        I have two questions: one, how do you expect to overcome the network effect of TCP/IP, and two, how does this prevent the free rider problem? Who pays for Youtube?
    • Re:Magnet links? (Score:4, Informative)

      by fuzzyfuzzyfungus ( 1223518 ) on Tuesday August 07, 2012 @01:33PM (#40907369) Journal

      " (In Jacobson’s scheme, file names can include encrypted sections that bar users without the proper keys from retrieving them, meaning that security and rights management are built into the address system from the start.)"

      It sounds like he made them worse; but otherwise pretty similar to magnet links or the mechanisms something like Freenet uses.

      Perhaps more broadly, isn't a substantial subset of the virtues of this scheme already implemented (albeit by an assortment of nasty hacks rather than anything terribly elegant) through caches on the client side and various CDN schemes on the server side? In the majority of cases, URLs haven't corresponded to locations for a while now; they're either user expressions of a given wish or auto-generated requests for specific content (and, on the client side, caching doesn't extend to the entire system, for security reasons if nothing else, but it already covers a lot of common web-resource request scenarios).

      Now, in a perfect world, "we have a pile of nasty hacks for that" is an argument for a more elegant solution; but, in practice, it seems to be roughly equivalent to "we already have stuff that mostly works and will be cheaper next year", which can be hard on the adoption of new techniques...

    • Re: (Score:2, Informative)

      by Anonymous Coward

      more like CDN servers, except smarter.

      There are already mechanisms for this.

      What needs to exist is a hybrid approach where the end users are the origin servers and the CDN nodes operate as capacity supernodes on their local ISP; in turn, these ISP supernodes talk to each other. If a piece of content needs to "disappear", the end user removes it from their system, and it will tell the supernodes that the content is no longer available, leaving only users who already have it to talk to each other if they still

      • The key then is browser transparency. It'd need to be possible to have an HTML document specify an image or video file, but via hash address (magnet would do perfectly, so long as we can agree on which hash to use). That way the dynamic parts come in via the usual HTTP, the static parts via the new protocol (With HTTP fallback, magnet can specify that too).
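
        A rough sketch of how a page asset could be named by hash with a plain-HTTP fallback, using magnet URI fields that already exist ("xt" for the hash, "as" for an acceptable-source fallback). The hash and URLs below are made up for illustration, and this assumes a browser or plugin that understands magnet links (Python):

            # Illustrative only: name a static asset by its hash, with an HTTP
            # fallback ("as" = acceptable source) for clients that can't do P2P.
            from urllib.parse import quote

            def magnet_for(info_hash: str, name: str, http_fallback: str) -> str:
                return ("magnet:?xt=urn:btih:" + info_hash
                        + "&dn=" + quote(name)
                        + "&as=" + quote(http_fallback, safe=""))

            print(magnet_for("c12fe1c06bba254a9dc9f519b335aa7c1367a88a",
                             "cat-video.webm",
                             "https://example.com/static/cat-video.webm"))
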
    • by EdIII ( 1114411 )

      This guy belongs in Star Trek, and I don't say that in a derogatory way.

      It's worse than magnet links, because he is proposing that the entire Internet (or most of it) work just like that.

      The problem is not the technology, it is the societies trying to implement it. Magnet links sound great in theory, but are increasingly (even extremely) dangerous in practice. You would have to be crazy to use public peer-to-peer networks at this point, with Big Content doing its best to shove Freedom's face into the ground

      • We could implement his ideas, but the only safe way to do so would be to create an inherently anonymous infrastructure. Not a trivial task.

        So...like Freenet then? As someone else has already mentioned, this does (at least to me) sound a LOT like the way Freenet addresses files.

        • by lgw ( 121541 )

          It seems like Slashdot has a "let's reinvent Freenet" story every week now. Freenet may have issues, but it solves a great many current problems. What it lacks is the network effect - there's not really any content there today, so no one uses it (and vice versa).

          • I contributed a bit of code to the 0.5 network several years ago...I've been meaning to go back and see if that's still alive now that I have a stable, good internet connection. Just graduated college; didn't really have a connection I could run it on the whole time there. But last I checked 0.5 (FCON) was still populated, and I still can't quite trust the new network. Last I checked there was still better content on the old one anyway! Though certainly not much of it...nothing like it used to be...

            I think

      • I love the idea in theory, but it goes against the omnipresent need to control content with an iron fist. Incompatible would be an understatement.

        The thing with tyranny is, if you don't stand up to it, it just gets worse. Take a single sadist, and extrapolate from there... Appeasement. Does. Not. Work.

        Also, people with power are lame and stupid. Seriously. Those who didn't get attracted to it by being dumb to begin with, get turned dumb. And like that spider you killed with a shoe, they are more scared of y

        • by EdIII ( 1114411 )

          You misunderstand me.

          Their attacks on our Freedom will succeed as long as we let them, and sadly, it looks like we are going to let them. My point about this technology is that it will not be embraced by corporations and ISPs because it is wholly incompatible with their own business goals.

          The whole idea of Freenet and Darknets in general is a wonderful idea. Make no mistake however that it will not be popular as far as governments and corporations are concerned, and it will not have anything close to carri

    • by Njovich ( 553857 )

      No. Next question?

      a) it predates magnet (magnet only dates from 2002; CCN is from the late '90s)
      b) magnet is a naming/addressing scheme, while this is a routing technology. There is a difference, although one can be used with the other.

    • ..magnet links

      Actually it sounds to me like just another way of saying "everything is going to the Cloud", which I happen to think is the worst idea ever.

  • by hawguy ( 1600213 ) on Tuesday August 07, 2012 @01:21PM (#40907211)

    Why does he say "it will never be possible to increase bandwidth fast enough to keep up with demand"?

    When I want to watch streaming video, I fire up Netflix and watch streaming video. When I want to download a large media file, I find it on bittorrent and download it. The only time I've noticed any internet slowdowns, it's been in my ISP's network, and it's just a transient problem that eventually goes away.

    Sure, Netflix has to do some extra work to create a content delivery network to deliver the content near to where I am, but it sounds like the internet is largely keeping up with demand.

    Aside from the IPv4->IPv6 transition (we've been a year away from running out of IP addresses for years), is there some impending bandwidth crunch that will kill the internet?

    • He seems to be assuming that demand will continue to grow at current and historical rates. I'd say that isn't a very good assumption: the jump from a text-based web to a video/flash/image one was significant, but the demands of each individual user aren't likely to increase much beyond that. Adding more people will increase demand somewhat, but not by an order of magnitude the way Youtube, Netflix et al. did, and since people are already watching those just fine, it is hardly an insurmountable issue

      • but considering that most people can't tell the difference between 720p and 1080p, I doubt that will ever happen.

        Uh, what?

        I can see the difference. I can even see the difference between 1080p and 1440p, or 1440p and 2160p. And it's not a slight difference that I could understand people missing. In informal tests, comparing my laptop playing 1080p video to my parent's 720p "HDTV", 100% of those surveyed responded "holy crap that looks better" (margin of error for 95% confidence interval: 9.38%).

        • I think it's not that most people can't tell the difference between 720p and 1080p, but that they just don't care.

          • That's exactly it. It's not that they can't see the difference, I'm betting most can. It's just that most people don't give two shits about optimizing their home theater experience. My brother in law gives me crap about my television and how it is set up incorrectly. I have to tell him every time that I just don't care.

            (I'm going to make the next part up, but it makes sense) 75% of people use their television to waste time. 20% use it for background noise while they do something else (my group). I'm betting

            • by mikael ( 484 )

              The test pattern image is the most watched program in Italy - though it does play music.

        • by hawguy ( 1600213 )

          but considering that most people can't tell the difference between 720p and 1080p, I doubt that will ever happen.

          Uh, what?

          I can see the difference. I can even see the difference between 1080p and 1440p, or 1440p and 2160p. And it's not a slight difference that I could understand people missing. In informal tests, comparing my laptop playing 1080p video to my parent's 720p "HDTV", 100% of those surveyed responded "holy crap that looks better" (margin of error for 95% confidence interval: 9.38%).

          You're not "most people" - "most people" haven't even seen 1440p.

          And how do you make any sort of fair comparison between a 17" laptop screen and a 32" (or larger?) HDTV? There's no way to fairly compare the two because of the screen size difference.

          At normal viewing distances, most people can't see the difference between 720p and 1080p -- you'd need to be within 5 feet of your 40" TV to see the difference. Sure, maybe you have a home theater with a 60" TV and seats 6 feet away, but most people have a TV in

          • The comparison was actually with the laptop being next to the TV, so that's about as valid as I can get it. I've even found they can see a difference between a 1280x720 TV and my 1600x900 monitor, and that's much less a difference in physical size AND in resolution.

            Higher resolution does matter. Maybe there is a limit (I haven't seen 4K video on a home-size screen yet), but we're far from reaching it.

            And you also have to think about changes in consumption. More and more people aren't lounging on the couch a

            • by hawguy ( 1600213 )

              The comparison was actually with the laptop being next to the TV, so that's about as valid as I can get it. I've even found they can see a difference between a 1280x720 TV and my 1600x900 monitor, and that's much less a difference in physical size AND in resolution.

              That's about as invalid as you can get. You're comparing a 100+ dpi laptop screen with a 40 or 50 dpi TV screen. Of course people are going to like the sharper screen of the laptop better.

      • by ShanghaiBill ( 739463 ) on Tuesday August 07, 2012 @02:28PM (#40907993)

        the demands of each individual user aren't likely to increase much beyond that.

        I think your thinking is way too constrained. If the bandwidth was available, then people could have immersive 3D working environments, and tele-commuting could be far more common. This would result in much less traffic on the roads and a huge reduction in CO2 emissions and oil imports. This is not science fiction. I have used Cisco's "Virtual Meeting Room" and it is pretty good.

        You also need to think about things like "Siri", that send audio back to the server for processing, because there isn't enough horsepower in a cellphone. I could see "smart glasses" of the future sending video back to a server. That will require huge bandwidth.

        If the bandwidth is available and affordable, the applications will come.

        • by hawguy ( 1600213 )

          the demands of each individual user aren't likely to increase much beyond that.

          I think your thinking is way too constrained. If the bandwidth was available, then people could have immersive 3D working environments, and tele-commuting could be far more common. This would result in much less traffic on the roads and a huge reduction in CO2 emissions and oil imports. This is not science fiction. I have used Cisco's "Virtual Meeting Room" and it is pretty good.

          You also need to think about things like "Siri", that send audio back to the server for processing, because there isn't enough horsepower in a cellphone. I could see "smart glasses" of the future sending video back to a server. That will require huge bandwidth.

          If the bandwidth is available and affordable, the applications will come.

          I work in a large multi-building "campus" (well, more of an office park, we have offices in several buildings). It's a 15 - 20 minute walk from one building to the farthest one (depending on who is doing the walking)

          We have practically unlimited bandwidth between buildings (and at least a gigabit to remote offices) yet we still make people trudge between buildings for meetings, and teleconferences with remote sites are 720p (or Skype). So bandwidth isn't constraining us from immersive teleconferencing - we'

        • by mikael ( 484 )

          But everyone is still advised to get up every hour and walk around for 10 minutes, to let circulation and a bit of exercise clear out all the toxins that have built up.

    • by fa2k ( 881632 )

      I didn't RTFA yet, but Netflix has to have an immense infrastructure to serve its customers. Just imagine 2 Mbit/s per stream and hundreds of thousands of streams; that's almost 1 Tbit/s. Multicast will not save us, because people are watching different things at different times (multicast would give us something like cable TV with DVRs). Some problems with the current situation: 1) If their customer base increases, the bandwidth requirement will be ridiculous, and will cause distortion to the structure of th
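
      For a rough sense of the arithmetic (the stream count below is a made-up round number, not a Netflix figure), a quick Python back-of-the-envelope:

          # Aggregate bandwidth if every viewer gets a separate unicast stream.
          per_stream_bps = 2_000_000        # 2 Mbit/s per stream, as above
          concurrent_streams = 500_000      # hypothetical "hundreds of thousands"
          total_bps = per_stream_bps * concurrent_streams
          print(total_bps / 1e12, "Tbit/s") # -> 1.0 Tbit/s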

    • by dissy ( 172727 )

      (we've been a year away from running out of IP addresses for years)

      IANA already allocated the last /8 IP blocks to the RIRs (Regional Internet Registries) in February 2011.
      Once the RIRs run out, they will not be able to get any more.
      APNIC (the Asia-Pacific RIR) allocated its last block a couple of months later, in April 2011.

      ARIN is the North American RIR and still has smaller blocks left, but once those are gone it has to make do with whatever blocks it can get back.

      So we are not a year away; we are a year past running out at the highest levels.

  • Boring (Score:5, Insightful)

    by vlm ( 69642 ) on Tuesday August 07, 2012 @01:23PM (#40907231)

    it will never be possible to increase bandwidth fast enough to keep up with demand.

    I've been hearing that since I got on the net in '91. Tell me a new lie.

    It's an end-times message. "Repent, for the end is near." Yet, stubbornly, the sun always rises tomorrow.

  • by QilessQi ( 2044624 ) on Tuesday August 07, 2012 @01:23PM (#40907233)

    See http://en.wikipedia.org/wiki/Uniform_resource_name [wikipedia.org] . This is a very old [and good] idea.

    For example: urn:isbn:0451450523 is the URN for The Last Unicorn (1968 book), identified by its [ISBN] book number.

    Of course [as the dept. notes] you still need to figure out how to get the bits from place to place, which requires a network of some kind, and protocols built on that network which are not so slavishly tied to one model of data organization that we can't evolve it forward.
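
    A minimal sketch of that naming/fetching split, assuming the URN syntax urn:<namespace>:<namespace-specific-string>; the resolver table here is a made-up stand-in, not a real service (Python):

        # A URN names the resource; a separate (hypothetical) resolver decides
        # where to fetch it from.
        def parse_urn(urn: str):
            scheme, nid, nss = urn.split(":", 2)
            if scheme.lower() != "urn":
                raise ValueError("not a URN")
            return nid, nss

        RESOLVERS = {"isbn": lambda nss: f"https://example.org/isbn/{nss}"}  # illustrative only

        nid, nss = parse_urn("urn:isbn:0451450523")
        print(nid, nss, RESOLVERS[nid](nss))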

  • If this had come out of almost anyone else's mouth, I'd be the first to say they were full of it.

    But... Van Jacobson [wikipedia.org]!

    • Yes, and if you read that link you discover that he has been pushing this idea since 2006. So, while he has some good credentials to say that the sky is going to fall, he has been saying it for six years now. The sky hasn't fallen and the only sign that it might is the complaints of cellphone vendors, ISPs, and content producers whose profits have not risen as fast as they thought they would and/or would like them to.
  • by Ryanrule ( 1657199 ) on Tuesday August 07, 2012 @01:30PM (#40907327)

    Any idiot can have a pile of ideas. The implementation is what matters.

    Too bad the idea pays 95%, the implementation 5%

    • Any idiot can have a pile of ideas. The implementation is what matters.

      Too bad the idea pays 95%, the implementation 5%

      That's a common misconception. It's the person with the superior legal standing that gets paid 99%; IP only grants superior legal standing if you've also got the lawyers to back it up.

    • Any idiot can have a pile of ideas. The implementation is what matters.

      I like this quote, but personally would not attempt to use it when talking about Van Jacobson [wikipedia.org]

    • Any idiot can have a pile of ideas. The implementation is what matters.

      Too bad the idea pays 95%, the implementation 5%

      I run into "Ideas Men" in the indie game dev scene all the time... Most never make a game unless they learn actual coding, art, music -- Some actual skill other than thinking up WiBCIs ("wouldn't it be cool if ___"s). In my experience, it's the implementation that pays, ideas are worth less than a dime a dozen.

  • Dynamic caching? (Score:5, Interesting)

    by Urban Garlic ( 447282 ) on Tuesday August 07, 2012 @01:30PM (#40907337)

    So back in the day, we had a thing called the mbone [wikipedia.org], which was multicast infrastructure that was supposed to help with streaming live content from a single sender to many receivers. It was a bit ahead of its time, I think; streaming video just wasn't that common in the 1990s, and it also really only worked for actually-simultaneous streams, which, when streaming video did become common, wasn't what people were watching.

    The contemporary solution is for big content providers to co-locate caches in telco data centers, so while you still send multiple separate streams of unsynchronized, high-demand streaming content, you send them a relatively short distance over relatively fat pipes, except for the last mile, which in any case only has to carry one copy. For low-demand streaming content, you don't need to cache; it's only a few copies, and the regular internet mostly works. It can fall over when a previously low-demand stream suddenly becomes high-demand, like Sunday night when NASA TV started to get slow, but it mostly works.

    TFA (I know, I know...) doesn't address moving data around, but it seems like this is something that a new scheme could offer -- if the co-located caches were populated based purely on demand, rather than on demand plus ownership, then all content would be on the same footing, and it could lead to a better web experience for info consumers. That's a neat idea, but I think we already know how both the telcos and commercial streaming content owners feel about demand-based dynamic copy creation...

  • Similarly, in a content-centric network, if you want to watch a video, you don’t have to go all the way back to the source, Lunt says. “I only have to go as far as the nearest router that has cached the content, which might be somebody in the neighborhood or somebody near me on an airplane or maybe my husband’s iPad.”

    Of course, caching data at different points in the network is exactly what content distribution networks (CDNs) like Akamai do for their high-end corporate clients, so

  • So we replace URLs with SQLs?

    (The point of SQL is that you say what you want, not where to find it - hence the concept of "NoSQL" is just silly)

  • there's already too much TCP/IP infrastructure bought, paid for and in use.

  • "not where it's stored."

    So we should make the Internet into Plan 9?

    • by Jawnn ( 445279 )

      "not where it's stored."

      So we should make the Internet into Plan 9?

      Your stupid minds. Stupid! Stupid!

  • “We can sit here and speculate about where the tollbooths will go, but to me, it’s more about whether there are pockets of money out there ready to address problems that people have now. The tollbooths will go where they need to be.”

    I'm pretty sure where the tollbooths will be: embedded in your local ISP. They will be put there by the music and movie industries so that when you, in this new future, request a tune or a clip by name rather than by IP address, you can be either billed or den
  • There is not only the cost of deploying the new tech, but also the cost of change. That cost of change is REALLY high because the current methods are deeply entrenched. IPv6 isn't "there" yet... and the experience has been dizzying for many. Now there's another new approach? It may be better, but people don't want the change. Something catastrophic will have to cause such change, and even then, people will gravitate to the solution with the least amount of change possible.

    • by Jawnn ( 445279 )

      There is not only the cost of deploying the new tech, but also the cost of change. That cost of change is REALLY high because the current methods are deeply entrenched. IPv6 isn't "there" yet... and the experience has been dizzying for many. Now there's another new approach? It may be better, but people don't want the change. Something catastrophic will have to cause such change and...

      Yeah, like Y2K. Oh, wait....
      I know! Let's get Apple to build it. Apple people will pay obscene sums for shiny new stuff with Apple logos on it.

  • But it's good that someone who was involved in the early Internet realizes that it's a good one.

    And no, it doesn't mean throwing TCP/IP away.

    But really, Slashdotting should be impossible. To me, the fact that it is possible indicates a fundamental problem with the current structure of the Internet. If you want to propose something other than content-addressing, it has to solve the Slashdotting problem for everybody (even someone serving up content from a dialup); otherwise it doesn't really solve the problem.

  • Bittorrent and other p2p protocols. Even if -all- content were distributed this way, you would still need an underlying network, link, and transport mechanism. TCP/IP serves that very well; then hopefully you have no hotspots of traffic or failure, because of the distributed nature of the content. Another interesting facet is that if all content is truly distributed and redundant, with no single point of storage, master copy, or decryption, there is no way to EVER remove content completely.
  • Let's consider Freenet [freenetproject.org]. Don't they store and retrieve data based on some cryptographic keys? Of course, data is distributed across all participants, and communications still piggy back on top of IP. But that's what I'd call content-centric networking. The content isn't located by location, but by its nature (hash/key/...).
  • by jmac880n ( 659699 ) on Tuesday August 07, 2012 @01:44PM (#40907513)
    There is a huge chunk of the Internet that cares very much where the content came from:
    • Who exactly is asking me to transfer money out of my account?
    • Did this patch that I downloaded come from a reputable server? Or will it subvert my system?
    • Is this news story from a reputable source?

    And the list goes on....

    • by Hatta ( 162192 ) on Tuesday August 07, 2012 @02:00PM (#40907687) Journal

      Who exactly is asking me to transfer money out of my account?
      Did this patch that I downloaded come from a reputable server? Or will it subvert my system?
      Is this news story from a reputable source?

      None of these depend on the location of the data, only the identity of the author. If you can verify the integrity of the data, where you get it is irrelevant.
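
      A minimal sketch of that point, assuming content is named by its own hash (the data below is made up), in Python:

          # If the name *is* the hash, integrity can be checked locally; who
          # actually served the bytes never enters into it.
          import hashlib

          def is_authentic(requested_name: str, data: bytes) -> bool:
              return hashlib.sha256(data).hexdigest() == requested_name

          data = b"a cached copy fetched from anywhere at all"
          name = hashlib.sha256(data).hexdigest()   # the content's "name"
          print(is_authentic(name, data))           # True, regardless of source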

      • by Chemisor ( 97276 )

        Except that the location of the data is the primary way of verifying the identity of the author. How am I supposed to know that the game patch I have just downloaded came from CompanyX, rather than from some malware spammer? I go to www.companyx.com and get the patch from there. Sure, there's DNS spoofing, MITM attacks, etc., but in general going to the authorized location is a pretty reliable method of identity verification. With this content-centric network, there is no way to reliably get the keys to ver

        • by Hatta ( 162192 )

          Except that the location of the data is the primary way of verifying the identity of the author.

          Only for historical reasons, not technical, and it's always been a bad way of identifying the author.

          How am I supposed to know that the game patch I have just downloaded came from CompanyX, rather than from some malware spammer?

          Cryptographic signatures.

        • by lgw ( 121541 )

          How do you today validate the identity of any host? Certificates. How would you validate the authenticity of any content retrieved by a hash? The same certificates (used to digitally sign the data). Moving to signed data would make phishing attacks far more difficult (though the certificate system itself has real problems, those problems exist today).

          • by Chemisor ( 97276 )

            > How do you today validate the identity of any host? Certificates.

            Only geeks read certificates. The reason being that verifying the certificate takes extra work. When I go to www.microsoft.com, I can be pretty sure that what I'm getting there is coming from Microsoft. If you only have a certificate to go on, you have to verify that the certificate was issued by a valid CA, that the name of the company matches. Can you be sure that there are no spaces at the end of that name? Are you sure the unicode enc

            • by lgw ( 121541 )

              I'm not sure you're getting how this would work. Everything important happens under the covers. When you go to www.microsoft.com, the only reason you can expect to get Microsoft and not a phishing site is that certificate auto-checked by your browser.

              It's not like DNS (or some equivalent) would go away, but that the content on a site could now be served from anywhere, P2P. Your browser asks for a list of hashes, gets the corresponding blocks back, and displays the result. But again, automatically under

              • by Chemisor ( 97276 )

                When you go to www.microsoft.com, the only reason you can expect to get Microsoft and not a phishing site is that certificate auto-checked by your browser.

                The reason I expect Microsoft is that I know that it owns the domain and that unless my computer has already been hacked, the DNS record will accurately get me to that site. Most of the time this works.

                The certificate, on the other hand, does not give me any such guarantee. Yes, the browser can verify the signature, which basically means that the certific

      • And if integrity is based on hash/signature, then it suddenly becomes relevant if computing catches up and can generate a collision. And then you have to upgrade the entire Internet at once to fix it.

    • by jg ( 16880 )

      *All* content is signed in CCNx by the publisher.

      You can get a packet from your worst enemy, and it's ok. The path it took to get to you doesn't matter. If you need privacy, you encrypt the packets at the time of signing.
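
      Not the actual CCNx packet format, but a sketch of the publisher-signing idea using an Ed25519 keypair from the third-party "cryptography" package; the name and payload below are made up (Python):

          # The receiver trusts the publisher's public key, not the path the
          # data took to arrive.
          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

          publisher_key = Ed25519PrivateKey.generate()       # publisher side
          content = b"/example/videos/clip/segment7 payload"
          signature = publisher_key.sign(content)

          try:                                               # receiver side
              publisher_key.public_key().verify(signature, content)
              print("authentic, no matter who delivered it")
          except InvalidSignature:
              print("rejected")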

  • ... is some infrastructure that we tell what we want and it tells us where it is. Or better yet, fetches it for us. Already done:

    The Pirate Bay/BitTorrent.

  • by oneiros27 ( 46144 ) on Tuesday August 07, 2012 @01:48PM (#40907547) Homepage

    Magnet links only use the hash, so there's a possibility of hash collisions. He's proposing an identifier + resolver scheme ... which again, has been done many, many times already.

    Eg, ARK [wikipedia.org] or OpenURL [wikipedia.org]

    Or, we get to the larger architecture of storing & moving these files, such as the various Data Grid [wikipedia.org] implementations. (which may also allow you to run reduction before transfer, depending on the exact infrastructure used).

  • by Njovich ( 553857 ) on Tuesday August 07, 2012 @01:58PM (#40907663)

    Any time someone talks about Content Centric networking or routing, there are always a bunch of people saying that it's basically the same as distributed hash tables, multicast, a cache, etc.

    It may use such technologies, but it isn't the same.

    Content Centric is all about having distributed publish/subscribe, usually on a lower network layer.

    The content part in the name means that routing looks at the content itself, not at some explicit address. For instance, to give a very simple example, you can send out a message [type=weather; location=london; temperature=21], and then anyone subscribing to {location==london && temperature>15} will receive this message.
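
    A toy version of that matching rule in Python; the field names and predicates simply mirror the example above:

        # Content-based matching: subscribers register predicates over message
        # fields; delivery is decided by the content, not by a destination address.
        message = {"type": "weather", "location": "london", "temperature": 21}

        subscriptions = {
            "warm-london": lambda m: m.get("location") == "london" and m.get("temperature", 0) > 15,
            "paris-anything": lambda m: m.get("location") == "paris",
        }

        matches = [name for name, pred in subscriptions.items() if pred(message)]
        print(matches)   # -> ['warm-london']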

    The network is typically decentralized, and using this kind of method can give a number of interesting efficiency benefits.

    This is currently used mostly in some business middleware, ad hoc networking, and some grid solutions; none of them particularly large.

    The real problems with widespread use of this technique are the following:

    * It's unnecessary: IPv6 is completely necessary, somewhat doable in terms of upgrading, and almost nobody is using it even now. This is someone suggesting a whole new infrastructure for large parts of the internet. It would possibly be more efficient than many things that are being done now, but in reality nobody cares about it. Facebook and YouTube (OK, Google) would rather just pay for the hardware and bandwidth than give up control.

    * Security is still unclear: it's easy to do some hand-waving about PKI, but it's hard to come up with a practical solution that works for many.

    • by w_dragon ( 1802458 ) on Tuesday August 07, 2012 @02:15PM (#40907841)
      There are a couple other little issues:

      You need to be able to find things somehow. This requires either some set of central servers, which somewhat defeats the purpose, or a method of broadcast communication that isn't blocked by your ISP. There's a good reason your ISP blocks UDP broadcast and multicast packets - on a large network broadcast leads to exponential packet growth.

      For most of us the most limited part of the internet infrastructure is the link from the last router to our house. Picking up my youtube cat videos from my neighbour rather than from a cache server on my ISP's backbone may seem like a good idea, but in reality you're switching traffic from a high-capacity link between my street's router and my ISP, to a low capacity link between my neighbour and our router.

      If you're going to cache things on my computer you're going to be using my hardware. That hardware isn't free, and neither are the bits you want to use my internet connection to send. How am I going to be compensated?
      • by Njovich ( 553857 )

        This requires either some set of central servers, which somewhat defeats the purpose, or a method of broadcast communication that isn't blocked by your ISP.

        No central servers are needed, and you don't need broadcast either really (although both are used by some solutions). However, you may need or want brokers/routers at local points, and they may need bigger caches than you would currently have. That can be a problem yes.

        (IP level) broadcast is not really needed, as the scheme already implements some kind

      • by gr8_phk ( 621180 )

        If you're going to cache things on my computer you're going to be using my hardware. That hardware isn't free, and neither are the bits you want to use my internet connection to send. How am I going to be compensated?

        You only cache the things you get for yourself, and they are only stored for a finite amount of time (say a few days). Your compensation is that you got the data you wanted for yourself over a better system. See BitTorrent. Now imagine that an embedded YouTube video just points to a torrent a
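
        A toy sketch of that caching policy in Python (the TTL and the structure are made up; a real node would also have to bound the cache size):

            # Opportunistic cache: keep only content you fetched for yourself,
            # and expire it after a fixed time ("a few days" above).
            import time

            TTL_SECONDS = 3 * 24 * 3600
            _cache = {}   # content name -> (stored_at, data)

            def remember(name, data):
                _cache[name] = (time.time(), data)

            def lookup(name):
                entry = _cache.get(name)
                if entry is None:
                    return None
                stored_at, data = entry
                if time.time() - stored_at > TTL_SECONDS:
                    del _cache[name]   # expired; stop serving it
                    return None
                return data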

  • you should only have to care about what you want, not where it's stored.

    Isn't that what Google is for?

  • by Animats ( 122034 ) on Tuesday August 07, 2012 @02:05PM (#40907747) Homepage

    This has been proposed before. It's already obsolete.

    The Uniform Resource Name [wikipedia.org] idea was supposed to do this. So was the "Semantic Web". In practice, there are many edge caching systems already, Akamai being the biggest provider. Most networking congestion problems today are at the edges, where they should be, not at the core. Bulk bandwidth is cheap.

    The concept is obsolete because so much content is now "personalized". You can't cache a Facebook page or a Google search result. Every serve of the same URL produces different output. Video can be cached or multicast only if the source of the video doesn't object. Many video content sources would consider it a copyright violation. Especially if it breaks ad personalization.

    As for running out of bandwidth, we're well on our way to enough capacity to stream HDTV to everybody on the planet simultaneously. Beyond that, it's hard to make good use of more bandwidth. Wireless spectrum space is a problem, but caching won't help there.

    The sheer amount of infrastructure that's been deployed merely so that people can watch TV over the Internet is awe-inspiring. Arguably it could have been done more efficiently, but if it had been, it would have been worse. Various schemes were proposed by the cable TV industry over the last two decades, most of which were ways to do pay-per-view at lower cost to the cable company. With those schemes, the only content you could watch was sold by the cable company. We're lucky to have escaped that fate.

    • by lgw ( 121541 )

      The concept is obsolete because so much content is now "personalized". You can't cache a Facebook page or a Google search result. Every serve of the same URL produces different output. Video can be cached or multicast only if the source of the video doesn't object. Many video content sources would consider it a copyright violation. Especially if it breaks ad personalization.

      All of those examples are aggregates of data that could be cached (and often are, in practice, just farther upstream than might be ideal in some cases).

    • by lannocc ( 568669 )
      If properly designed, something like a Facebook page actually is cacheable. Once an entry is made, the entry itself remains static unless there is an edit. The page is simply a feed of resources that may all be cached individually. Imagine it's an XML document with many xlinks to other resources, optionally also embedded in the original request. This is how I would do it.
      • A Facebook page does not need to be cached. It needs to be sent to 10 people. It should be stored on your own personal machine with secure access handed to your "friends". Of course this requires actual peer-to-peer networking, which doesn't really exist - just try to get a fixed URL from your ISP, and then try to find a common app that uses it. I'd like to see an IPv6 subnet where the addresses correspond to GPS location - that's just plain easy to route, and it helps with identification.
  • And it has the same issues. 15 years ago everyone said that we'd move past using files to store stuff and just go for the stuff we want. Microsoft had WinFS, for example (part of Longhorn).

    But then the question comes where do you actually store the stuff?

    The real change came not by eliminating using files to store stuff, but by changing how we retrieve stuff.

    And this is the same way. Changing how you locate stuff on the internet is not going to remove the need for TCP/IP. You're still going to have to contact

  • sounds a lot like what A List Apart has been calling "orbital content" since at least April '11: http://www.alistapart.com/articles/orbital-content/ [alistapart.com]

    Our transformed relationship with content is one in which individual users are the gravitational center and content floats in orbit around them. This “orbital content,” built up by the user, has the following two characteristics:

    Liberated: The content was either created by you or has been distilled and associated with you so it is both pure and personal.
    Open: You collected it so you control it. There are no middlemen apps in the way. When an application wants to offer you some cool service, it now requests access to the API of you instead of the various APIs of your entourage. This is what makes it so useful. It can be shared with countless apps and flow seamlessly between contexts.

    The result is a user-controlled collection of content that is free (as in speech), distilled, open, personal, and—most importantly—useful. You do the work to assemble a collection of content from disparate sources, and apps do the work to make those collections useful. These orbital collections will push users to be more self-reliant and applications to be more innovative.

  • In fact it sounds identical to what CORBA promised. In fact, CORBA will take the world by storm! It will... um...

    *headscratch* Hmm....

  • by anom ( 809433 )

    This stuff has been around for a while, and I have the following problems with it:

    1. We already pretty much have CCN. They're called URLs, and companies like Akamai and others do a great job of dynamically pointing you to whatever server you should be talking to using DNS, HTTP redirects, etc. When I type www.slashdot.org, I already don't care what server it lives on. When I type https://www.slashdot.org/ [slashdot.org] I still don't care what server it is on, and I have at least some indication that the content is fro

  • "you should only have to care about what you want, not where it's stored."

    where it's stored is very important. We forget, in the age of fiber networks, that the internet DOES have topology. There was an age when traceroute was a very necessary tool for setting up IRC networks, to determine how you linked servers to your hubs and how you formed your backbone of linked hubs.

    Given that computers on the internet are owned and operated by a variety of different interests, many of which view each other with suspicion, i
