Content-Centric Networking & the Next Internet 153
waderoush writes "PARC research fellow Van Jacobson argues that the Internet was never designed to carry exabytes of video, voice, and image data to consumers' homes and mobile devices, and that it will never be possible to increase bandwidth fast enough to keep up with demand. In fact, he thinks that the Internet has outgrown its original underpinnings as a network built on physical addresses, and that it's time to put aside TCP/IP and start over with a completely novel approach to naming, storing, and moving data. The fundamental idea behind Jacobson's alternative proposal — Content Centric Networking — is that to retrieve a piece of data, you should only have to care about what you want, not where it's stored. If implemented, the idea might undermine many current business models in the software and digital content industries — while at the same time creating new ones. In other words, it's exactly the kind of revolutionary idea that has remade Silicon Valley at least four times since the 1960s."
Magnet links? (Score:5, Insightful)
Did he just reinvent magnet links?
Re:Magnet links? (Score:5, Informative)
Did he just reinvent magnet links?
Closer to a reinvention of freenet.
Or maybe reinventing mdns
Or maybe reinventing AFS
It's been a pretty popular idea for a couple of decades now.
Re: (Score:2)
Quite. Regardless of whatever high-level information hiding you do to keep the end user from knowing where their stuff is coming from, sooner or later your network is going to have to figure out how to get stuff from Point A to Point B.
It's like how our phone network isn't addressable by person yet. You still need a phone number for a person or a company for the same reason you can't get away from IP addresses on the Internet.
No matter how much you try to get away from either, they will still be
Re: (Score:2)
Re: (Score:3)
It looks that way, and of course, it raises the obvious question "What transport layers do you propose to move this data around with?"
Re: (Score:3)
Re:Magnet links? (Score:5, Insightful)
Is your actual premise here that Van Jacobson, a major contributor to TCP/IP and inventor of the modern flow control it is based on, somehow doesn't have the foggiest idea how the infrastructure HE HELPED FUCKING INVENT works?
Re:Magnet links? (Score:4, Informative)
I think the whole thing falls under "I have a great idea, but I actually don't have the foggiest idea how infrastructure works now; but hey, I need a BIG SEXY CONTROVERSIAL headline."
Imagine if even a tenth of the fucking morons out there who pontificate on subjects for which they had no real knowledge at all actually did have that knowledge. My God, we'd probably be terraforming Pluto by now!
The irony is strong in this one.
Anyone pontificating about internet infrastructure who doesn't know Van Jacobson [wikipedia.org] is a fucking moron.
Re:Magnet links? (Score:4, Informative)
" (In Jacobson’s scheme, file names can include encrypted sections that bar users without the proper keys from retrieving them, meaning that security and rights management are built into the address system from the start.)"
It sounds like he made them worse; but otherwise pretty similar to magnet links or the mechanisms something like Freenet uses.
Perhaps more broadly, isn't a substantial subset of the virtues of this scheme already implemented (albeit by an assortment of nasty hacks, not by anything terribly elegant) through caches on the client side and various CDN schemes on the server side? In the majority of cases, URLs have long corresponded not to locations but to either user expressions of a given wish or auto-generated requests for specific content (and, on the client side, caching doesn't extend to the entire system, for security reasons if nothing else, but it already covers a lot of common web-resource request scenarios).
Now, in a perfect world, "we have a pile of nasty hacks for that" is an argument for a more elegant solution; but, in practice, it seems to be closer to equivalent to "we already have stuff that mostly works and will be cheaper next year", which can be hard on the adoption of new techniques...
Re: (Score:2)
In Jacobson’s scheme, file names can include encrypted sections that bar users without the proper keys from retrieving them, meaning that security and rights management are built into the address system from the start.
So, no you can't, and yes they will.
Re: (Score:2)
Security has multiple meanings. This kind of security says that someone without the proper authority can't get access to it. The kind I'm most interested in says that I *can* get access to *my* stuff. This doesn't seem to address that at all.
P.S.: My network occasionally goes down. If I don't know where my stuff is, how do I access it then?
The current internet handles this by saying "Your stuff on your computer will stay on your computer" (Don't tell me about that recent Wired reporter getting hacked. F
Re: (Score:2, Informative)
more like CDN servers, except smarter.
There are already mechanisms for this.
What needs to exist is a hybrid approach where the end users are the origin servers, and the CDN nodes operate as capacity supernodes on their local ISP; in turn, these ISP supernodes talk to each other. If a piece of content needs to "disappear", the end user removes it from their system, and it will tell the supernodes that the content is no longer available, leaving only users who already have it to talk to each other if they still
Re: (Score:2)
Re: (Score:3)
This guy belongs in Star Trek, and I don't say that in a derogatory way.
It's worse than magnet links, because he is proposing that the entire Internet (or most of it) work just like that.
The problem is not the technology, it is the societies trying to implement it. Magnet links sound great in theory, but are progressively (extremely) dangerous in practice. You would have to be crazy to use public peer-to-peer networks at this point, with Big Content doing its best to shove Freedom's face into the ground
Re: (Score:2)
We could implement his ideas, but the only safe way to do so would be to create an inherently anonymous infrastructure. Not a trivial task.
So...like Freenet then? As someone else has already mentioned, this does (at least to me) sound a LOT like the way Freenet addresses files.
Re: (Score:3)
It seems like Slashdot has a "let's reinvent Freenet" story every week now. Freenet may have issues, but it solves a great many current problems. What it lacks is the network effect - there's not really any content there today, so no one uses it (and vice versa).
Re: (Score:2)
I contributed a bit of code to the 0.5 network several years ago...I've been meaning to go back and see if that's still alive now that I have a stable, good internet connection. Just graduated college; didn't really have a connection I could run it on the whole time there. But last I checked 0.5 (FCON) was still populated, and I still can't quite trust the new network. Last I checked there was still better content on the old one anyway! Though certainly not much of it...nothing like it used to be...
I think
Re: (Score:2)
The thing with tyranny is, if you don't stand up to it, it just gets worse. Take a single sadist, and extrapolate from there... Appeasement. Does. Not. Work.
Also, people with power are lame and stupid. Seriously. Those who didn't get attracted to it by being dumb to begin with, get turned dumb. And like that spider you killed with a shoe, they are more scared of y
Re: (Score:2)
You misunderstand me.
Their attacks on our Freedom will succeed as long as we let them, and sadly, it looks like we are going to let them. My point about this technology is that it will not be embraced by corporations and ISPs because it is wholly incompatible with their own business goals.
The whole idea of Freenet and Darknets in general is a wonderful idea. Make no mistake however that it will not be popular as far as governments and corporations are concerned, and it will not have anything close to carri
Re: (Score:2)
If all of the world had instantly attacked Hitler before he could even re-arm, instead of time and time again looking the other way, that wouldn't even have been necessary. But I wasn't talking about "the policy of appeasement towards the Nazis" anyway; I meant appeasement of powermongers and sadists, period. It just happens to hold true for the Nazis, too. Segregation, Suffragettes, you name it -- you don't get shit by just asking real nice, much less by sitting still and merely hoping.
Re: (Score:3)
No. Next question?
a) it predates magnet (magnet only dates from 2002; CCN goes back to the late '90s)
b) magnet is a naming/addressing scheme; this is a routing technology. There is a difference, although one can be used with the other.
Re: (Score:2)
..magnet links
Actually it sounds to me like just another way of saying "everything is going to the Cloud", which I happen to think is the worst idea ever.
Re:Magnet links? (Score:5, Insightful)
The genie is out of the bottle, even with today's internet setup....I'd not count on the govt types allowing the next one to have a genie....by force of law.
Re: (Score:2)
Re: (Score:2)
But, what if said govt...decided that
PARC (Score:3)
I have a feeling that the current crop of PARC researchers are not as bright as their peers of 20 or 30 years ago
They do not give us any new insight into what's beyond the horizon, nor demonstrate what their visions are leading to, as their peers of 20, 30, or 40 years ago did
Re: (Score:3)
# cd /
# git init
# git add .
# git commit -m "Git filesystem is a go"
Re: (Score:2)
You might be interested in git-annex ( http://git-annex.branchable.com/ [branchable.com] ).
After you set up some repositories, you can say "git annex get my_home_movie.avi", and it will figure out which of your repositories has the file and copy it over for you. (It also checks that a file exists in another location before it lets you delete it.)
As far as I can tell, the use-case it was designed for was one user with multiple places to put files. It might be pretty cool to extend it to work well with many users. (Although at thi
Isn't the internet already meeting demand? (Score:4, Insightful)
Why does he say "it will never be possible to increase bandwidth fast enough to keep up with demand"?
When I want to watch streaming video, I fire up Netflix and watch streaming video. When I want to download a large media file, I find it on bittorrent and download it. The only time I've noticed any internet slowdowns, it's been in my ISP's network, and it's just a transient problem that eventually goes away.
Sure, Netflix has to do some extra work to create a content delivery network to deliver the content near to where I am, but it sounds like the internet is largely keeping up with demand.
Aside from the IPv4->IPv6 transition (we've been a year away from running out of IP addresses for years), is there some impending bandwidth crunch that will kill the internet?
Re: (Score:3)
He seems to be assuming that demand will continue to grow at its current and historical rate. I'd say that isn't a very good assumption: the jump from a text-based web to a video/flash/image one was significant, but the demands of each individual user aren't likely to increase much beyond that. Adding more people will increase demand somewhat, but not by an order of magnitude the way YouTube, Netflix et al. did, and since people are already watching those just fine, it is hardly an insurmountable issue
Re: (Score:2)
but considering that most people can't tell the difference between 720p and 1080p, I doubt that will ever happen.
Uh, what?
I can see the difference. I can even see the difference between 1080p and 1440p, or 1440p and 2160p. And it's not a slight difference that I could understand people missing. In informal tests, comparing my laptop playing 1080p video to my parent's 720p "HDTV", 100% of those surveyed responded "holy crap that looks better" (margin of error for 95% confidence interval: 9.38%).
Re: (Score:3)
I think it's not that most people can't tell the difference between 720p and 1080p, but that they just don't care.
Re: (Score:2)
(I'm going to make the next part up, but it makes sense) 75% of people use their television to waste time. 20% use it for background noise while they do something else (my group). I'm betting
Re: (Score:2)
The test pattern image is the most watched program in Italy - though it does play music.
Re: (Score:2)
but considering that most people can't tell the difference between 720p and 1080p, I doubt that will ever happen.
Uh, what?
I can see the difference. I can even see the difference between 1080p and 1440p, or 1440p and 2160p. And it's not a slight difference that I could understand people missing. In informal tests, comparing my laptop playing 1080p video to my parent's 720p "HDTV", 100% of those surveyed responded "holy crap that looks better" (margin of error for 95% confidence interval: 9.38%).
You're not "most people" - "most people" haven't even seen 1440p.
And how do you make any sort of fair comparison between a 17" laptop screen and a 32" (or larger?) HDTV? There's no way to fairly compare the two because of the screen size difference.
At normal viewing distances, most people can't see the difference between 720p and 1080p -- you'd need to be within 5 feet of your 40" TV to see the difference. Sure, maybe you have a home theater with a 60" TV and seats 6 feet away, but most people have a TV in
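The 5-foot figure is consistent with the usual 1-arcminute visual-acuity rule of thumb. A quick sanity check, assuming a 16:9 40-inch panel and that the eye resolves roughly 1 arcminute:

```python
import math

# At what distance does one 1080p pixel row on a 40" 16:9 screen
# subtend 1 arcminute (the conventional acuity limit)?
diagonal_in = 40.0
aspect_w, aspect_h = 16, 9
lines = 1080  # vertical resolution

# Screen height from the diagonal and aspect ratio.
height_in = diagonal_in * aspect_h / math.hypot(aspect_w, aspect_h)
pixel_in = height_in / lines  # height of one pixel row

# Distance at which one pixel subtends 1 arcminute.
one_arcmin = math.radians(1 / 60)
distance_ft = pixel_in / math.tan(one_arcmin) / 12

print(f"{distance_ft:.1f} ft")  # roughly 5 ft
```

Closer than about 5 feet the extra pixels are visible; farther away, 720p and 1080p blur together.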
Re: (Score:2)
The comparison was actually with the laptop being next to the TV, so that's about as valid as I can get it. I've even found they can see a difference between a 1280x720 TV and my 1600x900 monitor, and that's much less a difference in physical size AND in resolution.
Higher resolution does matter. Maybe there is a limit (I haven't seen 4K video on a home-size screen yet), but we're far from reaching it.
And you also have to think about changes in consumption. More and more people aren't lounging on the couch a
Re: (Score:2)
The comparison was actually with the laptop being next to the TV, so that's about as valid as I can get it. I've even found they can see a difference between a 1280x720 TV and my 1600x900 monitor, and that's much less a difference in physical size AND in resolution.
That's about as invalid as you can get. You're comparing a 100+ dpi laptop screen with a 40 or 50 dpi TV screen. Of course people are going to like the sharper screen of the laptop better.
Re:Isn't the internet already meeting demand? (Score:4, Interesting)
the demands of each individual user aren't likely to increase much beyond that.
I think your thinking is way too constrained. If the bandwidth was available, then people could have immersive 3D working environments, and tele-commuting could be far more common. This would result in much less traffic on the roads and a huge reduction in CO2 emissions and oil imports. This is not science fiction. I have used Cisco's "Virtual Meeting Room" and it is pretty good.
You also need to think about things like "Siri", that send audio back to the server for processing, because there isn't enough horsepower in a cellphone. I could see "smart glasses" of the future sending video back to a server. That will require huge bandwidth.
If the bandwidth is available and affordable, the applications will come.
Re: (Score:2)
the demands of each individual user aren't likely to increase much beyond that.
I think your thinking is way too constrained. If the bandwidth was available, then people could have immersive 3D working environments, and tele-commuting could be far more common. This would result in much less traffic on the roads and a huge reduction in CO2 emissions and oil imports. This is not science fiction. I have used Cisco's "Virtual Meeting Room" and it is pretty good.
You also need to think about things like "Siri", that send audio back to the server for processing, because there isn't enough horsepower in a cellphone. I could see "smart glasses" of the future sending video back to a server. That will require huge bandwidth.
If the bandwidth is available and affordable, the applications will come.
I work in a large multi-building "campus" (well, more of an office park, we have offices in several buildings). It's a 15 - 20 minute walk from one building to the farthest one (depending on who is doing the walking)
We have practically unlimited bandwidth between buildings (and at least a gigabit to remote offices) yet we still make people trudge between buildings for meetings, and teleconferences with remote sites are 720p (or Skype). So bandwidth isn't constraining us from immersive teleconferencing - we'
Re: (Score:2)
But everyone is still recommended to get up every hour and walk around for 10 minutes to allow the circulation and exercise to get rid of all the toxins that have built up.
Re: (Score:2)
I didn't RTFA yet, but Netflix has to have an immense infrastructure to serve its customers. Just imagine 2 Mbit/s per stream, then hundreds of thousands of streams: that's almost 1 Tbit/s. Multicast will not save us, because people are watching different things at different times (multicast would give us something like cable TV with DVRs). Some problems with the current situation: 1) If their customer base increases, the bandwidth requirement will be ridiculous, and will cause distortion to the structure of th
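That terabit figure checks out, taking "hundreds of thousands" as 500,000 concurrent streams (an assumed round number):

```python
# Rough check: hundreds of thousands of concurrent 2 Mbit/s streams
# approach a terabit per second of aggregate bandwidth.
streams = 500_000          # assumed reading of "hundreds of thousands"
per_stream_mbps = 2        # Mbit/s per stream
total_tbps = streams * per_stream_mbps / 1_000_000
print(f"{total_tbps} Tbit/s")  # 1.0 Tbit/s
```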
Re: (Score:2)
(we've been a year away from running out of IP addresses for years)
IANA already allocated the last /8 IP blocks to the RIRs (Regional Internet Registries) in January 2011.
Once the RIRs run out, they will not be able to get any more.
APNIC (the Asia-Pacific RIR) allocated its last block four months later, in April 2011.
ARIN is the North American RIR and still has smaller blocks left, but once they are gone it will have to make do with whatever blocks it can get back.
So we are not a year away; we are a year past running out at the highest levels.
Boring (Score:5, Insightful)
it will never be possible to increase bandwidth fast enough to keep up with demand.
I've been hearing that since I got on the net in '91. Tell me a new lie.
It's an end-times message. "Repent, for the end is near." Yet, stubbornly, the sun always rises tomorrow.
Re:Boring (Score:4, Interesting)
Two words: Dark fiber [wikipedia.org]. Laying absurdly high-capacity trunk line is no more expensive than burying an old copper-wire bundle.
Re: (Score:2)
And fiber lasts a hell of a lot longer than copper. I don't know of any ILEC *not* replacing their copper with fiber when the copper gets to EOL.
Re: (Score:2)
Or when some metal thieves can't find enough scrap metal above ground.
Hippies love color changing things w/ LEDs -- There's certainly a market for Fiber thieves.
Sounds like the principle behind URNs (Score:5, Informative)
See http://en.wikipedia.org/wiki/Uniform_resource_name [wikipedia.org] . This is a very old [and good] idea.
For example: urn:isbn:0451450523 is the URN for The Last Unicorn (1968 book), identified by its [ISBN] book number.
Of course [as the dept. notes] you still need to figure out how to get the bits from place to place, which requires a network of some kind, and protocols built on that network which are not so slavishly tied to one model of data organization that we can't evolve it forward.
Re: (Score:2)
Just create the URN based on a (secure) hash. No central authority required.
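A minimal sketch of that idea (the `urn:sha256` namespace here is hypothetical, not a registered URN namespace): the name is derived from the content itself, so anyone can mint or verify it by rehashing the bytes, with no registry involved.

```python
import hashlib

def content_name(data: bytes) -> str:
    """Derive a self-certifying, URN-style name from the content
    itself: no central authority is needed to mint or check it."""
    return "urn:sha256:" + hashlib.sha256(data).hexdigest()

blob = b"The Last Unicorn"
name = content_name(blob)

# Verification on retrieval: recompute the hash and compare.
assert content_name(blob) == name          # authentic copy checks out
assert content_name(b"tampered") != name   # altered data gets a new name
print(name)
```

The flip side is that the name changes whenever the content does, which is exactly why such names suit immutable blobs better than mutable documents.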
Re: (Score:2)
Whatever URN lookup happens, it will probably resolve to URLs anyway. urn:isbn:0451450523 resolves to http://www.loc.gov/isbn/0451450523, then a list of alternates like http://www.amazon.com/isbn/0451450523
look at the source (Score:2)
If this had come out of almost anyone else's mouth, I'd be the first to say they were full of it.
But... Van Jacobson [wikipedia.org]!
Re: (Score:3)
Ideas are easy (Score:3)
Any idiot can have a pile of ideas. The implementation is what matters.
Too bad the idea pays 95%, the implementation 5%
Re: (Score:2)
Any idiot can have a pile of ideas. The implementation is what matters.
Too bad the idea pays 95%, the implementation 5%
That's a common misconception. It's the person with the superior legal standing that gets paid 99%; IP only grants superior legal standing if you've also got the lawyers to back it up.
Re: (Score:3)
Any idiot can have a pile of ideas. The implementation is what matters.
I like this quote, but personally would not attempt to use it when talking about Van Jacobson [wikipedia.org]
Re: (Score:2)
Any idiot can have a pile of ideas. The implementation is what matters.
Too bad the idea pays 95%, the implementation 5%
I run into "Ideas Men" in the indie game dev scene all the time... Most never make a game unless they learn actual coding, art, music -- Some actual skill other than thinking up WiBCIs ("wouldn't it be cool if ___"s). In my experience, it's the implementation that pays, ideas are worth less than a dime a dozen.
Dynamic caching? (Score:5, Interesting)
So back in the day, we had a thing called the mbone [wikipedia.org], which was multicast infrastructure which was supposed to help with streaming live content from a single sender to many receivers. It was a bit ahead of its time, I think, streaming video just wasn't that common in the 1990s, and it also really only worked for actually-simultaneous streams, which, when streaming video did become common, wasn't what people were watching.
The contemporary solution is for big content providers to co-locate caches in telco data centers, so while you still send multiple separate streams of unsynchronized, high-demand streaming content, you send them a relatively short distance over relatively fat pipes, except for the last mile, which however only has to carry one copy. For low-demand streaming content, you don't need to cache, it's only a few copies, and the regular internet mostly works. It can fall over when a previously low-demand stream suddenly becomes high-demand, like Sunday night when NASA TV started to get slow, but it mostly works.
TFA (I know, I know...) doesn't address moving data around, but it seems like this is something that a new scheme could offer -- if the co-located caches were populated based purely on demand, rather than on demand plus ownership, then all content would be on the same footing, and it could lead to a better web experience for info consumers. That's a neat idea, but I think we already know how both the telcos and commercial streaming content owners feel about demand-based dynamic copy creation...
From TFA, explaining *how* this would work (Score:2)
Re: (Score:2)
How is this different from BitTorrent? Isn't this the same principle, in a more router-oriented way?
SQ (Score:2)
(The point of SQL is that you say what you want, not where to find it; hence the concept of "NoSQL" is just silly)
Re: (Score:2)
So you don't use the FROM clause?
No, he LIKEs using *.
SQL wins and losses (Score:2)
FROM specifies which tables (or views), not which server, or network, or storage device.
That in itself isn't the point of SQL; rather, it's that SQL is non-procedural, meaning you don't specify how to get the data, you only describe the data you want (in terms of how it relates to other data). If your data doesn't have that sort of structure, the "NoSQL" strategy is fine (and can be done in SQL anyway).
SQL's main problems are the inconsistent and sometimes misleading syntax, and the complexity of the where clauses. There ar
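The non-procedural point is easy to see in a throwaway sqlite3 session (the table and data here are made up): the query names tables and conditions, never servers, files, or access paths; the engine decides how to fetch the rows.

```python
import sqlite3

# Tiny in-memory demo: the query states *what* rows are wanted;
# the engine decides *how* to fetch them (scan, index, etc.).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weather (city TEXT, temp REAL)")
con.executemany("INSERT INTO weather VALUES (?, ?)",
                [("london", 21.0), ("oslo", 12.0), ("madrid", 31.0)])

rows = con.execute(
    "SELECT city FROM weather WHERE temp > ?", (15,)
).fetchall()
print(sorted(r[0] for r in rows))  # ['london', 'madrid']
```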
Never gonna happen, because... (Score:2)
there's already too much TCP/IP infrastructure bought, paid for and in use.
you should only have to care about what you want," (Score:3)
"not where it's stored."
So we should make the Internet into Plan 9?
Re: (Score:2)
"not where it's stored."
So we should make the Internet into Plan 9?
Your stupid minds. Stupid! Stupid!
You're working for the clampdown (Score:2)
I'm pretty sure where the tollbooths will be: embedded in your local ISP. They will be put there by the music and movie industries so that when, in this new future, you request a tune or a clip by name rather than by IP address, you can be either billed or den
Too many costs involved (Score:2)
There is not only a cost of deploying the new tech, but also the cost of change. That cost of change is REALLY high, as the current methods are deeply seated. IPv6 isn't "there" yet... and the experience has been dizzying for many. Now there's another new approach? It may be better, but people don't want the change. Something catastrophic will have to cause such change, and even then, people will gravitate to the solution with the least amount of change possible.
Re: (Score:3)
There is not only a cost of deploying the new tech, but also the cost of change. That cost of change is REALLY high, as the current methods are deeply seated. IPv6 isn't "there" yet... and the experience has been dizzying for many. Now there's another new approach? It may be better, but people don't want the change. Something catastrophic will have to cause such change and...
Yeah, like Y2K. Oh, wait....
I know! Let's get Apple to build it. Apple people will pay obscene sums for shiny new stuff with Apple logos on it.
This isn't a new idea (Score:2)
But it's good that someone who was involved in the early Internet realizes that it's a good one.
And no, it doesn't mean throwing TCP/IP away.
But really, Slashdotting should be impossible. To me, the fact that it is possible indicates a fundamental problem with the current structure of the Internet. If you come up with something other than content-addressing, it has to solve the Slashdotting problem for everybody (even someone serving up content from a dialup), or it doesn't really solve the problem.
Sounds like.. (Score:2)
Something like Freenet maybe? (Score:2)
But I *DO* care where my content comes from! (Score:5, Insightful)
And the list goes on....
Re:But I *DO* care where my content comes from! (Score:5, Insightful)
Who exactly is asking me to transfer money out of my account?
Did this patch that I downloaded come from a reputable server? Or will it subvert my system?
Is this news story from a reputable source?
None of these depend on the location of the data, only the identity of the author. If you can verify the integrity of the data, where you get it is irrelevant.
Re: (Score:3)
Except that the location of the data is the primary way of verifying the identity of the author. How am I supposed to know that the game patch I have just downloaded came from CompanyX, rather than from some malware spammer? I go to www.companyx.com and get the patch from there. Sure, there's DNS spoofing, MITM attacks, etc., but in general going to the authorized location is a pretty reliable method of identity verification. With this content-centric network, there is no way to reliably get the keys to ver
Re: (Score:2)
Except that the location of the data is the primary way of verifying the identity of the author.
Only for historical reasons, not technical, and it's always been a bad way of identifying the author.
How am I supposed to know that the game patch I have just downloaded came from CompanyX, rather than from some malware spammer?
Cryptographic signatures.
Re: (Score:2)
Web of trust? Printing the public key in the newspaper? This was solved decades ago. Public keys are to be made as public as possible and kept by as many people as possible so as to be more verifiable.
Re: (Score:2)
How do you today validate the identity of any host? Certificates. How would you validate the authenticity of any content retrieved by a hash? The same certificates (used to digitally sign the data). Moving to signed data would make phishing attacks far more difficult (though the certificate system itself has real problems, those problems exist today).
Re: (Score:2)
> How do you today validate the identity of any host? Certificates.
Only geeks read certificates, the reason being that verifying the certificate takes extra work. When I go to www.microsoft.com, I can be pretty sure that what I'm getting there is coming from Microsoft. If you only have a certificate to go on, you have to verify that the certificate was issued by a valid CA and that the name of the company matches. Can you be sure that there are no spaces at the end of that name? Are you sure the unicode enc
Re: (Score:2)
I'm not sure you're getting how this would work. Everything important happens under the covers. When you go to www.microsoft.com, the only reason you can expect to get Microsoft and not a phishing site is that the certificate is auto-checked by your browser.
It's not like DNS (or some equivalent) would go away, but that the content on a site could now be served from anywhere, P2P. Your browser asks for a list of hashes, gets the corresponding blocks back, and displays the result. But again, automatically under
Re: (Score:2)
The reason I expect Microsoft is that I know that it owns the domain and that unless my computer has already been hacked, the DNS record will accurately get me to that site. Most of the time this works.
The certificate, on the other hand, does not give me any such guarantee. Yes, the browser can verify the signature, which basically means that the certific
Re: (Score:3)
And if integrity is based on hash/signature, then it suddenly becomes relevant if computing catches up and can generate a collision. And then you have to upgrade the entire Internet at once to fix it.
Re: (Score:3)
*All* content is signed in CCNx by the publisher.
You can get a packet from your worst enemy, and it's ok. The path it took to get to you doesn't matter. If you need privacy, you encrypt the packets at the time of signing.
So, what we need .... (Score:2)
The Pirate Bay/BitTorrent.
Nope ... but close (Score:3)
Magnet links only use the hash, so there's a possibility of hash collisions. He's proposing an identifier + resolver scheme ... which again, has been done many, many times already.
Eg, ARK [wikipedia.org] or OpenURL [wikipedia.org]
Or, we get to the larger architecture of storing & moving these files, such as the various Data Grid [wikipedia.org] implementations. (which may also allow you to run reduction before transfer, depending on the exact infrastructure used).
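The identifier + resolver split is easy to see by pulling a magnet link apart: the link is a bare identifier, and a resolver adds the missing lookup step from identifier to locations (the registry below is a made-up stand-in for a real resolver service, and the hash and mirror URLs are invented):

```python
from urllib.parse import urlparse, parse_qs

# A magnet link is a bare identifier: the xt field carries a content
# hash, with no information about where the bytes live.
magnet = ("magnet:?xt=urn:btih:"
          "c12fe1c06bba254a9dc9f519b335aa7c1367a88a&dn=example")
params = parse_qs(urlparse(magnet).query)
identifier = params["xt"][0]

# An identifier + resolver scheme adds the lookup step that magnet
# links lack (this dict stands in for a resolver service).
resolver = {
    "urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a": [
        "http://mirror-a.example/blob",
        "http://mirror-b.example/blob",
    ],
}
locations = resolver.get(identifier, [])
print(identifier, len(locations))
```

A scheme like ARK or OpenURL essentially standardizes that second half: who runs the registry and how the lookup is phrased.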
CCN is not $other_technology (Score:3)
Any time someone talks about Content Centric networking or routing, there are always a bunch of people saying that it's basically the same as distributed hash tables, multicast, a cache, etc.
However, it may use such technologies, but it isn't the same.
Content Centric is all about having distributed publish/subscribe, usually on a lower network layer.
The "content" part of the name means that routing looks at the content itself, not at explicit addresses. For instance, to give a very simple example, you can send out a message [type=weather; location=london; temperature=21], and anyone subscribing to {location==london && temperature>15} will receive it.
The network is typically decentralized, and using this kind of method can give a number of interesting efficiency benefits.
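That weather example can be sketched as a tiny in-process content-based router; subscribers register predicates over message content, and no address appears anywhere (all names here are made up for illustration):

```python
# Minimal content-based publish/subscribe: delivery is decided by
# matching predicates against message *content*, not by addressing.
subscribers = []

def subscribe(predicate, callback):
    subscribers.append((predicate, callback))

def publish(message):
    for predicate, callback in subscribers:
        if predicate(message):
            callback(message)

received = []
# Subscription: {location==london && temperature>15}
subscribe(lambda m: m.get("location") == "london"
                    and m.get("temperature", 0) > 15,
          received.append)

publish({"type": "weather", "location": "london", "temperature": 21})
publish({"type": "weather", "location": "oslo", "temperature": 8})
print(len(received))  # 1 -- only the London reading matched
```

A real content-centric network does this matching in the routers rather than in one process, which is where the caching and efficiency benefits come from.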
This is currently mostly used in some business middleware, ad hoc networking stuff, and some grid solutions. None of those are particularly large.
The real problems with widespread use of this technique are the following:
* It's unnecessary: IPv6 is completely necessary, somewhat doable in terms of upgrading, and almost nobody is using it even now. This, by contrast, is someone suggesting a whole new infrastructure for large parts of the internet. The fact is, it would possibly be more efficient than many things being done now, but in reality nobody cares about it. Facebook and YouTube (OK, Google) would rather just pay for the hardware and bandwidth than give up control.
* Security is still unclear: it's easy to do some hand-waving about PKI, but it's hard to come up with a practical solution that works for many.
Re:CCN is not $other_technology (Score:4, Informative)
You need to be able to find things somehow. This requires either some set of central servers, which somewhat defeats the purpose, or a method of broadcast communication that isn't blocked by your ISP. There's a good reason your ISP blocks UDP broadcast and multicast packets - on a large network broadcast leads to exponential packet growth.
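A toy simulation of that growth: on a graph with cycles, naive flooding forwards every copy of a packet again, while duplicate suppression visits each node once. The 4-node ring topology and TTL below are made up for illustration.

```python
# Toy illustration of why naive broadcast is dangerous on a network with
# cycles: without duplicate suppression, every received copy is forwarded
# again. Topology and TTL are invented for illustration.

def flood(graph, start, ttl, dedup):
    """Flood from `start`; return the total number of link transmissions."""
    seen = set()
    frontier = [(start, ttl)]
    transmissions = 0
    while frontier:
        node, t = frontier.pop()
        if dedup:
            if node in seen:
                continue  # a real router would drop the duplicate here
            seen.add(node)
        if t == 0:
            continue
        for nbr in graph[node]:
            transmissions += 1
            frontier.append((nbr, t - 1))
    return transmissions

# A 4-node ring: every node has two neighbours, so the graph has cycles.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

print(flood(ring, 0, ttl=8, dedup=False))  # 510 transmissions
print(flood(ring, 0, ttl=8, dedup=True))   # 8 (one per link direction)
```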
For most of us the most limited part of the internet infrastructure is the link from the last router to our house. Picking up my youtube cat videos from my neighbour rather than from a cache server on my ISP's backbone may seem like a good idea, but in reality you're switching traffic from a high-capacity link between my street's router and my ISP to a low-capacity link between my neighbour and our router.
If you're going to cache things on my computer you're going to be using my hardware. That hardware isn't free, and neither are the bits you want to use my internet connection to send. How am I going to be compensated?
Re: (Score:2)
No central servers are needed, and you don't need broadcast either really (although both are used by some solutions). However, you may need or want brokers/routers at local points, and they may need bigger caches than you would currently have. That can be a problem yes.
(IP level) broadcast is not really needed, as the scheme already implements some kind
Re: (Score:2)
You only cache the things you get for yourself, and they are only stored for a finite amount of time (say a few days). Your compensation is that you got the data you wanted for yourself over a better system. See Bit Torrent. Now imagine that an embedded YouTube video just points to a torrent a
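A rough sketch of the cache described above: you keep only content you fetched for yourself, and it expires after a fixed lifetime. The names are invented, and the lifetime is in seconds here where the comment suggests days.

```python
import time

# Minimal sketch of a finite-lifetime local content cache, assuming
# invented content names. put() records content you fetched yourself;
# get() serves it (to you or a peer) only while it is still fresh.

class ExpiringCache:
    def __init__(self, lifetime_seconds):
        self.lifetime = lifetime_seconds
        self.store = {}  # name -> (data, fetch_time)

    def put(self, name, data):
        # Called when you fetch content for yourself.
        self.store[name] = (data, time.time())

    def get(self, name):
        # Called both locally and when a peer asks for the content.
        entry = self.store.get(name)
        if entry is None:
            return None
        data, fetched = entry
        if time.time() - fetched > self.lifetime:
            del self.store[name]  # expired: stop serving it
            return None
        return data

cache = ExpiringCache(lifetime_seconds=3 * 24 * 3600)  # "a few days"
cache.put("youtube/cat-video/chunk-0", b"\x00\x01")
print(cache.get("youtube/cat-video/chunk-0") is not None)  # True while fresh
```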
Old future. (Score:2)
you should only have to care about what you want, not where it's stored.
Isn't that what Google is for?
Not a new idea, or a useful one (Score:4, Interesting)
This has been proposed before. It's already obsolete.
The Uniform Resource Name [wikipedia.org] idea was supposed to do this. So was the "Semantic Web". In practice, there are many edge caching systems already, Akamai being the biggest provider. Most networking congestion problems today are at the edges, where they should be, not at the core. Bulk bandwidth is cheap.
The concept is obsolete because so much content is now "personalized". You can't cache a Facebook page or a Google search result. Every serve of the same URL produces different output. Video can be cached or multicast only if the source of the video doesn't object. Many video content sources would consider it a copyright violation. Especially if it breaks ad personalization.
As for running out of bandwidth, we're well on our way to enough capacity to stream HDTV to everybody on the planet simultaneously. Beyond that, it's hard to put more bandwidth to good use. Wireless spectrum space is a problem, but caching won't help there.
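A back-of-envelope for that claim. Both figures are ballpark assumptions, not sourced: roughly 7 billion people, and roughly 6 Mbit/s for an H.264 HDTV stream.

```python
# Back-of-envelope: aggregate demand if everyone on the planet streamed
# HDTV at once. Both inputs are rough assumptions, not sourced figures.

people = 7_000_000_000        # ~world population (assumed)
mbps_per_stream = 6           # ~H.264 HDTV bitrate in Mbit/s (assumed)

aggregate_tbps = people * mbps_per_stream / 1_000_000  # Mbit/s -> Tbit/s
print(f"{aggregate_tbps:,.0f} Tbit/s of simultaneous demand")  # 42,000 Tbit/s
```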
The sheer amount of infrastructure that's been deployed merely so that people can watch TV over the Internet is awe-inspiring. Arguably it could have been done more efficiently, but if it had been, it would have been worse. Various schemes were proposed by the cable TV industry over the last two decades, most of which were ways to do pay-per-view at lower cost to the cable company. With those schemes, the only content you could watch was sold by the cable company. We're lucky to have escaped that fate.
Re: (Score:2)
The concept is obsolete because so much content is now "personalized". You can't cache a Facebook page or a Google search result. Every serve of the same URL produces different output. Video can be cached or multicast only if the source of the video doesn't object. Many video content sources would consider it a copyright violation. Especially if it breaks ad personalization.
All of those examples are aggregates of data that could be cached (and often are, in practice, just farther upstream than might be ideal in some cases).
Re: (Score:2)
Re:Facebook (Score:2)
we see this with filesystems (Score:2)
And it has the same issues. 15 years ago everyone said that we'd move past using files to store stuff and just go straight for the stuff we want. Microsoft had WinFS, for example (part of Longhorn).
But then the question comes where do you actually store the stuff?
The real change came not from eliminating files as a way to store stuff, but from changing how we retrieve it.
And this is the same way. Changing how you locate stuff on the internet is not going to remove the need for TCP/IP. You're still going to have to contact
orbital content (Score:2)
Our transformed relationship with content is one in which individual users are the gravitational center and content floats in orbit around them. This “orbital content,” built up by the user, has the following two characteristics:
Liberated: The content was either created by you or has been distilled and associated with you so it is both pure and personal.
Open: You collected it so you control it. There are no middlemen apps in the way. When an application wants to offer you some cool service, it now requests access to the API of you instead of the various APIs of your entourage. This is what makes it so useful. It can be shared with countless apps and flow seamlessly between contexts.
The result is a user-controlled collection of content that is free (as in speech), distilled, open, personal, and—most importantly—useful. You do the work to assemble a collection of content from disparate sources, and apps do the work to make those collections useful. These orbital collections will push users to be more self-reliant and applications to be more innovative.
What an amazing concept... in fact... (Score:2)
In fact it sounds identical to what CORBA promised. In fact, CORBA will take the world by storm! It will... um...
*headscratch* Hmm....
Oh CCN (Score:2)
This stuff has been around for a while, and I have the following problems with it:
1. We already pretty much have CCN. They're called URLs, and companies like Akamai and others do a great job of dynamically pointing you to whatever server you should be talking to using DNS, HTTP redirects, etc. When I type www.slashdot.org, I already don't care what server it lives on. When I type https://www.slashdot.org/ [slashdot.org] I still don't care what server it is on, and I have at least some indication that the content is fro
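For what it's worth, one thing a URL-plus-DNS setup doesn't give you, and CCN-style naming tries to, is a name that verifies the bytes themselves rather than the server. A sketch of that idea (magnet links do essentially this with hashes; the "ccn:" prefix is invented here for illustration):

```python
import hashlib

# Sketch of a self-certifying, location-independent name: the name itself
# lets the fetcher verify the bytes, no matter which host served them.
# The "ccn:sha256:" scheme is made up for this example.

def name_for(data: bytes) -> str:
    return "ccn:sha256:" + hashlib.sha256(data).hexdigest()

def verify(name: str, data: bytes) -> bool:
    # Any cache or peer can serve the bytes; the receiver checks them
    # against the name before trusting them.
    return name == name_for(data)

chunk = b"some video chunk"
name = name_for(chunk)
print(verify(name, chunk))        # True: bytes match the name
print(verify(name, b"tampered"))  # False: reject, regardless of the server
```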
Novel concept but no... (Score:2)
where it's stored is very important. We forget, in the age of fiber networks, that the internet DOES have topology. There was an age where traceroute was a very necessary tool for setting up IRC networks, to determine how you linked servers to your hubs and how you formed your backbone of linked hubs.
Given that computers on the internet are owned and operated by a variety of different interests, many of which view each other with suspicion, i
Re: (Score:3)
But, of course, it's all going to be running on top of TCP/IP. This isn't a replacement, it's just another widget you run on the tubes.
Re: (Score:2)
Agreed, that's the only realistic approach. Build support for URNs into browsers, get the caching infrastructure in place so that URN'ed data migrates seamlessly to follow demand, and finally get people to migrate from URLs to URNs.
And while we're at it, get rid of the "TLD" concept altogether: com vs. org vs. net vs. whatever. Names should be doled out to match the jurisdiction of regional naming authorities, with a special "top level". So you might have:
* /i/google (an internationally-registered name)
* /
Re: (Score:3)
Britain did that with their original domain name system. Email would have been uk.ac.somewhere.faculty.department.researchlab@student, and a web page would have something similar.
Aren't DNS hostnames just the same thing as what he is proposing? You send out a request for the name, and any one of many machines may send back the reply.
All they would have to do is add support for encrypted hostnames. Encrypt the name using a public/private key and send it to the secure port of the domain name server.