Fast TCP To Increase Speed Of File Transfers? 401
Wrighter writes "There's a new story at Yahoo about a new version of TCP called Fast TCP that might help increase the speed of file transfers. Sounds like it basically estimates the maximum efficient speed of your network, and then goes for it, dumping a lot of time-consuming error checking." There's also an article at the New Scientist with some additional information.
Has to be said... (Score:3, Funny)
Re:Has to be said... (Score:3, Funny)
HUGE performance increase is possible, just by omitting the optional EVIL bit.
Re:6000 TIMES !!! (Score:5, Interesting)
Actually, if you read the New Scientist article you can see that that's a lie. What they actually did was bundle 10 Fast TCP connections (one must assume on fast lines) together and, fairly unsurprisingly, got speeds 6,000 times faster than a broadband connection... wow... 10 high-speed lines are faster than broadband??
This would be more interesting had they actually tested it on a standard 512 kbps connection and seen if there was a speed increase. IMO it most likely would not make a huge difference anyway, since a lot of the slowdown on a consumer broadband connection is the connection buffers at your ISP. For a better explanation read the Traffic Shaping HOWTO.
Isn't this called UDP? (Score:5, Insightful)
Re:Isn't this called UDP? (Score:5, Interesting)
Re:Isn't this called UDP? (Score:5, Interesting)
Re:Isn't this called UDP? (Score:5, Informative)
And then they totally confuse the issue by mentioning that you can use multiple high-speed links in parallel to get higher overall bandwidth. Boy, am I impressed.
Re:Isn't this called UDP? (Score:3, Informative)
Re:Isn't this called UDP? (Score:5, Informative)
No, the guy at New Scientist got it right... TCP uses an AIMD (additive increase, multiplicative decrease) rate control algorithm. The rate at which you send is controlled by the window size at any given time. If you detect a loss, you decrease your window, dividing its size by 2. If packets are arriving OK, you make small increments to your window size.
This new protocol uses a different window management algorithm. It uses the ACKs as probes (I guess they measure delays), and if "the coast is clear", it maxes out its transmission speed.
I do wonder about FAST TCP's congestion control capabilities, though... As for the poster who talked about slow start, sorry pal, but slow start is just the name... In that state, the transmission rate is increased quite fast, actually.
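The AIMD sawtooth described above can be sketched in a few lines. This is a toy model of my own, not code from the article: "loss" is simulated by a fixed path capacity, and all the numbers are illustrative.

```python
# Toy AIMD model: additive increase while ACKs arrive, halve on loss.
# Loss is faked by a fixed capacity; real TCP infers it from missing ACKs.
def aimd(capacity=8, rounds=12):
    """Return the congestion window after each simulated round trip."""
    cwnd = 1.0
    history = []
    for _ in range(rounds):
        if cwnd > capacity:      # loss detected: multiplicative decrease
            cwnd /= 2
        else:                    # all ACKs arrived: additive increase
            cwnd += 1
        history.append(cwnd)
    return history

# The classic sawtooth: climb past capacity, halve, climb again.
print(aimd())
```

Running it shows the window climbing one packet per round trip, overshooting the capacity of 8, halving, and climbing again, which is exactly the oscillation people in this thread are arguing about.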
Re:Isn't this called UDP? (Score:4, Insightful)
Really, I am not sure that this is a good idea. TCP includes error checking for a reason. I see this as a way to transmit corrupted files, not a way to speed up the internet experience as a whole.
Re:Isn't this called UDP? (Score:5, Informative)
Without trying to be mean, you see it that way because you don't understand what's going on (mostly because the post was misleading). Fast TCP packets will still have a checksum and everything, so you're not going to get corrupted files. The change here is that normal TCP halves its "window size", or the amount of info that's out on the network at once without receiving an acknowledgement of receipt, with each error. This means that if there's one minor slowdown when 10 packets are currently out from your computer to the recipient (you've put out 10 packets without getting an ACK back yet), then your computer will reduce its window size and only allow 5 packets to be out at a time, effectively halving the transmission rate. Since TCP continually tries to get faster, it will always hit a bottleneck, resulting in your connection vacillating between optimal speed and half of that (approximately, I guess it might be worse than this on high-speed networks based on what I've read here).
In Fast TCP, they do this "congestion control" in a different way. Rather than halving the connection speed with every slowdown to ensure stability, they send as much data as possible as long as the network seems clear on the recipient's end (I think they estimate this with round-trip time of some sort).
So the "error checking" being changed by Fast TCP is NOT bit checking -- it's transmission rate checking. You'll still always get your files intact.
Re:Isn't this called UDP? (Score:5, Informative)
And I should clarify my first post as well by explaining what a "transmission error" is that would cause the window size to halve. From the article above: Basically, what I mean by a "transmission error" is a timeout -- the sender sends a packet and never gets an ACK for it. TCP works on the premise that packets are mainly dropped when congestion is high enough for routers to drop packets because of maxed buffers. Thus it makes sense to reduce transmission rate when no ACK is received to adjust to the capacity of the network.
But there's a reason you halve it.... (Score:3, Informative)
If you only back off a little bit, what happens is you just overrun that same buffer again, and just send o
Re:But there's a reason you halve it.... (Score:3, Insightful)
What I'd like to know is where they did this real-world test. On a connection between two universities on Internet2 with wide fiber links, I can understand that they see a considerable performance gain. However, I'd also like to see tests done through consumer-grade DSL or ca
Re:Isn't this called UDP? (Score:3, Interesting)
This explanation must be somewhat simplistic, because everybody has already done some 100 Mbps transfers on fast-Ethernet LANs (even with a couple of routers), and we did not notice that the transfer speed was oscillating between 50 and 100 Mbps.
Also,
Re:Isn't this called UDP? (Score:3, Interesting)
That's because the oscillation happens so fast that you can't see it happening (or see the next paragraph for an alternate explanation). I mean, it is not a disputed fact that TCP will frequently halve its window during a large file transfer under normal Internet cond
Re:Isn't this called UDP? (Score:5, Informative)
I don't think that having a long RTT (round-trip time) will have a huge effect on transmission rate in the standard TCP case. Internet traffic, if I remember correctly, is bursty (cite [216.239.51.100]), meaning that a typical transmission looks like:
SEND 32 PACKETS
RECEIVE 32 ACKS
SEND 36 PACKETS
RECEIVE 36 ACKS
SEND 40 PACKETS
RECEIVE 38 ACKS
(oops! Sent 2 more than I could! Halve TX window!)
SEND 20 PACKETS
RECEIVE 20 ACKS
etc. In this case it's easy to see why having a long RTT doesn't slow things down particularly, since there can be a big gap between the SEND and RECEIVE and nothing changes.
In the case of non-bursty traffic, I don't think large RTT causes a big problem for normal TCP either. This is because even with a large RTT (if it takes 400 ms to go from sender to host, for example) ACKS will still be streaming in at a constant, if slower, rate, allowing for more packets to be sent out (this is more subtle to explain, so you might want to google more for a better explanation).
I think the reason you misunderstand this is because the New Scientist article makes it sound like you send a packet, then receive an ACK, then send one, etc. This is not the case -- you send lots of packets together...this is the principle behind the "window", that you can send out more than one packet at a time without receiving an ACK because you've been successful at that so far.
FYI, I looked up MCI's traffic times and found that transatlantic latency is roughly 80ms compared to 45ms for within-US traffic (cite [mci.com]). This is non-trivial, but also not huge.
If anybody disagrees with this assessment, please feel free to correct, since as I said, I'm not 100% sure that increased RTT doesn't mean lower window size.
Also, from my reading, a lot of the gain was simply in the fact that halving a throughput rate of 800 K/sec means you're dropping to 400 K/sec when realistically you should probably only be dropping a little bit. According to NS they improved by more than two-fold, but that's probably just because normal TCP doesn't often get to the actual max of the network; it may burp a lot on the way up and dip more than halfway below its reasonable max.
Re:No corrupted files at all (Score:4, Informative)
That's wrong. You were possibly misled by a bad article. Your diagram of "Slow TCP" isn't describing TCP at all, but a more primitive protocol that can be called "Stop and Wait". Stop-and-Wait provides some of the same advantages of TCP (reliability and consistency), but obviously takes a painfully long time to transmit a file, so it's little used in real life.
The idea that you can send many data packets before getting an ACK is already part of normal TCP. It's called the "sending window", and it measures the number of packets that can be sent before waiting for an ACK. The sending window is basically an estimate of the bandwidth of the link, which TCP maintains as it runs. The estimate starts out low, and then is continually increased as the transmission goes on. As soon as ACKs fail to arrive, the assumption is that the bandwidth was exceeded, and the estimate is cut in half. It's a linear-increase, exponential-falloff procedure, which is meant to keep you from impairing the TCP performance of other users (you increase your transmission speed slowly, but decrease it quickly if there's an overload). The initial ramp-up of the window size is what's called "slow start".
Apparently, this research uses a different technique to obtain the starting estimate of the bandwidth available. Instead of sequentially increasing the window size up to gigabit levels as the connection goes, they start out with a large estimate, because they already know the link is fast.
Re:Isn't this called UDP? (Score:5, Insightful)
Re:Isn't this called UDP? (Score:5, Interesting)
#2. RTFA.
#3. They're not getting rid of error checking. It sounds like they're reworking the windows for ACKs in TCP to allow better streaming over high-speed, but realistic (i.e., slightly lossy) networks. Current TCP aggressively backs off when packet loss is detected, to prevent flooding the weak link in a network connection. It works really well for consumer network speeds, but on very high-speed networks (e.g., 45 Mbps), even very light packet loss will drop your speed dramatically. TCP just wasn't meant to scale to these kinds of speeds, and some reengineering needs to be done to make it work smoothly. Many of the current extensions to TCP have made matters a lot better, but it's still going to have trouble scaling to gigabit, high-latency networks, and it's best to start dealing with these issues early.
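The "light loss wrecks fast links" point can be ballparked with the well-known Mathis et al. rule of thumb for steady-state Reno throughput, rate ≈ MSS / (RTT · √loss). This is a rough approximation of my own choosing, not something from the article, and the numbers below are illustrative:

```python
from math import sqrt

# Mathis et al. approximation for steady-state TCP Reno throughput.
# Rough rule of thumb only; constants of order 1 are omitted.
def reno_throughput_mbps(mss_bytes=1460, rtt_s=0.1, loss_rate=1e-4):
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate)) / 1e6

# Even 0.01% loss on a 100 ms path caps Reno around ~12 Mbps --
# far below a 45 Mbps (let alone gigabit) pipe:
print(reno_throughput_mbps())
```

Plugging in a tenfold-lower loss rate raises the ceiling by only about 3x (√10), which is why loss-driven backoff scales so badly to fast, long paths.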
Re:Isn't this called UDP? (Score:2, Informative)
Tom
Re:Isn't this called UDP? (Score:4, Funny)
Gee
Re:Isn't this called UDP? (Score:3, Insightful)
That said, UDP is probably a better option for 99% of high-bandwidth traffic. Higher-level error checking could accomplish the same thing with potentially less overhead.
uh BitTorrent? (Score:4, Funny)
Re:uh BitTorrent? (Score:5, Funny)
"Hey! This article is great! Imagine how BitTorrent would help it!"
Interesting, but I might suggest a different name: (Score:2, Interesting)
Re:Interesting, but I might suggest a different na (Score:3, Funny)
or you could also call it ReverseDoS..
or Self-Slashdotting! :-)
You just need to hire the right marketing team... (Score:5, Funny)
With Microsoft's, it would be ActiveTCP.
With Intel's, it would be HyperTCP.
And so on, and so forth.
Re:You just need to hire the right marketing team. (Score:5, Funny)
nTCP = instant speed increase.
stupid non network guy question (Score:3, Insightful)
just wondering
Re:stupid non network guy question (Score:5, Informative)
If you're curious: RFC 793 [ietf.org]
Re:stupid non network guy question (Score:3, Interesting)
Re:stupid non network guy question (Score:2)
TCP responds to each packet, correct?
Re:stupid non network guy question (Score:3, Informative)
If the recipient never NACKed a packet, the sender wouldn't know whether all the packets were getting through or none of the NACKs were getting through.
Re:stupid non network guy question (Score:2)
zmodem??? (Score:5, Interesting)
So when you sent a block of 2k and got no errors, the frame size increased to 4k...8k... etc etc... Sounds like a similar approach.
Case
P.S. That was a long time ago in a FidoNet far far away, so my terms may be off.
Re:zmodem??? (Score:2)
As a result I cached stuff on my provider, and set to download overnight. Also Zmodem has a very spiffy resume feature my FTP at the time didn't support. My provider supported this "suck up our lines at night" as it left the lines open for their business cu
Re:zmodem??? (Score:5, Informative)
At the risk that you're trolling, Kermit is actually very good indeed (after 1990 or so), assuming you set your options correctly.
The defaults are slow, but they work; that's Kermit's raison d'etre and why it's still around. But Kermit was probably also the first protocol to implement sliding windows and configurable blocksizes; Zmodem probably got that idea from Kermit. Set your options correctly, and Kermit's damn good.
The age of the BBS is over (I ran one for about 12 years) but I'm pretty sure I'll use Kermit again before I have cause to use Zmodem again.
Don't be so sure. (Score:4, Informative)
It's useful when the ssh client has it built in because you get pretty much the same speed and the ability to download between clients.
By the way, I know about two zmodem-enabled ssh clients:
1) SecureCRT [vandyke.com]- nonfree/Windows only.
2) Zssh [sourceforge.net] - open-source, cross-platform.
The actual applications which initiate the transfer are called "rz" and "sz."
Re:Don't be so sure. (Score:3, Interesting)
Any good reason not to just use SCP? I know you can transfer files in the same SSH window (using zmodem), but it wouldn't take too much work to modify the SSH client to start a file copy over the current connection using SCP...
So what's the advantage here?
Thanks! (Score:2)
ZModem was sweet indeed when it came out. I went from 2400 baud to 14,400 baud using Zmodem and became the cool kid on a very geeky block.
Re:Bah, in my days (Score:2)
Re:zmodem??? (Score:5, Interesting)
There were variants that did 8k blocks (and often referred to themselves as Zmodem8k), but none of these were true zmodem protocol.
Still, nothing can be quite as fast as ymodem-g
A little more on topic: what they are describing does not dynamically scale the packet size; it only dynamically adjusts the transmission speed up to the point that ACKs start slowing down, but (hopefully) before any packets actually get dropped. I suspect Disney and such will be quite disappointed if they think they are going to get a 6000x speedup in practical use, as hinted at in the articles. Perhaps a 10% speedup for Joe Blow on a dialup modem, _maybe_. Take a look at your connection some time when downloading a file; you will probably find you can already peg your bandwidth quite nicely.
Re:zmodem??? (Score:5, Interesting)
X-modem transmitted files as 256 byte blocks of data along with an 8 bit checksum (IIRC.) The receiver would respond with an ACK (Acknowledgement) or a NAK (Negative Acknowledgement) after each block. If it was a NAK the sender would re-send the block. If it was an ACK it would send the next block.
Y-modem increased the block size to 1k which was helpful since the turnaround time between packet and acknowledgement was wasting a lot of time. It also used a 16-bit CRC (Cyclic Redundancy Check) instead of an 8-bit checksum. Apparently the CRC was much more reliable.
Around the time that error correcting modems started becoming popular (USR Courier 9600 HST) a variation of Ymodem popped up called Ymodem-G. Ymodem-G would send 1k-blocks with CRC's non-stop without waiting for an ACK. If the receiver got a bad block it would simply abort the transfer and you'd have to start it over.
Zmodem would also send blocks and CRC's non-stop unless it got a NAK back. It would resume sending at the block that caused the NAK. The variably sized blocks were pretty cool too.
Feel free to correct any errors. It's been a long time.
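To put rough numbers on why the per-block ACK turnaround mattered, here's an illustrative back-of-the-envelope sketch. The figures (file size, line rate, turnaround time) are my own assumptions, not the poster's:

```python
# Illustrative arithmetic: stop-and-wait (Xmodem-style) pays one ACK
# turnaround per block; streaming (Ymodem-G/Zmodem-style) pays only wire time.
def transfer_seconds(file_bytes, block_bytes, line_bps, turnaround_s, streaming):
    wire = file_bytes * 10 / line_bps        # 8N1 framing: 10 bits per byte
    if streaming:
        return wire                          # ACKs overlap the data stream
    blocks = file_bytes / block_bytes
    return wire + blocks * turnaround_s      # stop-and-wait stalls per block

# 100 KB over a 2400 bps modem, assuming a 0.5 s ACK turnaround per block:
xmodem_style = transfer_seconds(102400, 128, 2400, 0.5, streaming=False)
ymodem_g_style = transfer_seconds(102400, 1024, 2400, 0.5, streaming=True)
```

With those assumed numbers, the 128-byte stop-and-wait transfer spends almost as long waiting for ACKs as it does sending data, which is exactly the dead time the streaming protocols eliminated.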
Re:zmodem??? (Score:3, Insightful)
I'm pretty sure this is a standard in TCP/IP.
Re:zmodem??? (Score:3, Informative)
Uh oh... (Score:5, Funny)
Re:Uh oh... (Score:2)
I have a feeling that when they give it to them, they will receive a check with around eight to ten 0's in it.
Cool (Score:2, Insightful)
Window size anyone? (Score:2, Interesting)
I read the article and did not understand what they add that is better than the standard TCP enhancements of selective ACK and big window sizes.
clue anyone?
Without more details, sounds like BS (Score:2, Insightful)
trollish - pls mod parent down (Score:3, Insightful)
The New Scientist [newscientist.com] makes it quite clear how Fast TCP works, if you know anything about how TCP works (and how the window size halves in the event of packet losses)
shame on a relatively low-ID user making such trollish comments...
Re:trollish - pls mod parent down (Score:5, Insightful)
The whole "driving a car while looking 10 meters ahead" analogy ignores a lot of the work TCP does to keep things moving fast. The "transmits, waits, then sends the next packet" paragraph is almost deliberately misleading.
It tosses about a "6000 times faster" statistic without explaining 6000 times faster than what. Is my dad's 28.8 modem going to suddenly be getting throughput of 172Mbps? Of course not, but what difference is it going to make to him? I think maybe none at all, and FastTCP is only for very large network hauls, but the article has claims about me downloading a movie in 5 seconds.
My DSL line is 768 kbps; I get downloads of large files through it of around 85 kBps, which is a data throughput rate of 680 kbps. That means that all the layers of the OSI burrito, including resends, checksums, and routing information, add up to about 12% overhead. Not the best, but not that bad, either. How much improvement is FastTCP going to get me?
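For what it's worth, that arithmetic checks out; here's a quick sanity check using the poster's own figures:

```python
# Sanity-checking the overhead claim: 85 kB/s of payload on a 768 kbps line.
line_kbps = 768                      # advertised DSL sync rate
goodput_kbps = 85 * 8                # 85 kB/s of payload, in kilobits/s
overhead = 1 - goodput_kbps / line_kbps
print(f"{overhead:.1%}")             # ~11.5%, close to the claimed "about 12%"
```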
In their practical test of Fast TCP, two computers got 925 Mbps, while "ordinary" TCP got just 266 Mbps. Even that's pretty unbelievable to me; I find it hard to believe that TCP was running at about 25% efficiency.
Extraordinary claims demand extraordinary evidence. Like I said before, without further technical details, this doesn't actually sound all that different than the claims of Pixelon [thestandard.com], which also had an eye towards video on demand.
Maybe they've got something; someone linked to the actual Caltech article, which I haven't had a chance to read in detail (and which wasn't linked at the time I started my post). Caltech certainly is a cool place, so there is probably something interesting going on. But the New Scientist article is a fluff piece, pure and simple, and if calling shenanigans on it makes me a troll, so be it.
New Scientist article bad. Research good. (Score:3, Informative)
But now that I've read some of the documents from the Caltech site [caltech.edu] and think I understand the claims, the research is fairly interesting, at least in the world of "ultrascale" networking. Of course, I'm just an unfrozen caveman engineer, and that world confuses and frightens me, so my understanding might be slightly off. Here goes anyway.
As I understand it, the authors are saying th
Great! (Score:5, Funny)
New Scientist didn't put it very well... (Score:5, Informative)
Looking at the information on their web page at caltech [caltech.edu], the FAST network project is working with alternate TCP window sizing schemes.
Namely, instead of reducing window size in the case of packet loss, window size is changed based on round trip latency. The problem being that reducing the window size in response to loss works well on most networks, but has a serious problem when dealing with very high-bandwidth links.
In such a case, the conventional TCP windowing will shrink greatly in response to even one or two lost packets, which when you are sending a LOT of data, will occur.
The real work (and it seems to be somewhat covered in their web pages) is how to use latency for congestion detection/control, but I haven't read it in enough detail to quite understand this, nor how this scheme will interact with conventional TCP streams.
Re:New Scientist didn't put it very well... (Score:3, Interesting)
It is based/successor to TCP Vegas (Score:2)
Re:New Scientist didn't put it very well... (Score:2)
In such a case, the conventional TCP windowing will shrink greatly in response to even one or two lost packets, which when you are sending a LOT of data, will occur.
I don't have a ton of knowledge about TCP, but is it me,
Re:New Scientist didn't put it very well... (Score:2)
Re:New Scientist didn't put it very well... (Score:5, Informative)
The problems with very high-bandwidth links, TCP, and RTT estimation start from the fact that TCP can update that estimate only once per ACK received, and with very high-bandwidth links the available capacity changes much faster and to a greater degree. So TCP can't effectively estimate the available capacity, since it cannot probe the channel frequently enough.
Caltech's Vegas looks great in pictures; however, there were papers pointing out that it's not exactly fair, especially with the multiple bottlenecks of real-world topologies. Then there were papers fixing that, and papers criticizing those solutions, and as a result I don't see Vegas anywhere around (except for some Cisco routers maybe) - the best I see is NewReno+SACK+FACK+ECN. I can imagine that a more aggressive scheme will have an advantage over TCP, although NewReno is pretty aggressive compared to most RT rate control schemes, so it's difficult to imagine anything more aggressive than that that would still yield in times of congestion.
The best description of what they really propose seems to be in their Infocom paper from April this year. That looks pretty good, too. But again, as it was with the original Vegas, it will probably turn out that it has some flaws, they will be fixed, the fixes will have some flaws, and so on. And for the time being everybody will continue to use NewReno. *snicker*
Fact is, there is enormous (partly bad) experience with using TCP Reno, and with the current abundance of capacity in the backbones, it doesn't seem that there is much of an interest in precise traffic control. I have yet to have my first problem watching some movie trailer.
One thing worth mentioning - no reasonable application uses TCP for multimedia (why Disney, then?). RTP/UDP with a reasonable model-based rate control can easily at least match Vegas, and often outperform it, because of the kind and amount of feedback used to adapt to the network conditions for a particular application. Caltech's scheme was constructed for ultra-high-speed networking and tested on processing the vast data volumes produced by the LHC, to overcome deficiencies of traditional TCP in that case. They have a really nice article on experiments with that, with good results. But that's not quite the same as the typical situation.
Nice... BUT! (Score:2, Funny)
Yes, but (Score:2, Funny)
not optimistic (Score:5, Insightful)
6,000 times? The tests done in labs are usually stripped-down and the results overstated just for statistical pleasure. In the real world, however, such figures are rarely achieved.
This is old technology (Score:4, Insightful)
smells like... (Score:5, Interesting)
Does that mean TCP has 99.99% (humor me) overhead?
But seriously, you can probably use large windows to send streams of packets such that a single ACK is required for a bunch of them, but it's impossible to achieve 6000x more throughput just by "optimizing" the TCP protocol. Even over the Internet (I'm not even talking about LANs, since there is obviously not that much room for improvement due to the low latency).
Re:smells like... (Score:5, Interesting)
Still, those numbers don't look right. AFAIK TCP has 5-15% overhead, so they must have been using a high-bandwidth, really-high-latency line to get that much improvement. Really high.
Under these conditions (that obviously are unfavorable to TCP) I would be curious to see how "fast TCP" compares to any real streaming protocol (UDP-based with client feedback control). I have a feeling that the UDP stream is faster.
Eliminating 'burstiness' of TCP/IP (Score:5, Interesting)
TCP is extremely bursty - it pumps all the packets it can as fast as it can over the network as soon as the window opens. Then it waits for replies to all the packets. What typically happens is the burst from the NIC overloads the local router, causing numerous dropped packets. This gives the impression to the sending machine that the network is overloaded and results in a ~90% reduction in bandwidth utilization.
The change is to include a timer that allows the NIC to space the initial burst over the entire window. This prevents the overloading at the router and permits the NIC to reach near its theoretical maximum bandwidth.
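The timer idea amounts to spacing a window's packets evenly across one round trip instead of bursting them at line rate. A minimal sketch, with illustrative numbers of my own:

```python
# Pacing sketch: spread the window's packets evenly over one RTT
# instead of sending the whole burst back-to-back at NIC line rate.
def pacing_gap_s(window_packets, rtt_s):
    """Inter-packet gap that paces a full window across one round trip."""
    return rtt_s / window_packets

# A 64-packet window on a 50 ms RTT path -> one packet every ~0.78 ms,
# which a router buffer absorbs far more easily than a 64-packet burst.
print(pacing_gap_s(64, 0.050))
```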
In tests involving one router, the results were an order-of-magnitude increase in bandwidth utilization. I'd be interested in seeing their test setup to see how they got such dramatic improvements. Normally TCP/IP is not that inefficient - even with its extreme burstiness.
There is a reason why TCP throttles itself (Score:5, Insightful)
It's called congestion collapse and the condition is described by RFC 896 [rfc-editor.org] by John Nagle.
Just firing packets into the network willy-nilly is very bad; it's the "tragedy of the commons" all over again...
Nagle (Score:5, Informative)
In other words, the story is all wrong, but what they are doing is actually worthwhile. You sometimes have noisy networks, especially when they are wireless or in an industrial environment. The big long-haul telecom lines are better off doing error correction on the line, but in the last mile you never really know the noise characteristics, so this should be handled better at the TCP level. I would probably do something like FEC, with the number of recoverable errors per packet, and per lost packet per logical block, tuned to the error characteristics of the network. Then call it TCP2 and release an RFC and some BSD-licensed source code. (I thought of doing this as part of building an ISP-friendly P2P protocol but decided I didn't have the time.) Their solution has the advantage that it works just great with regular old TCP implementations.
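The FEC idea can be illustrated with the simplest possible code: one XOR parity packet per block, which recovers any single lost packet without a retransmission round trip. This is a toy example of my own, not the poster's proposed design:

```python
# Toy FEC: a parity packet is the XOR of a block of equal-length packets.
# Any single missing packet can be rebuilt by XORing the survivors + parity.
def xor_parity(packets):
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

block = [b"pkt1", b"pkt2", b"pkt3"]          # equal-length payloads
parity = xor_parity(block)
# Suppose block[1] is lost in transit: rebuild it from the rest plus parity.
rebuilt = xor_parity([block[0], block[2], parity])
assert rebuilt == b"pkt2"
```

Real FEC schemes (Reed-Solomon etc.) recover multiple losses per block, but the trade-off is the same: a little extra bandwidth up front instead of a retransmission delay later.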
Fixes problems with slow-start and resends.... (Score:5, Informative)
This protocol figures out ahead of time if it needs to slow down, so it's always getting ACKs back instead of waiting for timeouts. It also avoids the exponential backoff of the retransmission timer that happens with repeated timeouts.
So, in response to many of the previous posts: it loses none of the robustness of TCP. In the worst case it's as slow as TCP, and in the best case it should be equally as fast as TCP. In the average case, however, it shows a huge performance increase. Most of the time on the network is the average case, so this is a good thing.
Nothing to see here, move along..... (Score:4, Interesting)
SCTP is a reliable transport protocol operating on top of a connectionless packet network such as IP. It offers the following services to its users:
-- acknowledged error-free non-duplicated transfer of user data,
-- data fragmentation to conform to discovered path MTU size,
-- sequenced delivery of user messages within multiple streams, with an option for order-of-arrival delivery of individual user messages,
-- optional bundling of multiple user messages into a single SCTP packet, and
-- network-level fault tolerance through supporting of multi-homing at either or both ends of an association.
The design of SCTP includes appropriate congestion avoidance behavior and resistance to flooding and masquerade attacks.
Caltech Site (Score:4, Interesting)
This is part of a whole bunch of TCP- and networking-related work at Caltech.
I hate to do this to them, but the Caltech Networking Lab [caltech.edu] site has more info.
From what I see, the improvement here is to use packet delay instead of packet loss for congestion control. They claim this has a bunch of advantages for both speed and quality.
Here is a Google cached copy of their paper [216.239.37.100] from March 2003.
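A hedged sketch of what a delay-based window update of this kind might look like. The form and constants here are illustrative only; the real update rule is in their paper:

```python
# Illustrative delay-based window update: scale the window by baseRTT/RTT
# plus a small additive term, so queueing delay (not loss) throttles the
# sender. Constants alpha/gamma are made up for demonstration.
def delay_based_update(cwnd, base_rtt, rtt, alpha=10.0, gamma=0.5):
    target = cwnd * (base_rtt / rtt) + alpha
    return (1 - gamma) * cwnd + gamma * target

w = 100.0
w = delay_based_update(w, base_rtt=0.100, rtt=0.100)  # empty queues: grows
w = delay_based_update(w, base_rtt=0.100, rtt=0.200)  # queueing delay: backs off
```

The key contrast with loss-based TCP is visible even in this toy: the window shrinks smoothly as measured RTT rises above the base RTT, rather than halving abruptly after a drop.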
Man! (Score:5, Funny)
Fast TCP is TCP + congestion control (Score:5, Interesting)
As near as I can tell from the popular articles, and the web page referenced in the New Scientist article, "Fast TCP" is not a new protocol, but rather better congestion control for standard TCP. I'm not a network guru by any means, so please take the comments below with a grain of salt.
Currently, TCP implementations use a standard trick [berkeley.edu] to play nice with small router queues. Using precise timing would be better. I hassled Mike Karels over it about 10-15 years ago, but the consensus at the time was that the hardware wasn't up to it. Now it is. Also, modern routers have gotten clever about queue management, which screws up the trick.
The new proposal is to take advantage of modern HW to measure latencies. Existing TCP could thus be used more efficiently, by allowing larger amounts of data to be outstanding on the network without trashing routers.
It is not widely understood that in 1988 the Internet DOSed itself because of a protocol design issue, and Van Jacobson got everybody to fix it by a consensus change to the reference implementation of TCP. These articles appear to report (badly) ongoing research into that issue.
Sounds easy. (Score:2)
It will take a little research to find good algorithms, which I presume they've already done, but there's nothing stopping some enterprising soul (who wants his porn faster) from adding this to Linux in a couple of weeks. So I guess the real que
Uhm... (Score:2, Interesting)
Maybe they're measuring the round-trip delay, and then sending more data than can fit in the receiver's window, on the assumption that ACKs "should be" in flight. Maybe they also notice when an ACK is overdue, and send a duplicate packet early, rather than w
Duplicate effort? (Score:3, Interesting)
smart window resizing (Score:3, Informative)
Article is inaccurate and misleading (Score:5, Interesting)
See caltech's press release on FAST [caltech.edu] for an article that actually makes sense.
Also, could someone please explain to me why boringly predictable stereotypical slashdot feedback is being modded up?
"Whoa! Faster pr0n!"
"Imagine a Beowulf cluster of these!"
-Insert completely unrelated Microsoft bashing post here-
-Insert completely unrelated technobabble from some geek posting out of their ass (without reading the article first)-
News for nerds. Stuff that matters. Discussion that doesn't.
It's not a breakthrough, but it's good work. (Score:5, Informative)
Second, it's intended for use for single big flows on gigabit networks with long latency. You have to be pumping a few hundred megabits per second on a single TCP connection over a link with 100ms latency before it really pays off. It won't do a thing for your DSL connection. It won't do a thing for your LAN. It won't do a thing for a site with a thousand TCP connections on a gigabit pipe.
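The bandwidth-delay product makes it clear why that regime is special; a quick calculation using the figures mentioned above:

```python
# Bandwidth-delay product: the amount of data that must be in flight
# (sent but unacknowledged) to keep a pipe full.
def bdp_bytes(bandwidth_bps, rtt_s):
    return bandwidth_bps * rtt_s / 8

# 1 Gbps at 100 ms RTT: 12.5 MB in flight, i.e. thousands of packets --
# so a single loss halving the window throws away enormous headroom,
# while a DSL line's BDP fits in a handful of packets.
print(bdp_bytes(1e9, 0.100) / 1e6, "MB")
```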
Third, unlike some previous hokey attempts to modify TCP, this one has what looks, at first glance, like sound theory behind it. There's a stability criterion for window size adjustment. That's a major step forward.
(I first addressed these issues in RFC 896 [sunsite.dk] and RFC 970, [sunsite.dk] back in 1984-1985. Those are the RFCs that first addressed how a TCP should behave so as not to overload the network, and what to do if it misbehaves. So I know something about this.)
These kinds of articles anger me. (Score:3, Interesting)
No wonder I have trouble explaining how the network works to my sister, or even to my mother, who happens to have her master's in tech (albeit in mechanical engineering).
Let's see.
"The sending computer transmits a packet, waits for a signal from the recipient that acknowledges its safe arrival, and then sends the next packet"
No honey, that's why we have buffers: so you can receive packets out of sequence and wait for the middle ones to arrive. This is why we have the 32-bit seq and ack fields in the TCP header, just after the src and dst ports. Seq gives the packet's position in the stream; ack gives the seq of the next packet expected from the other peer, so we can use random increments to make spoofing of packets harder. Or at least harder.
But that's out of the scope of this rant.
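The seq-based reassembly described above can be sketched as a toy model (sequence numbers count segments rather than bytes, and wraparound is ignored):

```python
# Receiver-side reassembly sketch: out-of-order segments are buffered
# and delivered in sequence once the gap is filled, which is what the
# sequence numbers in the TCP header make possible.

def reassemble(segments):
    buffered = {}          # seq -> payload, held until deliverable
    next_seq = 0
    delivered = []
    for seq, payload in segments:
        buffered[seq] = payload
        while next_seq in buffered:          # drain any contiguous run
            delivered.append(buffered.pop(next_seq))
            next_seq += 1
    return delivered

# Segment 1 arrives before segment 0; both still come out in order.
print(reassemble([(1, "world"), (0, "hello"), (2, "!")]))
# ['hello', 'world', '!']
```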
If no receipt comes back, the sender transmits the same packet at half the speed of the previous one, and repeats the process, getting slower each time, until it succeeds.
Umm, no. I'm not 100% sure, but I think the network devices are dumb things that talk to each other on predefined carrier frequencies. Thus, you can't really "slow down" the speed to improve the chance of getting the packet through. And certainly this has nothing to do with TCP. Resending failed packets is a Good Thing (TM): they are just sent again until they reach their destination or the "I give up" threshold is reached.
"The difference (in Fast TCP) is in the software and hardware on the sending computer, which continually measures the time it takes for sent packets to arrive and how long acknowledgements take to come back"
This is the only difference? Wow. Shit. We are definitely going to get faster speeds by adding overhead with this calculation.
Now, I'm through with my rant.
I really, really would like to see an actual white paper on how this works. There has to be more to it. By the sound of these articles alone, it seems someone was paid to develop a new, faster protocol that would magically be backward compatible with TCP, and when they couldn't come up with anything smart, they cobbled together something that might sound plausible.
Of course you can get "more than 6000 times the capacity of ordinary broadband links" by using your very own dedicated parallel LAN links. You just need a fast enough computer to handle the TCP stack. You would also need some fricking fast buses in your computer to make any use of this bandwidth. Remember, hard drives, memory chips and other storage aren't exactly infinite in speed either.
If the demo consisted only of two computers exchanging data, there would be no need to estimate speeds, since packet collisions caused by interference from other network devices would be very unlikely. Also, again, that has nothing to do with the TCP stack. Again, this useless speed calculation is just more overhead.
And now I'm rambling.
Shit, why can't I just stop?
I'm angry, that's why.
I hope someone will answer me with some insight into what I'm overlooking. This looks so useless to me.
Re:These kinds of articles anger me. (Score:3, Informative)
This is true, but when the T
Ugh, reporters.. (Score:4, Interesting)
Anyway, I think this is primarily interesting for people on really fast connections (ranging in hundreds of megabits per second up to gigabits) with relatively large latencies (tens/hundreds of milliseconds as on a transcontinental link rather than nanoseconds/milliseconds like on a LAN), but I'm sure the research will have some effect on LANs and even the standard broadband connection. Impact on dialup and other not-quite-broadband connections would likely be minuscule.
One main issue with TCP is that it uses a "slow start" algorithm, which other people have mentioned. Real TCP stacks probably tweak the algorithm quite a bit, but from the description in Computer Networks (3rd edition, 1996) by Andrew Tanenbaum, TCP packets start off with a small "window"--how much data can be in transit at a time. The window grows exponentially as packets are transmitted and acknowledgements received until a pre-set threshold is reached, and then the window starts growing more slowly (Tanenbaum's example grows exponentially to 32kB at first, then by 1kB per transmitted packet).
If a packet is lost, the process starts over and the threshold is set to half the window size you had before the dropped packet (I imagine many systems reduce the window size by lesser amounts). Now, this particular algorithm can cause quite a bit of nastiness. It's possible the window size will never get very large. This isn't a really huge problem on low-latency links like in a LAN where you get acknowledgements really quickly, but a hyperfast transcontinental link could be reduced to mere kilobits per second even if the percentage of dropped packets is fairly low.
Additionally, this slow start algorithm will eventually force you to restart at a smaller window size. Given enough data, you'll eventually saturate the link and lose a packet, so until the window grows enough again, there will be considerable unused bandwidth. Good TCP stacks would attempt to guess the available link speed and stop growing the window at a certain point.
Smart system administrators can tweak kernel variables to make systems behave better (preventing the window from getting too small, having larger initial thresholds, for instance), but it looks like a lot of work on Fast TCP and related projects is related to making this a more automatic process and growing/reducing the transmit window size in a more intelligent manner.
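The textbook algorithm sketched above fits in a few lines. This is a toy model (window counted in whole segments, one update per round trip, loss detection assumed), not any real stack's implementation:

```python
# Toy model of textbook slow start: the window doubles each RTT until
# it reaches the threshold, then grows by one segment per RTT; on a
# loss, the threshold is set to half the current window and the
# window restarts at one segment.

def next_window(cwnd, ssthresh, lost):
    if lost:
        return 1, max(cwnd // 2, 1)               # restart; halve threshold
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh  # exponential phase
    return cwnd + 1, ssthresh                     # linear (additive) phase

cwnd, ssthresh = 1, 32
for rtt, lost in enumerate([False] * 7 + [True] + [False] * 3):
    cwnd, ssthresh = next_window(cwnd, ssthresh, lost)
    print(f"RTT {rtt:2d}: cwnd={cwnd:2d} segments, ssthresh={ssthresh}")
```

Run it and you can watch the problem described above: after the loss at RTT 7 the window drops from 34 segments back to 1, and on a high-latency link every one of those recovery round trips is expensive.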
Error Checking (Score:5, Funny)
--
Linux-Universe [linux-universe.com]
Nothing new here (Score:4, Interesting)
Congestion control based on round-trip times is old news, but it's uncommon AFAIK; what usually happens is direct feedback from routers along a transmission's path. RTT-based control is done in TCP Vegas, which was first proposed in 1994 and I think is fairly common now. The problem with scaling this or any of the other common TCP implementations to high speed/high delay links is the reaction to detected congestion. "Normal" TCP aggressively scales back its send window (send rate) when it detects congestion, usually chopping it in half. The window/rate then grows linearly until something goes wrong again. This results in a lot of lost throughput in high-speed networks, especially if the amount of "real" congestion is low. The FAST group is working on a new TCP implementation that doesn't react so aggressively to congestion. This is great for those high-speed/low-congestion networks we all wish really existed, but it's not something you want to use on the always-backed-up Internet. It would probably make things worse.
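The two reactions contrasted above can be sketched side by side. This is a simplified illustration of the idea, not the actual Reno, Vegas, or FAST algorithms:

```python
# A loss-based ("Reno-style") sender halves its window on congestion;
# a delay-based sender compares measured RTT to the minimum RTT it has
# seen and backs off gently as soon as queueing delay appears, instead
# of waiting for a drop.

def loss_based(cwnd, packet_lost):
    return max(cwnd // 2, 1) if packet_lost else cwnd + 1

def delay_based(cwnd, rtt, base_rtt, step=8):
    expected = cwnd / base_rtt          # throughput with empty queues
    actual = cwnd / rtt                 # observed throughput
    if expected - actual > 0.5:         # queue building up: ease off
        return cwnd - step
    return cwnd + step                  # queues empty: press on

print(loss_based(64, packet_lost=True))        # 32: drastic cut
print(delay_based(64, rtt=1.0, base_rtt=1.0))  # 72: no queueing seen
```

The delay-based sender never has to drive the link into loss to find its rate, which is the property that matters on those high-speed, low-congestion paths.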
"Video on demand" is irrelevant (Score:5, Insightful)
That's just wrong, at least according to the ways media companies have traditionally desired their materials to be broadcast over the internet. They typically use streaming protocols, which not only gives the user one-click startup, but also makes it non-trivial to keep a local copy of the file (enhancing the corporation's feeling of control).
However, a well-designed streaming protocol won't use TCP at all. TCP hides many characteristics of the network from the application software, and to stream properly it needs to know as much as possible. One example of why TCP is bad for streaming: in streaming, you try to keep advancing time at a constant rate. Once 156 seconds of playing have elapsed, you want to be showing video from exactly 156 seconds into the source file. If at 155 seconds some packets were dropped, you should just skip over them and continue onward. TCP, however, will always try to retransmit any lost packets, even if that means they'll arrive too late to do any good. TCP has no knowledge that packets may expire after a fixed time, but a custom-built UDP protocol can be aware of that constraint.
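The playout-deadline logic described above is trivial over UDP, where the application decides what to keep. A sketch with hypothetical frame timings:

```python
# Why retransmission hurts streaming: a frame that arrives after its
# scheduled playout time is useless, so a UDP-based player just drops
# it and moves on. (Frame numbers and times are illustrative.)

def playable(frames, now):
    """Keep only frames whose playout deadline hasn't already passed."""
    return [f for f in frames if f["play_at"] >= now]

frames = [
    {"seq": 154, "play_at": 154.0},   # deadline already passed: skip
    {"seq": 156, "play_at": 156.0},   # still in the future: play it
]
print([f["seq"] for f in playable(frames, now=155.0)])   # [156]
```

TCP offers no equivalent of this decision: a lost byte at second 155 stalls everything behind it until the retransmission lands.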
(Here's a reference on preferring UDP in video streaming [wpi.edu])
On the other hand, maybe a corporation will realize that properly controlled non-streaming playback can provide a better end-user experience (guaranteeing, for example, that once playing starts, network failures will never interrupt it). In that case, they might either try to push Microsoft to integrate this faster TCP/IP into Windows(r), or more interestingly, implement it themselves in customized player software.
It's possible to implement a protocol equivalent to TCP on top of UDP, with only a tiny constant amount of overhead. So a programmer for realplayer, quicktime, or mplayer might be able to add the techniques from this research to his own code, even without support in the operating system.
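The core of such a TCP-equivalent layer is just numbered packets plus retransmit-until-acknowledged. A minimal stop-and-wait sketch, with the lossy channel simulated as a function (a real player would wire the same logic to UDP sockets):

```python
# Minimal sketch of TCP-style reliability on top of an unreliable
# datagram channel: number each packet and resend until it is ACKed.

def send_reliably(data, channel, max_tries=5):
    """Stop-and-wait ARQ: one packet in flight, resend until ACKed."""
    received = []
    for seq, payload in enumerate(data):
        for attempt in range(max_tries):
            if channel(seq, attempt):       # True = packet got through
                received.append(payload)
                break                       # ACK arrived; next packet
        else:
            raise TimeoutError(f"packet {seq} never acknowledged")
    return received

# Simulated channel that drops every first attempt:
lossy = lambda seq, attempt: attempt > 0
print(send_reliably(["a", "b", "c"], lossy))   # ['a', 'b', 'c']
```

The point is that all the interesting knobs -- how long to wait, how many packets in flight, when to give up -- live in user code, which is why player software could adopt techniques like FAST's without operating-system support.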
Re:Obviously, you're wrong. (Score:3, Informative)
Of course a streaming protocol uses a buffer. That's one reason why you don't want to run it over TCP. TCP provides its own buffer, which would be redundant with the one the higher-level protocol is also creating. Optimally, one
You're still wrong. (Score:3, Informative)
Also, RTSP is a protocol-independent stream control mechanism.
Humm (Score:3, Insightful)
Error checking (Score:3, Funny)
Sounds like a slashdot editor
This is HSTCP - more links as well (Score:4, Informative)
For more info, you can also take a look at:
Web100 [web100.org] and Net100 [net100.org].
It basically amounts to improving the AIMD algorithm and changing the way slow start works as well. Also, whoever said it before that this will not help your DSL connection is correct. It is meant to help high speed long RTT paths. And it does so -- quite well.
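The flavor of "improving the AIMD algorithm" can be sketched in the spirit of HighSpeed TCP: above a threshold window size, increase faster and back off less than standard TCP. The parameters below are illustrative only (RFC 3649 actually uses a lookup-table response function, not these formulas):

```python
# Simplified sketch of HighSpeed-TCP-style modified AIMD: small
# windows behave like standard TCP; huge windows on long-RTT paths
# grow faster and shrink less, so they recover in a sane time.

LOW_WINDOW = 38  # segments; below this, behave like standard TCP

def hs_update(cwnd, loss):
    if cwnd <= LOW_WINDOW:                     # standard AIMD regime
        return cwnd // 2 if loss else cwnd + 1
    if loss:
        return int(cwnd * 0.85)                # gentler than halving
    return cwnd + max(1, cwnd // 50)           # faster additive growth

print(hs_update(10_000, loss=True))    # 8500 segments, not 5000
print(hs_update(10_000, loss=False))   # 10200
```

Keeping standard behavior below the threshold is what makes such a change deployable: small flows on congested links see ordinary TCP, and only the big long-RTT flows get the aggressive regime.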
Seconds... (Score:3, Funny)
Sure it does. I'm thinking around 10000 seconds.
compare to cold fusion (Score:3, Insightful)
as an institution. It seems clear that some group at Caltech pumped this to the media, to the point where a categorically deceptive series of fluff pieces entered the news stream.
Compare this to the "cold fusion" debacle in '89: Pons and Fleischmann reported valid, and eventually reproducible, results without hype, but the media pumped it with speculation. Pons and Fleischmann, excellent, highly competent and productive stars in their field, were essentially tainted through no fault of their own, and run out of town on a rail.
It's galling.
It's galling.
any fool can design a 'faster tcp' (Score:3, Insightful)
however, from what I've been reading on the Caltech site, it appears that one use of this protocol would be downloading a very large file over a dedicated pipe (like a movie on demand), from the movie server to the user through a private connection. This makes sense. You can streamline lots of things. Optimize the protocol for a fat pipe. Whatever.
Re:mirror (already slashdotted) (Score:2)
Re:Oh Great they've invented UDP (Score:3, Insightful)
This isn't revolutionary so much as evolutionary.
Re:Oh Great they've invented UDP (Score:3, Insightful)
The problem is people are just reading the summary and assuming that it is actually correct and accurate.
Good points otherwise...
Re:6000 times faster!? (Score:2, Informative)
Re:Like Zmodem protocol from BBS days? (Score:2)