Estimating the Round-Trip Time in TCP

https://networkengineering.stackexchange.com/questions/69562/estimating-the-round-trip-time-in-tcp

I was reading a textbook which says:

Let’s begin our study of TCP timer management by considering how TCP estimates the round-trip time between sender and receiver. This is accomplished as follows. The sample RTT, denoted SampleRTT, for a segment is the amount of time between when the segment is sent (that is, passed to IP) and when an acknowledgment for the segment is received. Instead of measuring a SampleRTT for every transmitted segment, most TCP implementations take only one SampleRTT measurement at a time. That is, at any point in time, the SampleRTT is being estimated for only one of the transmitted but currently unacknowledged segments, leading to a new value of SampleRTT approximately once every RTT.

I'm a little bit confused here. The quoted text says TCP won't measure a SampleRTT for every segment, but then it says a new value of SampleRTT is obtained approximately once every RTT, which still sounds like TCP measures a SampleRTT for every segment to get an average RTT?

Answers

From the sender's perspective, segments within the send window are all "in flight" simultaneously. So, instead of trying to track each segment's RTT, just one segment is tracked at a time. Since it takes one RTT to send a segment and receive its ACK, one sample per RTT is obtained that way.

If you tracked each segment's RTT, you'd collect one sample per segment in the send window, i.e. window size / segment size samples per RTT. That's more than you need, so it wastes memory and processing power.
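The one-sample-at-a-time scheme described above can be sketched as follows. This is a hypothetical illustration (the class and method names are made up, not taken from any real TCP stack): the sender starts a timer only when no other segment is currently being timed, and completes the sample when an ACK covers that segment.

```python
import time


class SingleSampleRTT:
    """Sketch of taking at most one SampleRTT measurement at a time.

    Hypothetical helper for illustration only, not code from a real
    TCP implementation.
    """

    def __init__(self):
        self.timed_seq = None   # sequence number currently being timed
        self.sent_at = None     # send timestamp for that segment
        self.sample_rtt = None  # most recently completed measurement

    def on_send(self, seq):
        # Start timing only if no other segment is currently timed;
        # other in-flight segments are simply not measured.
        if self.timed_seq is None:
            self.timed_seq = seq
            self.sent_at = time.monotonic()

    def on_ack(self, ack_seq):
        # An ACK covering the timed segment completes the sample.
        # The next segment sent can then become the timed one,
        # yielding roughly one sample per RTT.
        if self.timed_seq is not None and ack_seq > self.timed_seq:
            self.sample_rtt = time.monotonic() - self.sent_at
            self.timed_seq = None
        return self.sample_rtt
```

Note that with this scheme, ACKs for the untimed segments contribute nothing to RTT estimation, which is exactly the saving in memory and processing the answer refers to.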

As Jeff has pointed out in his answer, today's implementations commonly use the TCP timestamp option to simplify RTT measurement. Timestamping provides finer-grained information with less processing overhead. Do check out Jeff's links as they're well worth reading.

I suggest you start by reading RFC 1323 §3, "RTTM: Round-Trip Time Measurement", which is a fantastic introduction to this problem, a great perspective on how long very smart people have worked on it, and how little has changed since 1992.

The Linux tcp_input.c source also contains a lot of useful commentary and links to a few newer academic papers on this topic.

If you check on your own workstation, using tcpdump or wireshark, you'll find that most TCP segments exchanged by your computer have a timestamp option present. This allows more frequent RTT measurement, which provides better inputs to the smoothed RTT used to calculate the RTO, with less complexity.
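For context, the smoothed RTT and RTO mentioned above are typically computed with the exponentially weighted moving average rules from RFC 6298. A minimal sketch of that calculation (constants are the RFC's; the clock-granularity term G is assumed negligible here for simplicity):

```python
class RtoEstimator:
    """RTT smoothing and RTO calculation per RFC 6298.

    ALPHA and BETA are the gains recommended by the RFC; the clock
    granularity term is omitted for simplicity.
    """
    ALPHA = 1 / 8   # gain for SRTT update
    BETA = 1 / 4    # gain for RTTVAR update
    MIN_RTO = 1.0   # RFC 6298 lower bound on RTO, in seconds

    def __init__(self):
        self.srtt = None    # smoothed RTT
        self.rttvar = None  # RTT variation
        self.rto = 1.0      # initial RTO per RFC 6298

    def update(self, sample_rtt):
        if self.srtt is None:
            # First measurement (RFC 6298 §2.2)
            self.srtt = sample_rtt
            self.rttvar = sample_rtt / 2
        else:
            # Subsequent measurements (RFC 6298 §2.3)
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - sample_rtt))
            self.srtt = ((1 - self.ALPHA) * self.srtt
                         + self.ALPHA * sample_rtt)
        self.rto = max(self.MIN_RTO, self.srtt + 4 * self.rttvar)
        return self.rto
```

More frequent samples (as timestamps allow) simply mean this update runs more often, so SRTT tracks the true path RTT more closely.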

Without TCP timestamps, systems have to do what Zac67 described, with associated problems / limitations discussed in both the above links and really all the literature about this subject.
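One of those well-known limitations is retransmission ambiguity: without timestamps, the sender cannot tell whether an ACK is for the original transmission or for a retransmission, so Karn's algorithm discards RTT samples from retransmitted segments. A sketch of that rule (hypothetical helper function, for illustration only):

```python
def take_sample(send_time, ack_time, was_retransmitted):
    """Karn's algorithm: discard RTT samples for retransmitted
    segments, because the ACK is ambiguous (it may acknowledge
    either the original transmission or the retransmission).

    Returns the sample in seconds, or None if it must be discarded.
    """
    if was_retransmitted:
        return None  # ambiguous sample: do not feed into SRTT/RTO
    return ack_time - send_time
```

With TCP timestamps, the echoed timestamp identifies which transmission the ACK corresponds to, so even retransmitted segments can yield valid samples.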

So TCP just randomly chooses one segment among the many segments (the pipeline) in the send window? What happens if the chosen one later times out, so that TCP cannot go back and measure another segment in the previous window? –
amjad
 Aug 17 '20 at 0:59
Implementation specifics are really off-topic here, but the easiest way is to track the first segment; when that's ACKed, track the very next segment sent, and so on. –
Zac67

 Aug 17 '20 at 7:45
It's unfortunate that this answer became the accepted one since it's not correct for modern TCP implementations. For example, Linux really does keep track of the time segments were transmitted and acknowledged. You don't have to take my word for it. elixir.bootlin.com/linux/latest/source/net/ipv4/tcp_rate.c
Jeff Wheeler
 Aug 17 '20 at 11:01

Original source: https://www.cnblogs.com/ztguang/p/15795508.html