
Re: The IPv4 Internet MTU



Hi Fred, Hi *,

"Templin, Fred L" <Fred.L.Templin@boeing.com> writes:

>> > Why not just set the Teredo interface MTU to (64KB-ENCAPS)?
>> > Then, take special care to make sure the small packets get
>> > through but let the big packets take care of themselves.
>> 
>> The problem is UDP fragmentation.
>
> I guess you mean IPv4 fragmentation? Even so, there is
> a lot we can do within the no-man's-land of the Teredo
> interface after we get a packet from IPv6 and before we
> give it over to IPv4. So, why not just give IPv6 an
> optimistic MTU estimate then deal with reality from
> within?

IMHO, the optimistic Teredo interface MTU is what I provided in my
previous post (something in [1380, 1384]):

1) it should prevent obvious fragmentation in all cases by taking into
   account reasonable encapsulation that can happen on the path;
2) it also sets aside the wrong assumption that you'll either get a
   message from the network or have your packets fragmented by it if
   they happen to be too big. There are people who simply prevent that
   from happening in their networks, and there are vendors that are
   unable to implement those kinds of things properly (don't ask for
   names, I would reply ;-) ).
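
The arithmetic behind that interval can be sketched quickly (a minimal
sketch: the [1408, 1412] IPv4 budget comes from later in this mail, the
header sizes are the standard ones, the variable names are mine):

```python
# Teredo encapsulation overhead: outer IPv4 header (20 bytes) + UDP header (8 bytes).
IPV4_HEADER = 20
UDP_HEADER = 8
ENCAPS = IPV4_HEADER + UDP_HEADER  # 28 bytes

# "Reasonable" IPv4 packet size budget ([1408, 1412]), chosen to survive
# common extra encapsulations (PPPoE, tunnels, ...) on a 1500-byte path.
ipv4_budget = (1408, 1412)

# The optimistic Teredo interface MTU is what is left for the IPv6 packet.
teredo_mtu = tuple(b - ENCAPS for b in ipv4_budget)
print(teredo_mtu)  # -> (1380, 1384)
```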

The reality is that, just like you perform MSS clamping when you know
that the return path to clients in your site is smaller than what they
advertise, setting the interface MTU to the value one would consider a
common PMTU (the one the clamped value is computed from) simply has the
advantage of making all L4 protocols work with it, not only TCP.
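
The MSS-clamping analogy above, as a sketch (the header sizes are the
standard fixed-header ones; the example PMTU values are assumptions):

```python
def clamped_mss(pmtu: int, ipv6: bool = True) -> int:
    """TCP MSS that fits a given path MTU: MTU minus IP and TCP headers."""
    ip_header = 40 if ipv6 else 20  # fixed IPv6 header vs. minimal IPv4 header
    tcp_header = 20                 # TCP header without options
    return pmtu - ip_header - tcp_header

# Clamping the MSS fixes TCP only; setting the *interface* MTU to the
# common PMTU fixes every L4 protocol at once.
print(clamped_mss(1380))              # IPv6 over Teredo -> 1320
print(clamped_mss(1500, ipv6=False))  # plain IPv4 Ethernet -> 1460
```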

Using (64KB-ENCAPS) is _interesting_ but IMHO unfeasible directly. You
have the assurance that almost all your packets will be IPv4-fragmented
into pieces of more or less 1500 bytes; full-size packets will become
more than 40 fragments. I consider that packets don't get lost or
killed that much in the core _when_ they fit the PMTU: packet loss
occurs at both ends because of bad connectivity (unreliable medium). If
the path is reliable _and_ the communication takes place between Teredo
clients directly, you get the benefit of IPv4 fragmentation (no header
cost).
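
The "more than 40 fragments" figure can be checked with a few lines (a
sketch; the 1500-byte path MTU is an assumption):

```python
import math

ENCAPS = 28                       # outer IPv4 (20) + UDP (8) headers
ipv6_size = 65535 - ENCAPS        # a (64KB - ENCAPS) IPv6 packet = 65507 bytes
ipv4_total = ipv6_size + ENCAPS   # 65535 bytes on the wire before fragmentation

path_mtu = 1500
payload_per_frag = path_mtu - 20  # 1480 bytes of IPv4 payload per fragment
# Fragment offsets count in 8-byte units, and 1480 is a multiple of 8,
# so every fragment except the last carries exactly 1480 payload bytes.
fragments = math.ceil((ipv4_total - 20) / payload_per_frag)
print(fragments)  # -> 45, i.e. "more than 40 fragments"
```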

Now, in all those cases, this will not work (or not work well):

1) Bad medium (unreliable path): fragmentation will amplify the problem
   and there is pretty much nothing one can do to prevent that except
   lower the MTU.
2) IPv4 MTU higher than the PMTU: using the reasonable value I proposed
   for IPv4 packets generated by the Teredo interface could solve that
   point.
3) Packets are for a native IPv6 client: if the proposed reasonable MTU
   value allows packets to reach the relay in all cases (expected), this
   creates an interesting situation, because the relay would be able to
   warn the emitter's Teredo interface that its packets are too big for
   the output path (ICMPv6 Packet Too Big). The positive aspect is that,
   with the assurance that the mechanism is implemented by all Teredo
   relays (1 or 2 implementations at the moment?), this removes IPv4
   Internet issues from the table. The only drawback is the work
   required from the relay (regarding packet reconstruction, i.e. the
   first 1280 bytes for the citation), which implies higher memory
   consumption. Rémi, what do you think?
4) Packets are for a Teredo client: chances are high that the client's
   NAT will try to reassemble the packet instead of letting fragments go
   through. A well-behaved NAT gateway will send an IPv4 ICMP error to
   the client, which could be used to change the size of IPv6 packets
   for that peer. Rémi, is it feasible?
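
Point 4 essentially amounts to a per-peer MTU cache that shrinks on
ICMPv4 "Fragmentation Needed" feedback. A minimal sketch (the data
structure and names are mine, not from any Teredo implementation; the
1380 default is the value discussed above):

```python
ENCAPS = 28            # outer IPv4 (20) + UDP (8)
DEFAULT_IPV6_MTU = 1380
IPV6_MIN_MTU = 1280    # IPv6 never requires links below this

peer_mtu: dict[str, int] = {}

def mtu_for(peer: str) -> int:
    """IPv6 packet size currently used toward this peer."""
    return peer_mtu.get(peer, DEFAULT_IPV6_MTU)

def on_icmp_frag_needed(peer: str, next_hop_mtu: int) -> None:
    """Shrink the per-peer IPv6 MTU from an ICMPv4 type 3 code 4 report."""
    new_mtu = max(next_hop_mtu - ENCAPS, IPV6_MIN_MTU)
    if new_mtu < mtu_for(peer):
        peer_mtu[peer] = new_mtu

on_icmp_frag_needed("192.0.2.1", 1280)
print(mtu_for("192.0.2.1"))  # -> 1280 (clamped to the IPv6 minimum)
```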

To sum up, a default MTU of (64KB-ENCAPS) for the Teredo interface,
even with the use of IPv4 packets carrying everything at a reasonable
MTU ([1408, 1412]), still requires many assumptions. I don't think it
would fly ;-)

Cheers,

a+