The simulation results need to be updated. While they give some good indications about the performance impact of various parameter choices, they are incomplete, and they are inaccurate in two ways.

- First, the time between DNS Request and flow establishment needs to be changed. The only two choices at present are 1 or 2 seconds, but discussions at IETF 74 indicated that much smaller times are seen in practice. Using smaller times will increase the volume of flows that can be handled in bursts.

- Second, the deferral mechanism programmed into the current simulation is too optimistic in a way that causes too many flows to be admitted at the same time. This results from my handling of a situation that can never occur in real life, but that nevertheless does arise when my pre-processed datasets are parsed by the simulator. Fixing this problem will cause more flows to be rejected in situations where the simulation is already overloaded; in other words, unfeasible parameter choices will become even more unfeasible.

There are other improvements I want to make. During IETF 74 it was pointed out that DNS resolvers will often provide multiple DNS Reply messages for locally generated DNS Requests, even with zero caching specified. This means the NAT would never see the DNS Requests, and consequently the same flow allocation could elicit establishment attempts by multiple source computers. In the abstract this is a good thing, but it requires that the allocated flow remain available for new establishment even after the first packet arrives from one of the source computers that received the DNS Reply associated with the DNS Request that triggered the allocation. Mark Andrews, who pointed this out at the IETF, suggests that I simply extend the BIND_TIMEOUT after every reception of a packet from a new source at the newly allocated flow.
I will make this change to the specification and to the simulation, but unfortunately I do not have any data I can use to test the feature.
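As a rough illustration of the suggested behavior, the following sketch extends a flow's BIND_TIMEOUT whenever a packet arrives from a source address not previously seen on that flow. All names here (AllocatedFlow, BIND_TIMEOUT_SECONDS, on_packet) are hypothetical and do not come from the specification or the simulator; the timeout value is an assumed placeholder.

```python
# Hypothetical sketch: keep a newly allocated flow open for additional
# sources by extending its BIND_TIMEOUT each time a packet arrives from
# a source address not yet seen on that flow. Times are plain seconds
# supplied by the caller, so the logic is easy to test deterministically.
BIND_TIMEOUT_SECONDS = 2.0  # assumed value, not from the spec

class AllocatedFlow:
    def __init__(self, now):
        self.seen_sources = set()               # source addresses observed so far
        self.expires_at = now + BIND_TIMEOUT_SECONDS

    def on_packet(self, src_addr, now):
        """Handle a packet arriving at the allocated flow.

        Returns True if the flow was still bound, False if it had
        already timed out.
        """
        if now >= self.expires_at:
            return False                        # flow already timed out
        if src_addr not in self.seen_sources:
            # New source: extend BIND_TIMEOUT so that other hosts that
            # received the same DNS Reply can still establish through
            # this allocation.
            self.seen_sources.add(src_addr)
            self.expires_at = now + BIND_TIMEOUT_SECONDS
        return True
```

For example, a flow allocated at t=0 expires at t=2 by default, but a packet from a new source at t=1 pushes expiry to t=3; a second packet from the same source does not extend it further.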