In order to kickstart some activity in this wg, here is my view on the
current state of multihoming in IPv6 development. I hope this will start
a discussion here on this mailing list, and also elsewhere, because I
don't think we can solve the entire problem right here, right now. Since
waiting for "someone else" to completely solve the problem hasn't been
very productive so far, I think everyone should assume responsibility for
the part of the solution space he or she has control over. There are
things that can be done to help multihoming in operations, in address
allocation policy, and in application, host stack and router development.
Lack of cooperation from even one of those areas will hurt the IPv6
multihoming effort.

The problem
-----------

The user problem: the internet is unreliable. Parts of it fail from time
to time. The only way to really protect yourself against this is to have
several connections to the internet through different ISPs (multihoming).

The IETF problem: multihoming by announcing a globally visible address
block (as is current practice in IPv4) doesn't scale to a large enough
number of multihomed sites.

A number of new drafts and revised versions of existing drafts will be
out in time for Atlanta.

1. Application solutions
------------------------

One way to solve this would be to take provider aggregatable addresses
(which don't pollute the global routing table and thus scale well) from
two (or more) ISPs and give all systems addresses from both ranges. Then
applications simply try all addresses. This has many drawbacks. For
instance, sessions still break if there is a problem, and source address
selection is problematic, especially in the presence of anti-spoofing
filtering by ISPs. But the good part is that it actually works today, to
some degree. For instance, I can list several IP addresses for an FTP
server, and when the first one is down the FTP program simply tries the
next. Unfortunately, the applications must support this, and most of the
newer ones (especially web browsers) don't.
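As a rough sketch of what "try all addresses" looks like in code (this
assumes nothing beyond the standard getaddrinfo()/connect() socket calls;
the function name connect_any is made up for illustration), the
application simply walks the list of addresses the resolver returns until
one of them accepts the connection:

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Try every address listed for the peer until one accepts the
     * connection, so an address from a failed ISP is skipped over. */
    int connect_any(const char *host, const char *service)
    {
        struct addrinfo hints, *res, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      /* IPv6 and IPv4 */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, service, &hints, &res) != 0)
            return -1;

        for (ai = res; ai != NULL; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                break;                    /* this address works */
            close(fd);
            fd = -1;                      /* try the next one */
        }
        freeaddrinfo(res);
        return fd;                        /* -1 if they all failed */
    }

Note that this only helps at connection setup time: as said above,
sessions that are already established still break when the address they
were set up with stops working.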
2. Transport protocol solutions
-------------------------------

It is also possible to solve the problem at the transport layer. This has
most of the drawbacks of solving the problem at the application layer,
such as the source address selection/filtering issue, but also the fact
that applications have to be changed: unfortunately, the IPv6 API as
described in RFC 2292 centers around a single IPv6 address. Therefore, an
application wishing to use an address-agile transport protocol must be
changed in order to provide the extra address or addresses to the
transport layer. Alternatively, the transport layer could incorporate a
mechanism to discover alternate addresses, but this leads to security
problems as applications would be unaware of the addresses they use.
Transport-layer solutions only solve the multihoming problem for
applications using the specific transport protocol, so they only provide
a partial solution.

3. Routing implementation solutions
-----------------------------------

Current routers easily need up to a kilobyte or even more of memory for
each individual prefix that is visible in the global routing table. In
theory, it should be possible to reduce this by one or two orders of
magnitude by removing information that doesn't directly lead to better
routing decisions. However, this requires major changes to the internals
of routers and the potential gain is relatively limited.

4. Geographical aggregation
---------------------------

By confining the reachability information for multihomed prefixes to the
geographical region where the multihomed network is connected, it is
possible to keep the size of the routing tables in individual routers
small while the global routing table gets bigger. The downside is that
all aggregation implies loss of information, which means less optimal
routing. Also, aggregation across network boundaries is complex.
Geographical aggregation can only work in the presence of a compatible
address allocation policy.

5. Tunnelling or aliasing solutions
-----------------------------------

As a rule, current transport protocols do not have the capability to
switch source/destination addresses when the addresses that are in use no
longer provide connectivity, even if other addresses are available that
are tied to another physical or logical path that doesn't share the
problem. Such an alternate path can still be used by tunnelling the
original packets with the broken addresses. An optimization to tunnelling
is to simply rewrite the original addresses with the alternate ones at
(or close to) the source, and write back the original addresses at (or
close to) the destination. This is called aliasing and has the advantage
that the size of the packet remains the same. Aliasing has superficial
similarities to NAT, but it is fundamentally different as the destination
sees the same addresses as the source. Note the resemblance to mobility.
A major problem with tunnelling/aliasing solutions is the discovery of
the alternate addresses, and the security of these discovery mechanisms.
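To make the aliasing step itself concrete, here is a minimal sketch (the
function names and the placement in boxes near the site exit and entry
are illustrative assumptions, not any particular proposal; the header
layout is the struct ip6_hdr from the standard headers). The box near the
source overwrites the addresses with working alternates, and the box near
the destination writes the originals back, so the packet keeps its size
and the transport layer, including its pseudo-header checksum, never sees
anything but the original addresses:

    #include <netinet/in.h>
    #include <netinet/ip6.h>

    /* Near the source: replace the broken addresses with alternates
     * that are reachable over the working path. The packet does not
     * grow, unlike with tunnelling. */
    void alias_outbound(struct ip6_hdr *hdr,
                        const struct in6_addr *alt_src,
                        const struct in6_addr *alt_dst)
    {
        hdr->ip6_src = *alt_src;
        hdr->ip6_dst = *alt_dst;
    }

    /* Near the destination: write the original addresses back, so the
     * receiving transport layer (and its pseudo-header checksum) sees
     * exactly what the sender put in. */
    void alias_inbound(struct ip6_hdr *hdr,
                       const struct in6_addr *orig_src,
                       const struct in6_addr *orig_dst)
    {
        hdr->ip6_src = *orig_src;
        hdr->ip6_dst = *orig_dst;
    }

The rewrite itself is trivial; the hard part, as noted above, is learning
which alternate addresses correspond to which originals, and doing that
securely.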
What have we learned
--------------------

In discussions on this mailing list and elsewhere, I've come to the
following conclusions:

* The operations people really want/need multihoming. Current 6bone
  guidelines (which are treated as law in many circles) forbid announcing
  non-pTLA blocks, with the result that not even internet exchanges or
  regional internet registries can be provider independent.

* There is a lot of overlap between mobility and some of the multihoming
  solution space. It is likely that mobility can be extended to support
  host multihoming.

* GSE is interesting, but it doesn't provide much multihoming support.

* Applications behaving in a multi-address aware manner could very easily
  support some bare-bones host multihoming.

* Giving hosts multiple addresses from different ISP PA blocks doesn't
  currently work in practice. This could be solved by having systems
  (mainly hosts, but possibly also routers) take the source address into
  account when making routing decisions.

* Larg(er) organizations will do just about anything to avoid
  renumbering; this is NOT "free" or even particularly easy in IPv6.

* Very few organizations want or need to connect to the internet at
  geographical locations that are very far apart and provide "internal
  transit" for non-backup purposes.

* Some organizations that have a relatively large block of address space
  de-aggregate to avoid having to use their internal network for transit.

What we should do
-----------------

This is what I think this wg should do:

0. Move past the requirements. There is consensus, sort of. It's not
   going to get much better than this.

1. Make recommendations to developers and other wgs about what they can
   do to better support multihoming.

2. Several tunnelling/aliasing solutions have been proposed. It would be
   good if those could interoperate. This would make it possible for a
   multihomed host with (for instance) modified mobility support to
   interact in a multihoming-meaningful way with a host sitting behind a
   cluster of big aliasing boxes. This would greatly relieve the
   chicken-and-egg problem for this type of solution and make
   experimentation with the really hard part (discovery and its security)
   easier. So we should come up with a protocol that makes this
   interoperation possible.

3. Open the lines of communication with people working on mobility.

4. Work on geographical aggregation, especially by getting an address
   allocation mechanism that supports it off the ground.

Iljitsch van Beijnum