Complementary work?
We submitted, just before the deadline, an update of the expired I-D
below. It appears to complement the work being done in the MULTI6
WG, in that it deals with a higher level of abstraction about
multihoming than at the routing level alone. While it was presented
to IDR some time ago, the consensus was that it had useful content
but, since it did not deal with protocol development, was outside
their scope. Variants of the document have been presented at NANOG,
and are also in my book, _WAN Survival Guide_ from Wiley.
What would be the group's feeling about taking it on as a working
document, with the routing multihoming document being at a more
technology specific level, possibly with some of the routing-specific
detail moving to the IPv4 and IPv6 drafts?
This is meant as a friendly amendment only!
Howard Berkowitz
Nortel
----------
Network Working Group                                        H. Berkowitz
Internet-Draft                                                D. Krioukov
draft-berkowitz-multirqmt-02.txt                          Nortel Networks
                                                                July 2001
(if possible, change to draft-berkowitz-multireq-02.txt)
To Be Multihomed: Requirements & Definitions
Status of this Memo
This document is an Internet-Draft and is subject to
all provisions of Section 10 of RFC2026.
Copyright Notice
Copyright (C) The Internet Society (2001). All Rights Reserved.
Abstract
As organizations find their Internet connectivity increasingly critical
to their mission, they seek ways of making that connectivity more
robust. The term "multi-homing" often is used to describe means of
fault-tolerant connection. Unfortunately, this term covers a variety of
mechanisms, including naming/directory services, routing, and physical
connectivity. The "home" may be identified by a DNS name, an IP address, or
an IP address and transport-layer port identifier. Any of these identifiers
may be translated in the path between source and destination. This
memorandum presents a systematic way to define the requirement for
resilience, and a taxonomy for describing mechanisms to achieve it. Its
intended audience is primarily people concerned with planning and performing
multihoming deployments, rather than protocol developers. It examines both
requirements and applications, with the emphasis on the former.
1. Introduction
As the Internet becomes more ubiquitous, more and more enterprises
connect to it. Some of those enterprises, such as Web software vendors,
have no effective business if their connectivity fails. Other
enterprises do not have mission-critical Internet applications, but
become so dependent on routine email, news, web, and similar access that
a loss of connectivity becomes a crisis.
As this Internet dependence becomes more critical, prudent management
suggests there be no single point of failure that can break all Internet
connectivity. The term "multihoming" has come into vogue to describe
various means of enterprise-to-service provider connectivity that avoid
a single point of failure. Multihoming also can describe connectivity
between Internet Service Providers and "upstream" Network Service
Providers.
Several terms have become overloaded to the point of confusion, including
multihoming, virtual private networks, and load balancing. This document
attempts to bring some order to the definition of multihoming. It partially
overlaps definitions of virtual private networks [Ferguson & Huston].
If we take the word "multihoming" in the broadest context, it implies there
are multiple ways to reach a "home" destination. Another way to look at this is
that the Internet, in the broadest sense, consists of a set of layered
virtualizations: time division multiplexing on top of media, data link
multiplexing on top of multiplexed channels, routes and label switched paths on
top of data link virtual circuits, transport sessions on top of
routes, etc. See Chapter 5 of [Berkowitz 2000].
This "home" may be identified by a name, an IP address, or a combination of
IP address and TCP/UDP port. In this document, TCP/UDP ports will be referred
to as TU ports.
There are other motivations for complex connectivity from enterprises to
the Internet. Mergers and acquisitions, where the joined enterprises
each had their own Internet access, often mean complex connectivity, at
least for a transition period. Consolidation of separate divisional
networks also creates this situation. A frequent case arises when a
large enterprise decides that Internet access should be available
corporate-wide, but their research labs have had Internet access for
years -- and it works, as opposed to the new corporate connection that
at best is untried.
Many discussions of multihoming focus on the details of implementation,
using such techniques as the Border Gateway Protocol (BGP) [RFC number
of the Applicability Statement], multiple DNS entries for a server, etc.
This document suggests that it is wise to look systematically at the
requirements before selecting a means of resilient connectivity.
One implementation technique is not appropriate for all requirements.
There are special issues in implementing solutions in the general
Internet, because poor implementations can jeopardize the proper
function of global routing or DNS. An incorrect BGP route advertisement
injected into the global routing system is a problem whether it
originates in an ISP or in an enterprise.
2. Requirements
In this document, the words that are used to define the significance
of each particular requirement are capitalized. These words are:
* "MUST" This word, or the words "REQUIRED" and "SHALL" mean that
the item is an absolute requirement of the specification.
* "SHOULD" This word or the adjective "RECOMMENDED" means that
there may exist valid reasons in particular circumstances to
ignore this item, but the full implications should be
understood and the case carefully weighed before choosing a
different course.
* "MAY" This word or the adjective "OPTIONAL" means that this
item is truly optional. One vendor may choose to include the
item because a particular marketplace requires it or because it
enhances the product, for example; another vendor may omit the
same item.
An implementation is not compliant if it fails to satisfy one or more
of the MUST requirements for the protocols it implements. An
implementation that satisfies all the MUST and all the SHOULD
requirements for its protocols is said to be "unconditionally
compliant"; one that satisfies all the MUST requirements but not all
the SHOULD requirements for its protocols is said to be
"conditionally compliant".
3. Goals
Requirements tend to be driven by one or more of several major goals for
server availability and performance. Availability goals are realized
with resiliency mechanisms, to avoid user-perceived failures from single
failures in servers, routing systems, or media. Performance goals are
realized by mechanisms that distribute the workload among multiple
machines in a useful manner. Like
multi-homing, the terms load-balancing and load-sharing have many
definitions.
In defining requirements, the servers themselves may either share or
balance the load, there may be load-sharing or load-balancing routing
paths to them, or the routed traffic may be carried over load-shared or
load-balanced media.
3.1 Analyzing Application Requirements
Several questions need to be answered in the process of refining goals:
-- the administrative model and administrative awareness of endpoints
-- availability requirements
-- the security model
-- addressing requirements
-- scope of multihoming
3.1.1 Administrative Model
A key question is: are endpoints predefined in the multihoming process, or
will either the client or server end be arbitrary?
The servers of interest may be inside the enterprise, or outside it. If they
are outside, their names or addresses may or may not be preconfigured into
multihoming mechanisms.
In this document, intranet clients and servers are inside the enterprise and
intended primarily for enterprise use. The existence of both can be
preconfigured. Intranet clients have access only to machines on the
intranet.
Multinet clients and servers are inside the enterprise, but there is pre-
authorized access by known external partners.
Internet servers are operated by the enterprise but intended to be
accessible to the general Internet. Internet clients have general Internet
access that may be mediated by a firewall. The client administrator will
know the prior identity of clients, but not of servers. The server
administrator will know the prior identity of servers, but not of clients.
3.1.2 Availability Requirements
There are major differences between defining a requirement for high
availability of initial access, and requiring that the connection stay up once
access has been achieved. The latter tends to require transport layer
awareness.
In the terminology of RFC1775, "To be 'on' the Internet," servers
described here have "full" access or a subset of "client" access. Servers
with client access may not directly respond to a specific IP packet from an
arbitrary host, but a system such as a firewall MUST respond for them unless a
security policy precludes that. Some valid security policies, for
example, suppress ICMP Destination Administratively Prohibited
responses, because such responses would reveal that there is an information
resource being protected.
RFC1775 defines full access as " a permanent (full-time) Internet
attachment running TCP/IP, primarily appropriate for allowing the
Internet community to access application servers, operated by Internet
service providers. Machines with Full access are directly visible to
others attached to the Internet, such as through the Internet Protocol's
ICMP Echo (ping) facility. The core of the Internet comprises those
machines with Full access." This definition is extended here to allow
firewalls or screening routers always to be present.
If a proxy or address translation service exists between the real
machine and the Internet, if this service is available on a full-time
basis, and consistently responds to requests sent to a DNS name of the
server, it is considered to be full-time.
In this discussion, we generalize the definition beyond machines
primarily appropriate for the Internet community as a whole, to include
in-house and authorized partner machines that use the Internet for
connectivity. Both the cases described in 3.2.3 and 3.2.4 have been termed
"Virtual Private Networks."
3.1.3 Security Model
Security requirements can include various cryptographic schemes, as well as
mechanisms to hinder denial of service attacks. The requirements analyst
must determine whether cryptography is needed, and, if so, whether
cryptographic trust must be between end hosts or between end hosts and a
trusted gateway. Such gateways can be routers or multiported application
servers.
3.1.4 Addressing Refinements and Issues
It is arguable that addressing used to support multihoming is a routing
deployment issue, beyond the scope of this document. The rationale for
including it here is that addressing MAY affect application behavior. There
also may be administrative requirements for addressing; for example, a service
provider that contracts to run a multinet may require addresses to be
registered, possibly from the provider's address space.
If the enterprise runs applications that embed network layer addresses
in higher-level data fields, solutions that employ address translation,
at the packet or virtual connection level, may not work. Use of such
applications inherently is a requirement on the eventual multihoming
solution.
Consideration also needs to be given to application caches in addition
to those of DNS. Firewall proxy servers are a good example where
multiple addresses associated with a given destination may not be
supported.
Network Address Translation (NAT) is a widespread technique undergoing
significant enhancement in the NAT Working Group. The traditional approaches
either did a one-to-one translation from an inside to an outside address, or a
many-to-one mapping from a large number of addresses on one side to a much
smaller number of addresses (with a larger number of TCP/UDP ports). In
practice, the traditional approaches include:
-- RFC1918 internal, static network address translation (NAT) to outside.
   The outside space may be an extranet partner's private or registered
   address space, or may be registered space allocated to an ISP.
-- RFC1918 internal, dynamic port address translation (PAT) to outside.
   The outside space may be an extranet partner's private or registered
   address space, or may be registered space allocated to an ISP.
-- Registered internal, Provider Assigned (PA)
-- Registered internal, Provider Independent (PI)
More powerful translation technologies, such as Load Sharing NAT [RFC2391] or
Application Level Gateways (ALGs) [RFC2663], may be needed.
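A minimal sketch of the many-to-one (PAT/NAPT) case follows; the addresses and
port numbers are purely illustrative. The translator owns a single registered
outside address and distinguishes inside hosts by the TU port it assigns:

    import itertools

    OUTSIDE_ADDR = "192.0.2.1"              # single registered outside address
    _next_port = itertools.count(20000)     # pool of outside TU ports
    _bindings = {}                          # (inside addr, inside port) -> outside port

    def translate_outbound(inside_addr, inside_port):
        """Map an RFC1918 inside (address, port) pair to the shared outside
        address plus a unique outside TU port, creating a binding on first use."""
        key = (inside_addr, inside_port)
        if key not in _bindings:
            _bindings[key] = next(_next_port)
        return OUTSIDE_ADDR, _bindings[key]

    print(translate_outbound("10.1.1.5", 51000))   # ('192.0.2.1', 20000)
    print(translate_outbound("10.1.2.9", 51000))   # ('192.0.2.1', 20001)

The static one-to-one case simply replaces the inside address with a fixed
outside address, leaving the TU port untouched.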
3.1.5 Scope of multihoming
Multihoming may be defined between an end host and a router or application
gateway, on an end-to-end basis possibly involving virtual servers, among
routers, or among elements in a transmission system. Different multihoming
scopes may support the same application requirement.
3.2. Application Goals
These goals need to be agreed to by the people or organization responsible
for the applications. Failing to reach fairly formal agreement here can lead
to problems of mismatched expectations.
At the application layer, there will be expectations of connectivity. Not
all applications will operate through classical NAT devices. Application
designers should proceed on two fronts: following NAT-friendly application
design principles [Senie 1999a] and being aware of potential application
protocol interactions with NAT technologies [Holdredge 1999a].
The term "service level agreement" often refers to expectations of
performance, such as throughput or response time. Ideas here extend the
performance-based model to include availability.
3.2.1 Specific network application availability
The first goal involves well-defined applications that run on a specific
set of servers visible to the Internet at large. This will be termed "endpoint
multihoming", emphasizing the need for resilience of connectivity to
well-defined endpoints. Solutions here involve DNS mechanisms as well as
route injection techniques.
There are both availability and performance goals here. Availability
goals arise when there are multiple routing paths that can reach the
server, protecting it from single routing failures. Other availability
goals involve replicated servers, so that the client will reach a server
regardless of single server failures. Replicated servers can be either
placed in the same location or distributed geographically.
Performance goals include balancing client requests over multiple
servers, so that one or more servers do not become overloaded and
provide poor service. Requests can be distributed among servers in a
round-robin fashion, or more sophisticated distribution mechanisms can
be employed. Such mechanisms can consider actual real-time workload on
the server or other devices involved, proximity between the client and
the server, known server capacity, etc.
From an application standpoint, this is either a many-to-one topology, many
clients to one server, or a many-to-many topology when multiple servers are
involved. It can be worthwhile to consider a many-to-few case, when the few
are multiple instances of a server function, which may appear as a single
server to the general Internet. The idea of many-to-few topology allows for
a local optimization of inter-server communications, without affecting the
global many-to-one model.
Addresses on interfaces that connect to the general Internet need to be
unique in the global Internet routing system, although they may be
translated, at the network address or port level, from public to internal
space. However, in route injection techniques, virtual addresses
corresponding to the same geographically distributed application
may be the same in different locations.
3.2.1.1 Content Delivery Networks (CDNs)
CDNs can be considered as a special case of the application availability
scenario. The emphasis here is on dealing with client-server
proximity and/or on delivering specialized content and network-based
applications to (a predefined set of) clients in the best possible way.
CDNs can be thought of as next-generation hosting solutions. In the
usual hosting scenario, a Content/Collocation Service Provider limits
itself to Internet Data Center (IDC) operations. In the CDN model,
the service provider distributes copies of content to remote replicas
(caches) located as close to the client edge as possible.
The fundamental reasons for doing so are:
- improved end user experience (even speed-of-light delays become
comparable with the maximum delays required by some applications and,
hence, service provider's SLAs; as a result, the physical proximity
of the content to the client becomes crucial);
- improved network bandwidth utilization;
- decreased load on the origin server(s).
The value of a CDN is defined by its reach and scale. The way to
increase the reach and scale of CDNs is CDN peering [Day]. In the
CDN peering model, separate CDNs can peer (exchange content as well
as content routing, proximity and accounting information) in a way
similar to traditional ISP peering. Success of the CDN peering
standardization effort may lead to the appearance of a content- and
proximity-aware global network overlaid on the current Internet
infrastructure.
3.2.2 General Internet connectivity from the enterprise
The second goal is high availability of general Internet connectivity for
arbitrary enterprise users to the outside. This will be called
"internetwork multihoming". Solutions here tend to involve routing
mechanisms.
This can be viewed as a few-to-many application topology.
Addresses on interfaces that connect to the general Internet need to be
unique in the global Internet routing system, although they may be
translated, at the network address or port level, from internal private
address to public space.
3.2.3 Use of Internet services to interconnect "intranet" enterprise
campuses
The third involves the growing number of situations where Internet
services are used to interconnect parts of an enterprise. This is
"intranetwork multihoming". This will usually involve dedicated or
virtual circuits, or some sort of tunneling mechanisms.
This case may be termed a "virtual private network." The "multihoming"
aspect of this case is associated with high availability to the
connectivity network that underlies the tunneling system.
In this case, addresses only need to be unique within the enterprise.
3.2.4 Use of Internet services to connect to "multinet" partners
A fourth category involves use of the Internet to connect with strategic
partners. True, this also deals with endpoints, but the emphasis is
different from the first case. In the first case, the emphasis is on
connectivity from arbitrary points outside the enterprise to points
within it. This case deals with pairs of well-known endpoints.
These endpoints may be linked with dedicated or virtual circuits defined
at the physical or data link layer. Tunneling or other virtual private
networks may be relevant here as well. There will be coordination
issues that do not exist for the third case, where all resources are
under common control.
Addresses need to be unique in the different enterprises, but do not need
to be unique in the global Internet.
4. Planning and Budgeting
In each of these scenarios, organization managers need to assign some
economic cost to outages. Typically, there will be an incident cost and
an incremental cost based on the length or scope of the connectivity
loss.
Ideally, this cost is then weighted by the probability of outage.
A weighted exposure cost results when the outage cost is multiplied by
the probability of the outage.
Resiliency measures modify the probability, but increase the cost of
operation.
Operational costs obviously include the costs of redundant mechanisms
(i.e., the additional multihomed paths), but also the incremental
costs of the personnel who administer the more complex mechanisms -- their
training and salaries.
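As an informal illustration of the weighting described above (every figure
below is invented for the example, not taken from any survey), the arithmetic
might look like this:

    # Illustrative only: all figures are hypothetical.
    incident_cost = 10_000.0    # fixed cost per outage (help desk, restarts)
    hourly_cost = 5_000.0       # incremental cost per hour of lost connectivity
    expected_hours = 4.0        # expected duration of a single outage
    outages_per_year = 2.0      # expected outage frequency without multihoming

    outage_cost = incident_cost + hourly_cost * expected_hours
    weighted_exposure = outage_cost * outages_per_year
    print(f"Annual weighted exposure: ${weighted_exposure:,.0f}")

    # A resiliency measure that halves the expected outage frequency is worth
    # considering if its annual operating cost is below the exposure it removes.
    residual_exposure = outage_cost * (outages_per_year / 2)
    print(f"Exposure removed by multihoming: ${weighted_exposure - residual_exposure:,.0f}")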
5. Issues
5.1 Performance and Robustness
Goals of some forms of "multi-homing" conflict with goals of improving
local performance. For example, DNS queries normally are cached in DNS
servers, and in the requesting host. From the performance standpoint,
this is a perfectly reasonable thing to do, reducing the need to send
out queries.
From the multihoming standpoint, it is far less desirable, as
application-level multihoming may be based on rapid changes of the DNS
master files. The binding of a given IP address to a DNS name can
change rapidly.
On the other hand, the goals of some other forms of "multi-homing" revolve
around improved performance, with higher robustness delivered
"automatically". The CDN approach is a good example.
5.2 Service Level Agreements
Enterprise networks, especially mainframe-based ones, are accustomed to building
and enforcing service level agreements for application performance. A key to
being able to do this is total control of the end-to-end communications
path.
In the current Internet, the enterprise(s) at one or both ends control their
local environments, and have contractual control over connections to their
direct service providers.
If service level control is a requirement, and both ends of the path are not
under control (i.e., cases 1 and 2), the general Internet cannot now provide
service level guarantees. The need for control should be reexamined, and, if
it still exists, the underlying structure will need to be dedicated
resources at the network layer or below. A network service provider may be
able to engineer this so that some facilities are shared to reduce cost, but
the sharing is planned and controlled.
5.3 Symmetry
One of the reasons service level agreements are not enforceable in the
general Internet is the reality that global routing cannot be guaranteed to
be symmetrical. Symmetrical routing assumes the path to a destination is
simply reversed to return a response from that destination. Both legs of a
symmetrical path are assumed to have the same performance characteristics.
Global Internet routing is not necessarily optimized for best end-to-end
routing, but for efficient handling in the Autonomous Systems along the
path. Many service providers use "closest exit" routing, where they will
go to the closest exit point from their perspective to get to the next-hop
AS. The return path, however, is not necessarily a mirror image of the
path from the original source to the destination.
Closest exit routing is, in fact, a "feature" rather than a "bug" in some
multihoming schemes [Peterson] [Friedman].
Especially when the enterprise network has multiple points of attachment to
the Internet, either to a single ISP AS or to multiple ISPs, it becomes
likely that the response to a given packet will not come back at the same
entry point in which it left the enterprise.
This is probably not avoidable, and troubleshooting procedures and traffic
engineering have to consider this characteristic of multi-exit routing.
5.4 Security
ISPs may be reluctant to let user routing advertisements or DNS zone
information flow directly into their routing or naming systems. Users
should understand that BGP is not intended to be a plug-and-play
mechanism; manual configuration often is considered an important part of
maintaining integrity. Supplemental mechanisms may be used for
additional control, such as registering policies in a registry [RPS, RA
documents] or egress/ingress filtering [Ferguson draft].
Challenges may arise when client security mechanisms interact with fault
tolerance mechanisms associated with servers. For example, if a server
address changes to that of a backup server, a stateful packet screening
firewall might not accept a valid return. Similarly, unless servers back
one another up in a full mirroring mode, if one end of a TCP-based
application connection fails, the user will need to reconnect. As long
as another server is ready to accept that connection, there may not be
major user impact, and the goal of high availability is realized. High
availability and user transparent high availability are not synonymous.
5.5 Load Balancing vs. Load Sharing
These terms are often interchanged, but they really mean different things.
Load balancing is deterministic, and at a finer level of control than load
sharing, which is statistical. Load balancing is generally not something
that can be realized in general Internet routing, other than in special and
local cases between adjacent AS. A degree of load sharing is achievable in
routing, but it may introduce significant resource demands and operational
complexity.
Historically, load balancing was thought of as a true equal split of traffic
over a set of transmission pipes. More modern concepts of controlled load
balancing use much more intelligent algorithms. In the server load balancing
(SLB) context, for example, the balancing decision is made by an SLB algorithm
running on the load balancer. For local server load balancing (i.e., in a
LAN-connected cluster of servers), deployed algorithms include round robin,
least real server sessions, least server response time, least real server
load, etc. Global load balancing among multiple clusters becomes even more
complex.
The key point to understand is that load balancing is the result of a dynamic
algorithm, while load sharing is the result of static configuration.
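A minimal sketch of that distinction follows; the server names and session
counts are hypothetical, not drawn from any particular product:

    import hashlib

    SERVERS = ["srv-a", "srv-b", "srv-c"]          # hypothetical server pool

    def load_share(client_ip):
        """Load sharing: a static rule (here, a hash of the client address)
        spreads traffic statistically, with no knowledge of current load."""
        digest = hashlib.md5(client_ip.encode()).digest()
        return SERVERS[digest[0] % len(SERVERS)]

    def load_balance(sessions):
        """Load balancing: a dynamic algorithm (here, least active sessions)
        consults measured state before each assignment."""
        return min(SERVERS, key=lambda s: sessions.get(s, 0))

    print(load_share("192.0.2.10"))                # same answer every time
    print(load_balance({"srv-a": 40, "srv-b": 12, "srv-c": 31}))  # depends on load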
5.6 Application Compatibility
Some deployment mechanisms involve network address, or network address and
TCP/UDP port, translation (NAT and NAPT). If the application protocols embed
IP addresses in their protocol fields, NAT or NAPT may cause protocol
failures. Translation mechanisms for such cases may require knowledge of the
application protocol, as typified by application proxies in firewalls, or in
application gateways with multiple interfaces.
6. Multihoming Deployment Technologies
A basic way to tell which technology(ies) is applicable is to ask oneself
whether the functional requirement is defined in terms of multihoming to
specific hosts, or to specific networks.
A given multihoming implementation may draw on several of these technologies,
such as DNS name-level redirection based in part on routing metrics.
6.1 Application/Name Based
Technologies in this category may involve referring a client request to
different instances of the endpoint represented by a single name. Another
aspect of application/name multihoming may work at the level of IP
addressing, but specifically is constrained to endpoint (i.e., server)
activities that redirect the client request to a different endpoint.
Application-level firewall proxy services can provide this functionality,
although their application protocol modification emphasizes security while a
multihoming application service emphasizes availability and quality of
service.
6.2 Transport Based
Transport based technologies are based on maintaining tunnels through an
underlying network or transmission system. The tunnels may be true end to
end, connecting a client host with a server host, or may interconnect
between proxy servers or other gateways.
6.3 Network Based
Network based approaches to multihoming are router-based. They involve
having alternate next-hops, or complete routes, to alternate destinations.
6.4 Data Link Based
Data link layer methods may involve coordinated use of multiple physical
paths, as in multilink PPP or X.25. If the underlying WAN service has a
virtual circuit mechanism, as in frame relay or ATM, the service provider
may have multihomed paths provided as part of the service. Such functions
blur between data link and physical layers.
Other data link methods may manipulate MAC addresses to provide virtual
server functions.
6.5 Physically-based
Physical multihoming strategies can use diverse media, often of different
types such as a wire backed up with a wireless data link. They also involve
transmission media which have internal redundancy, such as SONET.
7. Application/Name Multihoming
While many people look at the multihoming problem as one of routing,
the goal is often multihoming to endpoints. Finding an endpoint usually
begins in DNS. Once an endpoint address is found, some application
protocols, notably HTTP and FTP, may redirect the request to a different
endpoint.
The basic idea here is that arbitrary clients will first request access to a
resource by its DNS name, and certain DNS servers will resolve the same
name to different addresses based on conditions of which DNS is aware,
or using some statistical load-distribution mechanism.
There are some general DNS issues here. DNS was not really designed to
do this. A key issue is that of DNS caching. Caching and frequent
changes in name resolution are opposite goals. Traditional DNS schemes
emphasize performance over resiliency.
7.1 DNS Caching
DNS standards do provide the capability for short cache lifetimes, which in
principle support name based multihoming. "The meaning of the TTL field is
a time limit on how long an RR can be kept in a cache. This limit does not
apply to authoritative data in zones; it is also timed out, but by the
refreshing policies for the zone. The TTL is assigned by the administrator
for the zone where the data originates. While short TTLs can be used to
minimize caching, and a zero TTL prohibits caching, the realities of
Internet performance suggest that these times should be on the order of
days for the typical host. If a change can be anticipated, the TTL can be
reduced prior to the change to minimize inconsistency during the change,
and then increased back to its former value following the change." [RFC1034]
Several real-world factors limit the utility of simply shortening the cache
time. BIND, the most widely used DNS implementation, does not
accept cache lifetimes of less than 5 minutes, and older versions of BIND
simply ignore responses with a TTL of 0.
Dynamic DNS may be a long-term solution here. In the short term, setting
very short TTL values may help in some cases, but is not likely to help
below a granularity of 5 minutes. Remember that the name normally is resolved
when an application session first is established (that is, the application
must be restarted for an updated DNS mapping to take effect), and the decisions
are made over a longer time base than per-packet routing decisions.
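The tension between caching and rapid change can be seen in a toy resolver
cache (a sketch, not BIND's actual behavior); a record cached under a long
TTL keeps answering from the stale copy even after the zone has changed:

    import time

    class ToyCache:
        """Caches name -> address pairs for at most TTL seconds."""
        def __init__(self):
            self._entries = {}                  # name -> (address, expiry time)

        def store(self, name, address, ttl):
            self._entries[name] = (address, time.time() + ttl)

        def lookup(self, name):
            entry = self._entries.get(name)
            if entry and time.time() < entry[1]:
                return entry[0]                 # still answering from cache
            return None                         # expired: must re-query DNS

    cache = ToyCache()
    cache.store("server.example.com", "10.0.1.1", ttl=86400)   # day-long TTL
    # Even if the zone now maps the name to 10.0.2.1 (say, after a failover),
    # this resolver keeps returning 10.0.1.1 until the cached entry expires.
    print(cache.lookup("server.example.com"))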
7.2 DNS Multiple Hosts and Round Robin Response
The DNS protocol allows a server to return multiple host addresses in response to
a single query. At the first level of DNS-based multihoming, this can
provide additional reliability.
A DNS server knows three IP addresses for the server function identified by
server.example.com, 10.0.1.1, 10.0.2.1, and 10.0.3.1. A simple response to a
query for server.example.com returns all three addresses. Assume the
response provides server addresses in the order 10.0.1.1, 10.0.2.1, and
10.0.3.1.
Whether this will provide multihoming now depends on the DNS client. Not all
host client implementations will, if the first address returned (i.e.,
10.0.1.1) does not respond, try the additional addresses. In this example,
10.0.2.1 might be operating perfectly.
A variant suggested by Kent England on the NANOG mailing list is to have the
addresses returned in the DNS response come from the CIDR blocks of different
ISPs that provide connectivity to the server function. This approach combines
aspects of name and network multihoming.
Again, this will work when intelligent clients try every IP address returned
until a server responds. Not all clients are intelligent.
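A client that does take advantage of multiple returned addresses behaves
roughly like the sketch below; the addresses are the example addresses from
this section, and a prior DNS query is assumed to have supplied them:

    import socket

    def connect_any(addresses, port=80, timeout=5.0):
        """Try each address in the order DNS returned it; use the first that answers."""
        for addr in addresses:
            try:
                return socket.create_connection((addr, port), timeout=timeout)
            except OSError:
                continue            # this instance is unreachable; try the next one
        raise ConnectionError("no server instance reachable")

    # Addresses as returned for server.example.com in the example above.
    sock = connect_any(["10.0.1.1", "10.0.2.1", "10.0.3.1"])

A client that lacks this loop, and gives up after the first address fails, gets
no benefit from the additional records.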
7.3 Intelligent DNS Responses
This is usually a part of Global Server Load Balancing (GSLB) functionality
of specialized devices (load balancers) from various vendors. In this case,
information about the availability of and load on servers is collected,
and proximity to the client's DNS server is measured, by the load balancer.
Given this information, the load balancer can make a more intelligent decision
(than in the round-robin case) when replying to the DNS request.
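A rough sketch of such a decision, assuming the load balancer has already
collected per-site liveness, load, and a proximity estimate to the querying
DNS server; the site data and scoring weights are invented for illustration:

    # Hypothetical per-site state collected by the load balancer.
    sites = [
        {"vip": "192.0.2.1",    "alive": True,  "load": 0.80, "rtt_ms": 12},
        {"vip": "198.51.100.1", "alive": True,  "load": 0.35, "rtt_ms": 45},
        {"vip": "203.0.113.1",  "alive": False, "load": 0.10, "rtt_ms": 30},
    ]

    def answer_dns_query():
        """Return the address of the 'best' live site: dead sites are skipped,
        then load and proximity are combined into a single score."""
        live = [s for s in sites if s["alive"]]
        best = min(live, key=lambda s: 0.7 * s["load"] + 0.3 * (s["rtt_ms"] / 100))
        return best["vip"]

    print(answer_dns_query())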
7.4 Cache Servers and Application Multihoming
Caching is actively used in Content Delivery Networks (CDNs). Clusters of
caching devices are installed in a set of "strategic" locations, closer
to the client edge. Availability, load and proximity information is exchanged
between these clusters and IDCs, where the original servers are located.
If a cluster fails, the next closest cluster or IDC (from the
perspective of a given client) serves that client.
8. Transport Multihoming
Transport layer functions are conceptually end-to-end. There are two broad
classes of transport multihoming function, those maintained by the endpoints
and those that involve intermediate translation devices.
8.1 End-to-end Tunnel Maintenance
Basic point-to-point tunneling mechanisms include GRE, PPTP and L2TP
and IPIP. DVMRP is a special case. Choices here will depend in part
on the security policy
and the administrative model by which multihoming is provided. GRE, for
example, does not itself provide encryption, while PPTP and L2TP do.
"The differences between PPTP and L2TP are more of where one wishes the
PPP session to terminate, and one of control. It really depends on who
you are, and where you are, in the scheme of the control process. If
your desire is to control where the PPP session terminates (as an ISP
might wish to control), then L2TP might be the right approach. On the
other hand, however, if you are a subscriber, and you wish to control
where the PPP session terminates (to, say, a PPTP server somewhere
across the cloud), then PPTP might be the right approach -- and it
would be transparent to the service provider). It really depends on
what problem one is trying to solve, and if you are in the business
of trying to create "services." [Ferguson-1998-2]
8.2 Load Sharing NAT (LSNAT)
Various proxy and address translation mechanisms can play a role in
multihoming. They can be divided into several levels of topological
constraints:
-- all servers are collocated in a single address domain "behind" the
translator
-- servers are in different parts of a single address domain. These
parts are connected by tunnels.
-- the servers are at arbitrary network addresses, but the translator
knows how to reach them.
Application-aware proxies can have even more knowledge of application load.
A variant of NAT, called load sharing NAT (LS-NAT), offers a load sharing
mechanism at the transport/network level rather than the DNS level
[Srishuresh]. When considering LSNAT-style load sharing, remember that it
emphasizes providing a pool of servers capable of servicing requests.
In its "local" form, it does not easily provide mechanisms for increasing
reliability by mapping the user request to geographically distributed
servers. More advanced variants can combine with DNS- and routing-aware
mechanisms to increase reliability as well as performance.
The LSNAT function is visible globally as a server address. It is actually a
virtual server. When a client request arrives at the LSNAT, the LSNAT
translates the destination address, transparently to the client, and passes
it to a server in the LSNAT's server pool.
LSNATs, in their basic form, do have a topological restriction. It has been
suggested that all request/response traffic in a single session must go from
the real client, to one specific LSNAT, to the server. It is conceivable
that multiple routers could be used, but they would need to be tightly
synchronized.
LSNAT builds on NAPT, but adds intelligence to the port mapping function.
Also, general NAPT is oriented toward outgoing requests from the stub domain
to the outside, while LSNAT emphasizes incoming requests to a virtual server
address.
As currently conceived, LSNATs operate at the TCP/UDP transport level, so
they could not easily change server hosts during a session. There are
potential workarounds to this, including using a multicast address as the
server pool destination, with coordination between the servers as to which
actually answers the request. For highly fault-tolerant applications, more
than one server conceivably could answer the NAT request, with the LSNAT
deciding which response to pass to the client. Typically, if servers are
identical, it would be the first response received by the server pool side of
the LSNAT.
This general topology restriction suggests LSNAT functionality belongs on a
single NAT-capable border router, with the server pool inside the stub
domain. A LSNAT violates the Internet end-to-end model in the same way that
basic NAT does. There is no requirement that the addressing in the stub
domain be private, only that all access to the servers go through the NAT.
In basic LSNAT, an arbitrary external client attempts to establish a session
with what it believes to be a server. In reality, it is attempting to
establish a session with the virtual server address of the "outside"
interface of the LSNAT.
The LSNAT, based on internal criteria, redirects the external request to a
specific internal server in a server pool. Unique connections are
established from the LSNAT to the pool server for each request, translating
addresses and ports as needed.
8.2.1 Source LS-NAT
Adding source address translation removes many topological limitations
between the real servers and the LS-NAT router.
In a LS-NAT router, inbound sessions identified by the tuple <real client
address, real client TU port, virtual server address, virtual server TU port>
are translated into the tuple:
o client address
o client TU port
o real server address
o real server TU port
Notice that the server needs to respond to the real client address in a
LS-NAT system. Assuming the servers do not participate in routing, the only
realistic way for the servers to send to an external address is to use a
default route. In a variety of network scenarios involving multi-attached
servers, this single default gateway may be a problem in case of some local
failures.
It is not a given that servers do not participate in routing. Some cache
server schemes physically involve a cluster of servers with a dedicated
router, or servers themselves may run routing protocols.
Source LS-NAT involves defining a virtual server address as that of an external
interface of a source LS-NAT router. Incoming sessions translate into the
tuple:
o virtual server address
o virtual server TU port
o real server address
o real server TU port
If source LS-NAT is used in the local context, the various complications
associated with the single default gateway on multi-attached servers are
resolved, since all connections to real servers now appear to be locally
originated.
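The two translation styles can be summarized in a small sketch that follows
the tuples listed above; the addresses, ports, and helper names are purely
illustrative:

    from typing import NamedTuple

    class Flow(NamedTuple):
        src_addr: str
        src_port: int
        dst_addr: str
        dst_port: int

    VIP = ("203.0.113.10", 80)                   # virtual server address/port

    def lsnat_rewrite(inbound, real_server):
        """Basic LS-NAT: only the destination is rewritten, so the real server
        still sees the real client address and must route replies back through
        the LS-NAT router, typically via its default gateway."""
        return inbound._replace(dst_addr=real_server[0], dst_port=real_server[1])

    def source_lsnat_rewrite(inbound, real_server):
        """Source LS-NAT: both source and destination are rewritten, so the
        connection appears locally originated from the virtual server address,
        at the cost of hiding the real client address from the server."""
        return Flow(VIP[0], VIP[1], real_server[0], real_server[1])

    pkt = Flow("198.51.100.7", 51000, VIP[0], VIP[1])   # client -> virtual server
    print(lsnat_rewrite(pkt, ("10.0.0.11", 8080)))
    print(source_lsnat_rewrite(pkt, ("10.0.0.11", 8080)))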
In global source LS-NAT scenarios (so-called "remote server" functionality),
the real server can use multiple paths to return responses. Quite complex routing
can provide multiple paths to the source LS-NAT router. There remains the basic
topological constraint that there will be only one source LS-NAT router, but
there can more easily be multiple internal paths to it. This allows servers
to be outside a stub domain. The source LS-NAT router can direct traffic
to an internal server inside the private address space of a stub domain,
or direct the traffic to a third-party server using registered addresses
and WAN connectivity.
The additional complexity of source LS-NAT does allow greater scalability,
because new links can be dropped into the routing system without problem.
As long as new client links can get to the virtual server address, the addition
of these links is transparent to both servers and clients.
The most important consideration against source LS-NAT is that it breaks
client-address logging on the real servers, which no longer see the real
client address.
8.3 Triangle Data Flow (TDF) methods
Various tunneling and source LS-NAT based methods, as well as combinations of
the two, can be used to mitigate the aforementioned topological constraints
even more efficiently than pure source LS-NAT, by triangulating the data
flows.
In a generic source LS-NAT case, traffic flows are still as in any LS-NAT
scenario -- the ingress traffic flows from the client, to the LS-NAT router
and then to the real server; the egress traffic flows the same way backwards.
In a generic TDF case, the ingress traffic still flows the same way (client,
LS-NAT router, server), while the egress traffic flows directly from the
server to the client by-passing the LS-NAT router.
The main rationale for TDF is the simple observation that egress traffic
comprises 90% of the average total data center traffic.
8.4 Route Health Injections
While this method utilizes routing, its primary target is at the application
level. Therefore it is described here.
In this technique, the DNS name for a geographically distributed application
usually corresponds to only one IP address, and this IP address is injected
as a host route into the routing protocol (either BGP or an IGP) at each
geographically separate location hosting the corresponding application.
Each host route is injected only while the corresponding server or application
hosted in a given location is alive.
This way, every client connects to its closest server (in the routing
protocol's sense of closest) and, if that server fails, traffic from the
client is routed to the next closest server.
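A sketch of the per-site logic follows; the health check and the
announce/withdraw hooks are hypothetical placeholders standing in for the
site-specific interface to the local routing protocol:

    import socket
    import time

    SERVICE_ADDR = "203.0.113.50"   # the one address behind the DNS name
    CHECK = ("127.0.0.1", 8080)     # hypothetical local application health port

    def application_alive():
        """Hypothetical health check: can we open the local application port?"""
        try:
            socket.create_connection(CHECK, timeout=2).close()
            return True
        except OSError:
            return False

    def announce_host_route(addr):   # placeholder: site-specific BGP/IGP hook
        print(f"inject {addr}/32 into routing")

    def withdraw_host_route(addr):   # placeholder: site-specific BGP/IGP hook
        print(f"withdraw {addr}/32 from routing")

    announced = False
    while True:
        alive = application_alive()
        if alive and not announced:
            announce_host_route(SERVICE_ADDR)
            announced = True
        elif not alive and announced:
            withdraw_host_route(SERVICE_ADDR)
            announced = False
        time.sleep(10)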
9. Network/Routing Multihoming
A common concern of enterprise financial managers is that multihoming
strategies involve expensive links to ISPs, but, in some of these
scenarios, alternate links are used only as backups, idle much of the
time. Detailed analysis may reveal that the cost of forcing these links
to be used at all times, however, exceeds the potential savings.
The intention here is to focus on requirements rather than specifics of
the routing implementation, several approaches to which are discussed in
RFC1998 and draft-bates-multihoming-01.txt. Exterior routing policies can be
described with the Routing Policy Specification Language [RFC2280].
Operational as well as technical considerations apply here. While the
Border Gateway Protocol could convey certain information between user
and provider, many ISPs will be unwilling to risk the operational
integrity of their global routing by making the user network part of
their internal BGP routing systems.
ISPs may also be reluctant to accept BGP advertisements from
organizations that do not have frequent operational experience with this
complex protocol.
9.1 Single-homed (R1)
The enterprise generally does not have its own ASN; all its
advertisements are made through its ISP. The enterprise uses default
routes to the ISP. The customer is primarily concerned with protecting
against link or router failures, rather than failures in the ISP routing
system.
9.1.1 Single-homed, single-link (R1.1)
There is a single active data link between the customer and provider.
Variations could include switched backup over analog or ISDN services.
Another alternative might be use of alternate frame relay or other PVCs
to an alternate ISP POP.
9.1.2 Single-homed, balanced link (R1.2)
In this configuration, multiple parallel data links exist from a single
customer router to an ISP router. There is protection against link
failures.
The single customer router constraint allows this router to do round-
robin packet-level load balancing across the multiple links, for
resiliency and possibly additional bandwidth. The ability of a router
to do such load-balancing is implementation-specific, and may be a
significant drain on the router's processor.
9.1.3 Single-homed, multi-link (R1.3)
Here, we have separate paths from multiple customer routers to multiple
ISP routers at different POPs. Default routes generated at each of the
customer gateways are injected into the enterprise routing system, and
the combination of internal and external metrics is considered by internal
routers in selecting the external gateway.
This often is attractive for enterprises that want resiliency but wish
to avoid the complexity of BGP.
9.1.4 Multihomed to a Single Provider (R1.4)
RFC1998 and RFC2270 describe approaches where customers use the flexibility of
BGP, typically with a private AS number, to multihome to multiple POPs of a
single ISP. The difference between this case and 9.1.3 is that there is much
greater control, since more information than default routes and preconfigured
static routes is available. RFC1998 deals with a case where the customer's
address space is provider-assigned, whereas RFC2270 is more general about
addressing.
9.1.5 Special Cases
While the customer in this configuration is still single-homed, an AS
upstream from the ISP has a routing policy that makes it necessary to
distinguish routes originating in the customer from those originating in
the ISP. In such cases, the enterprise may need to run BGP, or have the
ISP run it on its behalf, to generate advertisements of the needed
specificity. Since the same basic topologies discussed above apply, we
can qualify them as R1.1B, R1.2B, and R1.3B.
It MAY be possible for the customer to avoid using BGP, if its adjacent
ISP will set a BGP community attribute, understood by the upstream, on
the customer prefixes [RFC1998]. Doing so results in the cases R1.1C,
R1.2C, and R1.3C. This will involve more administrative coordination, but
offers the advantage of leaving complex BGP routing to professionals.
9.2 Multi-homed Routing
Multihomed routing to multiple providers definitely increases protection
against failures of the routing system of a specific provider, but it has the
negative consequence of injecting many more long prefixes into the global
routing system. Indeed, the added prefixes, and especially the increased churn
from prefixes going up and down, may decrease the reliability of
resource-constrained routers.
Some proposals have been submitted on how to mitigate the long prefix injection
problem (see [RFC2260] for the IPv4 case, and [Hagino], [Bragg] and references
therein for the IPv6 case). However, none of them fully resolves the problem --
they usually introduce a high level of operational complexity, which makes
service providers reluctant to deploy them, and they address only local
failures (between the provider and the customer); failures outside this scope
(some Internet backbone event making the provider unreachable, for example) go
undetected by those mechanisms. The problem of long prefix injection with
multi-provider multihoming remains an open issue (for more details see
[Huston]).
Some providers may be violating RFC1771 rules by using an RFC2270-style private
AS for multihoming to more than one provider. While, conceptually, conserving
AS numbers is desirable, other solutions to that problem are being developed,
such as 4-byte AS numbers [Vohra2001]. All registries consider multihoming to
more than one provider a justification for the assignment of a public AS.
The enterprise connects to more than one ISP, and desires to protect
against problems in the ISP routing system. It will accept additional
complexity and router requirements to get this. The enterprise may also
have differing service agreements for Internet access for different
divisions.
9.2.1 Multi-homed, primary/backup, single link (R2.1)
The enterprise connects to two or more ISPs from a single router, but
has a strict policy that only one ISP at a time will be used for
default. In an OSPF environment, this would be done by advertising
defaults to both ISPs, but with different Type 2 external metrics. The
primary ISP would have the lower metric. BGP is not necessary in this
case. This easily can be extended to multi-link.
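The selection rule described above reduces to comparing the Type 2 external
metrics of the advertised defaults; a toy illustration, with hypothetical
metric values and next hops:

    # Hypothetical default routes learned as OSPF Type 2 externals.
    defaults = [
        {"asbr": "asbr-primary", "next_hop": "192.0.2.1",    "type2_metric": 10},
        {"asbr": "asbr-backup",  "next_hop": "198.51.100.1", "type2_metric": 100},
    ]

    # Type 2 external metrics are compared directly; the lower metric wins,
    # so all traffic follows the primary ISP while its default is present.
    best = min(defaults, key=lambda d: d["type2_metric"])
    print(f"default via {best['next_hop']} ({best['asbr']})")

    # If the primary ASBR stops advertising (link or ISP failure), only the
    # backup default remains and traffic shifts to the second ISP.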
9.2.2 Multi-homed, differing internal policies (R2.2)
In this example, assume OSPF interior routing, because OSPF can distinguish
between type 1 and type 2 external metrics. The main default for the
enterprise comes from one or more ASBRs in Area 0, all routing to the
same ISP. One or more organizations brought into the corporate network
have pre-existing Internet access agreements with an ISP other than the
corporate ISP, and wish to continue using this for their "divisional"
Internet access.
This is frequent when a corporation decides to have general Internet
access, but its research arm has long had its own Internet connectivity.
Mergers and acquisitions also produce this case.
In this situation, an additional ASBR (or ASBRs) is placed in the OSPF areas
associated with the special case, and this ASBR advertises default.
Filters at the Area Border Router block the divisional ASBR's default
from being advertised into Area 0, and the corporate default from being
advertised into the division. Note that these filters do not block OSPF
LSAs, but instead block the local propagation of selected default and
external routes into the Routing Information Base (i.e., main routing
table) of a specific router.
9.2.3 Multi-homed, "load shared" with primary/backup (R2.3)
While there still is a primary/backup policy, there is an
attempt to make active use of both the primary and backup providers.
The enterprise runs BGP, but does not take full Internet routing. It
takes partial routing from the backup provider, and prefers the backup
provider path for destinations in the backup provider's AS, and perhaps
directly connected to that AS. For all other destinations, the primary
provider is the preferred default. A less preferred default is defined
to the second ISP, but this default is advertised generally only if
connectivity is lost to the primary ISP.
9.2.4 Multi-homed, global routing aware (R2.4)
Multiple customer routers receive a full routing table, and, using
appropriate filtering and aggregation, advertise different destinations
(i.e., not just default) internally. This requires BGP, and, unless
dealing with a limited number of special cases, requires significantly
more resources inside the organization.
9.3 Transit
While we usually think of this in terms of ISPs, some enterprises may
provide Internet connectivity to strategic partners. They do not offer
Internet connectivity on a general basis.
9.3.1 Full iBGP mesh (R3.1)
Connectivity and performance requirements are such that a full iBGP mesh
is practical.
9.3.2 Scalable IBGP required (R3.2)
The limits of iBGP full mesh have been reached, and confederations,
route reflectors, etc., are needed for growth.
10. Transmission Considerations in Multihoming
"Multihoming" is not logically complete until all single points of
failure are considered. With the current emphasis on routing and naming
solutions, the lowly physical layer often is ignored, until a physical
layer failure dooms a lovely and sophisticated routing system.
Physical layer diversity can involve significant cost and delay.
Nevertheless, it should be considered for mission-critical connectivity.
The principal transmission impairment, the backhoe, can be viewed at
http://www.cat.com/products/equip/bhl/bhl.htm
10.1 Local Loop
From a typical server room, analog and digital signals physically flow
to a wiring closet, where they join a riser cable. Depending on the specific
building, the closet and riser may be the responsibility of the enterprise
or ISP, the building management, or a telecommunications carrier.
The riser cable joins with other riser cables in a cable vault, from which
a cable leaves the building and goes to the end switching office of the
local telecommunications provider. Most buildings have a single cable
vault, possibly with multiple cables following a single physical route
back to the end office. A single error by construction excavators can cut
multiple cables on a single path.
A failure in carrier systems can isolate a single end office. Highly
robust systems have physical connectivity to two or more POPs reached
through two or more end offices.
Alternatives here can become creative. On a campus, it can be feasible
to use some type of existing ductwork to run additional cables to
another building that has a physically diverse path to the end office.
Direct wire burial, fiber optic cables run in the air between buildings,
etc., are all possible.
In a non-campus environment, it is possible, in many urban areas, to
find alternate means of running physical media to other buildings with
alternate paths to end offices. Electrical power utilities may have
empty ducts which they will lease, and through which privately owned
fiber can be run.
10.2 Provider Core
As demonstrated by a rash of fiber cuts in early 1997, carriers lease
bandwidth from one another, so a cut to one carrier-owned facility may
affect connectivity in several carriers. This reality makes some
traditional diverse media strategies questionable.
Many organizations consciously obtain WAN connectivity from multiple
carriers, with the notion that a failure in one carrier will not affect
another. This is not a valid assumption.
If the goal is to obtain diversity/resiliency among WAN circuits, it may
be best to deal with a single service provider. The contract with this
provider should require physical diversity among facilities, so the
provider's engineering staff will be aware of requirements not to put
multiple circuits into the same physical facility, owned by the carrier
or leased from other carriers.
10.3 AS Topology Beyond ISPs
The designer must be aware of realities of physical plant in the area being
served. If the only upstream connection from two local ISPs is to the same
higher-level service provider, the only fault tolerance benefit of routing
multihoming will be to improve connectivity in the local serving area. In some
extreme cases, the only upstream connection of all ISPs in a given area may be
a single link freely hanging on rusty towers and shabby trees along the
railroad.
11. Security Considerations
12. Acknowledgments
13. References
[Berkowitz 2000] Berkowitz, H. _WAN Survival Guide_. Wiley, 2000.
[Berkowitz 2001] Berkowitz, H., Davies, E., and L. Andersson, "An Experimental
Methodology for Analysis of Growth in the Global Routing Table", Work in
Progress, IETF, July 2001.
[Bragg] Bragg, N. "Routing support for IPv6 Multi-homing", Work in Progress,
draft-bragg-ipv6-multihoming-00.txt, November 2000.
[Day] Day, M., Cain, B., Tomlinson, G., and P. Rzewski, "A Model for CDN
Peering", Work in Progress, draft-day-cdnp-model-06.txt, May 2001.
[Ferguson-1998-1] Ferguson, P. "Re: Comments on "What is a VPN?"" Message to
IETF VPN mailing list, 08 Mar 1998 19:52:29
[Friedman] Friedman, A. "Dynamic Selection of Geographically Distributed
Servers," presentation at NANOG October 1997 meeting, notes at
http://www.academ.com/nanog/october1997/dynamic-selection.html
[Hagino] Hagino, J., and H. Snyder, "IPv6 multihoming support at site exit
routers", Work in Progress, draft-ietf-ipngwg-ipv6-2260-02.txt, July 2001.
[Huston] Huston, G. "Architectural Requirements for Inter-Domain Routing in
the Internet", Work in Progress, draft-iab-bgparch-01.txt, May 2001.
[Krioukov 2000] Krioukov, D., and A. Kit, "Global Server Load Balancing", NANOG
October 2000 meeting, http://www.nanog.org/mtg-0010/krioukov.html
[Peterson] Peterson, A. "Dynamic Selection of Geographically Distributed
Servers," presentation at NANOG October 1997 meeting, notes at
http://www.academ.com/nanog/october1997/dynamic-selection.html
[RFC1034] Mockapetris, P., "Domain Names - Concepts and Facilities", RFC
1034, November 1987.
[RFC1631] Egevang, K., and P. Francis, "The IP Network Address
Translator (NAT)", RFC 1631, May 1994.
[RFC1775] Crocker, D., "To Be 'On' the Internet", RFC 1775, March 1995.
[RFC1812] Baker, F., "Requirements for IP Version 4 Routers", RFC
1812, June 1995.
[RFC1900] Carpenter, B., and Y. Rekhter, "Renumbering Needs Work", RFC
1900, February 1996.
[RFC1918] Rekhter, Y., Moskowitz, R., Karrenberg, D., de Groot, G-J.,
and E. Lear, "Address Allocation for Private Internets", RFC 1918,
February 1996.
[RFC1930] Hawkinson, J., and T. Bates, "Guidelines for Creation, Selection,
and Registration of an Autonomous System (AS)", BCP 6, RFC 1930, March 1996.
[RFC1998] Chen, E., and T. Bates, "An Application of the BGP Community
Attribute in Multi-home Routing", RFC 1998, August 1996.
[RFC2050] Hubbard, K., Kosters, M., Conrad, D., Karrenberg, D., and J.
Postel, "INTERNET REGISTRY IP ALLOCATION GUIDELINES", BCP 12, RFC
2050, November 1996.
[RFC2071] Ferguson, P., and H. Berkowitz, "Network Renumbering
Overview: Why would I want it and what is it anyway?", RFC 2071,
January 1997.
[RFC2072] Berkowitz, H. "Router Renumbering Guide." January 1997.
[RFC2260] Bates, T., and Y. Rekhter, "Scalable Support for Multi-homed
Multi-provider Connectivity", RFC 2260, January 1998.
[RFC2270] Stewart, J., Bates, T., Chandra, R., and E. Chen, "Using a
Dedicated AS for Sites Homed to a Single Provider", RFC 2270,
January 1998.
[RFC2280] Alaettinoglu, C., Bates, T., Gerich, E., Karrenberg, D., Meyer, D.,
Terpstra, M., and C. Villamizar, "Routing Policy Specification Language
(RPSL)", RFC 2280, January 1998.
[RFC2391] Srisuresh, P., and D. Gan, "Load Sharing using IP Network Address
Translation (LSNAT)", RFC 2391, August 1998.
[Srishuresh] Srisuresh, P., and D. Gan, "Load Sharing using IP Network
Address Translation (LSNAT)", Work in Progress,
ftp://ftp.ietf.org/internet-drafts/draft-srisuresh-lsnat-02.txt
[RFC2663] Srisuresh, P., and M. Holdredge, "IP Network Address Translator
(NAT) Terminology and Considerations", RFC 2663, August 1999.
[RFC3027] Holdredge, M., and P. Srisuresh, "Protocol Complications with the
IP Network Address Translator (NAT)", RFC 3027, January 2001.
[Senie1999a] Senie, D., "NAT Friendly Application Design Guidelines", Work in
Progress, IETF NAT Working Group, draft-nat-app-guide-06.txt, June 1999.
[Vohra2001] Vohra, Q., and E. Chen, "BGP Support for Four-octet AS Number
Space", Work in Progress, IETF IDR Working Group,
draft-ietf-idr-as4bytes-03.txt, May 2001.
14. Authors' Addresses
Howard C. Berkowitz
Nortel Networks
5012 South 25th Street
Arlington VA 22206
Phone: +1 703 998 5819
EMail: hberkowi@nortelnetworks.com
Dmitri Krioukov
Nortel Networks
205 Van Buren Street
Herndon, VA 20170
Phone: +1 703 712 8518
Email: dima@nortelnetworks.com