[RRG] Run your own ETRs and ITRs, mobility, local anycast for Query Servers
- To: Routing Research Group list <rrg@psg.com>
- Subject: [RRG] Run your own ETRs and ITRs, mobility, local anycast for Query Servers
- From: Robin Whittle <rw@firstpr.com.au>
- Date: Thu, 06 Dec 2007 18:27:16 +1100
- Cc: Stephen Sprunk <stephen@sprunk.org>
- Organization: First Principles
- User-agent: Thunderbird 2.0.0.9 (Windows/20071031)
Oops, I wrote "PI" where I meant "PA".
Hi Stephen,
I had tended to assume that an ETR would be physically located in an
ISP's building, probably serving multiple end-user networks. You
suggest something different, which I think is also a good approach.
In the thread "Thoughts on the RRG/Routing Space Problem":
http://psg.com/lists/rrg/2007/msg00721.html
you wrote, quoting me:
>> I hadn't considered the cost of equipment. I figure the ITRs
>> are paid for, in general, by the ISPs where they are located -
>> tunneling packets originating from the ISP's customers' hosts.
>
> That's one option. See below for my take.
>
>> For an end-user who gains some of the new kind of address space
>> provided by the ITR-ETR (AKA map&encap) system, the ETR(s)
>> their traffic flows through would be owned and operated by the
>> ISP(s) which they connect to the net with. ETRs are going to
>> be pretty simple compared to ITRs - they don't need an FIB, a
>> database or a feed of mapping data (or access to a query
>> server, if they are caching ITRs).
>
> Now that's interesting... ISPs seemingly have little motivation
> to operate ETRs because they defeat customer lock-in.
This works both ways. It may decrease the lock-in of their current
end-user networks which use PA space, but this is a tiny subset of
the total number of potential customers. The most important thing
is that the ITR-ETR scheme reduces lock-in for the customers of all
other ISPs too. So every ISP will want to have their own ETRs to
attract the custom of end-users who are happy with the new ITR-ETR
mapped address space and who would like to choose this ISP for at
least one of their connections to the Net.
> I'd rather run my own ETR(s). Since it's a brain-dead simple
> function, I expect to see that make its way into even the
> cheapest CPE. Ideally, I'd set my ETR(s) up with RLOCs from each
> upstream ISP (perhaps obtained via PPP or DHCP), set the mappings
> in the database, and be ready to rock. My ISPs wouldn't
> necessarily even realize I was using one.
I agree entirely.
You need at least one reasonably stable (PA, not something
dynamically assigned) address for each link from each of your ISPs,
and then you are indeed ready to rock.
You could probably do it with dynamically assigned IP addresses with
Ivip. Any change in your ETR's address could be handled within a
few seconds (that is my goal) by sending new mapping information via
some secure method to the RUAS, or to whichever other UAS
organisation you use to control your mapping:
http://tools.ietf.org/html/draft-whittle-ivip-arch-00#page-60
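As a rough sketch of what "sending new mapping information via some
secure method" might look like on the end-user side - the field
names, sequence numbering and shared-key authentication here are all
invented for illustration, not taken from the draft:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-key"  # hypothetical shared secret with the RUAS


def mapping_update(micronet, new_rloc, seq):
    """Build a signed mapping update (illustrative format): point
    `micronet` at the ETR's new RLOC so ITRs start tunneling to the
    new address."""
    body = json.dumps({"micronet": micronet, "rloc": new_rloc, "seq": seq},
                      sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "sig": sig}


def watch_rloc(address_samples, micronet):
    """Yield an update each time the dynamically assigned address
    changes (here fed from a list; in practice, from polling the
    CPE's WAN interface)."""
    last, seq = None, 0
    for addr in address_samples:
        if addr != last:
            seq += 1
            yield mapping_update(micronet, addr, seq)
            last = addr


# two DHCP leases: the second sample is unchanged, the third is new
updates = list(watch_rloc(
    ["203.0.113.7", "203.0.113.7", "198.51.100.42"], "66.77.88.32/28"))
print(len(updates))  # 2
```

A real client would of course deliver these over an authenticated
transport to the RUAS rather than just constructing them locally.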
There would be a few seconds of lost connectivity between the ETR
finding it has a new address and the ITRs around the Net starting
to tunnel to the ETR at its new address. Most communication sessions would
survive this. All your hosts on their mapped addresses would remain
in contact with whichever hosts they were communicating with - just
no packets would arrive until the ITRs around the Net got the new
mapping information. (Ivip includes QSDs - full database query
servers - which send updates to caching ITRs and caching Query
Servers when new mapping information arrives for a micronet they
recently enquired about, and so may still be handling traffic for.
The caching Query Servers do the same, so all caching ITRs get the
new mapping, ideally, within a few seconds.)
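The QSD behaviour described in that parenthetical can be sketched as
follows - a toy model, with invented names and a made-up "recent
interest" window, not the draft's actual protocol:

```python
class QueryServerD:
    """Toy full-database query server (QSD): remembers which caching
    ITRs recently asked about each micronet and, when the mapping
    changes, returns the list of cachers to push the update to."""

    def __init__(self, hold=600.0):
        self.db = {}        # micronet -> ETR RLOC
        self.interest = {}  # micronet -> {cacher: last_query_time}
        self.hold = hold    # push window, seconds (illustrative)

    def query(self, cacher, micronet, now):
        """Answer a caching ITR's query and record its interest."""
        self.interest.setdefault(micronet, {})[cacher] = now
        return self.db.get(micronet)

    def update(self, micronet, rloc, now):
        """Install new mapping; return cachers queried recently
        enough that they may still be handling traffic for it."""
        self.db[micronet] = rloc
        return [c for c, t in self.interest.get(micronet, {}).items()
                if now - t <= self.hold]


qsd = QueryServerD()
qsd.db["66.77.88.32/28"] = "203.0.113.7"
qsd.query("itrc-1", "66.77.88.32/28", now=0.0)
targets = qsd.update("66.77.88.32/28", "198.51.100.42", now=5.0)
print(targets)  # ['itrc-1']
```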
The other ITR-ETR schemes can't get new mapping information to ITRs
quickly enough for you to run an ETR on a dynamic address.
Before you can use your mapped address space via any ISP, you need a
way of sending your packets via their link, or at least by some
method, so the packets get to the outside world.
With what might be called a "traditional" arrangement, there would
be a relatively stable and formal agreement. You have two ISPs A
and B, and you tell them you have a UAB (User Address Block)
66.77.88.32 with length 16 (16 IP addresses, 66.77.88.32 to
66.77.88.47). They say "OK, we will accept packets from your link
with these source addresses."
How many ISPs filter on the source address of outgoing packets?
They arguably should, but do they?
With your complete DIY ETR arrangement, unknown to the ISP, if any
or all of them won't allow out your packets with your 66.77.88.xx
source addresses, then you need one or more tunnels to one or more
devices outside those ISPs with which you have an arrangement to
forward your packets to the rest of the Net. That will entail costs
and longer packet paths, at least for packets you send to hosts in
the networks of one of your chosen ISPs.
Then, you are doing part of the Ivip TTR (Translating Tunnel Router)
mobility system as per the diagrams:
http://www1.ietf.org/mail-archive/web/ram/current/msg01547.html
You have one or more care-of addresses, and you dynamically maintain
a tunnel to some device, outside any of your ISPs, which will
forward your packets to the rest of the Net. Ideally, for
efficiency, that device will be a full database ITR, so there will
be no longer paths for packets which are addressed to hosts with
Ivip-mapped addresses.
With the full TTR approach, that device is also your ETR. You may
have two or more TTRs, with two-way tunnels from your host's (or
border router's) one or more care-of addresses. Then, you are
completely independent of any agreement or services in whichever
ISPs you use to connect to the Net with - the ISPs who give you your
care-of addresses. But you need to run a router or some host
software to maintain these tunnels, and either your own system or
some external monitoring system needs to keep an eye on connections
and change the mapping for your addresses so packets go to the best
TTR (if you have two or more) for your current connectivity to ISPs.
You want your TTR to be physically close to your one or more ISP
connections. So ideally there is a bunch of them around the Net,
which you pay to access, and your router or host chooses one or more
close ones as you move your physical connection(s) from one ISP's
network to others.
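The TTR-selection logic the router or host would need might look
something like this - the TTR names and RTT figures are invented,
and a real client would probe the TTR network rather than consult a
static table:

```python
def choose_ttrs(rtts, k=2):
    """Pick the k closest TTRs by measured round-trip time.
    `rtts` maps TTR name -> RTT in milliseconds."""
    return sorted(rtts, key=rtts.get)[:k]


def remap_if_moved(current, rtts, k=2):
    """Return new TTR(s) to remap to if better ones are now closer,
    else None.  (A real client would damp this with hysteresis so a
    brief RTT fluctuation doesn't trigger a mapping change.)"""
    best = choose_ttrs(rtts, k)
    return best if set(best) != set(current) else None


# connected via an ISP in Sydney: the Sydney TTR is closest
rtts = {"ttr-syd": 12.0, "ttr-sin": 95.0, "ttr-lax": 160.0}
print(choose_ttrs(rtts))                              # ['ttr-syd', 'ttr-sin']
print(remap_if_moved(["ttr-syd", "ttr-sin"], rtts))   # None - already optimal
```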
>> So the end-user doesn't need to purchase or run ITRs or ETRs in
>> order to use the new kind of address space.
>>
>> Their network may well include ITRs - to ensure the tunneling
>> work on outgoing packets is done locally and doesn't depend on
>> anything upstream, which is shared by others. But that would
>> be the case irrespective of whether the end-user's network uses
>> the new kind of address space.
>
> We need some way to make sure that either (a) every default-free
> entity (including, but not limited to, ISPs) runs an ITR, or (b)
> there are EID aggregates in the DFZ (most likely anycasted, but
> perhaps not). The latter seems more achievable, but the question
> is who pays to install and run them. The most logical answer
> is "whoever issues EID space", which for the sake of
> argument let's call the RIRs. However, those "ITRs of last
> resort" are going to be overwhelmed if map&encap is even
> moderately successful, and ISPs would be motivated to provide
> their own (presumably closer and more robust) ITRs to improve
> performance and keep customers happy.
I think this is a reasonable basis on which to continue the
technical development of ITR-ETR schemes. Although a single ITR "of
last resort" might be able to cope with traffic for the one or more
MABs (Mapped Address Blocks) which it handles, this would often
involve longer paths - so multiple ITRs "of last resort" using
"anycast" (all advertising the MABs in BGP) is a better option.
"Whoever issues EID space" - whoever administers a MAB, assigning
sections of it as UABs, which the end-user splits into one or more
micronets - is presumably charging the end-users for their space,
and is providing (or contracting out) the system for sending mapping
information to the world's ITRs. So it makes sense that this
"whoever" will charge according to number of addresses, number of
micronets defined by the end-user and by how often they change their
mapping.
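As an invented worked example of such a charging model - all the fee
values below are made up purely for illustration:

```python
def monthly_charge(addresses, micronets, map_changes,
                   addr_fee=0.02, micronet_fee=0.50, change_fee=0.01):
    """Illustrative MAB-operator charge: per address, per micronet
    defined, and per mapping change in the month (fees invented)."""
    return (addresses * addr_fee
            + micronets * micronet_fee
            + map_changes * change_fee)


# an end-user with a 16-address UAB split into 3 micronets, who
# changed their mapping 200 times this month:
print(round(monthly_charge(16, 3, 200), 2))  # 0.32 + 1.50 + 2.00 = 3.82
```

The point of the structure, rather than the numbers, is that frequent
mapping changes - the main load on the global update system - carry
their own cost.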
This gives them money to spend on a global system of ITRs which
catch packets from networks with no ITRs. These organisations
compete in terms of price and the performance of the network of
"ITRs of last resort". That depends on the geographical placement
of the ITRs - which should generally minimise path lengths from
anywhere to wherever the ETRs are - and on the ITRs' traffic
capacity and speed of mapping updates.
> Customers running caching ITRs may happen, but I'm doubtful we'll
> see full-database ITRs at many customer sites. The largest may
> get the latter, but it's too much to ask for Linksys et al to put
> that in $50 consumer boxes; even a caching ITR may be too much
> complexity.
Yes. A full database ITRD or query server (QSD) needs a substantial
flow of mapping data, and a lot of memory. You could do it over DSL
etc. for the next few years, but in the longer term, the quantity of
data could be oppressive. It is probably better to run caching ITRs
(including ITFHs in sending hosts not behind NAT) and to rely on
query servers in your upstream ISPs.
The most obvious arrangement is for ITRCs and ITFHs to be configured
with the addresses of the upstream query servers. Alternatively,
there could be some auto discovery system. Ideally DHCP would carry
the information, but we can't rely on additions like that.
Another approach might be to say, for this ITR-ETR scheme for IPv4:
The prefix xx.xx.xx.0/24 is reserved globally for Query Servers.
Every ISP who runs them can make them available on the lowest
addresses in this prefix, and therefore this prefix is not routed
in BGP. All customers running ITRs and ITFHs will no longer need
to configure them, since they will try xx.xx.xx.0 first.
xx.xx.xx.0/24 probably needs to be an ordinary unicast public
prefix, which after this, can never be used reliably in the global
BGP routing system.
Or perhaps we have the ITRs and ITFHs choose a random IP address
within this /24, with the ISP connecting its N separate query
servers to the network each with approximately 256/N IP addresses.
In that way, load would be spread over as many Query Servers as the
ISP provides.
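A sketch of that address-spreading scheme, using 192.0.2.0/24 as a
stand-in for the reserved xx.xx.xx.0/24 prefix, and assuming each of
the ISP's N query servers holds a contiguous run of roughly 256/N
addresses:

```python
import ipaddress
import random

# stand-in for the globally reserved xx.xx.xx.0/24 prefix
RESERVED = ipaddress.ip_network("192.0.2.0/24")


def pick_query_address():
    """An ITRC or ITFH picks a random address within the reserved
    /24, spreading load across however many servers the ISP runs."""
    return RESERVED.network_address + random.randrange(256)


def server_index(addr, n_servers):
    """Which of the ISP's N query servers answers this address, when
    server i holds addresses [i*256/N, (i+1)*256/N)."""
    offset = int(addr) - int(RESERVED.network_address)
    return min(offset * n_servers // 256, n_servers - 1)


# with 4 servers: offsets 0-63 -> server 0, 64-127 -> server 1, ...
print(server_index(ipaddress.ip_address("192.0.2.100"), 4))  # 1
```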
That could work if your border router dynamically chooses one of
your upstream links to forward xx.xx.xx.0 to. It could actually
send the packet to two or more ISPs at the same time, so your ITR or
ITFH typically gets two or more separate replies.
You could make the sole ITR in your system be your border router.
Correctly implemented and configured, that would appear to each
ISP's query server to have the PA address that ISP gave you. Then
the request goes from a single IP address which is part of the ISP's
network to one of its Query Servers, and straight back to that address.
But what if you want to implement your ITR function in other
devices, servers, or in sending hosts - all of which are likely to
have addresses which are part of your one or more UABs and therefore
are in one of your micronets?
Then, the reply from the Query Server will be addressed to your
mapped address, and will need to go to a nearby ITR which
encapsulates it and sends it to your ETR, which decapsulates it and
sends it to the ITRC or ITFH. That will be fine too.
However . . . if your border router functions as a caching Query
Server, then it responds to packets sent to xx.xx.xx.0/24. The
border router then sends queries from its own PA address to the
Query Servers in one or more upstream ISPs. It then sends responses
back to whatever made the request and caches the result, saving
further on traffic when another one of your ITRCs or ITFHs makes a
query about an address which is in the same micronet.
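That caching behaviour in the border router can be sketched like
this - a toy model caching whole query results by micronet prefix,
with invented names, not the draft's wire protocol:

```python
class CachingQueryServer:
    """Toy border-router caching Query Server: answers local ITRCs
    and ITFHs, forwarding cache misses to an upstream QSD from its
    own PA address and caching the result by micronet."""

    def __init__(self, upstream):
        self.upstream = upstream   # callable: micronet -> ETR RLOC
        self.cache = {}            # micronet prefix -> ETR RLOC
        self.upstream_queries = 0  # how many times we had to ask up

    def lookup(self, micronet):
        if micronet not in self.cache:
            self.upstream_queries += 1
            self.cache[micronet] = self.upstream(micronet)
        return self.cache[micronet]


qs = CachingQueryServer(
    upstream=lambda m: {"66.77.88.32/28": "203.0.113.7"}[m])
qs.lookup("66.77.88.32/28")  # first host asks: miss, query upstream
qs.lookup("66.77.88.32/28")  # second host asks: served from cache
print(qs.upstream_queries)   # 1
```

A real implementation would also honour the push updates described
earlier, evicting or replacing cached entries when new mapping
arrives, rather than caching indefinitely.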
- Robin
--
to unsubscribe send a message to rrg-request@psg.com with the
word 'unsubscribe' in a single line as the message text body.
archive: <http://psg.com/lists/rrg/> & ftp://psg.com/pub/lists/rrg