
[RRG] Scaling, Mobility & 228 mapping changes a second



Warning: Long message ahead.

Short version:    Mapping only needs to change when the mobile
                  node (MN) moves a long distance, such as to
                  another large country or 1000km or more, if
                  the MN uses a Translating Tunnel Router (TTR)
                  near to, but outside, the IP network of
                  whatever radio etc. networks it connects with.

                  I provide some very loose guesses of the rates
                  of mapping changes from all sources - including
                  from billions of mobile end-users.

                  My WAGs lead to average figures of ca. 200
                  updates a second, which should be fine for Ivip
                  or something similar to handle.

                  Also, a response to an exchange between
                  Christian Vogt and David Conrad.


Since map-encap schemes were initially developed to support
multihoming, it has generally been assumed that the end-user's
network uses an ETR in the network of each of the one or more ISPs
through which it connects to the Net.

Below I discuss Ivip's Translating Tunnel Router concept and how this
leads to low rates of change in mapping.  Finally, I discuss possible
total global rates of mapping update.

Before that, some notes on what I think is required to support
"Mobility".

There hasn't been a formal specification of what is meant by
"Mobility", but here is one aspect which I think needs to be satisfied:

   If my Mobile Node (a single host or a network, single or
   multiple IP addresses, single or multiple micronets) - such as
   a cellphone, laptop or do-everything Bat-Phone - arrives in a
   new locality, and establishes a connection to a new access
   network, then it should be able to get all packets addressed to
   its micronet(s) tunnelled to a newly chosen ETR/TTR in some
   "short" time.

   "Short" means 5 to 10 seconds, ideally zero, but probably 5
   will be a feasible time with Ivip.

   "Short" does not mean more than about 30 seconds.  It certainly
   does not mean 5, 10 or 30 minutes.

"Short" means some time so short that no map-encap scheme using a
global query server system (LISP-ALT and TRRP) could provide it
simply by having short caching times - since the resultant frequency
of requests and responses would be far too great.

Therefore, by this definition, Mobility can only be provided by one
of the following map-encap systems:

  1 - Pure push (NERD is the only example) with really fast global
      full push to every ITR in the world.  There are severe cost
      and scaling problems associated with doing full push to every
      ITR, and fast push would be harder than NERD's slow push
      approach.

  2 - Full pull (ALT or TRRP) with some additional "notify" system
      which could fast push updates to every ITR which recently
      requested mapping for some micronet.  ALT has no such
      system and TRRP's notification mechanism (PCN - see messages
      488, 497 and 532) won't work because it can't reliably get
      notifications to all requesters.

  3 - Hybrid push-pull, with fast push to the local full database
      query servers and fast notify from those to the ITRs which
      recently requested mapping for the relevant micronet.  Ivip
      aims to achieve this.  APT is hybrid push-pull with presumably
      fast notification from the Default Mapper to the ITR, but its
      push to the Default Mappers is slow.

So of the current proposals, only Ivip has a chance of getting the
end-user's mapping change to all the world's ITRs in a "small"
enough time to support this vision of Mobility.


With multihoming (or just using the map-encap scheme for
portability), the end-user establishes a lasting business
relationship with the ISP whose access network they are paying to use.

Therefore, we assume a few things:

  1 - The ISP configures one or more ETRs to decapsulate packets
      addressed to the end-user's micronets.   Likewise, the
      ISP provides some link to deliver these packets to the
      end-user's router, host etc.

  2 - The ISP makes arrangements in its routing system to accept
      the end-user's outgoing packets and to forward them within
      its network and to the rest of the Net.

  3 - We don't assume the end-user's host or router has any
      other address than its own map-encap mapped address:
      an EID address or prefix.  Therefore, we don't assume
      the end-user's host, router etc. has anything resembling
      what is known in the Mobile IP field as a "care of address".


In a situation I would describe as being genuinely "Mobile", I think
we need to reverse these three assumptions.

  1 - We can't assume the end-user has a business relationship
      with the ISP whose access network they are using.  For
      instance, maybe they are 3G roaming in Sweden when their
      3G cellphone is with an Australian carrier.  Maybe their
      mobile node (MN - router, host, whatever) is connected to
      an open, free-access, WiFi network.

  2 - Therefore, we can't assume the ISP would correctly forward
      outgoing packets with the MN's micronet addresses in the
      source address, since there's no business relationship, no
      knowledge of the end-user's micronet address space and
      no assurance the end-user isn't sending spoofed source
      address packets etc.

  3 - The end-user's MN certainly will be given a care-of
      address, since the purpose of every access network is
      to provide an address, probably behind NAT, so client
      software on the MN can have access to the Internet.


The Ivip TTR concept was created to cope with this situation.

A TTR behaves to ITRs exactly like any other ETR.  However, its
relationship with the MN is very different from that of an ordinary ETR.

Firstly, the MN builds a 2-way (typically encrypted, presumably TCP)
tunnel to the TTR.  Even if the MN is behind one or more layers of
NAT, it can do this.  The TTR can't establish the tunnel if the MN
is behind NAT, so it never attempts to do so in any circumstances.
Also, the TTR has no direct, physical, link to the MN, whereas with
a "traditional" (these ideas are a little over a year old) map-encap
ETR, there is quite possibly a physical connection.

Secondly, the MN relies on one or more such TTRs to handle its
outgoing packets.  (Maybe the MN is smart enough to recognise which
outgoing packets should be sent directly from its care-of address
into the local routing system of the access network, but this is
optional and not discussed further.)

The TTR probably integrates an ITR to encapsulate whichever of these
outgoing packets are addressed to Ivip-mapped addresses.  However,
it could forward them upstream to ITRs in the network it is located
within, or perhaps forward them outside that network, where they will
be forwarded to the nearest "anycast ITR in the core/DFZ".

Thirdly, the end-user has a lasting business relationship with the
operator of the TTR.  The most likely arrangement is that multiple
companies - TTRs-R-Us Inc, MobilIPy LLC etc. - have their own
competing global networks of TTRs.  (Also, multiple customer-facing
companies could rent capacity on the TTRs of some wholesaling TTR
network operator.)  Whichever way it occurs, the end-user has an
account with one or more such TTR companies.  They would probably pay
by packet volume, incoming and outgoing.

To make this global, mobile, effectively permanent IP address or
micronet system work, the end-user organises:

  1 - A micronet of address space, perhaps as small as a single IPv4
      address or an IPv6 /64 prefix.

  2 - An account with a TTR company which has TTRs in whatever areas
      they are planning on being.

  3 - Whatever actual Internet access they use, such as with a
      3G network, WiFi, wired Ethernet, WiFi at home via DSL etc.

The end-user also has special tunnelling software in the OS of their
MN (hopefully IETF standardised, rather than a different system for
each TTR company) to establish the tunnels, decapsulate incoming
packets and feed them to the OS stack just as if they had arrived
in their decapsulated state etc.  Also, the OS might be tweaked to
keep the MN's IP addresses active for applications even when there
is no physical link to any access network.

The tunnelling software also sends outgoing packets over the one or
more 2-way tunnels to TTRs.
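
To make this concrete, here is a minimal sketch (Python, Linux-only,
requires root) of how the MN-side tunnelling software could shuttle
raw IP packets between the OS stack, via a TUN device, and a single
2-way TCP tunnel to a TTR.  The TTR hostname, port and 2-byte length
framing are invented for illustration - none of this is a defined
Ivip wire format.  Note that the MN always opens the connection
outwards, which is why NAT between the care-of address and the TTR
doesn't matter:

  # Sketch only: invented TTR endpoint and framing.
  import fcntl
  import os
  import socket
  import struct
  import threading

  TUNSETIFF = 0x400454ca                  # ioctl to attach to a TUN device
  IFF_TUN, IFF_NO_PI = 0x0001, 0x1000
  TTR_ADDR = ("ttr1.example.net", 4343)   # hypothetical TTR

  def open_tun(name=b"ivip0"):
      # The OS routes the MN's micronet address(es) out this interface.
      fd = os.open("/dev/net/tun", os.O_RDWR)
      fcntl.ioctl(fd, TUNSETIFF,
                  struct.pack("16sH", name, IFF_TUN | IFF_NO_PI))
      return fd

  def mn_to_ttr(tun_fd, sock):
      # Outgoing: raw IP packets from the stack go up the tunnel, framed.
      while True:
          pkt = os.read(tun_fd, 65535)
          sock.sendall(struct.pack("!H", len(pkt)) + pkt)

  def ttr_to_mn(tun_fd, sock):
      # Incoming: strip the framing, feed decapsulated packets to the stack
      # just as if they had arrived that way on a physical interface.
      buf = b""
      while True:
          data = sock.recv(65535)
          if not data:
              break                       # tunnel dropped; real code reconnects
          buf += data
          while len(buf) >= 2:
              length = struct.unpack("!H", buf[:2])[0]
              if len(buf) < 2 + length:
                  break
              os.write(tun_fd, buf[2:2 + length])
              buf = buf[2 + length:]

  tun_fd = open_tun()
  # The MN initiates the TCP connection outwards, so layers of NAT between
  # the care-of address and the TTR are not a problem.
  sock = socket.create_connection(TTR_ADDR)
  threading.Thread(target=mn_to_ttr, args=(tun_fd, sock), daemon=True).start()
  ttr_to_mn(tun_fd, sock)

A real implementation would encrypt the tunnel, authenticate with the
end-user's credentials, send keepalives to hold any NAT bindings
open, and reconnect over whichever care-of address is currently usable.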

Each TTR company might have its own set of application software,
which integrates with the OS tunnelling functions and works with
the company's global network of TTRs and other servers to help the MN
find and establish a tunnel to a "nearby TTR", from wherever it
establishes a care-of address.

TTRs could be located inside access networks, and be only accessible
from within that network.  They could be inside access networks but
also accessible from other access networks.

For this discussion, I will assume that TTRs are not located in the
sort of access networks mobile devices may use, including:

  1 - Any 3G network - therefore some IP network presumably
      run by an ISP of some kind.

  2 - WiFi systems in any network at all, including those of
      end-users such as universities, airlines, cafes etc.
      (That is, the care-of address is probably behind NAT
      and probably in space which is mapped by the map-encap
      system, rather than being directly reachable via BGP.)

  3 - Likewise a cabled or WiFi Ethernet link to any home or
      office DSL/cable-modem/fibre system.

For instance, TTRs-R-Us could have 2000 sites all over the world,
each with a TTR, a bunch of servers or whatever, at major
Internet exchanges, data centres etc.

The TTRs need to be on ordinary RLOC BGP-managed addresses, because
they are ETRs.  TTRs-R-Us could do this with their own address
space, in 2000 separately advertised /24 prefixes.  But
that would be a pain, so maybe they would have many or all of these
TTRs connected to RLOC space in ISPs in the various areas.

Either way, these TTRs are part of a global network, and a server
system, together with software in the MN, enables the MN to find one
or more potential "close" TTRs and to establish a 2-way tunnel to one
or more of these.
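
As a rough illustration of the "find a close TTR" step, here is a
tiny sketch in Python.  The hostnames and probe port are invented,
and a real system would lean on the TTR company's own servers rather
than blind probing from the MN:

  import socket
  import time

  CANDIDATE_TTRS = ["mel1.ttrs-r-us.example",   # hypothetical hostnames
                    "syd1.ttrs-r-us.example",
                    "lon1.ttrs-r-us.example"]
  PROBE_PORT = 4343                             # invented probe port

  def rtt_to(host, timeout=2.0):
      # Rough RTT estimate: time a TCP connection setup to the candidate.
      start = time.monotonic()
      try:
          with socket.create_connection((host, PROBE_PORT), timeout=timeout):
              return time.monotonic() - start
      except OSError:
          return float("inf")         # unreachable from this care-of address

  def choose_ttr(candidates=CANDIDATE_TTRS):
      # Pick the reachable candidate with the lowest measured RTT.
      return min(candidates, key=rtt_to)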

For this example, I will assume the end-user uses a single TTR
company, and has given that company the credentials (username and
password) which enables the company to control the mapping of the
one or more micronets which the MN uses.  It would also be possible
for the end-user to use the networks of multiple TTR companies, and
to use their MN or probably their own external system to control the
mapping.

The end-user's business relationship with the TTR company means it
has some username, password etc. system by which the MN can
authenticate itself to any of the company's TTRs.  The end-user pays
for this service, including perhaps by volume of packets sent and
received.

This means the TTR company is happy to forward to the Internet all
packets emitted by the MN, since it knows the end-user and knows
the single or multiple IP addresses in their one or more micronets.

With this arrangement, the MN can automatically connect to any
access network, without any prior arrangement, and either establish
a 2-way tunnel to a new (presumably nearby) TTR, or establish a
2-way tunnel to a TTR it is currently using via another link.  It
could also establish a 2-way tunnel to a TTR which it last used -
which is where packets addressed to its micronet are currently being
tunnelled.

There is no technical reason the TTR must be local.  Generally, it
is best to have a TTR which is close to the physical point of
connection, or as close as possible to a border router of the access
network, to reduce path length, delay, packet losses etc.

However, the whole idea of a TTR is that it does not need to be in
the access network.

The MN could do some fancy things when it has two or more care-of
addresses in two or more access networks.

Reliability for incoming packets could be improved by the TTR sending
each decapsulated packet to all the current care-of addresses.  For
instance, the incoming packet would be sent by 3G and WiFi at the
same time.  The tunnelling software in the MN would be wise to this
and use the first one which arrives.

The same scheme could work for outgoing packets - the MN sends each
packet from every care-of address, on every 2-way tunnel to the one
TTR.  These robustness techniques add to the cost, if any, of the
end-user's use of the access network.  3G and WiFi systems could
charge by traffic volume.  Nonetheless, these techniques would be a
powerful, simple, form of robustness which would be welcomed in many
3G, WiMax, WiFi, Bluetooth etc. wireless mobile situations.
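
On the receiving side, the only extra machinery this needs in the
MN's tunnelling software is duplicate suppression.  A minimal sketch,
assuming the TTR tags each packet with a per-session sequence number
- an invented detail, not part of any defined Ivip format:

  from collections import deque

  class FirstCopyFilter:
      # Remember recently seen sequence numbers; drop later duplicates.
      def __init__(self, window=4096):
          self.seen = set()
          self.order = deque()      # evict the oldest entries beyond `window`
          self.window = window

      def accept(self, seq):
          if seq in self.seen:
              return False          # duplicate copy from the slower tunnel
          self.seen.add(seq)
          self.order.append(seq)
          if len(self.order) > self.window:
              self.seen.discard(self.order.popleft())
          return True

  # Usage: for each framed (seq, payload) arriving on any of the tunnels,
  # hand `payload` to the stack only if the filter accepts seq.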


Previously, I assumed the mapping for the MN's micronet would need
to change every time it got a new access network address.  Recently,
in two off-list discussions, I realised this is not the case.

I turn on my cellphone (MN) and it connects via WiFi to an access
point which runs behind NAT on my DSL service.  (It doesn't matter
if the DSL modem's address is fixed or dynamic, or whether my home
network is actually in mapped address space, multihomed via DSL and
a cable modem.)

The MN connects to the Melbourne-based TTR it used when I turned it
off last night.  No mapping change required.

It also connects to a 3G system, and makes another 2-way tunnel from
that dynamically assigned care-of address to the TTR.  Since the 3G
system charges a hell of a lot more than my DSL service, the MN (or
perhaps the management system for it in TTRs-R-Us) is configured to
send and receive packets over the WiFi link as long as it is up.

When I walk out the door, the WiFi signal fades and the TTR and MN
communicate using the 3G care-of address instead.  In a
sophisticated system, the MN could recognise the WiFi signal was
fading, before it became unusable, and would switch to the 3G
service, or use both for a while, to avoid even a moment of lost
connectivity.

In the subway (actually, Melbourne suburban trains go through a few
long tunnels) the 3G system cuts out, but my MN has already found
and connected to the free WiFi system provided by the train
operator.  So connectivity continues, returning to 3G only when I
get out of the train.

I go into some office and the MN uses the WiFi system there.

I plug the cellphone-PC-whatever into an Ethernet cable and it uses
DHCP to get an address there, and establishes another 2-way tunnel
to the TTR.

In all this, these access networks have border routers which are
physically located in one or more peering points, Internet Exchanges
etc. somewhere in the Melbourne area.  That is just fine, since my
TTR is in or very close to one of these sites.

Now, if I fly to Sydney - 1000km away - the MN could keep using the
same TTR in Melbourne.  It's not such a big deal.  Most packets would
be going back and forth to the USA, Europe etc. so it makes no
substantial difference if they all have to go an extra 1000km.

(The WiFi network in the aircraft could be somewhat trickier.  That
would presumably be a satellite link to some IP network on Earth.
If it was to some Australian site, it would probably be fine to keep
using the same TTR.  If it was to a US site, I would probably want
my MN to find a new TTR close to that site, and for the management
system to change the mapping of my micronet to that TTR.  Then I
would want to change back either to a Melbourne TTR or a Sydney one
when I got out of the plane in Sydney.)

Let's say I arrive at Heathrow airport.  My MN finds a 3G system it
can roam to, and a WiFi network in the airport.  It establishes
care-of addresses in both, and soon figures out we are a long way
from Melbourne!

Technically, the MN could keep using the Melbourne TTR, but since I
am going to be in the UK for a while it is best if the MN chooses a
TTR in London.  Ideally, the MN's software, working with TTRs-R-Us's
network of TTRs and other servers, would ensure that the MN found a
new TTR in the UK, and had the mapping changed so the world's ITRs
tunnel my micronet's packets to this new TTR.  Within 10 seconds or
so, I would be connected again.

Mapping changes cost money.  The end-user needs to at least
partially pay for the global fast-push system.  The cost need not
be high.  I am guessing a few cents to a few tens of cents.

Those who want to change their mapping every 20 seconds can do so -
for instance for fine real-time steering of large traffic flows
coming in on intentionally separate IP addresses, each with its own
micronet, via multiple ISPs to a multihomed end-user network.  Those
end-users will be paying their share of the costs of the global fast
push system.

Some key points:

1 - ALT, NERD, APT and TRRP can't support mobility.

2 - The fact that ALT and NERD could support countless billions
    of micronets doesn't really matter, since I can't see how,
    without mobility, there would ever be more than a few hundred
    million micronets: every business and some homes.

3 - APT can't support mobility unless it replaces its BGP-based
    slow push system with something really fast, like that planned
    for Ivip, which is intended to get user mapping change commands
    out to ITRs in about 5 seconds.

4 - So only Ivip or something like it could support mobility.

5 - Mobility requires being able to change the mapping information
    for all the ITRs which need it within 30 seconds - ideally
    just a few seconds.  5 seconds should be fine, but of course
    if it could be faster that would be good too.

6 - Mobility does not require a change of mapping every time a
    new access network is used.  An end-user could do this, but
    they would have to pay for the burden they place on the global
    fast push network to send their mapping change to every full
    database ITR and query server site on the planet.  (Actually, I
    think the last few Replicator levels of the fast push system
    will generally be run and paid for by ISPs and larger end-users,
    so micronet end-users don't have to pay for the entire system.)

7 - There needs to be a mapping change only when the MN needs to
    select a TTR which is different from the current one.  With
    TTRs being independent of access networks, and with a suitably
    wide choice of TTRs all over the world, there only needs to
    be a new TTR when the MN's access network is far from the
    current TTR.  "Far" probably means on the other side of a
    large country like the USA, China or Australia.  However,
    if the end-user does move from Melbourne to Sydney for a
    week or more, or from San Francisco to LA, New York to Boston
    etc. they might as well spend a few tens of cents on a
    mapping change so all their traffic goes via a closer TTR
    for that week or more.


There is no problem storing the global mapping database at any full
database ITR or query server site.

Mobility only happens with fast push, and with fast push, you don't
need large amounts of mapping information.  All you need is the
micronet's starting address, its length, and the single address of
the ETR to tunnel the packets to.  (This is Ivip's approach - no
explicit load sharing between multiple ETRs: the workaround is to
split the traffic over multiple IP addresses, each a separate
micronet, and map each one to one of the several ETRs, adjusting
this in real time to get the desired load spreading.)
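
A trivial sketch of that workaround, with invented addresses: four
single-address IPv4 micronets, two ETRs, and load adjustment done
purely by re-mapping individual micronets:

  micronet_map = {
      # (start address, length in addresses) -> single ETR address
      ("203.0.113.0", 1): "192.0.2.1",      # ETR A
      ("203.0.113.1", 1): "192.0.2.1",      # ETR A
      ("203.0.113.2", 1): "198.51.100.1",   # ETR B
      ("203.0.113.3", 1): "198.51.100.1",   # ETR B
  }

  def shift_load(micronet, new_etr):
      # Steer one micronet's traffic by changing its single ETR address.
      # In Ivip this would be issued as an ordinary mapping update.
      micronet_map[micronet] = new_etr

  # e.g. move a quarter of the incoming traffic from ETR B to ETR A:
  shift_load(("203.0.113.2", 1), "192.0.2.1")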

For IPv4 the raw data in each mapping entry or change is 12 bytes:

   Micronet starting address 4 bytes
   Micronet length           4 bytes (typically 2 or less)
   ETR address               4 bytes

With IPv6, it is 48 bytes, or 32 bytes if some assumptions are made
about micronet granularity being /64.  (As Brian Carpenter pointed
out, this should not be assumed in the protocol design, but so many
micronets might be defined this way that the shorter data format
could often be used.)
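
For the IPv4 case, packing the raw 12-byte entry is trivial - a
sketch, with the caveat that any real update would travel inside
whatever authenticated, framed format the fast push system defines
(which is not specified here):

  import socket
  import struct

  def pack_ipv4_mapping(start, length, etr):
      # 4 bytes micronet start + 4 bytes length + 4 bytes ETR address.
      return struct.pack("!4sI4s",
                         socket.inet_aton(start),  # micronet starting address
                         length,                   # length in addresses
                         socket.inet_aton(etr))    # ETR to tunnel packets to

  entry = pack_ipv4_mapping("203.0.113.0", 4, "192.0.2.1")
  assert len(entry) == 12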

10^10 micronets result in total mapping database sizes (not
counting overhead for storage, access etc.) of 120 Gbytes (IPv4) and
480 Gbytes (IPv6).  Of course, there are unlikely to be more than
about 2 billion micronets in IPv4, so that is 24 Gbytes.
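
For the record, the arithmetic (decimal gigabytes, ignoring storage
and indexing overhead):

  IPV4_ENTRY, IPV6_ENTRY = 12, 48          # bytes per raw mapping entry

  print(10**10 * IPV4_ENTRY / 1e9)         # 120.0 Gbytes - 10^10 IPv4 micronets
  print(10**10 * IPV6_ENTRY / 1e9)         # 480.0 Gbytes - 10^10 IPv6 micronets
  print(2 * 10**9 * IPV4_ENTRY / 1e9)      #  24.0 Gbytes - ~2e9 IPv4 micronets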

Last year's consumer hard drives store 1000 Gbytes, so it is clear
that storage - on disc or probably RAM by the time these figures
eventuate - represents no problem for full database ITRs or query
servers (ITRD or QSD).

While it would take a while for an ITRD or QSD to boot up and bring
itself up to date with the full database, this could be sped up by
sucking most of the data from a nearby device which was up.

The only remaining scaling questions involve the rate of the updates:

1 - What is the raw data rate to each ITRD or QSD?

2 - How much overhead is involved?  The current Ivip plan is for
    a completely duplicated feed from two geographically separated
    Replicators.  So multiply the raw data rate by about 2.5 to 3.

3 - How much CPU effort does the ITRD or QSD need to invest in the
    data as it comes in, to maintain its local copy of the database?
    (This depends on the number of missing packets, since they need
    to be retrieved from a remote server.)


When we get to these vast numbers of micronets, there isn't going to
be a truly "full database ITR", in that I don't think any ITR will
have its FIB set to implement the mapping of every micronet.

The closest thing to a full database ITR will be something like
this.  Every section of the address space in a DFZ router will be
flagged as one of:

  1 - This is an ordinary RLOC style BGP routable space - so
      do the normal FIB operation and forward the packet to a
      particular interface.

  2 - This is a mapped address and the FIB has the information
      needed to encapsulate the packet.  So encapsulate it and then
      submit the result to the process mentioned above.

  3 - This space is within a MAB - it is part of a BGP advertised
      prefix in which the space is mapped by Ivip - but the FIB has
      not yet been set up with the mapping information.

      Send the packet to some process, probably not on the main
      forwarding card (but maybe it is, since these forwarding cards
      are increasingly implemented with multiple endlessly flexible
      CPUs) and retrieve the mapping information from the router's
      hard drive, RAM etc.  This finds out what micronet the packet
      is for, sets up the FIB for that micronet's range with the
      correct ETR address.  Now submit the packet to step 2 above,
      and therefore to step 1 too.

  4 - Handle the packet in some other way.

  5 - Drop the packet.

So a full database ITR really combines a caching FIB and a full
database query server in the one box.
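
The per-packet decision is roughly the following (a sketch only - the
lookup structures are placeholders, a real router does this in FIB
hardware, and case 4, "handle some other way", is omitted):

  import ipaddress
  from enum import Enum, auto

  class Action(Enum):
      FORWARD_NORMALLY = auto()    # case 1 - ordinary RLOC/BGP space
      ENCAPSULATE = auto()         # case 2 - mapping already in the FIB
      QUERY_THEN_ENCAP = auto()    # case 3 - in a MAB, mapping not yet in FIB
      DROP = auto()                # case 5

  def classify(dst, fib_micronets, mabs, rloc_prefixes):
      # fib_micronets: ranges whose mapping is already installed in the FIB
      # mabs:          BGP-advertised Mapped Address Blocks
      # rloc_prefixes: ordinary BGP-routable prefixes
      dst = ipaddress.ip_address(dst)
      if any(dst in net for net in fib_micronets):
          return Action.ENCAPSULATE
      if any(dst in mab for mab in mabs):
          # Retrieve the mapping from local RAM/disk, install it in the FIB,
          # then re-submit the packet: case 3 becomes case 2, then case 1.
          return Action.QUERY_THEN_ENCAP
      if any(dst in prefix for prefix in rloc_prefixes):
          return Action.FORWARD_NORMALLY
      return Action.DROP

  # e.g.:
  # classify("203.0.113.9",
  #          fib_micronets=[ipaddress.ip_network("203.0.113.0/30")],
  #          mabs=[ipaddress.ip_network("203.0.113.0/24")],
  #          rloc_prefixes=[ipaddress.ip_network("192.0.2.0/24")])
  # -> Action.QUERY_THEN_ENCAP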

There probably isn't much behavioural difference between one or more
of these and one or more true caching ITRs, each sitting in a rack
with an Ethernet cable going to a full database query server.


How many updates a second are needed, globally?

Firstly, the updates which ISPs or end-users want to make for their
real-time traffic steering to achieve load sharing or other TE
outcomes.  Whatever this rate is, they will be paying for it, so the
global fast-push system will be built to cope with that rate.  If it
gets too technically daunting, the price goes up and the end-users
send fewer such updates.

Secondly, mapping changes for multihoming service restoration.
These will be pretty infrequent, say one or two a month at most for
the few hundred million non-"Mobile" business sites which are doing
conventional multihoming.  So let's say 150 million such sites with 2
changes a month = 300,000,000 a month.  While there might be peaks
in this, the multihoming changes only occur when local links fail,
which are not correlated in time, or when a big outage occurs so
that the BGP system somehow can't keep certain ISPs connected.

There are 2.63 million seconds a month, so this is 114 updates a
second, on average.
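
Spelled out, with the WAG figures from above:

  SECONDS_PER_MONTH = 30.44 * 86400         # ~2.63 million seconds
  MULTIHOMED_SITES = 150e6                  # WAG from above
  CHANGES_PER_MONTH = 2

  print(MULTIHOMED_SITES * CHANGES_PER_MONTH / SECONDS_PER_MONTH)  # ~114/second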

Thirdly, updates to do with Mobility.  If some folks want to change
to a new TTR every time they get a new radio link, that is their
choice - they are paying incrementally for the fast push system to
blast their mapping changes to a few hundred thousand sites, most of
which don't need the mapping update.

If we assume that mobile end-users configure their TTR choosing
software or service in a way which provides them with perfectly good
service, without unnecessary mapping changes, then an update will
only occur when:

1 - They move from one large region to another - where "region"
    probably means hundreds of km or a thousand km.

2 - When, even though they haven't moved far, their access network
    does not have a close connection to the TTR they are currently
    using in the current area.  This would normally indicate that
    the access network was poorly configured.  The exceptions would
    be when the access network was based on satellites, particularly
    Mid-Earth-Orbit or Geostationary.  That could be important for
    some folks in rural areas, and it would often be the case when
    getting on a plane or a ship.

Type 2 occurs frequently only for a small fraction of mobile users.
(I am assuming there are not going to be billions of
satellite-connected cellphones - most will use terrestrial
base-stations.)
These end-users will be air travellers and those in rural areas who
depend on satellite links for their mobile radio link, or for their
home or office Internet access.

Neither of these rates of updates is going to be frighteningly
high.  My WAGs are 5 million a day for each type - together, 300
million a month.

Not counting the unknown number of TE mapping changes (which
only apply to steering the traffic of a few tens of thousands of
busy ETRs), my WAG is 300 million a month for multihoming and 300
million a month for Mobility.  Probably the multihoming estimate is
way too high, but let's say 600 million a month.

That is an average of 228 a second.

I expect the dual- and quad-core CPUs of the present day could cope
with handling this average rate of updates.  The hard drives might be
tricky, but we probably have fifteen or twenty years before anything
like these update rates would eventuate.  FLASH memory or DRAM is
the obvious alternative if hard drives are physically too slow.

Peaks could be a lot higher, but the updates could be buffered with
the CPU catching up as best it can.

At 5 cents an update, this is a raw revenue to the system operators
of $1,000,000 a day.

So there is real money to pay for running the RUASes (Root Update
Authorisation Systems) and the Launch system and Replicator network.


With the 3.0 fudge factor applied to the basic 12 and 48 byte
mapping update sizes, 228 updates a second comes to the following
(the arithmetic is sketched after these figures):

  IPv4   8.2 kbytes a second.

         However, this is for 10 billion micronets, and there will
         never be more than about 2 billion with IPv4.

         Actual rates depend on how many mobile devices there are,
         how many move substantial distances, the rates of actual
         multihoming link or ETR failure, and the unknown degree
         to which people will use mapping changes to dynamically
         balance the load of their various links.

         If the price is low enough, maybe the system would be used
         extensively for load balancing.


  IPv6   32.8 kbytes a second.

         Best to allow 512kbps of bandwidth.
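
These byte rates are simply the update rate times the raw entry size
times the fudge factor - 32.8 kbytes a second is roughly 262 kbits a
second, so an allowance of a few hundred kbps leaves comfortable
headroom:

  UPDATES_PER_SECOND = 228
  FUDGE = 3.0                                        # duplication + overhead

  print(UPDATES_PER_SECOND * 12 * FUDGE / 1000)      # ~8.2  kbytes/s  (IPv4)
  print(UPDATES_PER_SECOND * 48 * FUDGE / 1000)      # ~32.8 kbytes/s  (IPv6)
  print(UPDATES_PER_SECOND * 48 * FUDGE * 8 / 1000)  # ~263  kbits/s   (IPv6)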


But these are outside figures for the long-term future.

I am confident that Ivip or some similar fast hybrid push-pull
system will be able to handle these kinds of update rates.

Map-encap schemes which are not fast hybrid push-pull won't need to
handle these rates, because they won't support Mobility.

  Full pull involves initial packet delays which are unacceptable.
  (For example ALT or the original incarnation of TRRP.)  This
  can't be fast enough for mobility.

  Full push (NERD) means every ITR needs to receive all the updates
  and store the entire database.  That is too expensive.

The hybrid push pull options are at present:

  Ivip - purpose designed fast hybrid push-pull.

  APT - hybrid push-pull, but with slow push and limited flexibility
        compared to Ivip in the placement of full database query
        servers (Default Mappers).

  TRRP - after a series of upgrades to reduce the query delays to
         quite low values, by establishing a few dozen or more
         anycast sites which integrate all authoritative
         nameservers. See messages 488, 497 & 532.  But this can't
         do mobility unless it has a fast notify system and unless
         there is a fast push system to every anycast site.


Christian Vogt wrote, in message 522 of "Re: [RRG] Why delaying
initial packets matters", quoting his correspondence with David
Conrad on 8 Feb (http://psg.com/lists/rrg/2008/msg00292.html):

CV >>> Regarding mobility support of a reactive mapping system:  It
CV >>> would work for local mobility within an edge network.  But
CV >>> not for global mobility.

DC >> Could you expand on this?  Assuming a pull system something
DC >> along the lines of what I described to Robin, why wouldn't it
DC >> work for global mobility?

> David,
>
> I should have been more specific, and maybe we are even on the
> same page.
>
> It is possible to use the indirection between edge and transit
> addresses as a tool for global mobility support, but alone it is
> insufficient.  Three more components are needed:
>
> (1)  Localized mobility support within an edge network, because
>      the address indirection alone tracks host mobility only at
>      the granularity of edge networks, not access links.

Yes.  When my MN moves from one access point to another, or from one
3G cell to another, I keep the same care-of address, and maintain
the same 2-way tunnel to the TTR.


> (2)  Dynamic and per-host mappings between edge and transit
>      addresses, because the mapping for a single edge address
>      changes when a mobile host moves to another edge network

As explained above, I don't think this is the case.  Mapping only
needs to change in the much less frequent circumstance of the
access network's border router(s) being "far away" from the current
TTR.


> (3)  Trust relationships between edge networks so that a mobile
>      host's mapping can be changed by (or based on information of)
>      a visited edge network.

This may presume an ALT-like model where the ISPs which run the
access networks are heavily involved in the mapping of the micronets
(EID prefixes) of whatever mobile end-users are currently using
their networks.

For mobility to really work, the mobile node needs to be able to use
any kind of Internet access, without any involvement of the ISP or
end-user who operates that network.  I think my explanation above
shows how this is feasible, desirable and arguably absolutely necessary.


> Item (1) could be realized through, e.g., Hierarchical Mobile
> IPv6 or Proxy Mobile IPv6.

Yes, but I think WiFi access points handle it fine within their
scope of operation.  Also, I imagine that a 3G network enables the
MN to keep the same IP address for a long time.

Moving from one operator's 3G network to another would involve a
change.  Calls are not handed over between such changes of network -
just changes of cell.  Driving across Europe, I imagine a roaming 3G
device would connect to quite a few different carriers' 3G networks,
getting a new care-of address at each one.  Some such changes might
make it worthwhile to spend a few cents or tens of cents on a
mapping change, but even without it, if the TTR is in Berlin and the
3G phone has a care-of address in a network with border routers in
Rome, how bad can it be?  This is still a small distance compared to
the Pacific Ocean - which would make a change of TTR well worthwhile.

So, as long as the radio networks provide the same care-of address, I
am not convinced any "Mobile IP" techniques are needed to provide the
care-of addresses the MN needs for this Ivip approach to global
mobility.

  - Robin       http://www.firstpr.com.au/ip/ivip/




