Instead, forwarding is based on knowing the destination DFZ-router's geographical location ID (square-degree geopatch#, square-minute geopatch#, square-second geopatch#), conveyed somehow (e.g. inside a prepended header, as in LISP). Say a router maintains three tables t1, t2, t3. t1 has (90+90) x (180+180) = 64800 elements; t2 and t3 each have 60 x 60 = 3600 elements.

Forwarding:

    if dest.square-degree geopatch# != that of the current router
        next-hop = t1[dest.square-degree geopatch#]
    elseif dest.square-minute geopatch# != that of the current router
        next-hop = t2[dest.square-minute geopatch#]
    elseif dest.square-second geopatch# != that of the current router
        next-hop = t3[dest.square-second geopatch#]
    else
        ... /* we are close enough to have a look at the dest. IP address!
               Or should we subdivide the square-second geopatch once more? */

Hence forwarding is always a single table offset plus at most 1, 2 or 3 comparisons. The destination IP address only needs to be unique within the destination geopatch, so 4 octets should be sufficient for all time. Billions upon billions of users on earth can thereby be served.
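To make that lookup concrete, here is a minimal C sketch of the three-table step, under my own assumptions: all names (geopatch_addr, fwd_tables, forward, LOCAL_DELIVERY) are hypothetical, since only the table sizes and the order of the comparisons are fixed above.

    #include <stdint.h>

    typedef uint32_t next_hop_t;          /* opaque next-hop handle */

    typedef struct {
        uint16_t deg;    /* square-degree geopatch#: 0 .. 64799 (180 x 360) */
        uint16_t min;    /* square-minute geopatch#: 0 .. 3599  (60 x 60)   */
        uint16_t sec;    /* square-second geopatch#: 0 .. 3599  (60 x 60)   */
        uint32_t ip;     /* IPv4 address, unique only within its geopatch   */
    } geopatch_addr;

    typedef struct {
        next_hop_t t1[64800];   /* one entry per square-degree geopatch */
        next_hop_t t2[3600];    /* one entry per square-minute geopatch */
        next_hop_t t3[3600];    /* one entry per square-second geopatch */
    } fwd_tables;

    /* Sentinel for the final step that then looks at dst->ip
       (or at a further subdivision of the square-second geopatch). */
    #define LOCAL_DELIVERY ((next_hop_t)0)

    /* Next hop for dst as seen from a router located at 'here'. */
    next_hop_t forward(const fwd_tables *tbl,
                       const geopatch_addr *here,
                       const geopatch_addr *dst)
    {
        if (dst->deg != here->deg)
            return tbl->t1[dst->deg];   /* coarse: different square degree */
        if (dst->min != here->min)
            return tbl->t2[dst->min];   /* finer: different square minute  */
        if (dst->sec != here->sec)
            return tbl->t3[dst->sec];   /* finest: different square second */
        return LOCAL_DELIVERY;
    }

A single indexed access into a table of at most 64800 entries replaces any longest-prefix match; no prefix tree and no cache is involved.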
I did show earlier how to fill these tables. Meanwhile, however, I have been making substantial progress: e.g. partition-free topologies can be computed at whichever zooming level, and this works without any configurational data per node (i.e. the highest zooming level at which it shows up); instead, pre-given, i.e. standardized, zooming factors will do. Also, unlike PNNI, any node is surrounded first by strict links, then by loose and looser links, so that e.g. a node close to the rim of a hemisphere is not immediately surrounded by only the loosest links (no uplinks).
IMO, this is the fastest mapping possible, a side effect of eliminating the need to build prefixes and/or to do caching, in short of eliminating the scalability problem once and forever.
Heiner