
Re: Granularity (was Re: [RRG] ALT + NERD is inelegant & inefficient, compared to APT or Ivip)



Iljitsch van Beijnum wrote:
On 23 jan 2008, at 23:11, Brian Dickson wrote:

dual-homed
multi-homed (N>2)

I'd argue that the requirements of each differ, and there can be significant scaling benefits from splitting them out, and handling each separately.

I completely disagree. There is no reason to make the assumptions you make.
I think you meant to say, "I *see* no reason...".

Sometimes I don't do a good enough job of explaining things, so I'll try again:
* As Tony Li pointed out, if you allow host granularity, you need about 10^11 rather than 10^8 map entries - if you keep them in the map database.
* If you want to keep host-granularity entries out of the map database, you have to encode them in the address.
* You have 128 bits in the IPv6 address to work with, and 32 bits per ASN
  * The case of N=0 is non-existent.
  * The case of N=1 is trivial - EID == RLOC, no mapping needed at all
* The case of N=2 is the one I provided an example mapping for - by no means a definitive choice, but merely showing it is possible at some reasonable level of scale.
* The case of N=3 means that the number of hosts per ASN in any mapping scheme is <= 2^11, or 2k. Not reasonable.
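To make the N=2 case concrete, here is a minimal sketch of one *possible* encoding - purely hypothetical, not the specific mapping Brian proposed: pack the two 32-bit ASNs into the top 64 bits of the IPv6 address and leave the low 64 bits for a host identifier, so a dual-homed host's locators can be recovered from the address itself with no mapping-database entry.

```python
# Hypothetical illustration only: pack two 32-bit ASNs plus a 64-bit
# host identifier into a single 128-bit IPv6 address, so a dual-homed
# (N=2) host needs no entry in the mapping database.
import ipaddress

def encode_dual_homed(asn1: int, asn2: int, host: int) -> ipaddress.IPv6Address:
    """Lay out ASN1 | ASN2 | host-ID as 32 + 32 + 64 = 128 bits."""
    assert asn1 < 2**32 and asn2 < 2**32 and host < 2**64
    return ipaddress.IPv6Address((asn1 << 96) | (asn2 << 64) | host)

def decode_dual_homed(addr: ipaddress.IPv6Address) -> tuple[int, int, int]:
    """Recover (ASN1, ASN2, host-ID) directly from the address bits."""
    value = int(addr)
    return ((value >> 96) & 0xFFFFFFFF,
            (value >> 64) & 0xFFFFFFFF,
            value & (2**64 - 1))

addr = encode_dual_homed(64496, 64511, 0xDEADBEEF)
print(addr, decode_dual_homed(addr))
```

Note that with three ASNs the same trick leaves far fewer bits for the host part, which is the scaling problem the N=3 bullet points at.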

So, there are three possibilities for host granularity:
* No host granularity
* mapping size 10^11 (not suitable for push)
* special case for N=2 only

Pick one and only one. I vote for N=2 as a special case, which allows for scalable mappings (for N>2) and host granularity (only if N=2).
The N=2 case does not *require* host granularity.
And the N>2 case is really *N>=2*, *without* host granularity.

More precisely put, the cases are:
mapped entries (no host granularity)
non-mapped entries (host granularity, N <= 2 only)
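The two-branch split above might be sketched as a small dispatch function - a hypothetical illustration, with the thresholds taken straight from the cases listed in this message:

```python
# Sketch of the two-branch split: only the "mapped" branch ever touches
# the mapping database; the N <= 2 branch is resolved from the address.
def classify(n_providers: int) -> str:
    if n_providers < 1:
        # Per the message, the N=0 case is non-existent.
        raise ValueError("a reachable host has at least one provider")
    if n_providers == 1:
        return "non-mapped: EID == RLOC, no mapping entry needed"
    if n_providers == 2:
        return "non-mapped: host granularity, both locators encoded in the address"
    return "mapped: site-granularity entry in the mapping database"
```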

Also, in computer science school we learn to count like this: 0, 1, many.
I guess it's a good thing I went to math school instead of computer school. They taught us about the concept of "plus a constant" (used mostly in calculus for integrals, but also in computing).

Having 0, 1, many (plus a constant) means that, for a constant of 2, we get:
2, 3, 3+many, which can be further simplified to 2, 2+many (which is what I picked as a scheme).
If a value is larger than 1, you don't get to hardcode it in your solution. Experience shows that if a value isn't 0 or 1, it can be anything so hardcoding it will bite you at some point.

They're called predefined constants, as in header files full of "#define FOO fixed-value". We learned that in first year.
It's not hard-coding if the constant itself doesn't appear in your code.
(And it's not like we need more work by splitting the problem space in two and then solving both parts independently.)
The idea is to make *less* work by splitting the problem space in two.

Especially if the problem parameterizes well, with each element having affinity for only one of the two parts of the problem space.

Which is the case when one branch has host granularity, and the other branch has a mapping database to contend with.

Brian Dickson

--
to unsubscribe send a message to rrg-request@psg.com with the
word 'unsubscribe' in a single line as the message text body.
archive: <http://psg.com/lists/rrg/> & ftp://psg.com/pub/lists/rrg