Crankback [Was: Running out of RSVP-TE bits?]
Curtis,
I think you may have missed the chief use of crankback and focused on a side use.
The principal use is to handle LSP setup failures. These tend to happen one at a time
(that is, not as a whole batch of impacted LSPs that were all using a link that has
failed). Crankback allows nodes along the LSP to attempt to re-direct the LSP setup
attempt without failing it back to the ingress, and allows the collection of more detailed
failure information to aid the re-direction process.
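To make that concrete, here is a minimal, self-contained sketch (plain Python; the toy
topology and all names are invented for illustration, this is not the draft's pseudocode)
of a repair point's behaviour: each setup failure adds the offending link to an exclusion
set and a fresh downstream route is tried, without bothering the ingress.

from collections import deque

def shortest_path(graph, src, dst, excluded):
    # BFS over 'graph' (dict: node -> set of neighbours), avoiding
    # links listed in 'excluded' (set of (node, node) tuples).
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        for nxt in graph[path[-1]]:
            if (path[-1], nxt) in excluded or nxt in seen:
                continue
            if nxt == dst:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

def repair_point_retry(graph, here, dst, failed_links, max_retries=3):
    # Emulate a crankback repair point: the reported failure is pruned
    # locally and a new downstream route is attempted.
    excluded = set()
    for _ in range(max_retries):
        path = shortest_path(graph, here, dst, excluded)
        if path is None:
            return None    # out of options: crank the error back upstream
        bad = next(((a, b) for a, b in zip(path, path[1:])
                    if (a, b) in failed_links), None)
        if bad is None:
            return path    # setup succeeded on this route
        excluded.add(bad)  # crankback info: avoid this link next time
    return None

graph = {"B": {"C", "D"}, "C": {"E"}, "D": {"E"}, "E": set()}
print(repair_point_retry(graph, "B", "E", failed_links={("C", "E")}))
# -> ['B', 'D', 'E']: repaired locally at B; the ingress never saw the error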
In a nutshell, crankback targets rapid re-direction of LSP setup attempts without waiting
for IGP convergence. That crankback can also be used for handling failed LSPs is an
'interesting' additional feature.
Note that in no case does crankback increase the error reporting message flow. It simply
adds a little information to error messages already used in RSVP-TE or GMPLS. In fact, if
crankback is in use and intermediate nodes are enabled as crankback repair points,
crankback can reduce the error reporting message flows since they don't have to go all the
way to the ingress nodes.
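A back-of-envelope illustration of that last point (hop counts invented):

path_hops = 10           # hops from the ingress to the failure point
repair_point_hops = 2    # a repair point sits 2 hops upstream of the failure

without_crankback = path_hops          # PathErr travels all the way to the ingress
with_local_repair = repair_point_hops  # PathErr stops at the repair point
print(without_crankback, with_local_repair)   # 10 vs 2 message hops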
> > Perhaps you could summarize some of the problems crankback will cause.
> > It would be good to get comments on these problems from the ITU-T members who
> > have made this an ASON requirement.
>
> After a link fails zero to a large number of admission failures may
> occur on a few links that are considered desirable alternates (by
> multiple LSP ingress, each acting independently). Crankback will
> provide individual signals to each ingress, probably for each LSP that
> failed.
Well now: the number of signals is not a function of crankback. It is an existing function
of RSVP-TE error reporting.
Crankback simply adds information to the error reporting so that the error may be
repaired. It also recognizes that non-ingress nodes may wish to repair errors.
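As a rough sketch of what "adds information" means (field names invented for
illustration; the draft defines the actual encodings), the error message RSVP-TE already
sends is simply decorated with extra detail:

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class CrankbackInfo:
    failed_node: str                 # which node reported the failure
    failed_link: Tuple[str, str]     # which link/resource failed
    tried: List[Tuple[str, str]] = field(default_factory=list)
                                     # links already tried and excluded

@dataclass
class PathErr:
    session: str                     # the existing RSVP-TE error report
    error_code: int
    crankback: Optional[CrankbackInfo] = None   # the only addition

err = PathErr(session="lsp-42", error_code=24,   # 24 = Routing Problem
              crankback=CrankbackInfo("C", ("C", "E")))
print(err)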
> Just using the IGP flooding is far more efficient and
> proactively tells ingress that haven't sent LSP paths over the now
> overloaded link.
Efficiency is in the eye of the beholder on this issue. The debate over flooding or
signaling errors will probably go on for a while (see Richard Rabbat's work in ccamp on
the need for a flooding mechanism for fail-over to a protection LSP that may be carrying
extra traffic). The efficiency surely depends on the number of LSPs and the number of
nodes/links in the network.
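A crude cost comparison makes the point (all numbers invented; flip them and the
conclusion flips too):

nodes, links, affected_lsps, avg_hops = 200, 800, 15, 6

flooding_msgs = links * 2                   # roughly one LSA copy per link direction
signalling_msgs = affected_lsps * avg_hops  # one hop-by-hop PathErr per failed LSP

print(flooding_msgs, signalling_msgs)       # 1600 vs 90 here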
To repeat: the crankback draft doesn't make any assumptions about efficiency; it simply
uses existing mechanisms. In fact, there is a significant issue with the use of a flooding
mechanism: the ingress may not be able to tell from a description of the faulted resource
which LSPs were impacted. Suppose a component link fails, or even an individual laser.
(You might as well say that soft pre-emption should be reported through flooding!)
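To sketch the component-link point (invented data): the IGP typically sees only the
bundled TE link, so a flooded fault on one component cannot be mapped back to specific
LSPs by the ingress.

# What the IGP advertises: the bundle as a single TE link.
bundle_advertised_to_igp = {"te-link": "A-B", "unreserved_bw": 30}

# What only the bundling node knows:
component_to_lsps = {
    "A-B.comp1": ["lsp-7", "lsp-12"],
    "A-B.comp2": ["lsp-42"],   # suppose this component (or its laser) fails
}

# A flooded advertisement can say "A-B lost capacity", but the ingress of
# lsp-42 cannot deduce from that which of its LSPs was hit; a PathErr with
# crankback detail names the LSP directly.
print(component_to_lsps["A-B.comp2"])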
> Of course if you write specs before trying things you often get things
> wrong, particularly where the dynamics of a protocol are concerned.
> That's where the running code thing comes in.
I agree with you whole-heartedly. It is so important to have running code.
Not sure why you bring it up in this context. Are you trying to imply something?
> I think testing in an
> environment where lots of LSP from many ingress traverse a link that
> fails and all route to one or two links that become overloaded should
> be a prerequisite for crankback, with improvement demonstrated.
As in my preamble, this is not what crankback is about.
>
> Curtis
>
> ps - who cares about ASON? or ITU for that matter? :-)
Cheers,
Adrian