
Re: RSVP Restart (was Re: update GMPLS signaling documents)



Yangguang,

> Yakov,
> 
> Further questions in-line. Regards,

Further responses in-line...

> Yangguang
> 
> > Couple of points:
> > 
> > (1) First of all, you do run RSVP Hello between C and F.
>  
> Is this specified somewhere? Or is it your proposal?

What I mentioned above should be abundantly obvious to the informed 
reader of the RSVP and LSP Hierarchy specs.

> Then, is there a scalability issue here? 

No.

> Also, C and F could be thousands of miles apart (in the transport
> network), and control channel bandwidth is expensive.

GMPLS assumes that the control channel is more than just a 56 Kbit/s link.

Or to put it differently, control channels used by GMPLS have more than 
enough capacity for RSVP Hellos.
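
As a back-of-the-envelope illustration (the interval, message size and
channel rate below are assumptions chosen for the example, not values
taken from the spec):

   # Rough estimate of RSVP Hello overhead on a GMPLS control channel.
   # The 20 ms interval, ~60-byte message and 10 Mbit/s channel are
   # illustrative assumptions, not numbers mandated by RFC 3209.
   HELLO_INTERVAL_S = 0.02          # assumed Hello interval (seconds)
   HELLO_MSG_BYTES = 60             # assumed IP + RSVP Hello message size
   CONTROL_CHANNEL_BPS = 10_000_000 # assumed 10 Mbit/s control channel

   hello_bps = HELLO_MSG_BYTES * 8 / HELLO_INTERVAL_S
   print("Hello load: %.0f bit/s (%.3f%% of the control channel)"
         % (hello_bps, 100.0 * hello_bps / CONTROL_CHANNEL_BPS))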

> > (2) Since L1 is advertised as an FA into OSPF/ISIS, F should
> > be able to recover the Interface ID it assigns to L1 from
> > a combination of (a) the OSPF/ISIS link state database that
> > F would recover, and (b) the Forward Interface ID (the one
> > assigned by C).
> 
> Could OSPF/ISIS happen to run on the same controller as RSVP-TE? I guess it
> then has to recover its FA information first. How?

The answer should be abundantly obvious to the informed reader.

And to make it crystal clear, I have no intention of designing your 
products for you.
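
Still, the lookup itself can be sketched in a few lines; the data layout
and all names/values below are my own illustrative assumptions, not
anything defined in the specs:

   # Minimal sketch: after restart, node F maps the Forward Interface ID
   # assigned by C (carried in the refreshed Path message) back to the
   # interface ID that F itself had assigned to the FA-LSP L1, using the
   # re-acquired OSPF/ISIS link state database.
   lsdb_fa_links = {
       # (advertising neighbor router ID, neighbor's interface ID) ->
       #     interface ID F assigned to the same FA link
       ("192.0.2.1", 1001): 2001,    # FA-LSP L1, as seen from C and F
   }

   def recover_local_interface_id(neighbor_router_id, forward_interface_id):
       """Return the interface ID F had assigned to the FA, if any."""
       return lsdb_fa_links.get((neighbor_router_id, forward_interface_id))

   print(recover_local_interface_id("192.0.2.1", 1001))   # -> 2001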

> > > Let us say the node can resynchronize its neighbor if
> > > the neighbor restarts and requests state recovery. But
> > > the issue is: how can a node advertise that it does not
> > > need recovery, since all its state was preserved?
> > 
> > By treating it the same way the spec handles a
> > control channel fault.
> >
> 
> If a node can recover state from NVRAM or the OS (this solution is
> cheaper and has been implemented), it may not implement the spec.
> How could a neighboring node, which does implement the spec, avoid
> mistakenly tearing down data connections? This is similar to a
> backward compatibility problem, even if not exactly one.

Stating the obvious: interoperability with a node that doesn't implement 
the spec is outside the scope of the spec.

> > > RECOVERY LABEL does not come into the picture unless the node
> > > that is upstream of the restarting node has already received
> > > a Resv.
> > 
> > Wrong. Quoting 9.5.3:
> > 
> >    Upon detecting a restart with a neighbor that supports state
> >    recovery, a node SHOULD refresh all Path state shared with that
> >    neighbor.
> > 
> > So, as you can hopefully see from the above, the upstream node doesn't
> > wait until it receives a Resv.
> > 
> 
> Then this may not meet carriers' requirements (see the IPO carriers'
> requirements). Again, back to the telecom world (sorry about this, yet we
> are talking about GMPLS): typically only the established paths are
> preserved. Pending requests may be purged/aborted. Then how?

Note the verb is "SHOULD", not "MUST". The rest should be
obvious to the informed reader.
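
For what it's worth, the upstream behavior the quoted text calls for can
be sketched as follows (a simplified illustration; the class and function
names are assumptions, not an implementation of the spec):

   # Sketch of the upstream node refreshing Path state towards a restarted
   # neighbor.  A Recovery_Label can only be included for LSPs for which a
   # Resv had already been received; the refresh itself is not gated on it.
   class PathState:
       def __init__(self, session, label_from_resv=None):
           self.session = session
           self.label_from_resv = label_from_resv   # None if no Resv yet

   def refresh_after_neighbor_restart(path_states, send_path):
       for ps in path_states:
           # Refresh every Path state shared with the restarted neighbor,
           # carrying a Recovery_Label only when a label is already known.
           send_path(ps.session, recovery_label=ps.label_from_resv)

   states = [PathState("LSP-1", label_from_resv=17), PathState("LSP-2")]
   refresh_after_neighbor_restart(
       states,
       send_path=lambda s, recovery_label:
           print(s, "Recovery_Label:", recovery_label))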
  
> > > In case of PSC devices, it may be OK to remove state that
> > > is not resynchronized at the end of the recovery period,
> > > and the recovery period advertised might reflect that.
> > > But for LSPs in transport networks, one might want to
> > > have a different recovery period to avoid any LSP from
> > > going down because of recovery timer expiry.
> > 
> > There is no requirement for a node to advertise exactly the
> > same Restart_Cap on all the interfaces. So, on PSC interfaces
> > the node could advertise that it will remove the state that
> > isn't synchronized at the end of the recovery period, while
> > on the TDM interface precisely the same node could advertise
> > that the LSPs would be kept even after the recovery time expires.
> >
> 
> So, Restart_Cap is advertised on a per-interface basis?

Yes.

> Is the spec going to be enhanced/clarified?

The current spec need not be enhanced/clarified - what I mentioned
above should be abundantly obvious to the informed reader.
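
As an illustration of the per-interface point (the interface names and
timer values below are assumptions, purely for the example):

   # Sketch: a node advertising different Restart_Cap contents on different
   # interfaces.  The field names and timer values are illustrative only.
   restart_cap_by_interface = {
       "psc-0/0/0": {"restart_time_ms": 60_000, "recovery_time_ms": 30_000},
       "tdm-1/0/0": {"restart_time_ms": 60_000, "recovery_time_ms": 600_000},
   }

   def restart_cap_for(interface):
       """Restart_Cap values to carry in Hellos sent over this interface."""
       return restart_cap_by_interface[interface]

   print(restart_cap_for("tdm-1/0/0"))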

> > > The first problem still seems to be there - consider two
> > > adjacent nodes restarting.  They each act both as the restarting
> > > node and as the neighbor of a restarting node. So, once
> > > they learn the state from the upstream neighbor, do they use
> > > suggested label or the recovery label when they send the path
> > > message to the just restarted downstream neighbor?
> > 
> > The recovery label.
> > 
> > The following should be added to the existing text from the document:
> > 
> >    In the special case where a restarting node also has a restarting
> >    downstream neighbor, a Recovery_Label object should be used instead
> >    of a Suggested_Label object.
> 
> How does a recovering NE know that its neighbor is also recovering? It may
> have lost the instance number entirely.

It's advertised in the neighbor's Hellos.
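
A minimal sketch of that inference (my reading of the mechanism, with
illustrative names; in particular, treating a zero Dst_Instance as the
"neighbor has also restarted" indication is a simplification of the
RFC 3209 Hello instance rules, assumed here for the example):

   # Sketch: a restarting node deciding, from its downstream neighbor's
   # Hello, whether the neighbor is also restarting, and picking the label
   # object accordingly (per the proposed text above).
   def label_object_for_path(hello_dst_instance, recovered_label,
                             suggested_label):
       neighbor_also_restarting = (hello_dst_instance == 0)
       if neighbor_also_restarting and recovered_label is not None:
           return ("Recovery_Label", recovered_label)
       return ("Suggested_Label", suggested_label)

   # Downstream neighbor's Hello carries Dst_Instance 0, so it has also
   # lost its Hello state: send the Path with a Recovery_Label.
   print(label_object_for_path(0, recovered_label=17, suggested_label=23))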

Yakov.