
Re: [IP-Optical] RE: Three GMPLS related IDs




Hi Neil,

I saw your comments for the set of GMPLS related I-Ds. My comments are
in-line (related to the arch-intra-domain ID).

> Arch-intra-domain ID:
> Note - I like this paper in general, and I guess the reason for this is that
> several operators have input (requirements) to it and it is not just sourced
> from vendors.
> 

Yes. I think it is a critical aspect of working on any specification
that operators have input into the process. After all, it is the
operators (however large or small) who will be deploying these networks
and who will have to live with the consequences of well-specified (or
not) standards.

If we extend Kireeti's voting weight, I would think an operator's input
would carry more weight than others'.

> Thinking about the further concerns a carrier would have to address in a
> connection oriented switched-network, there needs to be a consideration of
> blocking probability and users having connection requests refused, ie a GoS
> metric. The result is that transport networks currently designed around high
> utilisation with long-holding connections need to be redesigned around
> ensuring a large enough pool of free capacity.  So, in addition to having to
> address the above issues up-front with SVC-like services, carriers would
> also be forced to define a 'erlang/pricing model' for something akin to a
> Gbit PSTN!  Depending upon the number of circuits and the distribution of
> traffic holding times it may not be possible to apply simply erlang theory
> as is, but rather some modifications will be required.....and the demand
> model will obviously be an interdependent function of the pricing model.
>

Yes. A simple availability metric does not fit the switched model very
well. Since availability measurements are obtained through time
integration of errors, this metric may not be appropriate in its
current form when connection durations are short. For example, suppose
a connection lasts for one day, and during that time one or two bit
errors occur. Depending on the level of availability offered, this may
result in non-compliance.
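To make the point concrete, here is a minimal sketch (assuming
availability is measured simply as the fraction of error-free time over
the connection's lifetime; the "five nines" target below is an
illustrative SLA, not one from the draft):

```python
def availability(connection_seconds: float, unavailable_seconds: float) -> float:
    """Fraction of the connection's lifetime during which it was available."""
    return (connection_seconds - unavailable_seconds) / connection_seconds

DAY = 24 * 3600

# Two errored seconds over a one-day connection:
a = availability(DAY, 2)            # ~0.999977
meets_five_nines = a >= 0.99999     # False: the short holding time breaches the SLA
```

The same two errored seconds over a year-long connection would easily
meet the target, which is why a metric designed for long-holding
circuits behaves poorly for short-lived switched connections.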

Parameters that have been used in traditional PSTN networks need to be
re-examined and modified to fit the new switched network. These include
the ones you've mentioned, such as connect attempts, hold times,
blocking probability, and the Erlang traffic model (which already takes
hold times into consideration).
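For reference, the classical Erlang B blocking probability (the
starting point that, as you note, may need modification for this
setting) can be computed with the standard recursion; a minimal sketch:

```python
def erlang_b(offered_erlangs: float, circuits: int) -> float:
    """Blocking probability via the Erlang B recursion:
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# e.g. 2 Erlangs offered to 5 circuits -> roughly 3.7% blocking
p_block = erlang_b(2.0, 5)
```

Whether this formula applies as-is depends, as you say, on the number
of circuits and the holding-time distribution; for Gbit-scale trails
the "circuit" counts are small enough that the demand model and pricing
model interact strongly.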

> 6       Noted the point in para 4.2 regarding ENE 'having to maintain
> source/sink LSP inventory', but the SNE also needs to maintain its
> server->client layer adaptation mappings....so that on failure the correct
> FDI information can be sent forward into all affected clients (and this
> process should recurse for clients of those clients, etc).
>

Yes. Each NE would still provide the capabilities inherent in an NE,
i.e., adaptation mappings and the detection/generation/suppression of
various defects/alarms. Because these are user-plane functions, we did
not include a discussion of them.

Do you think it would be worthwhile to add a sentence describing the
user-plane functions?
 
> 7       In section 5.3 on path selection source (ERO) and hop-hop routing
> are compared.  However, the biggest advantage of source routing is not noted
> strongly enough.....and that is the fact that one does not need to wait
> until a failure to calculate an alternative route from the link-state
> database.  For very important routes one can pre-calculate an alternative
> path and reserve resources, so restoration is very fast.  This leads me to
> the next point here.
> 

Yes, that is certainly an advantage for fast restoration. We will add
text to convey this more clearly. Do you happen to have some suggested
text handy for us to add?
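The pre-calculation you describe can be sketched as follows: compute
the working path from the link-state database, then prune its links and
compute a link-disjoint backup, all before any failure occurs. This is
a minimal illustration (hypothetical adjacency-dict representation of
the link-state database, shortest-path as the sole criterion):

```python
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra over adj: {node: [(neighbor, cost), ...]}.
    Returns (cost, path) or None if dst is unreachable."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return None

def primary_and_backup(adj, src, dst):
    """Pre-compute a working path and a link-disjoint backup
    before any failure, so restoration can switch immediately."""
    primary = shortest_path(adj, src, dst)
    if primary is None:
        return None, None
    used = {(a, b) for a, b in zip(primary[1], primary[1][1:])}
    pruned = {u: [(v, w) for v, w in nbrs
                  if (u, v) not in used and (v, u) not in used]
              for u, nbrs in adj.items()}
    return primary, shortest_path(pruned, src, dst)
```

Reserving resources along the backup path (as you suggest for very
important routes) would then make post-failure restoration a simple
switch-over rather than a fresh computation.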


> 8       Although not covered explicitly in this paper, I think something
> needs to be said about pre-emtion/bumping.  We have extensive experience of
> priority selected pre-emption/bumping schemes.  And this experience leads us
> to have a requirement that we must be able to turn-off any such schemes.
> Why?  Well, it only comes into any significant effect at high utilisations.
> But, because even small changes of loading around the 'congestion knee' can
> cause large swings in behaviour, it is at this time that predicting its
> effects can be most difficult, and a single failure event can cause multiple
> consequential failures as the bumping scheme ripples out.  We don't mind
> dumping 'extra' traffic, but once dumped there is no way we would want this
> to then dump some even lower priority traffic.  However, I clearly would not
> wish to stop anyone wanting such a scheme to try it out and discover its
> usefulness for themselves.  Our requirement therefore is (i) an ability to
> turn-off any bumping based on priority, but (ii) an ability to select the
> priority of restoration post failure between different trails.
> BTW - Although the largest BW trails seem like they should be restored 1st
> (if the most efficient packing density is the sole criterion), this may not
> be the case and we would want to identify trail restoration priority
> irrespective of BW.
> 

I agree with the observation that highest bandwidth does not necessarily
mean highest priority. The highest-bandwidth connection might carry the
least critical application; comparing, e.g., file transfer (low
priority) with voice (high priority), the bandwidth and priority
rankings might be reversed.

We will add this discussion in the next revision. BTW, when you state
"an ability to select the priority of restoration post failure between
different trails", do you mean that, for all protected connections, the
highest-priority connection is restored first, the second-highest
second, etc., and that you want the flexibility to decide which
criteria are used to rank the restoration?
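If that reading is right, the requirement amounts to ranking by an
operator-assigned priority attribute rather than by bandwidth; a toy
sketch (all names and values hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Trail:
    name: str
    bandwidth_gbps: float
    restore_priority: int  # operator-assigned; 0 = most important

def restoration_order(affected):
    """Rank failed trails for restoration by the operator-assigned
    priority attribute, not by bandwidth; ties broken by name
    for determinism."""
    return sorted(affected, key=lambda t: (t.restore_priority, t.name))

trails = [
    Trail("bulk-file-xfer", 10.0, restore_priority=2),
    Trail("voice-trunk", 0.622, restore_priority=0),
]
# voice-trunk is restored first despite its much smaller bandwidth
```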


> 9       In section 5.3.3 you are hinting at quite an important point
> regarding routing protocols.  That is, the attributes of a routing protocol
> for an OTN will be quite different to that needed for SDH....or indeed IP.
> This leads to the question - do we create one all-embracing routing protocol
> for all technologies (and have lots of extensions and redundancies) or do we
> have different (specifically tailored) routing protocols for different
> technologies?  We can also say the same, and perhaps more strongly, about
> signalling protocols.  And indeed also addressing.....this facet definitely
> requires independent address spaces per distinct layer network even if based
> on the same generic structure.  The key point is here is that not only are
> these control-plane facets 'different' for different layer networks, but
> they are also orthogonal *within* a layer network.....a point which seems to
> get glossed over quite regularly.  This is not to decry any particular
> choice of addressing/signalling/routing protocol combination, just to point
> out that it is not a logical way of reasoning to say that either (i) all
> layers get treated the same or (ii) *if* you choose addressing schema X you
> *must* have signalling protocol Y and you *must* have routing protocol Z.
> Indeed for true CO fabrics like an OTN or SDH, it is very hard to agree to
> arguments that say it must be v4 (or v6) addressing and it must be RSVP
> (indeed it isn't since we also have CR_LDP) and it must be OSPF (and again
> indeed it isn't since we also have IS-IS).....but for the NNI BGP seems the
> only choice.  I have yet to hear what is technically wrong with a
> combination of NSAPs, PNNI signalling and IS-IS/BGP...and on the face of it,
> these would appear far more logical choices given the nature of the beast
> they are controlling;  noting that for S-PVC-like services this would also
> seem like a more 'off-the-shelf' solution.
>

Yes, I agree. These issues should be viewed independently. As you
mention, even within a routing protocol there would be differences in
terms of the relevant aspects/constraints for route determination.
Whether these differences should be handled via different protocols or
simply via different extensions needs to be discussed further. For
example, would the same algorithm be applicable to both SDH and OTN
networks (most likely)?

Do you suggest we expand this section to include discussions of these
points? We would be very happy to work with you on extending the
concepts that you've described.


> 10      In section 5.4.1.5 'actions' on trails are defined.  For a CO
> network with fixed BW selection at trail creation time it is very hard to
> see what a 'modify' action could be....other than perhaps a change of
> restoration priority and/or change of dedicated back-up trail/resource.  If
> there is any notion of (working) BW change here (be it a scalar magnitude
> change, ie same route, or a vector change, ie new route (with or without a
> scalar magnitude change)), it would seem to me that this is really a new
> pair of 'create_new->switch_to_new->delete_old' actions.  I note however
> that this has been marked as TBD, but I think it would help if it was made
> clear that BW changes cannot (IMO) be a single 'modify' action.
> 

The modify command can have different implications depending on which
attributes you're modifying and what capabilities exist in the network.
Two behaviours of modify are disruptive modification and non-disruptive
modification. For example, in certain cases you might be able to modify
the CoS of a connection without disruption (simply by adding a backup
path to the original connection, albeit likely not along the optimal
route), or, if you have LCAS capability, you might be able to increase
the bandwidth of a connection without impacting the existing traffic.

We will revise this to have more discussion regarding use of the modify
command. 
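The distinction can be summarized as a decision sketch (entirely
hypothetical names and attribute set; the point is only that the
outcome depends on both the attribute and the node capabilities, and
that a route change decomposes into create_new -> switch_to_new ->
delete_old as you describe):

```python
def plan_modify(attribute: str, network_caps: set) -> str:
    """Hypothetical classification of a modify request."""
    if attribute == "bandwidth" and "LCAS" in network_caps:
        return "non-disruptive"     # hitless resize via LCAS
    if attribute == "cos":
        return "non-disruptive"     # e.g. add a backup path to the live connection
    if attribute == "route":
        return "make-before-break"  # create_new -> switch_to_new -> delete_old
    return "disruptive"             # no capability for a hitless change
```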


> 11      In section 5.4.3.1 you describe the 4 main stages of
> prot-sw/restoration.  I agree.  But please don't overlook these facts that
> are relevant to the first item of failure detection:
> -       1st, we must identify all failure modes
> -       2nd, we must describe their entry/exit criteria
> -       3rd, we must take correct consequent actions, eg FDI upwards to stop
> alarm storms, BDI backward (if single ended visibility of both directions
> needed), squelch traffic if trail ID mismatch (important to protect customer
> traffic...so this is a sort of security consideration, but it also impacts
> billing etc)
> -       4th, we need to know/define what constitutes up and down states of
> the trail.....this defines the 2 aspects of availability SLA and the QoS
> SLA, where the latter only has meaning once the former is defined (ie only
> valid in up-state) and will generally be based on the defect handling
> covered by points 1st-3rd above.
> See our draft on MPLS OAM (packet level) where we cover the above for an
> example of what is required in the user-plane:
> http://www.ietf.org/internet-drafts/draft-harrison-mpls-oam-00.txt We need a
> similar approach for the lower transport layers....and note that there
> should be some attempt at metric/objective harmonisation across the layers
> in order for any measurements/SLAs taken/applied at each layer to have some
> cross-layer relative significance.
> 

Yes, these are very valid points. I believe that for points 1-3 above
we can provide a brief description and then refer to existing
standards. For point 4, we will need to extend the document.
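On your fourth point, the up/down definition reduces to entry/exit
persistence criteria on the defect stream; a toy sketch (the 3-interval
entry and 5-interval exit thresholds are hypothetical placeholders, not
values from any standard):

```python
class TrailState:
    """Toy up/down state machine: declare the trail down after
    `enter_down` consecutive defected intervals, and up again after
    `exit_down` consecutive clear intervals."""

    def __init__(self, enter_down: int = 3, exit_down: int = 5):
        self.enter_down, self.exit_down = enter_down, exit_down
        self.up = True
        self.streak = 0  # consecutive intervals pushing toward a transition

    def observe(self, defect: bool) -> bool:
        if self.up:
            self.streak = self.streak + 1 if defect else 0
            if self.streak >= self.enter_down:
                self.up, self.streak = False, 0
        else:
            self.streak = self.streak + 1 if not defect else 0
            if self.streak >= self.exit_down:
                self.up, self.streak = True, 0
        return self.up
```

The availability SLA would then be measured over the down intervals,
and the QoS SLA only over the up intervals, which matches your point
that the latter only has meaning once the former is defined.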

Would you happen to have some text available for us to include in the
next revision?


Regards,

Zhi