
Re: FW: [T1X1.5] Re: Suppression of Downstream Alarms...



Jonathan,

Thanks for taking the time to review and incorporate some of my comments.
I have some more remarks related to some of the comments that you did 
not incorporate. Please see below.

Thanks!
Carmine

Jonathan Lang wrote:

>Carmine,
>  Thanks for the careful review of the document.  We've incorporated many of
>your comments. Specific responses pasted below: new text identified in
>quotes after [Jonathan]
>
>Thanks,
>Jonathan
>
>
>========================
> Abstract 
>    
>   Future networks will consist of photonic switches, optical 
>   crossconnects, and routers that may be configured with control 
>   channels and data links.  Furthermore, multiple data links may be 
>   combined to form a single traffic engineering (TE) link for routing 
>   purposes. This draft specifies a link management protocol (LMP) that 
>   runs between neighboring nodes and is used to manage TE links.  
>   Specifically, LMP will be used to maintain control channel 
>   connectivity, verify the physical connectivity of the data-bearing 
>   channels, correlate the link property information, 
>[Insert: suppress downstream alarms in pre-G.709 networks, and localize 
>faults in both opaque and transparent networks to a particular link 
>between adjacent cross-connects for protection/restoration actions.]
>[Delete: and manage link failures. A unique feature of the fault management 
>technique is that it is able to localize failures in both opaque and
>transparent networks, independent of the encoding scheme used for the
>data.]
>[Jonathan]"suppress downstream alarms, and localize link failures in both
>opaque and transparent networks for protection/restoration purposes."
>
<Carmine> Why did you not include the "pre-G.709" portion of the text?

>
>
>1. Introduction
><snip>
>
>   In GMPLS, the control channels between two adjacent nodes are no 
>   longer required to use the same physical medium as the data-bearing 
>   links between those nodes. For example, a control channel could use 
>   a separate wavelength or fiber, an Ethernet link, 
>[Delete: or]
>   an IP tunnel through a separate management network
>[Delete: .]  
>[Insert: , or a multi-hop IP network] 
>[Jonathan]"an IP tunnel through a separate management network, or a
>multi-hop IP network."
>
>2. LMP Overview 
><snip>
>
>In this draft, 
>[Delete: two]
>[Insert: three]
>   additional LMP procedures are defined: link 
>   connectivity verification 
>[Delete: and fault management]
>[Insert: , suppression of downstream alarms, and localization of
>faults in both opaque and transparent networks to a particular link 
>between adjacent cross-connects for protection/restoration actions]
>   .
>[Delete: These procedures are particularly useful when the control 
>channels are physically diverse from the data-bearing links.]
>[Jonathan] no change made.
>
<Carmine> I'm not sure why you didn't incorporate this comment. My 
reasoning behind the comment was to list the separate applications that 
are covered. "Fault management," without a definition, is confusing to 
me. I think the sentence that I suggested clearly indicates the three 
applications.

>
>
>   Link connectivity 
>   verification is used to verify the physical connectivity of the 
>   data-bearing links between the nodes and exchange the Interface Ids; 
>   Interface Ids are used in GMPLS signaling, either as Port labels or 
>   Component Interface Ids, depending on the configuration.  The link 
>   verification procedure uses in-band Test messages that are sent over 
>   the data-bearing links and TestStatus messages that are transmitted 
>   back over the control channel.  Note that the Test message is the 
>   only LMP message that must be transmitted over the data-bearing 
>   link.  
>[Delete: The fault management scheme uses]
>[Insert: Both the suppression of downstream alarms and the localization
>of faults for protection/restoration use]
>   ChannelStatus message 
>   exchanges between adjacent nodes 
>[Delete: to localize failures]
>   in both opaque 
>   and transparent networks, independent of the encoding scheme used 
>   for the data.  As a result, both local span and end-to-end path 
>   protection/restoration procedures can be initiated.
>[Jonathan]"Both the suppression of downstream alarms and the localization of
>faults for protection/restoration use ChannelStatus message exchanges
>between adjacent nodes in both opaque and transparent networks, independent
>of the encoding scheme used for the data."
>
>[Insert: Note that the fault localization scheme supported in LMP localizes
>faults on a link and does not address node failures. Therefore, additional 
>mechanisms are needed to detect node failures for end-to-end path
>protection/restoration.] 
>[Jonathan]no change made.
>
<Carmine> Is my statement incorrect?

>
>
>[Delete: For the LMP link connectivity verification procedure, the free 
>(unallocated) data-bearing links MUST be opaque (i.e., able to be 
>terminated); however, once a data link is allocated, it may become 
>transparent.]
>[Insert: For the LMP link connectivity verification procedure between
>adjacent PXCs, the Test message is generated and terminated by opaque test
>units that may be shared among multiple ports on the PXC. Opaque test units
>are needed since the PXC ports are transparent.]
>[Jonathan]change made as suggested.
>
>
>
><snip>
>
>The LMP fault management procedure
>[Insert: (i.e., the suppression of downstream alarms in pre-G.709 networks,
>and the localization of faults to a particular link between adjacent OXCs
>for protection/restoration actions)]
>   is based on a ChannelStatus 
>   exchange using the following messages:  ChannelStatus, 
>   ChannelStatusAck, ChannelStatusRequest, and ChannelStatusResponse.  
>   The ChannelStatus message is sent unsolicited and is used to 
>   notify an LMP neighbor about the status of one or more data channels 
>   of a TE link.
>[Jonathan]no change made.
>
>3. Control Channel Management
><snip>
>
>For the purposes of LMP, the exact implementation of the control 
>   channel is not specified; it could be, for example, a separate 
>   wavelength or fiber, an Ethernet link, an IP tunnel through a 
>   separate management network, 
>[Insert: a multi-hop IP network,]
>[Jonathan]change made.
>
><snip>
>
>The control channel can be either explicitly configured or 
>   automatically selected, however, for the purpose of this document 
>   the control channel is assumed to be explicitly configured. 
>[Insert: The term configured means that the destination IP address to 
>reach the adjacent node on the far end of the control channel is known 
>at the near-end. The destination IP address may be manually configured,
>or automatically discovered.]
>[Jonathan]Replacing the above text starting with "The Control channel.."
>with "To establish a control channel, the destination IP address on the far
>end of the control channel must be known.  This knowledge may be manually
>configured or automatically discovered."
>
><snip>
>   Control channels exist independently of TE links and multiple 
>   control channels may be active simultaneously between a pair of 
>   nodes.  Individual control channels can be realized in different 
>   ways; one might be implemented in-fiber while another one may be 
>   implemented out-of-fiber.  
>[Insert: Maintenance of control channels (i.e., detection of control
>channel failures and restoration of communication) is needed. Various
>mechanisms could be used to provide maintenance of control channels,
>depending on the level of service required. For example, control channel
>failures could be detected and restored via normal IP routing protocols;
>however, this may not support the necessary level of service due to the
>time required to update the routing tables. For very fast recovery of
>control channels, other mechanisms such as bridging messages at the near-end
>and selecting messages at the far-end can be used. LMP defines a Hello
>protocol that can be used to detect control channel failures. To support
>the Hello protocol,]
>[Delete: As such, ]
>   control channel parameters MUST 
>   be negotiated over each individual control channel, and LMP Hello 
>   packets MUST be exchanged over each control channel to maintain LMP 
>   connectivity if other mechanisms are not available.
>[Jonathan]no change
>
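<Carmine> As an aside, here is how I picture the Hello-based liveness 
check described above. This is only a rough Python sketch of my reading; 
everything except the HelloInterval/HelloDeadInterval timer names is 
made up for illustration:

import time

class ControlChannel:
    def __init__(self, hello_interval=0.15, hello_dead_interval=0.5):
        # Timers (seconds), as negotiated via the Config exchange.
        self.hello_interval = hello_interval
        self.hello_dead_interval = hello_dead_interval
        self.last_hello_rx = time.monotonic()
        self.up = True

    def on_hello_received(self):
        # Any Hello from the neighbor refreshes the liveness timer.
        self.last_hello_rx = time.monotonic()

    def check_liveness(self):
        # No Hello within HelloDeadInterval: declare the control channel
        # down and let recovery (e.g., a backup control channel) kick in.
        if time.monotonic() - self.last_hello_rx > self.hello_dead_interval:
            self.up = False
        return self.up

cc = ControlChannel()
cc.on_hello_received()
print(cc.check_liveness())  # True while Hellos keep arriving in time
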
>3.1. Parameter Negotiation 
>
>[Insert: Activation of the LMP Hello Protocol]
>[Delete: Control channel activation]
>   begins with a parameter negotiation 
>   exchange using Config, ConfigAck, and ConfigNack messages.
>[Jonathan] no change.  LMP control channel activation requires Config
>message exchange even if LMP Hellos are not run.
>
<Carmine> Maybe I don't understand what the Config message is actually 
providing... could you please explain? As I understood it, the Config 
message only begins the parameter negotiation for the LMP Hello 
Protocol. Is it doing something else?
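
For concreteness, here is the exchange as I currently picture it, as a 
toy Python sketch. The message names (Config, ConfigAck, ConfigNack) are 
from the draft, but the parameter set and acceptance policy are just my 
assumptions:

def handle_config(proposed, acceptable_ranges):
    """Neighbor side: accept the proposal or counter-propose values."""
    counter = {}
    for name, value in proposed.items():
        lo, hi = acceptable_ranges[name]
        if not lo <= value <= hi:
            counter[name] = max(lo, min(value, hi))  # nearest acceptable
    if counter:
        return "ConfigNack", {**proposed, **counter}
    return "ConfigAck", proposed

# The initiator proposes Hello timers (ms) inside its Config message.
proposed = {"HelloInterval": 50, "HelloDeadInterval": 500}
ranges = {"HelloInterval": (100, 300), "HelloDeadInterval": (300, 1200)}
reply, params = handle_config(proposed, ranges)
print(reply, params)  # ConfigNack with HelloInterval bumped to 100; the
# initiator would then re-send a Config carrying the counter-proposed
# values before the exchange completes.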

>
>
><snip>
>
>To begin 
>[Delete: control channel]
>[Insert: Hello Protocol]
>   activation, a node MUST transmit a Config 
>   message to the remote node.
>[Jonathan] no change
>
>3.2. Hello Protocol 
>
>Once 
>[Delete: a control channel]
>[Insert: the Hello Protocol]
>   is activated between two adjacent nodes, the ...
>[Jonathan]no change
>
>5. Verifying Link Connectivity 
>[Question: The following paragraph is a bit confusing to me. What is the
>problem that it is trying to describe? Thanks :-)]
>
<Carmine> I see your answer below. Thanks. However, the text still seems 
confusing to me. From your answer, it seems as though you are trying to 
say that since PXC ports do not terminate the optical signal, it is 
difficult to determine the status of the data link. Therefore, opaque 
test units can be used to generate and terminate signals between 
adjacent PXCs to determine the status of the data link. I would suggest 
replacing the paragraph below with text similar to my previous two 
sentences.
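
To make sure we agree on the mechanics, here is a toy Python model of 
the verification loop as I picture it with shared opaque test units. The 
message names (Test, TestStatusSuccess, TestStatusFailure) are from the 
draft; everything else is an assumption on my part:

def verify_links(local_ids, fiber_map):
    """local_ids: our unallocated data links, terminated one at a time.
    fiber_map: actual physical connectivity, local id -> far-end id
    (None models a broken or misconnected fiber)."""
    results = {}
    for local_id in local_ids:
        # The opaque test unit sends an in-band Test message carrying
        # our Interface Id over the data link.
        far_id = fiber_map.get(local_id)
        if far_id is None:
            # Nothing terminated the Test message; the far end reports
            # TestStatusFailure out-of-band over the control channel.
            results[local_id] = ("TestStatusFailure", None)
        else:
            # The far-end test unit terminated the Test message, learned
            # the mapping, and confirms it over the control channel.
            results[local_id] = ("TestStatusSuccess", far_id)
    return results

print(verify_links(["if-1", "if-2"], {"if-1": "remote-7", "if-2": None}))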

>
>   A unique characteristic of all-optical PXCs is that the data-bearing 
>   links are transparent when allocated to user traffic.  
>
<Carmine> I assume when you say transparent, you mean all-optical and 
not service transparent. Just because PXCs support all-optical ports 
(i.e., no O/E conversion) does not mean that the data-link between the 
adjacent PXCs is all-optical (or "transparent" as you say).

>This 
>   characteristic of PXCs poses a challenge for validating the 
>   connectivity of the data links since shining unmodulated light 
>   through a link may not result in received light at the next PXC.
>
<Carmine> I am not sure what you mean here. Even if you send out a 
signal on an electrical cross-connect, it may not result in received 
light at the next electrical cross-connect since there could be a fiber 
cut between the two.

>  
>   This is because there may be terminating (or opaque) elements, such 
>   as DWDM equipment, between the PXCs.  
>
<Carmine> It seems to me that if there were terminating (or opaque) 
elements, such as DWDM equipment, between PXCs, then it would be more 
likely that the downstream PXC always sees light, even if a failure 
occurred on the DWDM line system, since the opaque DWDM port could 
possibly generate a new signal. In this situation, the PXC would see 
light, but not know if the light was a good signal or a keep-alive 
signal... This is another reason for using opaque test units to test a 
data link, so the PXC would be able to terminate the test signal after 
it was sent across the data link, rather than just monitoring for power.

>Therefore, to ensure proper 
>   verification of data link connectivity, it is required that until 
>   the links are allocated for user traffic, they must be opaque.  
>
<Carmine> "The links must be opaque" is a bit misleading. I think you 
just mean that until the links are allocated for user traffic, opaque 
test units are used to test the data links. This is a similar comment to 
the one I made in the LMP overview (Section 2) that you incorporated.

>To 
>   support various degrees of opaqueness (e.g., examining overhead 
>   bytes, terminating the payload, etc.), and hence different 
>   mechanisms to transport the Test messages, a Verify Transport 
>   Mechanism field is included in the BeginVerify and BeginVerifyAck 
>   messages.  There is no requirement that all data links be terminated 
>   simultaneously, but at a minimum, the data links MUST be able to be 
>   terminated one at a time.  Furthermore, for the link verification 
>   procedure it is assumed that the nodal architecture is designed so 
>   that messages can be sent and received over any data link.  Note 
>   that this requirement is trivial for DXCs (and OEO devices in 
>   general) since each data link is terminated and processed 
>   electronically before being forwarded to the next OEO device, but 
>   that in PXCs (and transparent devices in general) this is an 
>   additional requirement. 
>[Jonathan]There is a challenge in verifying the connectivity between PXCs
>since a PXC just interconnects ports without terminating the signals.
>
>
>
>6.2. Fault Localization Procedure 
>
>[Insert: 6.2.1 Suppression of downstream alarms in pre-G.709 networks]
>[Jonathan] Rather than separate the section into two subsections, I will add
>text to clarify both features provided by LMP.
>   
>   If data links fail between two PXCs, the power monitoring system in 
>   all of the downstream nodes may detect LOL and indicate a failure.  
>   To avoid multiple alarms stemming from the same failure, LMP 
>   provides a failure notification through the ChannelStatus message.  
>   This message may be used to indicate that a single data channel has 
>   failed, multiple data channels have failed, or an entire TE link has 
>   failed.
>[Insert: A ChannelStatus message is sent to the downstream node indicating
>that the failure has been detected upstream and that the alarm should
>therefore be suppressed.]
>[Delete: Failure correlation is done locally at each node upon 
>   receipt of the failure notification.]
>[Jonathan]no change
>
<Carmine> Why was this comment not included? Is the text I proposed 
incorrect?
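
To spell out the behavior my proposed sentence describes, here is a toy 
Python sketch; the decision logic and names are mine, and only the 
ChannelStatus message itself is from the draft:

def alarms_to_raise(lol_detected, reported_upstream):
    """lol_detected: channel id -> whether we see loss of light (LOL).
    reported_upstream: channel ids already claimed by an upstream
    ChannelStatus message."""
    alarms = []
    for channel, lol in lol_detected.items():
        if lol and channel not in reported_upstream:
            alarms.append(channel)  # new/local failure: raise the alarm
        # otherwise the failure was detected upstream, so suppress it
    return alarms

# Channels 3 and 5 lose light, but upstream already reported channel 3:
print(alarms_to_raise({3: True, 5: True, 8: False}, {3}))  # -> [5]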

>
>
>[Insert: 6.2.2 Localization of a fault to a link for protection/restoration]
>[Jonathan]see comment above 
>    
>[Delete: As part of the fault localization,]
>[Insert: To localize a fault to a particular link between adjacent OXCs,]
>[Jonathan]change made.
>   a downstream node (downstream in 
>   terms of data flow) that detects data link failures will send a 
>   ChannelStatus message to its upstream neighbor indicating that a 
>   failure has occurred (bundling together the notification of all of 
>   the failed data links).
>
><snip>
>
>Once the failure has been localized, the signaling 
>   protocols can be used to initiate span or path 
>   protection/restoration procedures.
>[Insert: Note that the fault localization scheme supported in LMP localizes
>faults on a link and does not address node failures. Therefore, additional 
>mechanisms are needed to detect node failures for end-to-end path
>protection/restoration.]  
>[Jonathan]no change.
>
<Carmine> Is the statement I proposed incorrect?
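
To illustrate why I think the note is needed: as I read it, the 
ChannelStatus exchange only ever narrows a fault down to a link between 
adjacent nodes, as in this toy Python walk-through (node failures are 
deliberately outside the model):

def localize(fault_after, num_nodes):
    """Linear chain; link i connects node i to node i+1. Returns the
    link that the exchange localizes the fault to."""
    # Every node downstream of the break sees LOL on its input side.
    sees_lol = [i > fault_after for i in range(num_nodes)]
    for node in range(num_nodes - 1, 0, -1):
        if sees_lol[node]:
            # Send ChannelStatus upstream; the upstream neighbor checks
            # its own input side.
            if not sees_lol[node - 1]:
                return node - 1  # upstream is clean: fault is on this link
    return None

print(localize(fault_after=2, num_nodes=6))  # -> 2, the failed link

A failed node would look just like a failed link to its neighbors, which 
is why I believe additional mechanisms are needed for end-to-end path 
protection/restoration.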

>
>
>[Insert: 6.2.2.1 Examples of Fault Localization]
>[Delete: 6.3. Examples of Fault Localization] 
>[Jonathan]see comment above
> 
>
>-----Original Message-----
>From: Carmine Daloia [mailto:daloia@lucent.com]
>Sent: Monday, November 26, 2001 7:43 AM
>To: Sudheer Dharanikota
>Cc: Jonathan Lang; ccamp@ops.ietf.org; tsg15q11@itu.int; t1x15@t1.org
>Subject: Re: [T1X1.5] Re: Suppression of Downstream Alarms...
>
>
>Hi Sudheer, Jonathan, and all,
>
>Attached is the LMP draft with my proposed text inserted. I inserted
>proposed text on the suppression of downstream alarms, as well as in some
>other sections.
>
>Please let me know what you think of the proposed text.
>The notation I used is as follows:
>[Insert: .... new proposed text....]
>[Delete: .... text that I propose to be deleted...]
>
>Thanks
>Carmine
>
>Sudheer Dharanikota wrote:
>
>Carmine Daloia wrote:
>
>Hi Sudheer,
>
>My last comment wasn't meant to be an argument :-) Maybe if I propose
>some text specific to the suppression of downstream alarms, that would
>help clarify the scope of applicability. The text would state that when
>client devices (e.g., SONET/SDH cross-connects, IP routers) are
>interconnected via a standard OTN network, then the suppression of
>downstream alarms is already handled in the transport/user plane via the
>OTN overhead (both in-band via the digital overhead as well as
>out-of-band via non-associated overhead). Also, the text would address
>PXCs within a standard OTN network. In this case, again the suppression
>of downstream alarms is handled via the OTN overhead.
>
>The implementation proposed in LMP for suppression of downstream alarms
>applies to PXCs or client devices (e.g., SONET/SDH cross-connects or IP
>routers) interconnected via a non-standard DWDM network. In this case,
>it is assumed that the non-standard DWDM network does not provide the
>necessary overhead within the transport/user plane to suppress alarms on
>PXCs and client devices, and therefore LMP provides a mechanism to carry
>such alarm suppression messages in the control plane.
>
>I'll take a crack at specific text so that the group can review it. Does
>this sound like something that would be helpful?
>
>Sure.
>
>Regards,
>sudheer
>