Gosh, I haven't had a MUST, SHOULD, MAY discussion for quite a while :)
For reasons other than those Dave discusses, I believe you want to make
this one a SHOULD. The reason is that MUST is usually used to ensure
compliance; unfortunately, different implementations will have varying
mechanisms for determining when a connection has gone away, so there
will be no consistent implementation of this particular feature. That
means there is no way to identify compliant implementations, which in
turn means the MUST is not "enforceable".
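
To make that concrete, here is a rough sketch (all class and field names
invented, not from any draft) of two agents that "determine" the
connection has gone away in entirely different ways; a compliance test
has no consistent event against which to check that processing ceased:

import socket
import time

# Hypothetical agent A: only notices the loss when a write fails.
class WriteFailureAgent:
    def __init__(self, conn: socket.socket):
        self.conn = conn
        self.connected = True

    def send_reply(self, reply: bytes) -> None:
        try:
            self.conn.sendall(reply)
        except OSError:
            self.connected = False   # loss "determined" here, maybe long after the fact

# Hypothetical agent B: declares the connection gone after a private idle timeout.
class KeepaliveAgent:
    IDLE_LIMIT = 30.0                # seconds, implementation-specific

    def __init__(self, conn: socket.socket):
        self.conn = conn
        self.connected = True
        self.last_heard = time.monotonic()

    def note_traffic(self) -> None:
        self.last_heard = time.monotonic()

    def poll(self) -> None:
        if time.monotonic() - self.last_heard > self.IDLE_LIMIT:
            self.connected = False   # loss "determined" on a schedule B chooses itself
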
Regards, /gww
> -----Original Message-----
> From: owner-netconf@ops.ietf.org [mailto:owner-netconf@ops.ietf.org]
> Sent: Friday, October 01, 2004 09:44
> To: 'Eliot Lear'
> Cc: 'netconf'; 'Margaret Wasserman'
> Subject: RE: on <close-session>
>
> Hi Eliot,
>
> I haven't been following netconf closely so I may have missed some
> changes, and may make some dated assumptions. Feel free to point out
> that my assumptions are based on an outdated understanding of netconf
> design.
>
> Let me say that I'm probing around the edge cases because I want to
> understand whether this rule is really required for interoperability.
> I don't really have an opinion about this rule. As a MIB Doctor, there
> are a few things I watch for: we should avoid using MUST/SHOULD
> language where it isn't warranted, and we want to avoid developing
> crappy little rules (tm) that are unnecessary constraints on
> implementors.
>
> This rule has three requirements - cease processing, release locks,
> and release other session state. I'll focus on the "cease processing"
> requirement for now.
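>
> (As a point of reference, here is a minimal sketch of what the rule
> seems to ask of an agent; the Session structure and field names below
> are invented for illustration and are not from the draft:)
>
> # Invented data structures, purely to illustrate the three requirements.
> class Session:
>     def __init__(self, session_id):
>         self.session_id = session_id
>         self.pending_rpcs = []   # RPCs received but not yet processed
>         self.locks = set()       # datastores locked by this session
>         self.state = {}          # any other per-session state
>
> def on_connection_lost(session, global_locks):
>     # 1. cease processing: discard queued RPCs for this session
>     session.pending_rpcs.clear()
>     # 2. release locks: let other sessions lock those datastores again
>     for datastore in session.locks:
>         global_locks.pop(datastore, None)
>     session.locks.clear()
>     # 3. release all other state associated with the session
>     session.state.clear()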
>
> "cease processing" and interoperabiity:
> Once we lose the communications channel, the manager loses the ability
> to receive responses of any kind from that session, error or success.
> Whether the agent ceases processing is immaterial, right? So I
> question whether "MUST cease processing RPCs" impacts on-the-wire
> interoperability; the on-the-wire behavior isn't impacted as far as
> the manager is concerned.
>
> "cease processing" and the manager -
> A pipelining manager, with knowledge of an agent that buffers RPCs,
> MAY deliberately send RPCs to get buffered, and then close down the
> comm channel because it doesn't care what the responses are. I don't
> think this is a great design choice, but I cannot foresee all the
> factors that might lead to such a choice, and we shouldn't have CLRs
> that constrain application design unnecessarily.
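>
> (To illustrate, a minimal sketch of such a manager, assuming a raw TCP
> connection to port 830 and ignoring the SSH transport and <hello>
> exchange entirely; the function name is invented:)
>
> import socket
>
> END = "]]>]]>"   # NETCONF message delimiter
>
> def fire_and_forget(host, framed_rpcs, port=830):
>     # Pipeline every already-framed <rpc> string back to back, then
>     # close the channel without ever reading an <rpc-reply>.
>     sock = socket.create_connection((host, port))
>     try:
>         for rpc in framed_rpcs:
>             sock.sendall((rpc + END).encode())
>     finally:
>         sock.close()   # the manager does not care what the responses are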
>
> "cease processing" and the agent -
> If an agent sends responses after it has determined the comm channel
> is lost, it is knowingly wasting bandwidth. Does this mean it should
> not process the RPCs it has buffered, or merely that it shouldn't send
> a response? If the pipelining manager doesn't care about the
> responses, closing the channel might be a thoughtful optimization. If
> the use case of a manager that doesn't care about responses is real,
> maybe we should just add an optional flag to each RPC that says "don't
> bother responding". How real is this use case? Should we write a CLR
> that says this use case is illegal? The current proposed rule seems to
> be that CLR.
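>
> (Sketching the idea only: the attribute below is purely hypothetical
> and appears in no draft; it just shows how small such a flag would be:)
>
> def make_rpc(message_id, payload, want_reply=True):
>     # 'reply-requested' is an invented attribute standing in for the
>     # optional "don't bother responding" flag discussed above.
>     attr = "" if want_reply else ' reply-requested="false"'
>     return ('<rpc message-id="%s"'
>             ' xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"%s>'
>             '%s</rpc>' % (message_id, attr, payload))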
>
> Does an implementor have the responsibility to check whether the
> response channel still exists before it starts to process any buffered
> RPC? This rule seems to require that. Why is it a MUST, rather than a
> MAY or a SHOULD?
>
> Other interoperability concerns -
> It may impact interoperability if the manager cannot regain control
> using a new session because the lock exists, or the other session
> state exists, or the other processing is still ongoing. Are there real
> cases where this could be a problem, that are not already addressed by
> netconf?
>
> SNMP was designed to be able to work in an unstable network,
> specifically to stay operating well enough in a war environment to
> deliver commands, even if it cannot maintain a session. Think terrorist
> attack. If a manager can send RPCs to a device to configure a security
> mechanism, but the agent buffers the RPC, and the terrorist then
> disrupts the communications, then the security RPC that has been
> successfully delivered has to be re-delivered, possibly after
> determining through get-state that it didn't take effect the first
> time. Does requiring the discard of successfully-received RPCs make
> netconf less robust when under attack?
>
> David Harrington
> dbharrington@comcast.net
>
> -----Original Message-----
> From: Eliot Lear [mailto:lear@cisco.com]
> Sent: Friday, October 01, 2004 8:24 AM
> To: dbharrington@comcast.net
> Cc: 'netconf'; 'Margaret Wasserman'
> Subject: Re: on <close-session>
>
>
>
> David B Harrington wrote:
> > Hi Eliot,
> >
> > What is the reasoning behind the following rule?
> > - an agent MUST cease processing <RPC>s if it determines that it has
> > lost communications with the manager, and release all state
> > associated with the session, including locks.
> >
> > I view MUST as a requirement for interoperable on-the-wire behavior.
> > Is this really a MUST, or a SHOULD, or a MAY, and why?
>
> The theory goes that the reason you had the additional RPCs in the
> first place was that you are pipelining, and pipelining is an
> optimization so that you don't have to wait for a response. However,
> there is no way to report an error once the communication channel is
> gone, so why continue processing? At worst then you've lost a single
> RPC response. I would argue that it is an interoperability argument
> in as much as you've lost the ability to report state to the other
> end. Does that fail the test of MUST?
>
> I'm not religious about it, so I could be convinced the other way. In
> fact, here's an argument for the other way: suppose one RPC is
> intentionally disruptive and the next is meant to heal. The catch in
> doing it this way is that you have no guarantee of the 2nd RPC getting
> to the device before it goes boom on the 1st RPC.
>
> Eliot
>
>
>