
Re: Working group last call: draft-ietf-ccamp-lsp-dppm-05



Hi Lou,

- As always, please cleanup any idnits identified issues.
For the nits, we get a warning about the lack of a disclaimer.
Hopefully this can be resolved by using a more recent ID-generating tool.
We will do it.

- In general it seems to me that for the performance metrics to be
  complete and fully useful a metric that quantifies data path
  availability relative to connection request and establishment is
  required.  In section 15 you do say:
  But I really don't think this is sufficient, and feel that data path
  metric parameters and related methodologies need to be added to
  each of the metrics/test cases. If you'd like to indicate that
  these are OPTIONAL metric parameters, that is fine with me.
  But, IMO, the document is incomplete without them.  I have
  heard from others that they agree with this point too.  (I can
  try to get them to speak up publicly if this turns out to be
  necessary.)

Yes, this has been a major point of controversy since the publication of
this I-D. As the authors, we have seriously considered introducing such
metrics and methodologies into this document, but given the wide variety of
interfaces that GMPLS will control, it is difficult to integrate everything
into one document. Members of the author team have recently been discussing
ways to address this issue in another document, targeted at either CCAMP or
BMWG. This can be a technically reasonable approach: unlike the signaling
delay, which changes as traffic load and network topology change, the data
plane metric as you put it, or more precisely the delay between the
completion of the signaling process and the completion of the cross-connect
programming, is more persistent and not sensitive to traffic load or
network topology. It can be benchmarked in a lab environment, and the value
is stable for a given implementation. Because of this, I personally believe
that separating it out will not sacrifice the integrity of the current I-D.
Of course, we are always open to further suggestions from GMPLS and
benchmarking experts on other approaches.
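To illustrate, the data-path delay described above could be computed as the
gap between two timestamps. This is only a sketch under assumed names
(LspSetupRecord and its fields are hypothetical, not from the draft):

```python
from dataclasses import dataclass

@dataclass
class LspSetupRecord:
    """Hypothetical timestamps (seconds) for one LSP setup attempt."""
    signaling_complete: float   # e.g. when the final signaling message arrives
    crossconnect_ready: float   # when the data plane is verified to carry traffic

def data_path_delay(record: LspSetupRecord) -> float:
    """Delay between signaling completion and a usable data path."""
    return record.crossconnect_ready - record.signaling_complete

# Example: signaling finished at t=1.200 s, data path ready at t=1.450 s.
rec = LspSetupRecord(signaling_complete=1.200, crossconnect_ready=1.450)
print(round(data_path_delay(rec), 3))  # 0.25
```

Because this delay is dominated by the implementation's cross-connect
programming rather than by traffic load, repeated lab runs should yield a
stable value per implementation, which is what makes it suitable for
benchmarking in a separate document.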

We hope you and others will be happy with this reply. :)


- In the tests you treat all failures as an "undefined" result.  It
  seems to me that it would be more useful to identify and track
  (i.e., count) specific types of errors rather than just define
  all errors as "undefined" metrics.

Very good point, thanks! We will add metrics to track the different types
of errors.
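Tracking per-type error counts could look something like the following
sketch (the category names are hypothetical, not taken from the draft):

```python
from collections import Counter

# Hypothetical outcomes of repeated LSP setup attempts; the draft currently
# folds every failure into a single "undefined" result.
results = [
    "success", "path_err", "timeout", "success", "resv_err", "timeout",
]

# Count each distinct failure type instead of discarding the distinction.
error_counts = Counter(r for r in results if r != "success")
print(dict(error_counts))  # {'path_err': 1, 'timeout': 2, 'resv_err': 1}
```

Reporting such counters alongside the existing metrics would let readers
see which failure modes dominate a given test run.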


- The document should be run through a spell checker, there are
  a number of spelling errors.

Thanks, we will do this.

Please note that most of our authors are on a national 3-day holiday.
There may be more responses coming after that.

Cheers,
Weiqiang