In the current version, I have offered a high-level
description of a methodology that the WG could use to
prioritize, implement and evaluate change proposals.
This description will be removed from the next version
of the process document (too solution-oriented), but I
plan to present it as a proposed process during the
COACH BOF (with a bit more detail, perhaps).
Here is the text:
- Identify and prioritize a set of promising proposals for
improvement.
- Figure out what each proposal is trying to improve (in
measurable terms) and define a metric to measure performance
in that area.
  [Defining a metric could be a required part of a proposal. But
  it's likely to be a very difficult one to get agreement and
  consistent measurement on: some metrics (like "quality") are very
  hard to measure objectively, or could easily absorb more competent
  manpower than the change itself; other metrics (like "industry
  uptake") can't be measured until months or years after the time
  we want to make the judgment.]
- Make successful changes available IETF-wide, by publishing
them in BCP RFCs.
- As necessary, train WG chairs and other participants on
how to implement the successful improvements in their WGs.
- Repeat as necessary.
This process is based on widely accepted practices for software
engineering process improvement.
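To make the loop concrete, here is a toy sketch in Python of how the
measure-and-decide step might be modeled. Everything in it (the
Proposal record, the metric names, the numbers) is a hypothetical
illustration of the idea, not actual IETF data, process, or tooling:

    # A toy model of the iterative improvement loop sketched above.
    # All names, fields, and numbers are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Proposal:
        name: str        # the change being proposed
        target: str      # what it tries to improve, in measurable terms
        baseline: float  # metric value before the change
        measured: float  # metric value after piloting the change

    def evaluate(proposals: list[Proposal]) -> None:
        # Prioritize by measured gain; a real process would also weigh
        # cost, risk, and how consistently the metric can be measured.
        ranked = sorted(proposals, key=lambda p: p.measured - p.baseline,
                        reverse=True)
        for p in ranked:
            if p.measured > p.baseline:
                # Successful change: publish IETF-wide, train WG chairs.
                print(f"{p.name}: {p.target} improved "
                      f"{p.baseline} -> {p.measured}; publish as BCP")
            else:
                # No measurable gain: revise or drop, then repeat.
                print(f"{p.name}: no measurable gain; revise or drop")

    evaluate([
        Proposal("early-review", "doc quality at WG last call", 0.6, 0.8),
        Proposal("shorter-charters", "time to first RFC", 0.5, 0.5),
    ])

The only point of the sketch is that each proposal carries its own
metric and baseline, so "successful" gets decided by measurement
rather than by debate.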
Do people believe that this type of iterative, controlled process is
the right approach to use for IETF WG quality process improvements?
I don't know if we have enough control to do it that way....