
Re: Process for Evaluating Solutions (RE: [Solutions] Document Process Improvement WG)



Some worrying follows..... I'm "taking the role of raising worries" here, not saying "this is the wrong approach".....

--On Thursday, June 19, 2003 12:13:41 -0400 Margaret Wasserman <mrw@windriver.com> wrote:

In the current version, I have offered a high-level
description of a methodology that the WG could use to
prioritize, implement and evaluate change proposals.
This description will be removed from the next version
of the process document (too solution-oriented), but I
plan to present it as a proposed process during the
COACH BOF (with a bit more detail, perhaps).

Here is the text:

         - Identify and prioritize a set of promising proposals for
           improvement.
         - Figure out what each proposal is trying to improve (in
           measurable terms) and define a metric to measure performance
           in that area.
This could be a "required part of a proposal". But it's likely to be a very difficult one to get agreement and consistent measurement on: some metrics (like "quality") are very hard to measure objectively, or could easily absorb more competent manpower than the change itself; other metrics (like "industry uptake") can't be measured until months or years after the time we want to make the judgment.
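As a strawman, here is the kind of metric that *can* be computed mechanically (a Python sketch; the documents and dates are invented for illustration, and a real run would pull them from the I-D and RFC indexes):

    # One example of a mechanically computable metric: median days from
    # first WG draft to RFC publication. All data below is invented.
    from datetime import date
    from statistics import median

    # (first_draft_date, rfc_publication_date) for a hypothetical WG
    documents = [
        (date(2001, 3, 1), date(2003, 2, 15)),
        (date(2001, 9, 10), date(2003, 6, 1)),
        (date(2002, 1, 20), date(2003, 5, 30)),
    ]

    days_to_rfc = [(done - started).days for started, done in documents]
    print("median days from first draft to RFC:", median(days_to_rfc))

Contrast that with "quality", where we would first have to argue about what to count at all.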

         - Determine the current level of performance against the
           defined metric.
         - Institute each change in a few representative WGs (on a
           volunteer basis).
Worry: we have few "representative WGs". And the WGs that volunteer will tend to have chairs with energy, openness and a willingness to try things - exactly the type of chair that is likely to run a successful working group anyway.

         - Measure the results to determine if each change was
           successful.
Worry: what's our control group? (see above for self-selection of volunteers)
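To make the self-selection worry concrete, here is a toy simulation (Python; all numbers are invented): if WGs whose chairs have the energy to volunteer tend to improve anyway, a before/after comparison on volunteers alone overstates the change's true effect.

    # Toy model of self-selection bias. All numbers are invented.
    import random

    random.seed(1)
    TRUE_EFFECT = 1.0   # what the process change actually contributes

    def wg_delta(chair_energy):
        volunteers = chair_energy > 0.6      # energetic chairs volunteer
        before = random.gauss(5.0, 1.0)      # baseline WG performance
        improvement = chair_energy * 2.0     # energetic chairs improve anyway
        after = before + improvement + (TRUE_EFFECT if volunteers else 0.0)
        return volunteers, after - before

    results = [wg_delta(random.random()) for _ in range(10000)]
    volunteer_deltas = [d for v, d in results if v]
    print("apparent effect on volunteers: %.2f"
          % (sum(volunteer_deltas) / len(volunteer_deltas)))
    print("true effect of the change:     %.2f" % TRUE_EFFECT)

Without a comparable control group, the apparent number is the one we would end up reporting.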

         - Make successful changes available IETF-wide, by publishing
           them in BCP RFCs.
         - As necessary, train WG chairs and other participants on
           how to implement the successful improvements in their WGs.
         - Repeat as necessary.

This process is based on widely accepted practices for software
engineering process improvement.

Do people believe that this type of iterative, controlled process is
the right approach to use for IETF WG quality process improvements?
I don't know if we have enough control to do it that way....

One important point is the timescales we operate at. If we have a 3-month interval from start of experiment to completed evaluation, we're relatively unlikely to get big external changes influencing our "numbers". If we have a 2-year loop time, things like a general market upturn or downturn, or other changes to the IETF context, could easily swamp the effects we want to measure - and deciding "this didn't work, try the next one" 2 years down the road is not an appealing prospect anyway.

But measuring 2-year projects on a 3-month timescale takes art....
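To put rough numbers on the timescale worry (pure arithmetic; all values are invented): a modest external drift per month barely moves a 3-month measurement, but over a 24-month loop it can swamp the effect we are trying to isolate.

    # Toy arithmetic for the timescale worry. All values are invented.
    EFFECT = 2.0   # genuine improvement from the change (arbitrary units)
    DRIFT = 0.5    # external change per month (market, IETF context, ...)

    for months in (3, 24):
        external = DRIFT * months
        measured = EFFECT + external
        print("%2d months: measured delta %5.1f, of which external %5.1f"
              % (months, measured, external))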