
RE: Process for Evaluating Solutions (RE: [Solutions] Document Process Improvement WG)



Harald,

Some comments:

> > Here is the text:
> >
> >>          - Identify and prioritize a set of promising proposals for
> >>            improvement.
> >>          - Figure out what each proposal is trying to improve (in
> >>            measurable terms) and define a metric to measure performance
> >>            in that area.
> 
> this could be a "required part of a proposal". But it's likely to be a very 
> difficult one to get agreement and consistent measurement on - some metrics 
> (like "quality") are very hard to measure objectively, or could easily 
> absorb more competent manpower than the change itself; other metrics (like 
> "industry uptake") can't be measured until months to years after the time 
> we want to make the judgment.

Agreed - however, each proposal should contain text on how to know if it
is working or not.

> >>          - Determine the current level of performance against the
> >>            defined metric.
> >>          - Institute each change in a few representative WGs (on a
> >>            volunteer basis).
> 
> worry: we have few "representative WGs". And the WGs that are likely to 
> volunteer are likely to have chairs that have energy, openness and 
> willingness to try things - exactly the type of chair that is likely to run 
> a successful working group anyway.

Still, if it improves the output, I see no harm in it.

> one important point is the timescales we operate at; if we have a 3-month 
> interval from start of experiment to completion evaluation, we're 
> relatively unlikely to get big external changes influencing our "numbers"; 
> if we have a 2-year loop time, things like a general market upturn or 
> downturn, or other changes to the IETF context, could easily swamp the 
> effects we want to measure - and deciding "this didn't work, try the next 
> one" 2 years down the road is not an appealing prospect anyway.

Agreed, but still, informal surveys of WGs that have used issue trackers,
for example, have been unanimous that the trackers helped.  John K. may
say this is due to the Hawthorne Effect**, but if the perception is that
it is good, then it is good.  I received many personal mails about my
note on Erik Nordmark's stepping down.  All indicated some level of burn-out
and dissatisfaction with the process.  This is not good.

John

** John Klensin told me:

	(i) I worry less about true placebo effects than about
	what is known in the educational measurement community
	as the Hawthorne Effect.  Superficially, paying
	attention to some situation, and having the
	people/processes involved know that attention is being
	paid to them, produces positive effects for a while.  I
	believe that the whole history of "programming
	methodologies" has been characterized by a variation on
	that effect.  One establishes a fancy new system,
	typically with documentation or ritual requirements, and
	insists that everyone follow it.  For a while, it works
	really well, just because the rituals require that
	everyone pay careful attention.  Unfortunately, after a
	while, the rituals require less attention and become
	just more excess baggage and a source of wasted time and
	delay.  Then a new "methodology" comes along, and the
	cycle repeats.