Re: [idn] Figuring out what can be displayed
--On Wednesday, 11 September, 2002 00:36 +0000 "Adam M.
Costello" <idn.amc+0@nicemice.net.RemoveThisWord> wrote:
> John C Klensin <klensin@jck.com> wrote:
>
>> Yes, there are grounds for ignoring a "SHOULD". That is why
>> we call it "SHOULD" and not "MUST". But we don't describe
>> things that violate SHOULDs with phrases like "fully
>> conforming", regardless of the cause
>
> Okay, but I don't see why that's a problem. If the spec says
> you SHOULD do something and it's simply impossible to do it,
> then you do the best you can, you don't conclude that you
> can't implement the spec. On the other hand, if the spec had
> said MUST, then you would have to conclude that you can't
> implement the spec.
No one is asking for MUST here. I'm arguing that the spec can't
make the statement it makes about selection criteria and still
use SHOULD.
> That said, I don't really care whether the spec says "SHOULD"
> and "SHOULD NOT", or "encouraged" and "discouraged", when
> talking about choosing which form of label to display. That
> would be a technical change that I don't see the need for, but
> you can try convincing the other authors and the area
> directors.
This may be the root of several differences of opinion about the
document. I tend to take the notion of a standards-track
document seriously, and that includes being able to understand
when something does, or does not, conform. To the extent that
the specification has user-visible implications, that also means
that the behavior, as seen by the user, should be predictable (or
the areas in which it is not should be clearly delineated).
Of course, your mileage may differ.
>> please identify at least two operating systems in common use
>> today in which an application can accurately determine that
>> it can display a particular, arbitrarily-chosen, Unicode
>> character.
>
> This is not my area of expertise, but after poking around on
> the web, I found two such platforms:
>
> In Java, Font.canDisplayUpTo(String) will tell you whether a
> given font can display a given string.
>
> In MacOS X, NSFont.coveredCharacterSet() will return the set
> of Unicode characters that a given font can display.
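(For concreteness, the Java check described above amounts to
something like the sketch below; the font name and the sample
label are arbitrary assumptions on my part, and canDisplayUpTo()
returns -1 only when that one font covers the entire string.)

    import java.awt.Font;

    public class LabelCoverageCheck {
        public static void main(String[] args) {
            // An example internationalized label; any Unicode string would do.
            String label = "b\u00FCcher";

            // "Dialog" is a logical font name that always exists in Java,
            // but it is only one font, not the whole display pipeline.
            Font font = new Font("Dialog", Font.PLAIN, 12);

            // canDisplayUpTo() returns -1 if this font can display the
            // whole string, otherwise the index of the first character
            // it cannot display.
            int firstUndisplayable = font.canDisplayUpTo(label);
            if (firstUndisplayable == -1) {
                System.out.println("This font covers the whole label.");
            } else {
                System.out.println("Undisplayable character at index "
                                   + firstUndisplayable);
            }
        }
    }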
I don't think Java is an OS, but let's not quibble. In both
cases, if I understand your description, what these functions are
providing is information about whether a given string can be
displayed given a font and a device capable of rendering that
font. But the first of those, which font will actually be used,
is only vaguely knowable (again, think about the cases in which
program display output may be piped, as a string, to another
application). And the second, whether such a device is present at
the point of display, is in the general case not knowable at all.
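(Even the strongest form of that check, asking every installed
font whether it covers a given character as sketched below, only
answers the font half of the question; it says nothing about what
device, if any, eventually renders the string. The sample
character is an arbitrary assumption.)

    import java.awt.Font;
    import java.awt.GraphicsEnvironment;

    public class AnyFontCoverage {
        public static void main(String[] args) {
            // An arbitrary character to test: U+0431, CYRILLIC SMALL LETTER BE.
            char sample = '\u0431';

            // Ask every font the local Java runtime knows about.
            Font[] fonts = GraphicsEnvironment.getLocalGraphicsEnvironment()
                                              .getAllFonts();
            boolean covered = false;
            for (Font f : fonts) {
                if (f.canDisplay(sample)) {
                    covered = true;
                    break;
                }
            }

            // Even "true" here only means some installed font has a glyph;
            // it says nothing about the font actually selected for output,
            // or about output piped, as a string, to another program.
            System.out.println("Some installed font covers U+0431: " + covered);
        }
    }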
And that doesn't even begin to address the cases in which the
"native" CCS (coded character set) used on the local operating
system is different from Unicode and the local display fonts are
bound to that native CCS.
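(On such a system the problem starts even before fonts: the
string may not survive conversion into the local coded character
set at all. A rough Java sketch of that preliminary check, with
US-ASCII standing in for the native CCS purely as an assumption:)

    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;

    public class NativeCcsCheck {
        public static void main(String[] args) {
            String label = "b\u00FCcher";   // same example label as above

            // US-ASCII stands in for whatever the local "native" CCS is;
            // a real check would use the platform's default charset.
            CharsetEncoder encoder = Charset.forName("US-ASCII").newEncoder();

            // canEncode() reports whether every character in the string can
            // be represented in that coded character set at all, which is a
            // separate question from whether any font can then draw it.
            if (encoder.canEncode(label)) {
                System.out.println("Label is representable in the local CCS.");
            } else {
                System.out.println("Label cannot even be encoded locally.");
            }
        }
    }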
Please note that, in this case, I'm not arguing that there is
anything wrong with IDNA conceptually. I am suggesting that the
text of the specification is somewhere between misleading and
badly broken _as a document being considered for
standardization_. If we can give application writers a
specification about what they are expected to do, one that
testers can test and evaluate against, then we should certainly
do so. If we cannot, and are instead expressing a wish about
what will happen, with the actual choice depending on the risks
an implementor is willing to incur, then the specification
language should not be stronger than "MAY" and we probably need
some additional "security considerations" notes.
regards,
john