
Re: draft-jones-opsec-01.txt comments



Sorry I didn't get around to replying to this sooner.

> >  I think restricting
> > based on which interface is intended to be internal would be more
> > effective.
>
> This is intended to keep joe-hacker-who-is-23-hops-away from being
> able to attempt remote management.  @ 23 hops, he's likely coming
> from the external interface.
>
> How would you use this on internal interfaces ?
>
> I think this may accomplish what you want:
>
>    2.5.3   Ability to Control Service Bindings for Listening Services
>
> possibly in combination with
>
>    2.5.2   Ability to Disable Any and All Services

Yeah, I think the optional provision of controlling bindings by
interface is a useful feature.

I'm having a bit of trouble envisioning a topology where I'd really be
confident that a TTL-based approach is the right thing, and that the
problem couldn't be better solved by controlling bindings.
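
To make it concrete, here's a minimal sketch (Python, just for
illustration) of what controlling service bindings buys you; the
192.0.2.1 management address is an assumption on my part, not anything
from the draft:

    import socket

    # Assumed: 192.0.2.1 is the address of the management-only
    # interface; binding to 0.0.0.0 would listen on every interface,
    # including the data-forwarding ones.
    MGMT_ADDR = "192.0.2.1"
    SSH_PORT = 22

    # Because the listener is bound to the management address rather
    # than INADDR_ANY, connection attempts arriving via the data
    # interfaces never reach the service at all; no TTL heuristics
    # needed.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((MGMT_ADDR, SSH_PORT))
    listener.listen(5)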

> > A warning that the ``internal'' interface may not be secure if it is a
> > wireless interface might also be worthwhile.
>
> It might also not be secure if it's in a college dorm :-)

I was thinking of the case of a student who owns multiple computers
and a NAT box, and plugs the ``external'' interface into the college
network, and plugs his/her own computers into the ``internal''
interface.  In such a setting, I think we can pretend the computers on
the internal interface are secure (maybe they aren't if they're
running certain email clients etc, but shrug).

> The working assumption is that the operator can secure the management
> device that is directly connected.  Should that be stated ?
> The security of the devices that are connected is beyond the scope.

I'm not entirely clear on what this paragraph means.

It's certainly the case that every Cisco router I've ever seen has the
property that you really don't want to plug its console port into
something which is insecure, but if we're discussing TTLs, we're
discussing IP.

I don't have enough experience with IP based management interfaces
which are separate from the routing interfaces to know what sort of
topologies are common there, if that's what you think we're talking
about here.  If that sort of usage is at all common, an RFC talking
about recommended practices for or experiences with that would be
interesting to see, I think.

> > Under ``2.2.2 Use Strong Encryption'' it may be worth discussing the
> > importance of a strong MAC and a strong block chaining mode as well as
> > a strong cipher.
>
> Would you care to help with the wording/reqs ?

I was kind of hoping to get away with handwaving there.  8-)

But sure, I'm willing to try to work on this, though it's not
immediately obvious what final text to propose.

Is ANSI.T1.276-200x publicly available at no charge?  A google search
for ANSI.T1.276-200x found me your draft on port111.com, and nothing else.

In looking at the text again just now, I'm not really convinced that
my original criticism of the text was entirely valid: when one talks
about encryption algorithms, as the draft does, the first definition
that comes to my mind are symmetric ciphers like 3DES, AES, RC4, etc.
But the algorithm used to do initial key exchange is also in some
sense a cryptographic algorithm, for example, and if you define
``encryption algorithm'' sufficiently broadly, the text might well be
correct.  (It probably still could be made less ambiguous, though.)

The focus should not be on key lengths, though.  The easiest way to
defeat cryptography (assuming there aren't known buffer overflows,
which is even easier when it happens to work) tends to be to attack
the initial part of the session, when the client and server
authenticate themselves to each other.  For example, when using the
current Kerberos v5 protocol with passwords, an attacker effectively
needs to crack only about a 30 bit key, even if you're using 256 bit
AES to encrypt the later parts of the session.  With the sshv2
protocol, if password auth is being used, all you need to do is cons
up a new host public key, offer it to the user, and hope they accept
it, at which point they will send you the password effectively in the
clear.  These attacks turn out to be even easier than cracking 56 bit
DES, in most cases.
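
Some rough back-of-envelope arithmetic, if it helps; the alphabet size
and length below are illustrative assumptions about a typical
human-chosen password, not figures from any spec:

    import math

    def max_entropy_bits(alphabet_size: int, length: int) -> float:
        """Entropy of a password chosen uniformly at random."""
        return length * math.log2(alphabet_size)

    print(max_entropy_bits(36, 8))   # ~41.4 bits for 8 chars of [a-z0-9]
    # Real passwords are nowhere near uniform; dictionary-style attacks
    # typically cut the effective search space to something like 2^30
    # candidates, which is the "30 bit key" figure above.
    print(f"{2**30:.2e} guesses")    # ~1.07e+09, easily searched offline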

Maybe we want language saying that devices need to support
well-understood cryptographic protocols such as sshv2, TLS, and/or
IPsec, and let the documents of those working groups speak to the
strength of the encryption algorithms.

But I'm not quite sure.  These are ideas that are striking me as
possibly worthwhile as I'm typing them.

> > Under ``2.2.3 Key Management Must Be Scalable'': one of the obvious
> > things to consider is the use of Kerberos, but the warnings should
> > probably explicitly say something about how in managing a network
> > device, you may need to log in to fix it
>
> Period.   I think 2.12.7 hits this (fallback to local if configured)

OK.

> > s connectivity to a key
> > server, and the importance of being able to use PKI in a way that
> > doesn't depend on being able to talk to any external server once the
> > key is configured.  (A dependence on OCSP should be warned against.)
> >
> > There should also be discussion of how you revoke a key on N devices
> > when none of the devices check with a central server every time the
> > key is used.
>
> Right.  Hard problems.  This has been moved to the INFO doc.  If you'd
> care to edit/discuss, I'll take contributions.

I wonder if it would be useful to define two levels of management
access: access for a handful of people who maintain the authentication
infrastructure, and some of the core functionality of the network
devices, and then a second class of access for people who only need
access when things are mostly working.

For example, if an NSP has a router with 100 customer circuits on it,
and a couple of circuits going to other routers the NSP has, the
technicians who troubleshoot customer circuits can reasonably use
Kerberos, or can use some sort of X.509 based system that does online
verification that there's no revocation certificate out there.  The
people responsible for keeping that router connected to the NSP's
backbone are probably going to need to know some kind of password that
they can use to log in on the console port.

One other interesting question is whether you could build a protocol
for automatic updates of that password.  (Note that this may be a case
where you really do want to share a password between lots of people,
so that if the update breaks, you will actually notice; if you have an
account per user, and a disgruntled employee leaves and the protocol
only deletes his account on half the routers, is anyone ever really
going to notice?  My experience has been that anything that depends on
people checking logs and noticing that something is wrong will go
unnoticed sooner or later.)
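
Just to sketch what I mean (hypothetical code; push_password() stands
in for whatever transport, ssh, a console server, etc., a real tool
would use):

    from typing import Callable

    def rotate_shared_password(routers: list[str],
                               new_password: str,
                               push_password: Callable[[str, str], bool]) -> None:
        failed = [r for r in routers if not push_password(r, new_password)]
        if failed:
            # With one shared password, the next person who logs in to
            # one of these routers notices immediately that the old
            # password still works, unlike a stale per-user account
            # that silently lingers.
            raise RuntimeError("update failed on: " + ", ".join(failed))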

> > Does ``2.3.4 No Forwarding Between Management and Data Planes''
> > prohibit a command on the console port from initiating an outbound
> > telnet session to the data forwarding interface?
>
> The intent here is primarily to make it impossible for management
> to be attempted from the public/data forwarding side.   If you
> can't get the packets there, you can't attempt management.

OK.  I think being more explicit about that in the draft would be
useful.

> > For the ``separate IP stacks'' option, what counts as ``separate''?
> > If they run on the same processor type and are compiled from the same
> > code base, are they separate?
>
> Yes.
>
> You can crash/compromise one, but you will not crash/compromise the
> other because a) you can't get there from here and b) they are not
> sharing resources (memory, processor, etc.)

OK.  So the idea then is that the management network is a
well-isolated network, and that there won't be attackers on the
management network.

Does that actually work in the real world?  Can people be trusted to
not accidentally plug the management network into the real network?

(I'm reminded of Radia Perlman's story of the first time an ethernet
bridge was deployed, and it didn't work.  It turns out that its two
ports were plugged into the same network, and because of Spanning Tree
Protocol, this meant that nothing happened.  While Radia sees the
moral of this story as demonstrating the value of STP, there's also
the observation that people will plug the wrong network into the wrong
network from time to time.  And I wonder whether, if you use a
separate range of IP addresses, you'll immediately notice the
insecurity that you create there.)

> Do you think that needs to be spelled out more clearly ?

I'm not sure.

> > ``2.5.5 Support Automatic Anti-spoofing for Single-Homed Networks''
> > should clearly state whether/how it applies to NAT.
>
> Suggestions ?

Maybe it should say that if you're doing NAT, you MUST default to
blocking all packets that don't have a source address that gets
translated by the NAT.

Of course, that default for the NAT case is a more strict requirement
than what is recommended in the non-NAT case, but I don't think that
necessarily makes it incorrect.  On the other hand, I'm thinking of
consumer-grade devices here, and it may be that these days cable modem
and DSL providers already provide enough anti-spoofing that it doesn't
matter whether a consumer-grade NAT box also does anti-spoofing.
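
Something like this check is all I have in mind; the 192.168.1.0/24
internal prefix is just an assumed example:

    import ipaddress

    # Sketch of the proposed default for a NAT box: only forward packets
    # whose source address is one the NAT would actually translate.
    INTERNAL_PREFIX = ipaddress.ip_network("192.168.1.0/24")

    def permit_outbound(src_addr: str) -> bool:
        return ipaddress.ip_address(src_addr) in INTERNAL_PREFIX

    assert permit_outbound("192.168.1.23")   # translated, so forwarded
    assert not permit_outbound("10.0.0.5")   # spoofed source, so dropped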

> > The wording of ``2.7.7 Ability to Filter Without Performance
> > Degradation'' seems to imply that if you have a SOHO NAT box that does
> > forwarding and filtering on a microprocessor needs to delibrately
> > waste cycles when it's not filtering if it can't keep up with a full
> > 10mb worth of traffic when not filtering.
>
> ???

Sorry, I meant to say that that section seems to imply that if you
have a SOHO NAT box that does forwarding and filtering on a
microprocessor, it needs to deliberately waste cycles when it's not
filtering if it can't keep up with a full 10Mb/s worth of traffic when
filtering.

> >  I think the more
> > interesting thing to focus on may be that if forwarding is done by an
> > ASIC, that ASIC needs to support filtering.  (I think the Cisco 2500
> > and 2600 series routers are examples of routers that lose a *lot* of
> > performance in certain filtering cases, because they switch from ASIC
> > to microprocessor based filtering, but I may be a bit confused about
> > that, and I haven't been paying much attention to Cisco's product line
> > in the last couple years.)
>
> ASICs are implementation.   The requirement is fast filtering.
> It has moved to the info doc, but I could be convinced, based
> on Dave Newman's article/evaluations, to move it back to BCP.

Yes.  You saying that makes me wonder if requiring that the same
implementation be used both with and without the filtering is the
right requirement.

> > The justification under ``2.9.1 Ability to Accurately Count Filter
> > Hits'' seems bogus.  Things that automatically retry will be seen as a
> > greater threat, if you go by packet count, even if the following
> > packets don't differ in any interesting way.  Some of the subtle
> > things that nmap can try to do make it very hard to get reliable
> > information out of this.
>
> The requirement is accurate counting of the matching traffic presented.
> The requirement is not AI that does attack analysis.   We still need
> humans for that.

I think the warnings should say that.

> > ``2.11.6 Ability to Maintain Accurate System Time'' should talk about
> > spoofed timeservers
>
> Wording ?

Maybe something along the lines of ``If network time synchronization
is used, an attacker may be able to manipulate the clock unless
cryptographic authentication is used.''  Or perhaps there should be a
requirement to support authenticated network time.

The problem, though, is that as far as I can tell, authenticated
network time isn't always available.  (This is why I wrote an
internet-draft that tries to explore exactly where Kerberos depends on
secure network time, and why I have advocated making Kerberos less
dependent on time of day; I don't think mumbling ``cryptographically
authenticated time synchronization'' and pretending that this will
magically solve all of the real-world issues is viable given the
current state of available time servers.)

If you limit how much you allow your clock to change on the basis of
data you get from NTP, and don't allow the clock to be set backwards,
and don't allow large forward leaps, and have a fairly accurate clock,
the amount of trouble an attacker can cause is fairly small.

If you care about accurate time for logs, it may be that you really
need a hardware clock that's designed to be accurate enough that you
don't need to rely on network time sync much, so that you can
configure the device to drop time sync if there's any significant
drift (maybe more than 1%?).
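
Roughly this sort of policy is what I'm picturing, where the 1% drift
limit and the forward-step cap are illustrative numbers, not proposed
requirements:

    MAX_FORWARD_STEP = 300.0     # seconds: largest forward jump accepted
    MAX_DRIFT_FRACTION = 0.01    # ~1% of time since the last good sync

    def clamp_adjustment(offset: float, since_last_sync: float) -> float:
        """Return the correction (seconds) actually applied to the clock."""
        if offset < 0:
            return 0.0                        # never step the clock backwards
        if offset > MAX_DRIFT_FRACTION * since_last_sync:
            return 0.0                        # implies too much drift: drop sync
        return min(offset, MAX_FORWARD_STEP)  # bounded forward correction

    print(clamp_adjustment(2.0, 3600.0))     # small correction, applied: 2.0
    print(clamp_adjustment(120.0, 3600.0))   # >1% of an hour, rejected: 0.0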

A requirement that the allowed drift be configurable and default to
something strict might also be desirable, but that seems like it
belongs in the future-directions doc and not the BCP draft.

> > and cryptographic authentication.
>
> See
>
>   2.1.1 Support Secure Management Channels

Oh, OK.  It is at least *there* if I read it carefully enough.

I had been assuming that ``management channels'' meant ``channels to
enter configuration data''.  Apparently it also means logging and time
sync.

Apparently a device which allows me to run insecure NTP over a
management-only IP network technically qualifies, even if that device
also has a serial console port, and I've built my network to use
serial console ports.

I'm realizing that because there are multiple options in 2.1.1, it's
easy in general to get devices which technically qualify but might not
fit easily and well into a particular topology.  I'd be tempted to
argue that devices should be required to support doing everything
in-band if so configured.  It's hard to do logging and time sync well
across a traditional serial console port.  It might make sense to say
that if a device has a management IP port, that it MUST support
logging and time sync on that interface.

I also wonder if there should be a requirement that devices that can
support more than two IP interfaces MUST or SHOULD allow any interface
to be configured as a management-only interface.  Though maybe if you
can turn forwarding off on that interface and bind services to
specific interfaces, that's kind of sufficient for that or something.
But again, I wonder if anyone really uses independent IP management
networks.