
Re: [psamp] timeliness of export process



Timeliness also has an application dependency, and as such that 1 sec bound may
not fit all.  How about we make the dispatch delay a configurable parameter but
assign a default value of, say, 1 sec?

Regards,
Alex.
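
A minimal sketch of what Alex's suggestion could look like in an exporter configuration; the parameter name below is purely illustrative and not taken from any PSAMP draft:

    # Sketch only: an exporter configuration with a tunable dispatch delay,
    # defaulting to 1 second as suggested above. The name is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ExportConfig:
        # Longest time a report may be held before being dispatched
        # to the collector, in seconds.
        max_dispatch_delay_s: float = 1.0

    # An operator running delay-tolerant applications could relax it:
    cfg = ExportConfig(max_dispatch_delay_s=30.0)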

Peter Phaal wrote:

Keeping the packet dispatching delay to under 1 second has other benefits besides limiting buffer requirements. For many applications a 1 second time resolution is sufficient. Applications in this category would include: identifying sources associated with congestion; tracing denial of service attacks through the network; and constructing traffic matrices.

The 1 second rule in these situations eliminates the need for agent clocks to be synchronized, or for the collector and agents to maintain bi-directional communication in order to track clock offsets. The collector can simply process the samples in the order that they are received (using its own clock as a "global" time base), avoiding the complexity of buffering and reordering samples.

Low measurement latency allows the traffic monitoring system to be more responsive to real-time network events, quickly identifying sources of congestion (including ingress points for denial of service attacks). Timeliness is generally a good thing for the switches/routers performing the sampling since it minimises the amount of memory needed to buffer samples.

Peter
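A minimal sketch of the arrival-order processing Peter describes; receive_report() and handle_report() are hypothetical hooks, not part of any PSAMP specification:

    # Sketch only: process reports strictly in the order they arrive, using
    # the collector's own clock as the "global" time base, so agent clocks
    # never need to be synchronized and no reordering buffer is required.
    import time

    def run_collector(receive_report, handle_report):
        while True:
            report = receive_report()            # blocks until a report arrives
            arrival_time = time.time()           # collector-local timestamp
            handle_report(report, arrival_time)  # handled immediately, no reordering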
-----Original Message-----
From: owner-psamp@ops.ietf.org [mailto:owner-psamp@ops.ietf.org] On Behalf Of Maurizio Molina
Sent: Wednesday, July 23, 2003 5:20 AM
To: psamp
Subject: [psamp] timeliness of export process
 
Hi,
timeliness in the PSAMP export process is considered in the Framework draft in sections 3.3 and 7.4.
In particular, sec. 3.3 states:
* Timeliness: reports on selected packets MUST be made available
       to the collector quickly enough to support near real time
       applications. Specifically, any report on a packet MUST be
       dispatched within 1 second of the time of receipt of the packet
       by the measurement process.

Nick explained to me that his main reason for quantifying such a limit was that for some applications (e.g. One Way Delay estimation)
the collector may need to buffer one record corresponding to a packet "passage" through a measurement point until the
corresponding record of a passage through another measurement point arrives.
Putting a limit on the export delay is a way to bound this buffer requirement (see the sketch below).
However, Nick also told me that there was no particular "dimensioning" reasoning behind the 1 s figure. Hereafter, I attempt to give some.
My conclusion is that buffer constraints do not bring such a stringent requirement, and that therefore specifying a bound is only needed
to bring the data to the application in time to allow a timely reaction. I think that a more relaxed bound, e.g. 30 s, is enough for
all the applications I foresee. Others may have different suggestions....
Maurizio
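
A minimal sketch of the companion-record buffering mentioned above, assuming the collector keys pending reports on a packet hash; all names are hypothetical:

    # Sketch only: buffer the first report for a packet until the companion
    # report from the other measurement point arrives, then compute the
    # one-way delay estimate.
    pending = {}   # packet_hash -> (measurement_point_id, timestamp)

    def on_report(packet_hash, mp_id, timestamp):
        if packet_hash in pending:
            other_mp, other_ts = pending.pop(packet_hash)
            owd = abs(timestamp - other_ts)      # one-way delay estimate
            print(f"OWD between {other_mp} and {mp_id}: {owd:.6f} s")
        else:
            pending[packet_hash] = (mp_id, timestamp)

The longer a measurement point may hold a report before exporting it, the longer entries sit in 'pending', which is exactly the buffer requirement analysed below.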

**********************************************************************************
Assumption: the collector is built with a standard off-the-shelf PC architecture. If a single collector is not enough for a single domain,
it probably makes more sense to deploy multiple collectors, load balancing the reports towards them, rather than concentrating all the load
on a machine with dedicated hardware.
If we consider a state-of-the-art PC running Linux, and rely on libpcap to capture packets and bring them to the
collector software for processing, libpcap may be the system's bottleneck. In the tests we did, we found that beyond 5000 pk/s
libpcap may start losing a non-negligible amount of packets.
Let's say conservatively that we'll limit the load on each collector to 2000 pk/s.
Let's assume that for OWD the collector has to store (*) for each report a
32 bit hash value, a 64 bit timestamp, and 64 bits of information about the measurement point (e.g. source address + ID). Let's say that
the implementation requires two other 32 bit pointers (e.g. a doubly linked list). This all sums up to 28 bytes, but let's assume, conservatively, 100 bytes.
(*) Even if the PSAMP record contains part of the packet header/payload, a hash function can be computed immediately at the collector and only its result stored.
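
As a quick sanity check on the 28-byte figure, a sketch of the per-report state with the field sizes assumed above; the layout and names are hypothetical:

    # Sketch only: per-report state kept by the collector for OWD matching,
    # using the field sizes assumed above.
    import struct

    # I  -> 32-bit hash of the packet                     (4 bytes)
    # Q  -> 64-bit timestamp                              (8 bytes)
    # Q  -> 64-bit measurement point info (address + ID)  (8 bytes)
    # II -> two 32-bit pointers for a doubly linked list  (8 bytes)
    RECORD_FMT = "=IQQII"

    print(struct.calcsize(RECORD_FMT))   # -> 28 bytes per buffered report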

This ends up in an average buffer requirement of
B = 2000 [pk/s] *  100 [bytes] * T [s] = 200 * T [Kbytes]

where T is the average time a report must be kept until the "companion" arrives and can be processed.
Let's consider the worst case where the first MP crossed sends the report immediately, while the second holds it for a certain time T_h
before sending it (T_h is actually what section 3.3 of the FW draft speaks of...). Then
 T = T_h + T_d
where T_d is the worst case domain crossing delay. Let's assume, VERY conservatively, 5 s.
Then the buffer requirement will be

B = 1000 + 200 * T_h [Kbytes]

Even considering a PC with only 256 Mbytes of memory, and conservatively constraining the OWD application to occupy only 10% of it,
this turns into a constraint on T_h of 123 s, which is 2 orders of magnitude greater than what section 3.3 proposes. And note we made a lot
 of "worst case" assumptions... 
Therefore, there's no need to specify a requirement for the export delay as stringent as 1 s because of buffer requirements.
Anyway, I like the idea of suggesting a bound for the reporting delay, to guarantee that applications will receive data in a timely fashion.
But this bound only needs to be linked to the application requirements.
For the applications I foresee, I'd say that a bound of 30 s is more than enough, but others may have different views...