The Netflow plugin allows you to monitor the average throughput generated over a defined period by an application, a source IP or a destination IP, and to raise alerts when the defined thresholds are exceeded. Like the other plugins, it also provides data and performance graphs.
Use cases and best practices for using the plugin:
The plugin is designed to meet specific needs: it presents several fields to be filled in so that you can target the bandwidth consumption you want to measure.
Ideally, each instantiated plugin should answer a single need, for example measuring the throughput generated by the mail service. In this case, the user fills in the fields required for this measurement (destination IP of the mail server, SMTP port 25, and so on).
sFlow
Introduction
sFlow provides real-time traffic monitoring of data networks containing switches and routers. It uses a sampling mechanism in the sFlow Agent software on switches and routers to monitor traffic and to transmit the samples taken on the ingress and egress ports to a central data collector, also called the sFlow Analyzer.
For more information on sFlow, see RFC 3176.
sFlow agent
The sFlow Agent periodically samples or polls interface counters that are associated with a data source of sampled packets. The data source can be an Ethernet interface, an EtherChannel interface, or a range of Ethernet interfaces. The sFlow agent polls the Ethernet port manager for the respective EtherChannel membership information and also receives notifications from the Ethernet port manager for membership changes.
When you enable sFlow sampling, input and output packets are sampled, based on the sampling rate and an internal hardware random number, and sent to the CPU as sFlow sampled packets. The sFlow agent processes the sampled packets and sends an sFlow datagram to the sFlow analyzer. In addition to the original sampled packet, an sFlow datagram includes information about the input port, the output port, and the length of the original packet. A single sFlow datagram can contain multiple sFlow samples.
sFlow versions
| Version | Comment |
|---------|---------|
| V1 | Initial version |
| V2 | (Unknown) |
| V3 | Adds support for extended information |
| V4 | Adds support for BGP communities |
| V5 | Several protocol improvements. This is the current version, which is supported worldwide. |
sFlow datagrams
The sampled data is sent as a UDP packet to the specified host and port. The official port number for sFlow is 6343. The unreliability of the UDP transport mechanism does not significantly affect the accuracy of measurements obtained from an sFlow agent. If counter samples are lost, new values will be sent when the next polling interval has passed. The loss of packet flow samples results in a slight reduction in the effective sampling rate.
The UDP payload contains the sFlow datagram. Each datagram provides information about the sFlow version, the IP address of the originating device, a sequence number, the number of samples it contains and one or more flow and/or counter samples.
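As an optional way to observe these datagrams on the wire (assuming tcpdump is available on the machine receiving them, and taking eth0 as an example capture interface to adapt to your setup), a short capture on the standard sFlow port shows the incoming UDP traffic:
tcpdump -n -i eth0 -c 5 udp port 6343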
Default settings for sFlow
| Parameter | Default |
|-----------|---------|
| sFlow sampling rate | 4096 |
| sFlow sample size | 128 (bytes) |
| sFlow max datagram-size | 1400 (bytes) |
| sFlow collector-port | 6343 |
| sFlow counter-poll-interval | 20 (seconds) |
Architecture
Network elements (switches and routers) compile statistics on network traffic and export this flow data to collectors. These detailed statistics can include packet and byte counts, application ports, IP addresses, QoS fields, the interfaces traffic passes through, and so on.
The architecture for collecting IP network traffic information is as follows:
- sFlow Exporter: observes packet data, creates records of the monitored network traffic and transmits this data to the sFlow Collector.
- sFlow Collector: collects the records sent by the exporter and stores them in a local database.
- ServiceNav Box: retrieves the information collected by the sFlow Collector, according to the need expressed in the sFlow plugin parameters.
- SNP (Supervision Platform): allows you to configure the sFlow plugin and to use the data reported by the ServiceNav Box.
Configuration of the prerequisites
Setting up sFlow Collector Storage
If you already have a NetFlow Collector, you can share it by following the procedure below.
Depending on your network analysis needs, you can dedicate a server to the sFlow Collector Storage role or use one of your ServiceNav Boxes already in place.
Sizing of the sFlow Collector Storage
How much disk space does an average sFlow deployment consume? One of the main concerns is that exporting sFlow will impact the available bandwidth, the processor overhead on the devices, or the hard drives that store the data.
It is important to note that a network flow export can contain records covering up to 30 conversations or flows. This matters because the average volume of sFlow data is directly proportional to the number of unique TCP/UDP sockets created by clients and servers on the network.
What is the typical flow volume per PC? The answer is: it depends. The trend appears to be approximately 100 flows per minute per computer, with peaks of 350.
For example, a company has 1,000 nodes and each node generates 200 flows per minute. This is equivalent to approximately 200,000 flows per minute, which is approximately 3,300 flows per second. Why so many flows?
Applications generate a lot of unique flows, web browsers in particular. Here are some typically chatty applications:
- Java, Adobe, anti-virus software, web browsers
- Skype, which is very chatty and generates DNS traffic
- Web pages loading images, ads, etc.
- Email clients constantly checking the inbox
- NetBIOS
A flow stored on the sFlow Collector Storage takes up 150 bytes of disk space, so we recommend provisioning 2 GB per day for every 100 nodes:
| Resource | Sizing |
|----------|--------|
| CPU | 4 vCPU |
| RAM | 8 GB |
| Disk space | 20 GB + 2 GB per day per 100 nodes |
| Network interface | 1 Gbps |
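As a rough sanity check of the 2 GB per day per 100 nodes recommendation (assuming the ~100 flows per minute per node figure and the 150 bytes per stored flow quoted above), a quick shell calculation gives:
NODES=100; FLOWS_PER_MIN=100; BYTES_PER_FLOW=150
echo "$(( NODES * FLOWS_PER_MIN * 60 * 24 * BYTES_PER_FLOW )) bytes per day"
The result is 2,160,000,000 bytes, i.e. roughly 2 GB per day for 100 nodes.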
Configuring sFlow Collector Storage:
If you already have a NetFlow Collector, you can share it by following the procedure below. In this case, do not download a new master; apply the sFlow configuration directly to your shared Collector Storage.
The sFlow Collector Storage is created from an SNB master. This server must be dedicated to the collection of sFlow exports and must not be used as a supervision box.
- Download the SNB master:
- FTP site: software.servicenav.io (contact support for login details)
- Select the SNB master in the directory SNB-SNM - ServiceNav Box/4.0/SNM_MASTER_OVF_2019_01_24_V4_0.0.zip
- Target network interfaces that meet your analysis needs
- Connect via SSH to the sFlow Collector Storage
- Create a destination directory for the sFlow exports. For example, you can create a generic path ~/network_analysis/sflow and create under it as many directories as there are network interfaces to monitor; these directories will be used to store the exports.
- Attention: the SSH supervision account used will be the coadmin account. It is therefore recommended to create the destination directories with the coadmin user. You can name them after the interface IP or the equipment name:
Ex: mkdir -p ~/network_analysis/sflow/ROUTER_A_WAN
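- Since the rest of this procedure uses a second router (Router B) as an example, you would create its destination directory in the same way (the name below is only an illustration; adapt it to your own equipment):
mkdir -p ~/network_analysis/sflow/ROUTER_B_WAN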
- Define a listening port per network device on which sFlow will be activated, for example 6343 for Router A and 6344 for Router B
- Create an ACL allowing connections to the sFlow Collector Storage on the listening ports:
iptables -A INPUT -p udp --dport 6343 -j ACCEPT
iptables -A INPUT -p udp --dport 6344 -j ACCEPT
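- Note that, depending on the system, rules added with iptables in this way may not survive a reboot. A common approach is to save the current rules and restore them at boot; the file path below is only an assumption, adapt it to the persistence mechanism of your distribution:
iptables-save > /etc/iptables.rules
iptables-restore < /etc/iptables.rules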
- Run a listener for each port with the following commands:
sfcapd -p 6343 -l /home/coadmin/network_analysis/sflow/ROUTER_A_WAN -D
sfcapd -p 6344 -l /home/coadmin/network_analysis/sflow/ROUTER_B_WAN -D
- -p sets the listening UDP port (6343 and 6344 in this example)
- -l sets the directory where the data will be stored (collector location)
- -w aligns the file rotation on n-minute intervals (n=5 by default), i.e. at 0, 5, 10... minutes past the hour
- -D runs sfcapd as a daemon (in the background)
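- To quickly check that both listeners are running and bound to their UDP ports (standard Linux tools, whose availability may vary on your SNB master), you can use for example:
pgrep -a sfcapd
ss -ulpn | grep -E '6343|6344'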
- To refine the granularity down to the minute, add the -t option, expressed in seconds. For a per-minute granularity the command would be:
sfcapd -p 6343 -l /home/coadmin/network_analysis/sflow/ROUTER_A_WAN -D -t 60
(note that it is possible to combine the -w and -t options, as in the example below)
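- As an illustration of combining the two options (assuming your sfcapd version accepts -w as a simple flag that aligns the rotation on the clock), the command could look like:
sfcapd -p 6343 -l /home/coadmin/network_analysis/sflow/ROUTER_A_WAN -D -w -t 60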
- Initialize the sFlow Collector Storage with the ServiceNav platform to benefit from the updates: Chapter 2.2 of the following procedure: https://servicenav.coservit.com/en/documents/commissioning-a-servicenav-box/
By default, the nfcapd.YYYYMMddhhmm exports are deleted every 24 hours via a scheduled task launched every day at 0:00:
/root/crontabRoot
0 0 * * * /usr/local/nagios/libexec/nfcapd_deleteCache.sh > /dev/null 2>&1
You must specify the path where the data to be deleted is located:
- Edit the file /usr/local/nagios/libexec/nfcapd_deleteCache.sh => vi /usr/local/nagios/libexec/nfcapd_deleteCache.sh
- In the line find /usr/local/nagios/Network_Analysis/ -name nfcapd.\* -type f -mmin +180 ! \( -name "*.current.*" \) -delete, replace /usr/local/nagios/Network_Analysis/ with the path where the data to be deleted is located. Example using the above configuration: find /home/coadmin/network_analysis/sflow -name nfcapd.\* -type f -mmin +180 ! \( -name "*.current.*" \) -delete to delete the ROUTER_A_WAN and ROUTER_B_WAN exports
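- Before relying on the scheduled task, you can test the modified command manually without the -delete option, in order to list the files that would be removed:
find /home/coadmin/network_analysis/sflow -name 'nfcapd.*' -type f -mmin +180 ! -name '*.current.*'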
Network equipment configuration
Connect to the network device on which sFlow is to be activated and perform these steps to configure sFlow.
Below is an example of a configuration for an HP 2920-48G switch:
| Command | Purpose |
|---------|---------|
| sflow 1 destination 192.168.238.37 6343 | Defines receiver instance 1: the IP address of the sFlow Collector Storage and its UDP listening port (6343) |
| sflow 1 sampling ethernet 47-48 128 | Enables flow sampling on Ethernet ports 47-48 with a sampling rate of 128 |
| sflow 1 polling ethernet 47-48 30 | Enables counter polling on Ethernet ports 47-48 with a polling interval of 30 seconds |
Configuration on the HP 2920-48G switch
HP-2920-48G(config)# sflow 1 destination 192.168.238.37 6343
HP-2920-48G(config)# sflow 1 sampling ethernet 47-48 128
HP-2920-48G(config)# sflow 1 polling ethernet 47-48 30
Checking the operation of sFlow and displaying sFlow statistics
Check that sFlow is correctly configured.
Use the command show sflow 1 destination to display the SFlow configuration details:
HP-2920-48G# show sFlow 1 destination
Destination Instance: 1
sflow : Enabled
Datagrams Sent : 126822
Destination Address: 192.168.238.37
Receiver Port: 6343
Owner: Administrator, CLI-Owned, Instance 1
Timeout (seconds): 2147403334
Max Datagram Size : 1400
Datagram Version Support :
Verification that sFlow data is stored on the sFlow Collector Storage
Connect to the sFlow Collector Storage, go to the directory dedicated to storing the corresponding nfcapd exports and check for the presence of files named nfcapd.YYYYMMddhhmm (e.g. nfcapd.201709181140).
Attention: nfcapd files are created periodically by sfcapd even if they are not being fed by the sFlow exporter (switch, router, etc.).
To ensure that the configuration is operational, these files must be populated by the sFlow exporter (switch, router, etc.). An empty (unfed) file is 276 bytes in size; the presence of only 276-byte files indicates that the files are not being populated and that the configuration must be reviewed.
The files must contain data and therefore be larger than 276 bytes, as in the check below:
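For example, to list the export files and spot any that are still at the empty 276-byte size (paths taken from the configuration above), you can run:
ls -l /home/coadmin/network_analysis/sflow/ROUTER_A_WAN/
find /home/coadmin/network_analysis/sflow -name 'nfcapd.*' -type f -size +276c
The second command lists only the files that have actually been populated (larger than 276 bytes).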
If this is the case, your configuration is operational and you can move on to the next step; otherwise, go back over the previous configuration steps.
Supervision of the sFlow Collector Storage
The sFlow Collector Storage is central to your sFlow architecture, so it is essential to monitor its load and its running processes.
Use the equipment model model-server-linux:
- CPU
- LIN-DiskIO
- LIN-Diskspace
- LIN-Network_Traffic
- LIN-RAM
- LIN-Swap
In addition to these service models, already included in the model-server-linux equipment model, use the following service models:
- LIN-DirectorySize to monitor the size of your destination directories (see the manual check below).
- Lin-ProcessName to monitor that the sfcapd processes are running correctly.
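As a manual counterpart to LIN-DirectorySize (using the paths from the configuration above), you can check the size of the destination directories directly on the sFlow Collector Storage:
du -sh /home/coadmin/network_analysis/sflow/*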
Finally, use the action models to restart the sfcapd processes in the event of an interruption, following this procedure: https://coservit.com/servicenav/fr/documentation/utilisation-des-modeles-daction/
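As a purely illustrative sketch of what such a restart action could do (this is not the ServiceNav action model itself; the port, path and options are those of the Router A example above), the action could kill and relaunch the corresponding listener:
pkill -f 'sfcapd -p 6343'
sfcapd -p 6343 -l /home/coadmin/network_analysis/sflow/ROUTER_A_WAN -D -t 60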