The NetFlow plugin allows you to monitor the average flow generated over a defined period of time by an application, source IP or destination IP, and to generate alerts when defined thresholds are exceeded. It also reports data and performance graphs in the same way as the other plugins.
Use cases and best practices for using the plugin:
The plugin has been designed to meet specific needs. It presents different fields to be filled in to target bandwidth consumption.
Ideally, each instantiated plugin should meet a need, such as measuring the throughput generated by the e-mail service. In this case, the user will fill in the various fields necessary for this measurement (destination IP of the mail server, SMTP port 25...).
sFlow
Aim
sFlow enables real-time traffic monitoring of data networks containing switches and routers. It uses a sampling mechanism in the sFlow agent software on the switches and routers to monitor traffic and to transmit the sampled data from the input and output ports to a central data collector, also called the sFlow analyzer.
For more information on sFlow, see RFC 3176.
sFlow Agent
The sFlow Agent periodically samples or polls the interface counters that are associated with a data source of the sampled packets. The data source can be an Ethernet interface, an EtherChannel interface, or a range of Ethernet interfaces. The sFlow Agent queries the Ethernet Port Manager for the respective EtherChannel membership information and also receives notifications from the Ethernet Port Manager for membership changes.
When you enable sFlow sampling, depending on the sample rate and the internal random number of the hardware, input and output packets are sent to the CPU as sFlow sampled packets. The sFlow agent processes the sampled packets and sends an sFlow datagram to the sFlow analyzer. In addition to the original sampled packet, an sFlow datagram includes information about the input port, the output port, and the length of the original packet. An sFlow datagram can have several sFlow samples.
sFlow versions
Version | Comment |
V1 | Initial Version |
V2 | (Unknown) |
V3 | Adds support for extended information |
V4 | Adds support for BGP communities |
V5 | Several protocol improvements. This is the current version, which is supported worldwide. |
sFlow datagrams
The sampled data is sent as a UDP packet to the specified host and port. The official port number for sFlow is port 6343. Unreliability in the UDP transport mechanism does not significantly affect the accuracy of measurements obtained from an sFlow agent. If the counter samples are lost, new values will be sent when the next sampling interval has passed. The loss of packet flow samples results in a slight reduction of the effective sampling rate.
The UDP payload contains the sFlow datagram. Each datagram provides information about the sFlow version, the IP address of the originating device, a sequence number, the number of samples it contains, and one or more flow and/or counter samples.
Default settings for sFlow
Settings | Default |
sFlow sampling rate | 4096 |
sFlow sampling-size | 128 bytes |
sFlow max datagram-size | 1400 bytes |
sFlow collector-port | 6343 |
sFlow counter-poll-interval | 20 seconds |
Architecture
Network elements (switches and routers) compile statistics on the network flow data they export to collectors. These detailed statistics can include the number of packets and bytes, application ports, IP addresses, QoS fields, the interfaces through which they pass, etc.
The architecture for collecting IP network traffic information is as follows:
- sFlow exporter: Observes packet data, creates records of monitored network traffic and transmits this data to the sFlow Collector.
- sFlow Collector: Collects the records sent by the exporter and stores them in a local database.
- ServiceNav Box: Retrieves the information collected by the sFlow Collector, according to the need entered in the sFlow plugin parameters
- SNP (Monitoring Platform): Allows you to configure the sFlow template to use the data reported by the ServiceNav Box
Prerequisite configuration
Setting up sFlow Collector Storage
If you already have a NetFlow Collector, you can share it by following the procedure below.
Depending on the network analysis needs, it is possible to dedicate a server to the sFlow Collector Storage or to use one of your ServiceNav Boxes already in service.
Sizing of the sFlow Collector Storage
How much disk space should an average sFlow deployment consume? One of the biggest concerns is that exporting sFlow will have an impact on available bandwidth, on CPU overhead on the devices, or on the disk space needed to store it.
Note that a network flow export can contain records for up to 30 conversations or flows. This matters because the average volume of sFlow data is directly proportional to the number of unique TCP/UDP sockets created by clients and servers on the network.
What is the typical flow volume per PC? It depends, but the trend seems to be about 100 flows per minute per computer, with peaks of 350.
For example, suppose a company has 1,000 nodes and each node generates 200 flows per minute. This is equivalent to about 200,000 flows per minute, or roughly 3,300 flows per second. Why so many flows?
Applications generate many unique flows, especially web browsers. Here are some typically very talkative applications:
- Java, Adobe, Anti-virus, web browsers
- Skype is very talkative and generates traffic to DNS
- Web page feeds generating images, ads, etc.
- Email constantly checking inbox
- NetBios
A flow stored on the sFlow Collector Storage occupies 150 bytes of disk space; it is therefore recommended to provision 2 GB per day per 100 nodes:
CPU(s) | 4 vCPU |
RAM | 8 GB |
Disk space | 20 GB + 2 GB per day and per 100 nodes |
Network interface | 1 Gbps |
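As a rough sanity check of this sizing rule, assuming around 100 flows per minute and per node as mentioned above:
100 nodes x 100 flows/min x 1,440 min/day ≈ 14.4 million flows per day
14.4 million flows x 150 bytes ≈ 2.16 GB per day per 100 nodes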
Configuring the sFlow Collector Storage:
If you already have a NetFlow Collector, you can share it by following the procedure below. In this case, do not download a new master; apply the sFlow configuration directly to your shared Collector Storage.
The sFlow Collector Storage will be created from an SNB master; this server must be dedicated to collecting sFlow exports and must not be used as a monitoring box.
- Download the SNB master:
- FTP site: software.servicenav.io (contact support for access credentials)
- Select the SNB master from the directory SNB-SNM - ServiceNav Box/4.0/SNM_MASTER_OVF_2019_01_24_V4_0.0.zip
- Target network interfaces that meet your analysis needs
- Connect via SSH to the sFlow Collector Storage
- Create a destination directory for the sFlow exports, e.g. create a generic path ~/network_analysis/sflow and, under it, as many directories as there are network interfaces to monitor; these directories will be used to store the exports.
- Note: the SSH account used for monitoring is coadmin, so it is recommended to create the destination directories with the coadmin user. You can name them after interface IPs or equipment names:
Ex: mkdir -p ~/network_analysis/sflow/ROUTER_A_WAN
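Since the example below starts a second listener for Router B, the matching directory can be created in the same way:
Ex: mkdir -p ~/network_analysis/sflow/ROUTER_B_WAN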
- Define a listening port for each piece of network equipment on which sFlow will be activated, e.g. 6343 for Router A and 6344 for Router B
- Create an ACL allowing connections to the sFlow Collector Storage on the listening ports:
iptables -A INPUT -p udp --dport 6343 -j ACCEPT
iptables -A INPUT -p udp --dport 6344 -j ACCEPT
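Note that rules added with iptables -A are not persistent across a reboot. Depending on the distribution running on your collector, they can be saved, for example as follows (assuming the Debian iptables-persistent layout; adapt to your system):
iptables-save > /etc/iptables/rules.v4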
- Launch the listeners with the following commands:
sfcapd -p 6343 -l /home/coadmin/network_analysis/sflow/ROUTER_A_WAN -D
sfcapd -p 6344 -l /home/coadmin/network_analysis/sflow/ROUTER_B_WAN -D
- -p sets the UDP listening port (6343 and 6344 in the example above)
- -l sets the directory where the data will be stored (location of the collector)
- -w synchronizes the file rotation with the next interval (5 minutes by default)
- -D runs sfcapd as a daemon (in the background)
- To refine the granularity, add the -t option, expressed in seconds. For a granularity of one minute the command would be:
sfcapd -p 6343 -l /home/coadmin/network_analysis/sflow/ROUTER_A_WAN -D -t 60
(note that the -w and -t options can be combined)
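To verify that the listeners are running after launching them, a simple process listing can be used (a quick manual check, not part of the official procedure):
ps -ef | grep sfcapd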
- Initialize the sFlow Collector Storage with the ServiceNav platform to benefit from updates: Chapter 2.2 of the following procedure: https://coservit.com/servicenav/fr/documentation/mise-en-service-dune-box-servicenav/
By default, the nfcapd.YYYYMMddhhmm exports are deleted every 24 hours via a scheduled task launched every day at midnight:
/root/crontabRoot
0 0 * * * /usr/local/nagios/libexec/nfcapd_deleteCache.sh > /dev/null 2>&1
You need to specify the path where the data to be deleted is located:
- Edit the file /usr/local/nagios/libexec/nfcapd_deleteCache.sh => vi /usr/local/nagios/libexec/nfcapd_deleteCache.sh
- Modify the line " find /usr/local/nagios/Network_Analysis/ -name nfcapd.\* -type f -mmin +180 ! \( -name "*.current.*" \) -delete " by replacing " /usr/local/nagios/Network_Analysis/ " with the path where the data to be deleted is located. Example using the above configuration: " find /home/coadmin/network_analysis/sflow -name nfcapd.\* -type f -mmin +180 ! \( -name "*.current.*" \) -delete " to delete the data relating to ROUTER_A_WAN and ROUTER_B_WAN
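To check which files the task would remove before relying on it, the same find command can be run manually with -print instead of -delete (a quick preview, using the example path above):
find /home/coadmin/network_analysis/sflow -name nfcapd.\* -type f -mmin +180 ! \( -name "*.current.*" \) -print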
Configuration of network equipment
Connect to the network equipment on which sFlow is to be activated and perform the following steps to configure sFlow.
Below is an example configuration for an HP 2920-48G switch:
Configuration on the HP 2920-48G Switch
HP-2920-48G(config)# sflow 1 destination 192.168.238.37 6343
HP-2920-48G(config)# sflow 1 sampling ethernet 47-48 128
HP-2920-48G(config)# sflow 1 polling ethernet 47-48 30
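For reference, the general form of these commands is as follows (based on the usual HP/Aruba switch sFlow CLI syntax; check your switch documentation for the exact keywords available on your model):
sflow <receiver-instance> destination <collector-ip> <udp-port>
sflow <receiver-instance> sampling <port-list> <sampling-rate>
sflow <receiver-instance> polling <port-list> <polling-interval-in-seconds>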
Checking the functioning of sFlow and displaying sFlow statistics
Check that sFlow is correctly configured.
Use the command show sflow 1 destination to display the details of the sFlow configuration:
HP-2920-48G# show sFlow 1 destination
Destination Instance: 1
sflow: Enabled
Datagrams Sent: 126822
Destination Address: 192.168.238.37
Receiver Port: 6343
Owner: Administrator, CLI-Owned, Instance 1
Timeout (seconds): 2147403334
Max Datagram Size: 1400
Datagram Support Version :
Checking that sFlow data is stored on the sFlow Collector Storage
Connect to the sFlow Collector Storage, go to the directory dedicated to storing the corresponding nfcapd exports, and check for the presence of a file in the format nfcapd.YYYYMMddhhmm (e.g. nfcapd.201709181140).
Be careful: the files are created periodically by sfcapd even if they are not fed by the sFlow exporter (switch, router, etc.).
To ensure that the configuration is operational, these files need to be populated by the sFlow exporter. An empty (unfed) file has a size of 276 bytes; files that stay at 276 bytes indicate that no sFlow data is being received and the configuration needs to be reviewed.
Files must contain data and have a size greater than 276 bytes, as in the example below:
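For instance, a quick listing of an export directory (using the example path above) should show files growing beyond 276 bytes, and, if the nfdump tool is installed on the collector, the first collected flows can be displayed directly:
ls -lh /home/coadmin/network_analysis/sflow/ROUTER_A_WAN/
nfdump -R /home/coadmin/network_analysis/sflow/ROUTER_A_WAN -c 10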
If this is the case, your configuration is operational and you can proceed to the next step; otherwise, repeat the previous configuration steps.
Monitoring of sFlow Collector Storage
The sFlow Collector Storage is central to your sFlow architecture; it is therefore essential to monitor its load and running processes.
Use the host template server-linux-model, which includes the following service templates:
- CPU
- LIN-DiskIO
- LIN-Diskspace
- LIN-Network_Traffic
- LIN-RAM
- LIN-Swap
In addition to the service templates already in the host template server-linux-model, use the following additional service templates:
- LIN-DirectorySize to monitor the size of your destination directories.
- Lin-ProcessName to monitor that the sfcapd processes are running correctly
Finally, use action templates to automatically restart the sfcapd processes in the event of an interruption, following the procedure below: https://servicenav.coservit.com/en/documentations/use-of-action-models/
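As an illustration only, a minimal sketch of what such an automatic restart could check and do for the Router A listener from the example above (the actual restart should be set up through the action template procedure linked above):
pgrep -f "sfcapd -p 6343" > /dev/null || sfcapd -p 6343 -l /home/coadmin/network_analysis/sflow/ROUTER_A_WAN -D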