
Analyzer Configuration Guide for IMC Orchestrator Solution
The information in this document is subject to change without notice.
Copyright 2021 Hewlett Packard Enterprise Development LP
Contents
Introduction
Network configuration
Network diagram
Data collection and data parsing compatibility matrix
Install and deploy Analyzer
Introduction
Configure NTP
Deploy collectors
Deploy IMC clusters and install the Analyzer module
Configure network services
Configure network health-related devices
Add network assets
Import assets from controller
Manually import devices
Configure collection templates
Edit an SNMP template
Edit a NETCONF template
Add a collection template
Edit a device access parameter template
Obtain and save topology
Configure agents
Configure network health data collection
Configure SNMP collection
Configure NETCONF collection
Collect switch syslog
Configure SNMP parsing
Configure IfKpiGrpc parsing
Configure NodeKpiGrpc parsing
Configure a device health task
Configure a table entry resource health task
Configure a device control plane flow processing task
Configure a device forwarding plane flow processing task
Configure a buffer monitor flow processing task
Configure an SNMP alarm parsing task
Configure diagnostic analysis tasks
Configure NETCONF parsing
Configure a device routing forward processing task
Configure a problem center parsing task
Configure application services
Configure …
Configure ERSPAN
Configure INT
Configure MOD
Configure …
Configure application data collection
Configure Collector collection
Configure TCP flow analysis
Configure TCP flow processing analysis tasks
Configure aggregation group resources and ARP parsing
Configure INT flow analysis
Configure application health overview
Configure flow processing tasks for INQA packet loss analysis
Configure flow processing tasks for INT NETCONF resources parsing
Configure application exceptions
Configure non-compliant traffic
Configure a non-compliant traffic parsing task
Configure a SynFlood flow processing analysis task
1-3-5 closed-loop issues
Configure closed-loop issues
Handle fault events
Handle events with system-defined solutions/recovery events
Handle informational events
Handle events with suggested actions
Restrictions
Introduction
Analyzer is the core component of the HPE IMC Orchestrator solution. It can visualize network operation, proactively perceive potential risks, and generate warnings automatically by collecting real-time data on and sensing the state of device performance, user access, and service traffic, and by applying big data analysis and AI algorithms.
Analyzer collects different data and performs targeted analysis depending on the service scenario. For the data center scenario, data collection and analysis are performed from three aspects: network, application, and fault diagnosis.
The network module provides the network health, change analysis, link analysis, and transceiver module diagnosis functions.
The application module provides the TCP flow analysis, INT flow analysis, and noncompliance analysis functions.
The diagnostic analysis module provides the problem center, event analysis, network diagnosis, and Dual-Active Detection (DAD) functions.
Network configuration
Network diagram
1. The analyzer and the controller are deployed separately, with IMC Orchestrator on one IMC platform and Analyzer on the other IMC platform.
2. The collector is used to collect ERSPAN/INT/Telemetry traffic. The recommended operating system version is RedHat 7.6, and a DPDK-supported network card is required. For more information about common DPDK-supported network cards, see HPE Analyzer Deployment Guide.
3. When a southbound collection network is deployed, Analyzer requires a network card separated from the northbound management network and uses it to collect SNMP/NETCONF/gRPC packets and reported logs.
4. The collector is directly connected to one leaf. The mirrored packets of non-directly connected devices are transmitted via underlay routing.
5. Spine2 in the network diagram is a distributed device, which enriches the device models in the environment, and is not mandatory in a basic network. A single spine device is also acceptable.
6. INT flow analysis generally requires only the exit node to be connected to the collector. For indirect connection, a Layer 3 interconnection needs to be implemented between the exit node and the collector, and the exit node needs to be configured with a static ARP entry to the collection network card. Configure the network based on the specific plan of test services.

Table 1 IP addresses of device and server interfaces

Device | Interface | IP address | Remarks
Analyzer node 1 | enp61s0f0 | 172.30.29.111 | Northbound network IP address of node 1
Analyzer node 1 | ethipv4 | 172.30.129.101 | Pod IP address of node 1 for southbound collection
Analyzer cluster | ethipv4:0 | 172.30.129.100 | Pod cluster virtual IP address for southbound collection
Analyzer node 2 | enp61s0f0 | | Northbound network IP address of node 2
Analyzer node 2 | ethipv4 | 172.30.129.102 | Pod IP address of node 2 for southbound collection
Analyzer node 3 | enp61s0f0 | | Northbound network IP address of node 3
Analyzer node 3 | ethipv4 | 172.30.129.103 | Pod IP address of node 3 for southbound collection
Analyzer cluster | \ | 172.30.29.115 | Northbound virtual IP address of cluster
Collector | enp61s0f0 | 172.30.29.124 | Management (service) address of collector
Collector | enp61s0f3 | 194.168.1.2 | Collection network card address of collector
Controller node 1 | enp61s0f0 | 172.30.29.120 | Northbound network IP address of node 1
Controller node 1 | eth1 | 192.168.10.101 | Cluster node IP address of controller
Controller node 2 | enp61s0f0 | 172.30.29.121 | Northbound network IP address of node 2
Controller node 2 | eth1 | 192.168.10.101 | Cluster node IP address of controller
Controller node 3 | enp61s0f0 | 172.30.29.122 | Northbound network IP address of node 3
Controller node 3 | eth1 | 192.168.10.101 | Cluster node IP address of controller
Controller cluster | \ | 172.30.29.125 | Northbound virtual IP address of cluster
MGMT | vlan-int29 | 172.30.29.1 | Northbound network gateway
MGMT | vlan-int4 | 172.30.129.1 | Southbound network gateway
MGMT | vlan-int5 | 192.168.10.1 | Controller container network gateway
Leaf 1 | MGE0/0/0 | 192.168.10.3 | Device management address
Leaf 1 | WGE1/0/33 | 192.168.20.2 | Underlay interface address
Leaf 1 | HGE1/0/25 | 192.168.22.2 | Underlay interface address
Leaf 1 | Loop0 | 1.1.1.9 | Loopback interface address
spine1 | MGE0/0/0 | 192.168.10.2 | Device management address
spine1 | WGE1/0/33 | 192.168.20.1 | Underlay interface address
spine1 | WGE1/0/41 | 192.168.21.1 | Underlay interface address
spine1 | Loop0 | 2.2.2.9 | Loopback interface address
spine2 | MGE0/0/0 | 192.168.10.8 | Device management address
spine2 | FGE3/0/1 | 192.168.22.1 | Underlay interface address
spine2 | FGE3/0/3 | 192.168.23.1 | Underlay interface address
spine2 | Loop0 | 4.4.4.9 | Loopback interface address
Leaf 2 | MGE0/0/0 | 192.168.10.4 | Device management address
Leaf 2 | WGE1/0/41 | 192.168.21.2 | Underlay interface address
Leaf 2 | HGE1/0/25 | 192.168.23.2 | Underlay interface address
Leaf 2 | WGE1/0/20 | 194.168.1.1 | Interconnection address of collection network card and collector

Data collection and data parsing compatibility matrix
NOTE:
The table below shows the compatibility only. A data collection task can correspond to multiple data parsing tasks. For each type of data collection task, only one data parsing task needs to be configured.

Service | Data collection | Parsing task type | Data parsing & computation | Data source
Network health | SNMP collection | SNMP parsing | Flink | Switching device
Network health | gRPC collection | gRPC parsing | Flink | Switching device
Network health | | Device health evaluation and calculation | Java |
Network health | NETCONF collection | Flow processing of device control plane connectivity | Flink | Switching device
Network health | NETCONF collection | Table entry resources health | Flink | Switching device
Network health | gRPC collection--NEW | Flow processing of device forwarding plane health | Flink | Switching device
Network events | SNMP trap collection | SNMP alarm parsing | Java |
Network events | NETCONF collection | NETCONF parsing | Flink |
Network events | | Routing data processing | Java |
Troubleshooting - Transceiver module diagnosis | SNMP collection | | | Switching device
Troubleshooting - Changes analysis | gRPC collection | gRPC parsing | Flink | Switching device
Troubleshooting - Changes analysis | NETCONF collection | NETCONF parsing | Flink | Switching device
Troubleshooting - Link analysis | SNMP collection | SNMP parsing | Flink | Switching device
Problem center | SYSLOG collection | Problem center parsing | Flink | Switching device
Application health | Collector collection | INT flow parsing | Flink | Switching device
TCP flow analysis | Collector collection | TCP flow processing | Flink | Switching device
INT flow analysis | Collector collection | INT flow processing | Flink | Switching device
Noncompliance analysis | Collector collection | SynFlood flow processing | Flink | Switching device
Install and deploy Analyzer
Introduction
The installation and deployment of Analyzer require three server nodes to deploy the IMC platform where Analyzer is deployed, plus one additional node as the collector, which is used to deploy the TCP and INT collection modules.
Procedure
CAUTION: The three servers of the IMC platform cluster need to be time-synchronized to avoid time deviation during data collection and parsing. When deploying the IMC platform cluster on the Matrix page, select the inner NTP servers.
Configure NTP
1. After the IMC platform is installed, enter the login address of the IMC Installer platform, https://IMC Installer ip address:8443/matrix/ui, in the browser to open the login page.
Figure 1 Login page of the IMC Installer platform
2. Click the Cluster Parameters tab to open the cluster parameters page.
3. Configure the NTP server. Select Inner from the NTP Server list in this example. The NTP server is used to ensure consistent system time on all nodes.
Figure 2 Cluster parameters page
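To confirm that the nodes are actually time-synchronized before data collection starts, you can run a quick spot check on each of the three IMC platform nodes and on the collector. This is an optional sketch using standard RedHat commands; the host name in the prompt is a placeholder, chronyd is the default time service on RedHat 7.x, and you can substitute ntpq -p if your nodes run ntpd instead.
[root@node1 ~]# timedatectl
[root@node1 ~]# chronyc sources
[root@node1 ~]# date
Run the same commands on every node and verify that the reported times are consistent and that the clock is marked as synchronized.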
Deploy collectors
Hardware requirements
Collectors need to be installed on physical servers. They support standalone deployment and do not support cluster deployment in the current solution. The hardware configuration requirements are shown in Table 2.

Table 2 Hardware requirements of the server

Attribute | Minimum configuration | Recommended configuration
CPU | Intel(R) Xeon(R) scalable processor (Platinum or Gold series are recommended) with no less than 20 CPU cores | Intel(R) Xeon(R) scalable processor (Platinum or Gold series are recommended) with no less than 24 CPU cores
Memory | 64 GB | 128 GB and above
Disk | System disk: 2*500G, RAID1; data disk: 1 TB capacity | System disk: 2*500G, RAID1; data disk: 2 TB capacity
Network card | Two 10G network cards | Two 10G network cards

The functions of the network cards on the server are as follows:
A collection network card receives mirrored packets sent by network devices. The network card must support DPDK binding. An Intel or Broadcom network card is recommended. The network card is used to deploy collector agents and will become invisible to the kernel upon configuration deployment. The recommended network card models are Intel XL710, Intel 82599, Broadcom BCM57412, and Broadcom BCM57302.
A service network card is used for the data interaction between the functional components within the analyzer. DPDK binding is prohibited for the service network card.
Operating system requirements
RedHat 7.6 or later operating systems are required for the servers.
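Before deploying the collector software, you can optionally spot-check a server against these requirements with standard Linux commands. The host name in the prompt is a placeholder; adjust it to your environment.
[root@collector ~]# cat /etc/redhat-release
[root@collector ~]# lscpu | grep "^CPU(s):"
[root@collector ~]# free -g
[root@collector ~]# lspci | grep -i ethernet
The last command lists the installed network adapters so you can confirm that the collection network card is one of the DPDK-capable models listed above.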
NOTE:
If the operating system used by any server is earlier than RedHat 7.6, please reinstall it to avoid failure in the configuration of the collection service.
Other requirements
Disable the firewall from starting automatically and disable the firewall service on collectors.
a. Use the systemctl disable firewalld command to disable the firewall service.
b. Use the systemctl status firewalld command to view the firewall state. If the state is Active: inactive (dead), the firewall service has been disabled.
[root@localhost ~]# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
A Layer 2 interconnection must be implemented between the traffic collection network card of the collector and the service network of the data center. If the switch connected to the collector is Device A, a Layer 2 VLAN needs to be created on Device A as the collection VLAN. Then, the collection network card needs to be connected to the member interfaces of the collection VLAN. The configuration procedures on the switch are as follows:
a. Create a VLAN interface for the collection VLAN on Device A, and add the port connected to the server to the VLAN. Assign an IP address to the VLAN interface, which must be in the same network segment as that of the collector.
[DeviceA]vlan 47
[DeviceA-vlan47]port HundredGigE 1/0/27
[DeviceA]interface Vlan-interface47
[DeviceA-Vlan-interface47]ip address 47.1.1.1 24
b. Configure a static ARP entry on Device A. In the ARP entry:
The IP address is that of the collector.
The VLAN ID is that of the collection VLAN.
The MAC address is that of the collector's DPDK network card.
The port is the physical port connected to the collector.
[DeviceA]arp static 47.1.1.2 0000-0000-0001 47 HundredGigE1/0/27
c. Issue the route for the network segment of the collection VLAN interface. The data center network uses the OSPF routing protocol.
[DeviceA]ospf
[DeviceA-ospf-1]area 0
[DeviceA-ospf-1-area-0.0.0.0]network 47.1.1.0 0.0.0.255
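After the collection VLAN, static ARP entry, and route are configured, you can optionally confirm reachability between Device A and the collector's collection network card. This is a sketch that reuses the example addresses above (47.1.1.2 is the collector's DPDK network card IP).
[DeviceA]display arp 47.1.1.2
[DeviceA]ping 47.1.1.2
The display arp output should show the static entry with the MAC address, VLAN, and port configured earlier. If the collection network card has not yet been bound to DPDK, the ping should also succeed; after DPDK binding, ICMP replies depend on the collector software, so rely on the ARP entry check in that case.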
Deploy IMC clusters and install the Analyzer module
Install the Analyzer module according to "Deploy Analyzer" in HPE Analyzer Deployment Guide. Log in at http://IMC ip address:30000/central, where IMC ip address is the northbound service virtual IP address of the IMC. The default username is admin and the default password is [email protected]. Navigate to the System > Deployment Management page. For initial deployment, the component deployment wizard page will be opened directly. For non-initial deployment, click Install to enter the component deployment wizard page.
Configure network services
Configure network health-related devices
For both spine and leaf devices, you need to deploy the following configurations.
# Configure the static route from the network device to the Analyzer southbound collection network.
#
ip route-static 172.30.129.0 24 192.168.10.1
#
# Configure the pod cluster virtual IP address for southbound collection as the log host IP address.
#
info-center loghost 172.30.129.100 facility local5
#
# Configure SNMP, and configure the target host address as the pod cluster IP address for southbound collection.
#
snmp-agent
snmp-agent local-engineid 800063A280F47488D34A0800000001
snmp-agent community write private
snmp-agent community read public
snmp-agent sys-info version v2c v3
snmp-agent target-host trap address udp-domain 172.30.129.100 params securityname public v2c
snmp-agent trap enable arp
snmp-agent trap enable l2vpn
snmp-agent trap enable radius
snmp-agent trap enable stp
snmp-agent trap source M-GigabitEthernet0/0/0
#
NOTE:
In this example, 172.30.129.100 is the virtual IP address of the southbound collection pod cluster.
# Configure NETCONF.
#
netconf soap http enable
netconf soap https enable
netconf ssh server enable
restful https enable
#
# Configure SSH.
#
ssh server enable
#
# Configure a local user.
#
local-user hpe class manage
password hash h 6 GCV93TMddhOZCMCI x4qEZfvtQxwudXd I7rHVM/PC4 SFI9eG74A
service-type ftp
service-type telnet http https ssh
authorization-attribute user-role network-admin
authorization-attribute user-role network-operator
#
line vty 0 63
authentication-mode scheme
user-role network-admin
user-role network-operator
#
NOTE:
In the absence of automated underlay, all of the above configurations need to be deployed manually.
# Configure gRPC.
The controller supports gRPC configuration. However, it supports deploying only 11 sensor paths, and the rest need to be configured manually.
Add a collector:
a. Navigate to the Assurance > Telemetry > Collectors page.
b. Add collectors with the following IP addresses and port numbers:
Analyzer southbound IP address: 50051.
Analyzer southbound IP address: 50052.
Collection network card address: 5555.
Add collection configuration:
a. Navigate to the Assurance > Telemetry > Collection Configuration page, and select the corresponding sensor path. In the current solution, DC supports the configuration of 11 sensor paths only, and the rest need to be manually configured on the devices.
b. Navigate to the Assurance > Telemetry > Collection Configuration page, and add the collection devices. Upon application, the relevant gRPC configuration will be issued to the corresponding devices.
# Grpc enable.
telemetry
 sensor-group s1 //Create sensor group s1 (gRPC)
  sensor path device/base
  sensor path device/boards
  sensor path device/extphysicalentities
  sensor path device/physicalentities
  sensor path device/transceivers
  sensor path ifmgr/interfaces
  sensor path ifmgr/statistics
 sensor-group s2 //Create sensor group s2 (gRPC-new)
  sensor path buffermonitor/bufferusages
  sensor path buffermonitor/commbufferusages
  sensor path buffermonitor/commheadroomusages
  sensor path buffermonitor/ecnandwredstatistics
  sensor path buffermonitor/egressdrops
  sensor path buffermonitor/ingressdrops
  sensor path buffermonitor/pfcspeeds
  sensor path buffermonitor/pfcstatistics //The above eight paths are used for buffer monitor data collection.
  sensor path ifmgr/ethportstatistics //CRC packet error statistics sensor path
  sensor path resourcemonitor/monitors
  sensor path resourcemonitor/resources //The above two paths are used for table entry resource collection, instead of NETCONF collection.
  sensor path route/ipv4routes
  sensor path route/ipv6routes
  sensor path lldp/lldpneighbors
  sensor path mac/macunicasttable
  sensor path arp/arptable //The above five paths are used for the collection of change analysis-related table entries.
  sensor path inqa/statisticses/statistics
  sensor path inqa/losses/loss //The above two paths are used for iNQA traffic collection.
 sensor-group s3 //Create sensor group s3 (event-triggered sensor paths)
  sensor path buffermonitor/portquedropevent
  sensor path buffermonitor/portqueoverrunevent
  sensor path resourcemonitor/resourceevent
  sensor path tcb/tcbpacketinfoevent
  sensor path tcb/tcbrawpacketinfoevent //The above two paths are used for TCB collection.
  sensor path telemetryftrace/genevent //This path is used for MOD data collection.
 destination-group d1 //Create the destination group, and configure the collector address and port
  ipv4-address 172.30.129.100 port 50051
  ipv4-address 172.30.129.100 port 50052
 subscription grp UUTBRWYFGYAXPZ2CBV6WIVCHXI //Create a subscription, and associate it with the sensor groups and destination group
  sensor-group s1 sample-interval 60
  sensor-group s2 sample-interval 60
  sensor-group s3
  source-address 192.168.10.3
  destination-group d1
grpc enable
#
CAUTION:
When configuring the destination group, set the IP address to the virtual IP address of the southbound pod cluster. The port number 50051 corresponds to the gRPC collection-new collection, while 50052 corresponds to the gRPC collection. If the device's interface to Analyzer is bound to a VPN instance, you need to append the VPN-instance parameter to the collector address of the destination group.
The interval of non-event-triggered collection is recommended as one minute. If there are specific requirements for display accuracy, the collection interval can be adjusted dynamically.
In the current solution, only the 15 sensor paths in sensor group S1 support controller deployment. The rest need to be configured manually.
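After the telemetry settings are deployed, whether by the controller or manually, you can optionally confirm them on a device. The commands below are generic Comware checks; 172.30.129.100 is the example southbound collection virtual IP address from Table 1, and DeviceA stands for any configured spine or leaf device.
<DeviceA>display current-configuration | begin telemetry
<DeviceA>ping 172.30.129.100
The first command shows the sensor groups, destination group, and subscription actually present on the device. The second checks reachability of the southbound collection address over the static route configured earlier, provided ICMP is permitted along the path.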
Add network assets
Import assets from controller
1. Navigate to the Analysis > Analysis Options > Network Settings > Assets Management page.
2. Click Third-Party System, and add the controller's connection information as follows. Then, select Device from Import > Import from Controller.
NOTE:
If the DC's closed-loop issues function is required, you must select Import from Controller to add devices for Analyzer.
IP: northbound IP address of DC
Scenario: DC
Username/Password to log in to the DC: admin/[email protected]
Port number: 30000 (https needs to be disabled)
Manually import devices
Note: If the DC's closed-loop issues function is not needed, this import method can be used.
1. Navigate to the Analysis > Analysis Options > Network Settings > Assets Management page.
2. Click Add Asset. On the page that opens, perform the following tasks:
a. Select the asset type and device type.
b. Enter the asset name, which is required.
c. Enter the IP address, which is required.
3. Click Save.
The asset name and IP address are required, while the rest are optional. Upon asset addition, the other information of the device will be automatically obtained.
Configure collection templates
Edit an SNMP template
1. Navigate to the Analysis Options > Collection Configuration > Templates page.
2. Click Edit Network Protocol Access Parameter Template, select SNMP, and click Add Template.
3. Specify the parameters, including the template name, read-only community name, and write-only community name according to the device SNMP configuration, select the parameter type, and click OK.
Edit a NETCONF template
1. Select NETCONF, and click Add Template.
2. Specify the parameters, including the template name, username, and password according to the device NETCONF configuration. Click OK and then click Back.
CAUTION:
The username and password in the template must be consistent with the local user username and password configured on the device.
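Before binding the templates to devices, you can optionally verify the SNMP community and NETCONF credentials from a Linux host that can reach the device management network. In this sketch, 192.168.10.3 is the example Leaf 1 management address from Table 1, hpe is the local user created earlier, snmpwalk comes from the net-snmp utilities, 830 is the default NETCONF-over-SSH port, and the prompt host name is a placeholder; adjust all of these to your environment.
[root@analyzer ~]# snmpwalk -v2c -c public 192.168.10.3 1.3.6.1.2.1.1
[root@analyzer ~]# ssh -p 830 -s hpe@192.168.10.3 netconf
The first command should return the system MIB group if the read community matches the device configuration; the second should print a NETCONF hello message if the username and password match the local user configured on the device.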
Add a collection template
1. Click Collection Configuration, select SNMP, and click Add Collection Template.
2. Specify the template name.
3. Select the services (you can select all).
4. Switch to NETCONF, and edit the polling intervals of the collection items.
For example, the polling intervals of the table entries are 60 minutes by default, and you can set them to other values.
Edit a device access parameter template
1. Select all the devices, and click Bulk Edit Device Access Parameter Template.
2. Perform the following tasks on the page that opens:
a. Select Do Not Change the Existing SNMP Template Settings and select the newly added SNMP template.
b. Select Do Not Change the Existing NETCONF Template Settings and select the newly added NETCONF template.
c. Select Do Not Change the Template Settings and select the newly added collection template. Click OK.
d. Select Whether to Use GRPC Collection and switch it to Yes.
Obtain and save topology
After the network health configuration is completed and the devices come online, navigate to the Health Analysis > Health Overview > Area Health page, and click the topo icon to obtain the topology in the global topology. After the topology is generated, adjust the device locations and click the save icon to save the topology.
Configure agents
1. Navigate to the Analysis Options > Collection Configuration > Agent page, and add a host agent, which is to act as a collector.
2. Enter the host IP address, username, password, and host description, and then click OK. (The username and password are generally root/123456 for login to the Linux back end.)
3. Verify that the agent is added successfully.
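Because the agent is added over SSH, it can help to confirm SSH access to the collector's management address before filling in this page. In this sketch, 172.30.29.124 is the example collector management address from Table 1 and the prompt host name is a placeholder; use the actual root credentials of your collector.
[root@analyzer ~]# ssh root@172.30.29.124 'hostname; cat /etc/redhat-release'
If the command prints the collector's host name and OS release, the IP address and credentials entered in step 2 should work for adding the agent.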