100GbE line-rate traffic generation using TRex and Napatech NICs
TRex traffic generation @ 100Gbps
A couple of months back I attempted to enable TRex (https://trex-tgn.cisco.com) to use Napatech NICs, and last week I decided to see how the new version of the Napatech DPDK would perform with TRex at 100Gbps.
Napatech 100Gbps NIC – NT100E3-1-PTP
The server I used is a Dell R730 with two Xeon E5-2690 CPUs. Unfortunately, this server doesn't support PCIe bifurcation, so I couldn't use the NIC I originally wanted, the NT200A01-SCC. Instead, I ended up using two NT100E3-1-PTP NICs: one for transmitting traffic and one for receiving it.
Napatech DPDK changes
The current Napatech DPDK release (v17.02.1_1.0) runs the Napatech NICs as virtual devices (--vdev), because these NICs do not expose a BUS-ID per physical port and do not require a UIO driver, so initially we thought running them as virtual devices would be easiest. Recently I discovered that, instead of running as virtual devices, Napatech NICs can run as physical devices and expose all of their ports to DPDK, enabling a more plug-and-play experience, still without the need for a UIO driver. The current development branch (master) of the Napatech DPDK PMD supports running Napatech NICs as physical devices rather than virtual devices, so I chose to port TRex to that instead of the previous --vdev based solution.
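To make the difference concrete, here is a rough sketch using DPDK's testpmd. The vdev driver name and the PCI address below are placeholders of mine, not values taken from the Napatech documentation:

    # Old model: the PMD is instantiated explicitly as a virtual device
    # (the exact vdev name is an assumption)
    ./testpmd -l 0-3 -n 4 --vdev=net_ntacc0 -- -i

    # New model: the NIC is probed as a physical PCI device, so it is enough
    # to whitelist its BUS-ID (hypothetical address shown)
    ./testpmd -l 0-3 -n 4 -w 0000:04:00.0 -- -i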
TRex changes
The changes turned out to be minimal now that the Napatech NICs are probed instead of being added as virtual devices/ports. I basically only needed to create a Napatech C++ class to wrap the Napatech PMD, and I was up and running, at least on the NT100E3-1-PTP.
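Conceptually, the wrapper does little more than forward to the generic DPDK ethdev API, since the Napatech PMD now behaves like any other probed physical device. The sketch below only illustrates that idea; the class and method names are mine and are not the actual TRex driver interface:

    // Conceptual sketch only: not the real TRex driver class.
    // Once the Napatech PMD is probed like any other NIC, a thin C++
    // wrapper can rely entirely on the standard DPDK ethdev calls.
    #include <cstdint>
    #include <rte_ethdev.h>

    class NapatechPort {
    public:
        explicit NapatechPort(uint16_t port_id) : m_port_id(port_id) {}

        // Link speed comes straight from the PMD (100G on the NT100E3-1-PTP).
        uint32_t link_speed() const {
            rte_eth_link link;
            rte_eth_link_get_nowait(m_port_id, &link);
            return link.link_speed;
        }

        // Counters use the normal DPDK stats API; nothing Napatech-specific
        // is needed once the port has been probed.
        void stats(rte_eth_stats &out) const {
            rte_eth_stats_get(m_port_id, &out);
        }

    private:
        uint16_t m_port_id;  // DPDK port index assigned at probe time
    };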
It was more challenging to make the Napatech 4-port 10G NIC work with TRex, because it exposes one PCI BUS-ID with 4 ports behind it. I had a solution that was partially working, but after talking to Hanoch Haim from Cisco, it turned out that there was already a solution for NICs with multiple ports per BUS-ID; apparently, the Mellanox CX4 already works this way. I reverted my changes and used the 'BUS-ID/port' notation in trex_cfg.yaml instead (see the sketch below).
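For reference, an interfaces section using that notation would look roughly like the snippet below; the BUS-ID 03:00.0 is a placeholder, not the address from my setup:

    ### /etc/trex_cfg.yaml (sketch)
    - port_limit    : 4
      version       : 2
      # four ports sharing a single PCI BUS-ID, addressed as BUS-ID/port
      interfaces    : ["03:00.0/0", "03:00.0/1", "03:00.0/2", "03:00.0/3"]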
Test
The goal was to run TRex on a 100GbE link, and I was able to do that using only 7 cores. The Dell R730 only has one x16 PCIe connector, so I could only generate 100Gbps traffic on a single link. The animation below shows TRex sending 64-byte packets over a 100GbE link from one adapter to the other, at full line rate and with zero packet loss.
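For anyone who wants to try a similar run, the invocation is along these lines; the core count matches my setup, and the flags are the standard TRex stateless startup options:

    # start TRex in interactive (stateless) mode with 7 data-plane cores
    ./t-rex-64 -i -c 7 --cfg /etc/trex_cfg.yaml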
I'm very pleased with the result, both in terms of the performance and in terms of TRex itself. The last time I explored TRex, earlier this year, I didn't have the time to look at what could actually be done with the tool. This time I dug a bit deeper and started playing with the 'trex-console', which gives a great overview and the ability to start/stop traffic load on different ports.
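A typical trex-console session looks roughly like this; the traffic profile is one of the stock stateless examples shipped with TRex:

    trex> start -f stl/udp_1pkt_simple.py -m 100% -p 0    # full line rate on port 0
    trex> tui                                             # live per-port counters
    trex> stop -a                                         # stop traffic on all ports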
I have published my work on the Napatech GitHub (https://github.com/napatech/trex-core).
What’s next?
The above work took me a couple of days to create and test, and the result is that we now have a platform that lets us use our Napatech NICs to generate full line-rate traffic at any speed from 1G to 100G, depending on the NIC we put into the server. The next step would be to enable the stateful test suites, which I didn't have time to investigate this time.
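As a rough pointer, the stateful mode is driven by the classic TRex command line, something like the sketch below; the profile, multiplier and duration are placeholders rather than values I have tested:

    # stateful run: replay a capture-based profile for 60 seconds at multiplier 10
    ./t-rex-64 -f cap2/dns.yaml -c 7 -m 10 -d 60 --cfg /etc/trex_cfg.yaml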