
INTRODUCING
THE SUPERPOWER F2070X IPU
DATA CENTER


Powerful Intel®-based Infrastructure Processing Unit (IPU)
The Napatech F2070X Infrastructure Processing Unit (IPU) is a 2x100GbE PCIe card with an Intel® Agilex® AGFC023 FPGA and an Intel® Xeon® D SoC. This combination of an FPGA and a full-fledged Xeon CPU on a single PCIe card enables unique offload capabilities. Coupled with Napatech software, the F2070X is the perfect solution for network, storage and security offload and acceleration. It enables virtualized cloud, cloud-native or bare-metal server virtualization with tenant isolation.

Customization on demand
The F2070X uniquely offers both programmable hardware and software, allowing you to tailor the IPU to the most demanding and specific needs of your network and to modify and enhance its capabilities over the life of the deployment. It is based on the Intel Application Stack Acceleration Framework (ASAF), which supports the integration of software and IP from Intel, Napatech, third parties and homegrown solutions. This one-of-a-kind architecture combines hardware performance with the speed of software innovation.

Scalable platform
The Napatech F2070X comes in a standard configuration and supports several combinations of Intel® FPGAs, Xeon® D processors and memory, enabling tailored platform configurations that match the requirements of specific use cases.

NEW DATA CENTER BENEFITS

IPU Providing new data center value

This IPU-based architecture has several major benefits:

  • First, the strong separation of infrastructure functions and tenant workloads allows tenants to take full control of the CPU.
    • Guests can fully control the CPU with their own software, while the CSP maintains control of the infrastructure and the Root of Trust.
  • Second, the cloud operator can offload infrastructure tasks to the IPU, which helps maximize revenue.
    • Accelerators process these tasks efficiently, minimizing latency and jitter and maximizing revenue from the CPU.
  • And third, IPUs allow for a fully diskless server architecture in the cloud data center.
    • This simplifies the data center architecture while adding flexibility for the CSP.
NIC vs SMARTNIC vs IPU

NIC vs SmartNIC vs IPU

Infrastructure Processing Unit

An IPU offers the ability to:

  • Accelerate infrastructure functions, including storage virtualization, network virtualization and security with dedicated protocol accelerators.
  • Free up CPU cores by shifting storage and network virtualization functions that were previously done in software on the CPU to the IPU.
  • Improve data center utilization by allowing for flexible workload placement.
  • Enable cloud service providers to customize infrastructure function deployments at the speed of software.
IPU VALUE PROPOSITION

IPU Value Proposition

IPUs are architected from the ground up specifically to be an infrastructure control point. Intel® IPUs provide the greatest level of security in a bare-metal hosted environment.

  • Research from Google and Facebook has shown that 22%[1] to 80%[2] of CPU cycles can be consumed by microservices communication overhead.
  • Some of the key motivations for using an IPU are:
    • Accelerating networking infrastructure by moving some of the workload onto the IPU, which reduces the total server overhead.
  • Reference workloads are localized on the IPU. Performance is increased through:
    • Removing some of the PCIe latency between the host and the IPU.
    • Offloading software functions to hardware. For reference only: ZSTD level 9 compression in hardware can achieve 100 Gbps at 64 KB packets with a compression ratio of 32.6%, while a comparable software solution would consume 369 cores at 1.8 GHz to achieve level 7 at a 32.7% compression ratio.
    • Application-level hardware optimizations that can be made in the FPGA on the IPU.
  • IPUs are reconfigurable and highly programmable, allowing particular features to be customized, developed and deployed on software timescales.
  • This IPU-based architecture has some major advantages:
    • First, the cloud operator can offload infrastructure tasks to the IPU. The IPU accelerators can process this workload very efficiently, which optimizes performance and lets the cloud operator rent out 100% of the CPU to its guests, helping to maximize revenue.
    • The IPU allows the separation of functions so that the guest can fully control the CPU. The guest can bring its own hypervisor, while the cloud operator remains fully in control of the infrastructure and can sandbox functions such as networking, security and storage.
    • The IPU can also help replace local disk storage directly connected to the server with virtual storage connected over the network, which greatly simplifies the data center architecture while allowing a tremendous amount of flexibility.

[1] Svilen Kanev, Juan Pablo Darago, Kim M. Hazelwood, Parthasarathy Ranganathan, Tipp J. Moseley, Gu-Yeon Wei and David M. Brooks, “Profiling a Warehouse-Scale Computer,” ISCA ’15, https://research.google/pubs/pub44271.pdf — Figure 4.
[2] “Accelerometer: Understanding Acceleration Opportunities for Data Center Overheads at Hyperscale,” https://research.fb.com/publications/accelerometer-understanding-acceleration-opportunities-for-data-center-overheads-at-hyperscale/

F2070X IPU SOLUTIONS

Napatech F2070X IPU

Hardware-plus-software solutions accelerate data center networking services

The Napatech F2070X IPU and Link-Storage™ Software provide a perfect solution for storage and network offload, and for virtualized cloud, cloud-native or bare-metal server virtualization with tenant isolation, within the Intel® Infrastructure Processing Unit (IPU) ecosystem.


Link-Storage™ Software

Enterprise and cloud data centers are increasingly adopting the NVMe-oF storage technology because of the advantages it offers in terms of performance, latency, scalability, management and resource utilization. However, implementing the required storage initiator workloads on the server’s host CPU imposes significant compute overheads and limits the number of CPU cores available for running services and applications.

Napatech’s integrated hardware-plus-software solution, comprising the Link-Storage™ software stack running on the F2070X IPU, addresses this problem by offloading the storage workloads from the host CPU to the IPU while maintaining full software compatibility at the application level.
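To make the compatibility point concrete, here is a minimal sketch (assuming a Linux host and standard sysfs paths; nothing in it is specific to the Napatech software) of how an application or script on the host would see the offloaded storage: the IPU presents ordinary virtio-blk devices, so the stock virtio_blk driver and /dev/vd* block devices are all the host ever touches.

#!/usr/bin/env python3
# Minimal sketch: enumerate block devices on the host and flag those bound to
# the in-box virtio_blk driver, i.e. the block devices presented by the IPU.
# Uses only standard Linux sysfs paths; no IPU-specific drivers or tools.
import os

SYS_BLOCK = "/sys/block"

for dev in sorted(os.listdir(SYS_BLOCK)):
    driver_link = os.path.join(SYS_BLOCK, dev, "device", "driver")
    if not os.path.islink(driver_link):
        continue  # skip devices without a bound driver (loop, ram, ...)
    if os.path.basename(os.readlink(driver_link)) != "virtio_blk":
        continue  # not one of the virtio-blk devices exposed by the IPU
    with open(os.path.join(SYS_BLOCK, dev, "size")) as f:
        sectors = int(f.read())  # size is reported in 512-byte sectors
    print(f"/dev/{dev}: virtio-blk, {sectors * 512 / 2**30:.1f} GiB")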

Napatech’s storage offload solution not only frees up host CPU cores which would otherwise be consumed by storage functions but also delivers significantly higher performance than a software-based implementation. This significantly reduces data center CAPEX, OPEX and energy consumption.

The Napatech solution also introduces security isolation into the system, increasing protection against cyber-attacks, which reduces the likelihood of the data center suffering security breaches and high-value customer data being compromised.


Link-Security™ Software

Within enterprise and cloud data centers, Transport Layer Security (TLS) encryption is used to ensure security, confidentiality and data integrity. It provides a secure communication channel between servers over the data center network, ensuring that the data exchanged between them remains private and tamper-proof. However, implementing the TLS protocol on the server’s host CPU imposes significant compute overheads and limits the number of CPU cores available for running services and applications.

Napatech’s integrated hardware-plus-software solution, comprising the Link-Security™ software stack running on the F2070X IPU, addresses this problem by offloading the TLS and TCP protocols from the host CPU to an Infrastructure Processing Unit (IPU) while maintaining full software compatibility at the application level.
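As a rough illustration of what full software compatibility at the application level means here, the sketch below (the bind address and port are hypothetical placeholders) is an ordinary plain-HTTP backend on the host: TLS handshakes and TCP processing for external clients are terminated on the IPU, which forwards requests to the backend over a virtio-net interface, so the application itself needs no TLS code or changes.

#!/usr/bin/env python3
# Minimal sketch: an unmodified plain-HTTP backend running on the host.
# External clients speak TLS to the reverse proxy on the IPU; the proxy
# forwards decrypted requests to this backend over a virtio-net interface.
# The bind address and port below are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

BIND_ADDR = "10.0.0.1"   # hypothetical address on the host's virtio-net interface
BIND_PORT = 8080

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"served from the host; TLS was terminated on the IPU\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer((BIND_ADDR, BIND_PORT), Handler).serve_forever()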

Napatech’s security offload solution not only frees up host CPU cores which would otherwise be consumed by security protocols but also delivers significantly higher performance than a software-based implementation. This significantly reduces data center CAPEX, OPEX and energy consumption.

The Napatech solution also introduces security isolation into the system, increasing protection against cyber-attacks, which reduces the likelihood of the data center suffering security breaches and high-value customer data being compromised.


Link-Virtualization™ Software

Operators of enterprise and cloud data centers are continually challenged to maximize the compute performance and data security available to tenant applications, while at the same time minimizing the overall CAPEX, OPEX and energy consumption of their Infrastructure-as-a-Service (IaaS) platforms. However, traditional data center networking infrastructure based around standard or “foundational” Network Interface Cards (NICs) imposes constraints on both performance and security by running the networking stack, as well as related services such as the hypervisor, on the host server CPU.

Napatech’s integrated hardware-plus-software solution, comprising the Link-Virtualization™ software stack running on the F2070X IPU, addresses this problem by offloading the networking stack from the host CPU to an Infrastructure Processing Unit (IPU) while maintaining full software compatibility at the application level.
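As a rough sketch of the control-plane side of this (assuming the switch is managed with the standard ovs-vsctl tool, matching the Open vSwitch offload referenced in the data sheet below; the bridge and port names are hypothetical placeholders), the configuration looks exactly as it would on a software switch, while the matching data path is executed by the IPU instead of the host CPU:

#!/usr/bin/env python3
# Minimal sketch: drive the standard Open vSwitch control plane from Python.
# The commands are ordinary ovs-vsctl calls; with the offload described above,
# the resulting data path is handled in the F2070X rather than in host software.
# Bridge and port names are hypothetical placeholders.
import subprocess

def vsctl(*args):
    """Run one ovs-vsctl command and fail loudly if it errors."""
    subprocess.run(["ovs-vsctl", *args], check=True)

vsctl("--may-exist", "add-br", "br-int")              # integration bridge
vsctl("--may-exist", "add-port", "br-int", "vnet0")   # tenant-facing virtio-net port
vsctl("--may-exist", "add-port", "br-int", "p0")      # physical 100G uplink port

# Show the resulting switch configuration.
print(subprocess.run(["ovs-vsctl", "show"],
                     check=True, capture_output=True, text=True).stdout)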

The solution not only frees up host CPU cores which would otherwise be consumed by networking functions but also delivers significantly higher data plane performance than software-based networking, achieving a level of performance that would otherwise require more expensive servers with higher-end CPUs. This significantly reduces data center CAPEX, OPEX and energy consumption.

The IPU-based architecture also introduces security isolation into the system, increasing protection against cyber-attacks, which reduces the likelihood of the data center suffering security breaches and high-value customer data being compromised.

Finally, offloading the infrastructure services such as the networking stack and the hypervisor to an IPU allows data center operators to achieve the deployment agility and scalability normally associated with Virtual Server Instances (VSIs), while also ensuring the cost, energy and security benefits mentioned above.

F2070X IPU FOR HIGH PERFORMANCE

Napatech F2070X IPU

 

FPGA Device and Memory
  • Intel Agilex® AGFC023
    • 2.3M LEs, 782.4K ALMs, 10.4K M20Ks
    • Hardened crypto
  • 4×4 GB DDR4 (ECC, 40b, 2666MT)
SoC Processor and Memory
  • Intel® Xeon® D-1736 processor
    • 8 Cores, 16 Threads
    • 2.3GHz, 3.4 GHz Turbo Freq.
    • 15 MB Cache
  • 2×8 GB DDR4 (ECC, 72b, 2900MT)
  • 64 GB NVMe in M.2 slot for Operating System and applications
PCI Express Interfaces
  • PCIe Gen 4.0 x16 (16 GT/s) to the host
  • PCIe Gen 4.0 x16 (16 GT/s) between the FPGA and SoC
Front Panel Network Interfaces
  • 2-ports QSFP28/56
  • 2x100GBASE-LR4/SR4/CR4
  • 2×10/25GBASE-LR/SR/CR (QSFP/SFP adapter)
  • 8×10/25GBASE-SR/CR (breakout cable)
  • Dedicated RJ45 management port
Supported Compute and Memory Devices (Mount Options)
  • FPGA: Intel Agilex® AGFC022 or AGFC027
  • CPU: All Intel® Xeon® D-1700 Series processors
  • FPGA Memory
    • 3×4 GB DDR4 (ECC, 40b, 3200 MT) + 1×8 GB DDR4 (ECC 72b 3200 MT)
  • CPU Memory
    • 3×8 GB DDR4 (ECC, 72b, 2900MT)
    • 3×16 GB DDR4 (ECC, 72b, 2900MT)
Size
  • Full-height, half-length, dual-slot PCIe form factor
Power and Cooling
  • Max power consumption for standard HW configuration: 150W
  • Max power dissipation supported by platform: 250W
  • NEBS compliant passive cooling
Time Synchronization (Mount Options)
  • Dedicated PTP RJ45 Port
  • External SMA-F Connector (PPS/10MHz I/O)
  • Internal MCX-F Connector (PPS/10MHz I/O)
  • Stratum 3 TCXO or Stratum 3e OCXO
  • IEEE 1588v2 Support
Board Management
  • Ethernet, USB (Front panel), UART (Internal) connectivity
  • Secure FPGA image update
  • Wake-on-LAN
  • MCTP over SMBus
  • MCTP over PCIe VDM
  • Dedicated NC-SI RBT internal port
  • PLDM for Monitor and Control (DSP0248)
  • PLDM for FRU (DSP0257)
CPU Operating System
  • Fedora 37 (Linux kernel TBD)
  • UEFI BIOS
  • PXE Boot support
  • Full shell access via SSH and UART
Environment and Approvals
  • EU, US, APCJ Regulatory approvals
  • Thermal, Shock, Vibration tested
  • UL Marked, RoHS, REACH compliant
  • Temperature range: -5 to +45 deg. C.
  • ASHRAE class A2
Application Stack Acceleration Framework (ASAF)
  • Framework for embedding customer Accelerator Functional Units (AFU) implementing workload acceleration/offload in FPGA
  • 6 AFUs supported
  • Throughput up to 200Gbps
  • Look-aside and inline AFU configurations
  • Pre-integrated AFUs for Host virtio-net DMA, SoC virtio-net DMA and a packet processor with fundamental NIC functions
Network Offload
  • Supports packet processor with basic NIC functionality
  • Supports packet processor implementing Open vSwitch hardware offload of the dataplane
  • OvS hardware offload at the Megaflows layer (wildcard matches)
  • 1024-entry Megaflow cache supporting millions of exact-match flows
  • Supports VLAN, QinQ, and VxLAN encapsulation/decapsulation in hardware
  • OvS Control plane on the host or on the SoC
  • Exposes VirtIO-net virtual interfaces as PFs or VFs (SR-IOV)
Storage Offload
  • NVMe-oF TCP offload
  • Presents 16 Block devices to the Host (Virtio-Blk)
  • Compatible with the VirtIO-Block drivers present in the latest RHEL and Ubuntu Linux distributions
  • No proprietary software and drivers required in the Host
  • No network interfaces exposed to the Host
  • NVMe/TCP initiator running on the SoC
  • Offloads all NVMe/TCP operations from the host CPU to the IPU
  • No access to the SoC from the Host (Airgap)
  • Storage configuration over the SPDK RPC interface (see the sketch after this list)
  • NVMe/TCP Multipath support
  • 2x100G connectivity to the storage network
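Below is a minimal sketch of the SPDK RPC configuration path mentioned above. It assumes SPDK's standard JSON-RPC 2.0 interface on its default Unix socket (/var/tmp/spdk.sock); the remote target address and NQN are hypothetical placeholders, and the exact set of RPC methods used in the Napatech solution may differ.

#!/usr/bin/env python3
# Minimal sketch: talk JSON-RPC 2.0 to the SPDK application running on the IPU SoC.
# Socket path and method names follow standard SPDK conventions; the remote
# target address and NQN below are hypothetical placeholders.
import itertools
import json
import socket

SPDK_SOCK = "/var/tmp/spdk.sock"   # SPDK's default RPC listen socket
_ids = itertools.count(1)

def rpc(method, params=None):
    """Send one JSON-RPC 2.0 request to the SPDK app and return the decoded reply."""
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SPDK_SOCK)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("SPDK RPC socket closed before a full reply")
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                pass  # reply not fully received yet

# Attach a remote NVMe/TCP namespace as a local bdev (placeholder target details).
print(rpc("bdev_nvme_attach_controller", {
    "name": "Nvme0",
    "trtype": "TCP",
    "adrfam": "IPv4",
    "traddr": "192.0.2.10",
    "trsvcid": "4420",
    "subnqn": "nqn.2023-01.example:storage-target",
}))

# List the bdevs now visible to SPDK (and hence exposable to the host as virtio-blk).
print(rpc("bdev_get_bdevs"))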
Security Offload
  • TCP+TLS offload
  • Presents up to 16 network devices to the Host (Virtio-net)
  • 2x100G Ethernet front-port connectivity
  • TLS 1.2/1.3 encryption offload
  • OpenSSL support
  • Nginx-based HTTP(S) reverse proxy with caching
  • WebSockets support
  • Load balancing to host
  • Web server acceleration (reduced page load time) with static file caching and image optimization
Supported Hardware and Transceivers
  • F2070X IPU:
    • 100GBASE-LR4/SR4/CR4
    • 10/25GBASE-LR/SR/CR

Resources and downloads

Data Sheet

Solution Description


UNLEASH THE F2070X IPU SUPERPOWER

Want to learn more?

Get more information about our latest F2070X IPU to offload and accelerate data center networking services.