High-performance computing (HPC) solutions require high-bandwidth, low-latency components with CPU offloads to achieve the highest server efficiency and application productivity. The Mellanox ConnectX-3 and ConnectX-3 Pro network adapters for System x® servers deliver the I/O performance that meets these requirements.

This product guide provides essential presales information for understanding the ConnectX-3 offerings and their key features, specifications, and compatibility. It is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the ConnectX-3 network adapters and consider their use in IT solutions.

The Mellanox ConnectX-3 and ConnectX-3 Pro ASICs deliver low latency, high bandwidth, and computing efficiency for performance-driven server applications. Efficient computing is achieved by offloading routine activities from the CPU, which makes more processor power available for the application. Network protocol processing and data movement overhead, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. RDMA support extends to virtual servers when SR-IOV is enabled.
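As a practical illustration of the SR-IOV capability mentioned above, the following is a minimal sketch of how virtual functions are typically created on a Linux host through the standard PCI sysfs interface. The interface name eth0 and the VF count of 4 are placeholders, not values from this guide; the exact driver behavior depends on the installed Mellanox driver stack.

```shell
# Enable SR-IOV virtual functions (VFs) on an adapter port via the standard
# Linux sysfs interface. "eth0" is a placeholder interface name.
IFACE=eth0

# Query how many VFs the device supports.
cat /sys/class/net/${IFACE}/device/sriov_totalvfs

# Create 4 VFs; each appears as its own PCIe function that can be passed
# through to a guest, giving the virtual server direct, RDMA-capable access.
echo 4 > /sys/class/net/${IFACE}/device/sriov_numvfs

# Confirm the new virtual functions are visible on the PCI bus.
lspci | grep -i mellanox
```

Writing 0 back to sriov_numvfs removes the virtual functions again.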
Mellanox's ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

Features

The Mellanox ConnectX-3 10GbE Adapter has the following features:

The Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter has the following features:

The Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter has the same features as the ConnectX-3 40GbE / FDR IB VPI Adapter, with these additions:

Performance

Based on Mellanox's ConnectX-3 technology, these adapters provide a high level of throughput for all network environments by removing the I/O bottlenecks in mainstream servers that limit application performance. With the FDR VPI IB/E Adapter, servers can achieve up to 56 Gbps transmit and receive bandwidth. Hardware-based InfiniBand transport and IP over InfiniBand (IPoIB) stateless offload engines handle the segmentation, reassembly, and checksum calculations that otherwise burden the host processor.

RDMA over InfiniBand and RDMA over Ethernet further accelerate application run time while reducing CPU utilization. RDMA supports the very high-volume, transaction-intensive applications that are typical of HPC and financial market firms, as well as other industries where speed of data delivery is paramount. With a ConnectX-3-based adapter, highly compute-intensive tasks that run on hundreds or thousands of multiprocessor nodes, such as climate research, molecular modeling, and physical simulations, can share data and synchronize faster, resulting in shorter run times.

In data mining or web crawl applications, RDMA provides the needed boost in performance to enable faster searches by solving the network latency bottleneck that is associated with I/O cards and the corresponding transport technology in the cloud. Other applications that benefit from RDMA with ConnectX-3 include Web 2.0 (content delivery networks), business intelligence, database transactions, and various cloud computing applications.
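To make the checksum offload concrete, the sketch below implements the standard Internet checksum (RFC 1071) in Python. This is the per-packet arithmetic that the adapter's stateless offload engines perform in hardware for IP/TCP/UDP framing instead of the host CPU; the sample IPv4 header bytes are illustrative, not taken from this guide.

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071), as used for
    IPv4/TCP/UDP header checksums."""
    if len(data) % 2:                # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return (~total) & 0xFFFF

# A sample 20-byte IPv4 header with its checksum field zeroed out:
header = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
csum = internet_checksum(header)     # 0xB1E6 for this sample header
```

Verifying a received header is the same computation: summing a header that already contains a correct checksum yields 0. Doing this for every packet at 40-56 Gbps is exactly the kind of routine CPU work the offload engines absorb.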
Mellanox ConnectX-3's low power consumption provides clients with high bandwidth and low latency at the lowest cost of ownership.

TCP/UDP/IP acceleration

Applications that use TCP/UDP/IP transport can achieve industry-leading data throughput. The hardware-based stateless offload engines in ConnectX-3 reduce the CPU impact of IP packet transport, allowing more processor cycles to work on the application.

NVGRE and VXLAN hardware offloads

The Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter offers NVGRE and VXLAN hardware offload engines, which provide additional performance benefits, especially for public or private cloud implementations and virtualized environments. These offloads ensure that overlay networks can handle the advanced mobility, scalability, and serviceability that today's and tomorrow's data centers require. They also dramatically lower CPU consumption, thereby reducing cloud application cost, facilitating the highest available throughput, and lowering power consumption.

Software support

All Mellanox adapter cards are supported by a full suite of drivers for Microsoft Windows, Linux distributions, and VMware. ConnectX-3 adapters support OpenFabrics-based RDMA protocols and software. Stateless offload is fully interoperable with standard TCP/UDP/IP stacks. ConnectX-3 adapters are compatible with configuration and management tools from OEMs and operating system vendors.

InfiniBand specifications (ConnectX-3 FDR VPI IB/E Adapter and ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter):

Enhanced InfiniBand specifications:

Ethernet specifications:

Hardware-based I/O virtualization:

SR-IOV features:

Additional CPU offloads:

Management and tools:

InfiniBand:

Ethernet:

Protocol support:

The adapters have the following dimensions:

Approximate shipping dimensions:
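The stateless TCP/UDP/IP offloads and the VXLAN tunnel offload described above can be inspected and toggled from a Linux host with the standard ethtool utility. This is a minimal sketch, assuming a placeholder interface name eth0 and a kernel and driver recent enough to expose these feature flags; the exact set of flags reported varies by driver version.

```shell
# List the offload features the driver advertises, filtering for checksum,
# segmentation (TSO and the tx-udp_tnl-segmentation flag used for VXLAN
# encapsulation offload), and generic receive offload.
ethtool -k eth0 | grep -E 'checksum|segmentation|gro'

# Explicitly enable TCP segmentation offload and generic receive offload,
# shifting that per-packet work from the host CPU to the NIC/kernel path.
ethtool -K eth0 tso on gro on
```

Features reported as "[fixed]" by ethtool -k cannot be changed for that device.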