Developers leading this revolutionary technology for the data center are productizing now around the InfiniBand 4X standard. Engineering groups testing these InfiniBand-based systems rely on LeCroy's technology leadership, support, and ease of use to validate their designs. The validation task is made more difficult by InfiniBand's high data rate and by the robust interoperability features designed into the standard.
IBTracer's real-time hardware triggering and flexible recording options help test engineers focus their efforts on the critical components of the InfiniBand fabric. Powerful analysis, search, and statistics features make it easy to identify specific data patterns, MAD types, errors, or other conditions and to quickly pinpoint areas of interest. More than just an analyzer, IBTracer 4X provides insight into how InfiniBand components work together and comply with the 4X specification.
Building a successful industry standard takes collaboration and cooperation. Over 100 companies within the InfiniBand Trade Association are leading this transition to a fabric-based I/O architecture. LeCroy has participated in the IBTA's Compliance and Interoperability Working Group since its inception. In conjunction with Lamprey Networks, LeCroy has developed a scripting architecture designed to speed test execution during the InfiniBand Trade Association Plugfests. LeCroy's Script Verification Engine (SVE) replaces manual data analysis tasks with a highly automated system that ensures the compliance testing process will produce quick and consistent results. This important initiative is another example of how LeCroy's expertise in the test and debug of high-speed serial I/O can be leveraged to reduce time to market and provide higher quality products for next-generation computing environments.
InfiniBand is a channel-based, serial switched-fabric I/O technology designed to meet the requirements of a high-speed, standard interconnect for large servers. The standard was formed when the competing NGIO and FutureIO groups consolidated their initiatives behind a single standards effort; InfiniBand vendors are now beginning volume production of the technology, and several corporate and institutional sites are in early deployment.
In typical entry-to-midrange servers, the standard I/O architecture uses a PCI bus to communicate with adapter cards, which in turn communicate with storage devices and networks. Bus architectures have proven to be an efficient transport for traffic in and out of a server chassis, but as the cost of silicon has decreased, serial I/O alternatives have become more attractive. Serial I/O provides point-to-point connectivity between computing resources and offers increased reliability and performance. While the InfiniBand Architecture (IBA) was originally envisioned as a replacement for the PCI architecture, complexity and delays diverted InfiniBand from its original charter. But InfiniBand's unique characteristics have created a compelling opportunity for the technology within the High Performance Computing (HPC) arena.
Today's server clusters rely on proprietary interconnects to effectively manage the complex nature of clustering traffic. With the InfiniBand Architecture, server clusters can for the first time be configured with an industry-standard I/O interconnect, creating an opportunity for clustered servers to become ubiquitous in data center deployments.
InfiniBand architecture will ultimately enable the clustering and management of multiple servers as one entity. Performance will scale by adding additional boxes, without many of the complexities of traditional clustering. Even though more systems can be added, the system can be managed as one unit. As processing requirements increase, additional power can be added to the cluster in the form of another server or "blade."
* Layered Protocol
* Multiple Layer Connectivity
* Packet-based Communications
* Multicast Capable
* Packet and End Node Fault Tolerance
* Subnet Management Capability
* Varied Link Speeds - 1x, 4x, 12x
* 2.5 - 30 Gigabit per Second Transfers
* PCB, Copper, and Fiber Physical links
* Support for Remote DMA
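The link speeds and transfer rates listed above follow directly from the per-lane signaling rate: each lane runs at 2.5 Gb/s, and the original standard's 8b/10b encoding leaves 80% of that as usable data bandwidth. A minimal sketch of the arithmetic:

```python
# Raw and effective bandwidth for the 1x/4x/12x InfiniBand link widths.
# 8b/10b encoding transmits 10 line bits for every 8 data bits.
LANE_RATE_GBPS = 2.5
ENCODING_EFFICIENCY = 0.8  # 8 data bits per 10 line bits

def link_bandwidth(width):
    """Return (raw, effective) bandwidth in Gb/s for a link of `width` lanes."""
    raw = width * LANE_RATE_GBPS
    return raw, raw * ENCODING_EFFICIENCY

for width in (1, 4, 12):
    raw, eff = link_bandwidth(width)
    print(f"{width}x: {raw:.1f} Gb/s raw, {eff:.1f} Gb/s data")
```

This reproduces the 2.5-30 Gb/s range in the list above; the 12x link reaches 30 Gb/s raw (24 Gb/s of data after encoding).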
Channel-Based Architecture - Because InfiniBand architecture is grounded in a channel-based I/O model, connections between fabric nodes are inherently more reliable than today's multidrop, shared-bus paradigm.
Message-Passing Structure - The InfiniBand Architecture protocol uses an efficient message-passing structure to transfer data. This moves away from the traditional load/store model used by the majority of today's systems and creates a more efficient and reliable transfer of data.
Natural Redundancy - InfiniBand fabrics are constructed with multiple levels of redundancy in mind. Nodes can be attached to a fabric through multiple links for link redundancy: if one path fails, traffic can be rerouted to the final endpoint destination. The InfiniBand Architecture also supports redundant fabrics for the ultimate in fabric reliability. With multiple redundant fabrics, an entire fabric can fail without creating data center downtime.
Quality of Service (QoS) - The link layer also provides the QoS characteristics of InfiniBand. InfiniBand can configure device-specific prioritization with 15 independent data levels (VL0-14) and one management path (VL15). This allows I/O operations across the fabric to be assigned a priority so that more critical communications are given preference.
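The virtual-lane scheme above can be sketched as a simple lane-selection routine. This is a hypothetical illustration, not the spec's arbitration mechanism (the standard uses weighted arbitration tables); the simplifying assumption here is that VL15 management traffic always preempts data traffic and that a higher data VL number means higher priority:

```python
# Hypothetical VL selector: VL15 (subnet management) always wins;
# among data lanes VL0-VL14, assume higher VL number = higher priority.
def next_vl_to_service(pending):
    """pending: dict mapping VL number (0-15) -> queued packet count.
    Returns the VL to service next, or None if nothing is queued."""
    if pending.get(15, 0) > 0:          # management traffic preempts data
        return 15
    data_vls = [vl for vl, count in pending.items() if vl < 15 and count > 0]
    return max(data_vls, default=None)  # illustrative priority rule

next_vl_to_service({0: 3, 7: 1, 15: 2})   # management lane served first
```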
Credit-based flow control - With the credit-based flow control approach, each receiving node on the IBA network sends the originating device a value representing the maximum amount of data it can accept without packet loss. The credit data is transferred via a dedicated link along the IBA fabric, and no packets are sent until the receiving device signals the sender that space is available in its primary communications buffer.
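The credit exchange described above can be sketched as follows. This is a minimal model under a simplifying assumption (one credit per packet; the actual standard counts credits in buffer blocks per virtual lane), and the class and method names are illustrative:

```python
# Minimal sketch of credit-based flow control: the receiver advertises
# free buffer space, and the sender never transmits without a credit.
class Receiver:
    def __init__(self, buffer_slots):
        self.credits = buffer_slots       # free buffer slots to advertise

    def advertise(self):
        return self.credits               # credit update sent to the sender

    def accept(self, pkt):
        assert self.credits > 0, "sender violated flow control"
        self.credits -= 1                 # packet now occupies a buffer slot

    def drain(self):
        self.credits += 1                 # application consumed a packet

class Sender:
    def __init__(self):
        self.credits = 0                  # no credits until receiver advertises

    def update_credits(self, value):
        self.credits = value

    def can_send(self):
        return self.credits > 0

    def send(self, rx, pkt):
        assert self.can_send()            # blocked when out of credits
        self.credits -= 1
        rx.accept(pkt)
```

Because the sender stops on its own when credits run out, packets are never dropped for lack of buffer space, which is the property the paragraph above describes.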
CRC Check - Two CRCs provide integrity checking: a 16-bit variant CRC (VCRC) covers the packet and is re-calculated at each IBA fabric hop, while a 32-bit invariant CRC (ICRC) protects the fields that do not change between IBA hop points, providing end-to-end coverage.
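The two-CRC split can be illustrated as below. The polynomials and field contents here are illustrative stand-ins, not the exact CRC definitions or field coverage from the IBA specification (a generic CRC-16-CCITT and zlib's CRC-32 are used for demonstration):

```python
import zlib

# Generic CRC-16-CCITT (poly 0x1021, init 0xFFFF) as a stand-in for the
# per-hop variant CRC; zlib.crc32 stands in for the end-to-end invariant CRC.
def crc16_ccitt(data, poly=0x1021, crc=0xFFFF):
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

invariant_fields = b"source-gid|dest-gid|payload"  # unchanged hop to hop
mutable_fields = b"hop-limit=63"                   # rewritten at each hop

icrc = zlib.crc32(invariant_fields)                    # computed once, end to end
vcrc = crc16_ccitt(mutable_fields + invariant_fields)  # recomputed at every hop
```

When a switch rewrites the mutable fields (e.g. decrements the hop limit), the variant CRC must be recomputed, but the invariant CRC still validates the static fields at the final destination.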
Subnet Management - InfiniBand's network layer provides for the routing of packets from one subnet to another. Each routed packet features a Global Route Header (GRH) and a 128-bit IPv6 address for both source and destination nodes. The network layer also embeds a standard 64-bit unique global identifier for each device along all subnets. With consistent handling of these identifier values, the IBA network allows for data traversal across multiple logical subnets.
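The addressing scheme above can be sketched in a few lines: a 128-bit IPv6-format global identifier formed from a 64-bit subnet prefix and the device's 64-bit unique identifier. The prefix and identifier values below are made-up examples, and the simple high/low split is an illustrative layout, not the spec's exact bit definitions:

```python
import ipaddress

# Illustrative values: a 64-bit subnet prefix plus a 64-bit device GUID
# packed into a single 128-bit IPv6-format global identifier.
subnet_prefix = 0xFE80000000000000   # example 64-bit subnet prefix
guid = 0x0002C9FFFE123456            # example 64-bit unique global identifier

gid = ipaddress.IPv6Address((subnet_prefix << 64) | guid)
print(gid)                           # fe80::2:c9ff:fe12:3456
```

Because the 64-bit identifier rides in the low half of the address, a router can recover it from any routed packet regardless of which subnet the device sits on.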
IBA allows for multiple connection paths scaling up to 30 Gigabit per second in performance. Since IBA's full-duplex serial links require only four wires per lane, a high-speed 12x implementation requires only a moderate 48 wires (or pathways). This spec is quite impressive, especially as compared to the 90-pin design of the PCI-X architecture when utilized for backplane connectivity. Other specifications of the IBA physical layer include provisions for custom backplane I/O connectors and hot-swapping capabilities. For cost efficiency, IBA at the network interconnect level will rely on current off-the-shelf copper twisted-pair and fiber-optic cabling technologies.
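The wire-count arithmetic above follows from one fact: each full-duplex serial lane needs four wires, one differential pair in each direction. A trivial sketch:

```python
# Each full-duplex lane = 2 differential pairs = 4 wires.
WIRES_PER_LANE = 4

def wires_needed(width):
    """Wire count for a 1x/4x/12x InfiniBand link."""
    return width * WIRES_PER_LANE

for width in (1, 4, 12):
    print(f"{width}x link: {wires_needed(width)} wires")
```

Even the widest 12x link at 48 wires stays well under the 90-pin PCI-X backplane footprint the text compares against.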
The link and transport layers of InfiniBand are perhaps the most important aspect of this new interconnect standard. At the packet communications level, two specific packet types are specified, for data transfer and for network management. The management packets provide operational control over device enumeration, subnet direction, and fault tolerance. Data packets transfer the actual information, with each packet carrying a maximum of four kilobytes of transaction information. Within each device subnet, packet direction and switching are directed by a Subnet Manager using 16-bit local identifier addresses.
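The data-packet constraints above can be sketched as a simple packetizer: a message is split into packets of at most four kilobytes of payload, each addressed by 16-bit local identifiers. The function and field names are illustrative, not from the specification:

```python
# Sketch: split a message into data packets obeying the 4 KB maximum
# payload, addressed by 16-bit local identifiers (LIDs) within a subnet.
MAX_PAYLOAD = 4096   # bytes of transaction data per packet
MAX_LID = 0xFFFF     # 16-bit local identifier space

def packetize(payload, src_lid, dst_lid):
    assert 0 <= src_lid <= MAX_LID and 0 <= dst_lid <= MAX_LID
    return [
        {"src": src_lid, "dst": dst_lid,
         "data": payload[i:i + MAX_PAYLOAD]}
        for i in range(0, len(payload), MAX_PAYLOAD)
    ]

pkts = packetize(b"x" * 10000, src_lid=0x0001, dst_lid=0x0004)
# 10000 bytes -> three packets: 4096 + 4096 + 1808 bytes
```

The subnet's switches need only the 16-bit destination LID in each packet to forward it; the Subnet Manager assigns those LIDs when it enumerates the subnet.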
IBTracer 4X - Fully Featured InfiniBand Analysis System