Chapter 3. Compute Module

This chapter describes the function and physical components of the compute module, the possible system configurations, and the technical specifications for this module.

The SGI Onyx 350 system uses two types of compute modules, as follows:

Base compute module. This module is your system's primary compute module where your system's operating system resides. (Every system must have a base compute module.) The base compute module provides processors, memory, and PCI/PCI–X slots to connect I/O devices. It also comes standard with a factory–installed SCSI disk drive, a PCI 4-port USB card, an IO9 PCI card, and an internal serial daughter card that provides various I/O ports to your system.

System expansion compute module. This module, in contrast to the base compute module, comes with processors, memory, and PCI/PCI–X slots, but the SCSI disk drive(s) and IO9 card are optional.


Note: In this chapter, the term “compute module” refers to both types of compute modules. Keep in mind that some of the features that are standard for the base compute module are optional for the system expansion compute modules. When information is applicable to only one of the two types of modules, that will be specified.


System Features

A single 2U base compute module can connect directly to an InfiniteReality Onyx 350 graphics pipe; or it can be rackmounted with other optional modules to create an Onyx 350 system with more functionality. The base compute module consists of 2 or 4 64-bit MIPS RISC (reduced instruction set computer) processors and from 1 to 8 GB of local memory available on two to eight dual inline memory modules (DIMMs). An optional read-only CD/DVD drive is available in any compute module that has an IO9 installed.

This base compute module can also be combined with one or more of the following optional modules to expand the function of the system:

  • The system expansion compute module, which is interconnected to the base compute module via a NUMAlink 3 cable, adds processors, memory, and four PCI and PCI-X card slots to your system. It may or may not include an IO9 card; if it does, the IO9 card occupies the lowermost PCI/PCI–X slot. (The combined single system created by connecting the base compute module with a system expansion compute module can include 4, 6, or 8 processors with up to 16 GB of local memory.)

  • The 4U PCI expansion module adds PCI slots, but no processors, no memory, and no IO9 card. There are two versions of the PCI expansion module: one module has 12 PCI slots that support 3.3-V or universal PCI cards, and the other module has 6 PCI slots that support 5-V or universal PCI cards and 6 slots that support 3.3-V or universal PCI cards. For more information about this module, see the PCI Expansion Module User's Guide (5.0-V Support and/or 3.3-V Support), 007-4499-00x.

  • The 2U memory and PCI expansion (MPX) module can provide extra memory and four PCI/PCI-X card slots to your system. See Chapter 6, “Memory and PCI Expansion Module” for details about this module.

  • The TP900 storage module can provide additional storage to the system. See the SGI Total Performance 900 Storage System User's Guide (007-4428-00x) for details about this module. The Onyx 350 system also supports RAID and other optional mass storage.

  • The NUMAlink module connects two to eight compute modules. See “NUMAlink Module” in Chapter 1 for details about this module.

Figure 3-1 shows front panel and side views of the compute module.

Figure 3-1. Front and Side Views of a Compute Module


The compute module includes the following features:

  • An L1 controller to manage and monitor functions of the compute module such as system temperature. The module includes an L1 controller display that shows system processes and error messages.

  • An optional internal read-only slim–line CD/DVD–ROM drive, and 1 or (optionally) 2 hard disk drives.

  • Up to 2 power supplies. The second power supply, which is optional, is a redundant supply.

  • 1 NUMAlink 3 port to connect your system to a system expansion compute module, an MPX module, or a 4U PCI expansion module.

  • 1 Crosstown2 XIO port that enables the module to connect to an InfiniteReality graphics pipeline.

  • 4 PCI/PCI–X card slots on two buses. These are 64-bit slots that can house 33-MHz and 66-MHz PCI cards, or 66-MHz and 100-MHz PCI–X cards. Note that a PCI–X card runs at full speed only when the other card on the same PCI bus runs at the same speed; each bus runs only as fast as the slowest card installed.

  • Your system's primary “base” compute module comes standard with an IO9 PCI card installed in the lowermost PCI slot (slot 1) of the module. Inclusion of the IO9 limits its bus speed to 66 MHz. Note that the optional internal CD/DVD drive supported by the IO9 is a read-only device.


    Note: For I/O expandability, the compute module can connect to a peer-attached PCI expansion module, which adds 12 PCI slots to your system.


  • 2 DB–9 serial ports. One, labeled L1 console port (console and diagnostic port), enables you to connect a system console to the L1 controller on the compute module. The second serial port, labeled Serial port 0, connects serial devices to the compute module.

  • 1 type B USB (Universal Serial Bus) L1 port that is used to connect the compute module to an L2 controller.

  • A factory-installed serial daughtercard that includes 2 PS/2 connectors and 3 DB9 serial ports to connect RS–232/RS–422 serial devices to the system.

  • An IO9 card that provides the following connectors and functions to your compute module:

    • A real-time interrupt input (RTI) port and a real-time interrupt output (RTO) port.

    • One 10/100/1000 BaseT Ethernet port.

    • A 68-pin VHDCI Ultra3 SCSI connector. The IO9 card supports two internal SCSI disk drives that have a peak data transfer speed of up to 160 MB/s between the disks and system memory. (For storage expandability, the compute module can connect to a 2U 8-disk Ultra3/160 SCSI JBOD TP900 storage system.)

Table 3-1 compares Onyx 300 systems with Onyx 350 systems.

Table 3-1. Comparing Onyx 300 and Onyx 350 Systems

System Feature | Onyx 300 Base Module | Onyx 300 Expansion Module | Onyx 350 Base Module | Onyx 350 Expansion Module
MIPS RISC processors | 2 or 4 | 2 or 4 | 2 or 4 | 2 or 4
Memory | 1 GB to 4 GB | 1 GB to 4 GB | 1 GB to 8 GB | 1 GB to 8 GB
I/O expansion slots | 2 64-bit slots for 33-MHz or 66-MHz PCI cards | 2 64-bit slots for 33-MHz or 66-MHz PCI cards | 1 64-bit slot for 33/66-MHz PCI or 66/100-MHz PCI–X cards; 1 slot for 33/66-MHz PCI only [a] | 1 64-bit slot for 33/66-MHz PCI or 66/100-MHz PCI–X cards
Serial ports | 2 DB-9 RS-232 or RS-422 serial ports | 2 DB-9 RS-232 or RS-422 serial ports | 4 DB-9 RS-232 or RS-422 serial ports | 1 port
L1 console port (DB–9 serial; connects a console to the module) | 1 | 1 | 1 | 1
3.5-inch drive bays | 2 | 2 | 2 | 2 (with optional IO9 board)
CD/DVD (read-only) | None (external option available) | None (external option available) | One (optional) | One (with optional IO9 board)
USB type A ports for keyboards and mice (optional daughtercard) | 2 | 2 | 4 | Optional
PS/2 ports | None | None | 2 | None
USB L1 port (type B; connects the module to an L2 controller) | 1 | 1 | 1 | 1
NUMAlink port | 1 | N/A (used to link with base module or NUMAlink module) | 1 | N/A (used to link with base module or NUMAlink module)
XIO port | 1 | 1 | 1 | 1
Power supplies | 1 | 1 | 1 (optional second supply available) | 1 (optional second supply available)
Ethernet port | 1 10/100BaseT port | 1 10/100BaseT port | 1 standard 10/100/1000BaseT port | 1 optional 10/100/1000BaseT port [b]
SCSI channel (internal) | 1 Ultra3 SCSI, 160 MB/s | 1 Ultra3 SCSI, 160 MB/s | 1 Ultra3 SCSI, 160 MB/s | 1 optional Ultra3 SCSI
SCSI channel (external) | 1 Ultra3 SCSI (VHDCI) | 1 Ultra3 SCSI (VHDCI) | 1 Ultra3 SCSI (VHDCI) | 1 Ultra3 SCSI (optional with IO9)
RT interrupt input port | 1 | 1 | 1 | 1 (with IO9)
RT interrupt output port | 1 | 1 | 1 | 1 (with IO9)

[a] The fourth (bottom-most) slot is used for the factory-installed IO9 card only. The slot next to it is limited to 64-bit 33-MHz or 66-MHz PCI cards.

[b] The additional Ethernet, SCSI, and RT interrupt connectors are available only if the expansion compute module includes an optional IO9 card.


Compute Module Architecture

The compute module architecture includes the components shown in Figure 3-2, which are discussed in the following subsections.

IP53 Node Board

The IP53 node board consists of the following components:

  • Up to four processors (labeled CPU in Figure 3-3).

  • Primary and secondary (L2) cache. The primary cache is internal to the processor. The L2 cache is labeled SRAM in Figure 3-3.

  • Local memory (DIMMs).

  • Bedrock ASIC.

    Figure 3-3. IP53 Node Board


Processors (CPUs)

The 64-bit system processors are soldered to the IP53 node board. Each processor implements the 64-bit MIPS IV instruction set architecture. It fetches and decodes four instructions per cycle and issues the instructions to five fully pipelined execution units. It predicts conditional branches and executes instructions along the predicted path.

The processor also uses a load/store architecture in which the processor does not operate on data that is located in memory; instead, it loads the memory data into its registers and then operates on the data. When the processor is finished manipulating the data, the processor stores the data in memory.

Primary and Secondary Cache

To reduce memory latency, a processor has access to two on-chip 32-KB L1 (primary) caches (one cache is for data and the other cache is for instructions) and an off-chip L2 (secondary) cache. The L1 caches are located within the processor for fast, low-latency access to instructions and data. The base compute module supports a 4-MB L2 cache.


Note: The IP53 node boards use SECDED ECC to protect data when transferred to/from secondary cache, main memory, and directory memory.

The IP53 node boards use parity to protect data when transferred between a processor and primary cache and to protect system commands sent between the Bedrock ASIC and a processor.

Local Memory (DIMMs)

Each compute module has from 1 to 8 GB of local memory, which includes main memory and directory memory for cache coherence. Local memory is provided by DIMMs, which contain double data rate synchronous dynamic random-access memory (DDR SDRAM chips), installed in two or more DIMM slots located on the compute module.

These eight DIMM slots are laid out into one group of even–numbered slots 0, 2, 4, and 6 and a second group of odd-numbered slots 1, 3, 5, and 7, as shown in Figure 3-4.

DIMMs are installed or removed two at a time, one per DIMM slot, so that the pair of DIMMs adds or removes local memory for the same pair of banks. For example, you could install a DIMM in slot 0 and another in slot 1 to provide local memory for banks 0 and 1; conversely, removing the DIMMs from slots 0 and 1 removes local memory from banks 0 and 1.

The two DIMMs that compose a bank pair must be the same size; however, the bank pairs can differ in size.
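The pairing rules above (DIMMs installed two at a time, matching sizes within a pair, pairs allowed to differ from one another) can be sketched as a small configuration check. This is an illustrative sketch only, not SGI software; the function name and dictionary representation are invented for this example, and the requirement that the first pair occupy slots 0 and 1 follows Table 3-3.

```python
# Illustrative sketch (not SGI software) of the DIMM pairing rules.
# slots: dict mapping occupied slot number (0-7) to DIMM size in GB.

def valid_dimm_config(slots):
    """Check a DIMM configuration against the pairing rules."""
    for even in (0, 2, 4, 6):
        pair = (slots.get(even), slots.get(even + 1))
        if (pair[0] is None) != (pair[1] is None):
            return False          # DIMMs must be installed two at a time
        if pair[0] is not None and pair[0] != pair[1]:
            return False          # both DIMMs in a pair must be the same size
    return 0 in slots and 1 in slots  # first pair goes in slots 0 and 1

print(valid_dimm_config({0: 0.5, 1: 0.5, 2: 1, 3: 1}))  # True: pairs may differ in size
print(valid_dimm_config({0: 0.5, 1: 1}))                 # False: mismatched pair
```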

Figure 3-4. Local Memory Layout


Table 3-2 lists the DIMM sizes that IP53 node boards support.

Table 3-2. Memory DIMM Specifications

DIMM Capacity | Chip Capacity | Total Memory Capacity
512 MB | 128 MB | 2 DIMMs (1 bank pair): 1 GB; 8 DIMMs (4 bank pairs): 4 GB
1 GB | 256 MB | 2 DIMMs (1 bank pair): 2 GB; 8 DIMMs (4 bank pairs): 8 GB
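The capacities in Table 3-2 follow from simple arithmetic: each bank pair holds two identical DIMMs, and the IP53 node board has up to four pairs. A minimal sketch of that arithmetic (illustrative only; the function name is invented for this example):

```python
# Illustrative sketch (not SGI software): total local memory as a function of
# DIMM capacity and the number of installed bank pairs, per Table 3-2.

def total_memory_gb(dimm_capacity_gb, bank_pairs):
    """Each bank pair holds two identical DIMMs; up to 4 pairs (8 slots)."""
    if not 1 <= bank_pairs <= 4:
        raise ValueError("the IP53 node board supports 1 to 4 bank pairs")
    return 2 * dimm_capacity_gb * bank_pairs

print(total_memory_gb(0.5, 1))  # 1.0 GB (two 512-MB DIMMs)
print(total_memory_gb(1, 4))    # 8 GB (eight 1-GB DIMMs)
```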


Bedrock ASIC

The Bedrock ASIC enables communication among the processors, memory, network, and I/O devices. It controls all activity within the node board (for example, error correction and cache coherency). The Bedrock ASIC also supports page migration.

The Bedrock ASIC consists of the following:

  • 1 central crossbar (XB) provides connectivity between the Bedrock ASIC interfaces.

  • 2 processor interfaces (PI_0 and PI_1). Each processor interface communicates directly with two processors. When the node board contains two processors, only one processor interface is used.

  • 1 memory/directory interface (MD) controls all memory access.

  • 1 network interface (NI) is the interface between the crossbar unit and the NUMAlink 3 interconnect.

  • 1 I/O interface (II) allows I/O devices to read and write memory (direct memory access [DMA] operations) and allows the processors within the system to control the I/O devices (PIO operations).

  • 1 local block (LB) services processor I/O (PIO) requests that are local to the Bedrock ASIC.

IO9 Card

The IO9 PCI card, which resides in bus 1, slot 1 (the lowermost slot) of the base compute module, provides the base I/O functionality for the system.


Note: The expansion compute module can be ordered with an IO9 PCI card. This card resides in bus 1, slot 1.

The IO9 PCI card has the following connectors:

  • External VHDCI 68-pin SCSI connector.

  • 1 10/100/1000BaseT Ethernet connector.

  • 1 real-time interrupt output (RTO) connector and 1 real-time interrupt input (RTI) connector.

The IO9 card also contains an IOC-4 ASIC that supports the following features:

  • 1 (internal only) IDE channel for the optional CD/DVD-ROM drive.

  • 4 serial ports.

  • 2 PS/2 ports.


    Note: The PS/2 ports and three serial ports are located on a daughtercard.


  • NVRAM and time-of-day clock.

Interface Board with a Daughtercard

The interface board contains the following components:

  • L1 controller logic.

  • Power supply interface.

  • IO9 expansion connectors that connect to the serial daughtercard, which contains DB-9 connectors (serial ports) and DIN-6 connectors (PS/2 ports).

  • NUMAlink connector.

  • XIO connector.

  • Voltage regulator modules (VRMs).

  • Connectors to the IP53 node board and the PCI riser card.

PCI Riser Card

The PCI riser card provides the following:

  • PCI ASIC.

  • A connector that connects the PCI riser card to the IP53 motherboard.

  • A connector that connects the IP53 motherboard with the IO9 card, and a 50-pin AMP connector that connects to the IO9 card.

  • 1 nonstandard PCI/PCI–X connector that connects to the IO9 card.

  • 4 PCI/PCI–X card slots (64 bit, 3.3 V) and a slot for an InfinitePerformance graphics board or optional digital media boards. (The slot for the graphics/digital media board is located on the backside of the PCI riser card.)

DVD–ROM

The compute module can contain an optional slim-line DVD-ROM that also has CD-ROM capabilities.


Note: The CD/DVD-ROM is a read-only unit that requires an IO9 PCI card.

The CD/DVD-ROM is located at the front left side of the module (above the disk drives).

Disk Drives

The base compute module supports one or two sled-mounted Ultra3 SCSI disk drives that have a peak data transfer speed of up to 160 MB/s between the disks and system memory. The two disks connect to a SCSI backplane. The SCSI backplane connects to the internal SCSI 160 logic on the IO9 PCI card.


Note: An expansion compute module can also be ordered with SCSI disk drives. This configuration requires an IO9 PCI card.

The system supports multiple disk drive storage capacities; both 10,000-RPM and 15,000-RPM drives are available.

The disk drives are located at the front left side of the module (below the optional CD/DVD-ROM location). The master (standard) drive is the bottom drive.

Power Supplies

The base compute module, the expansion compute module, and the MPX module can each contain one or two power supplies; the second power supply is optional. Each power supply accepts 110–220 VAC input and outputs 500 W (12 VDC, 5 VDC, and 3.3 VDC).

Power supplies are hot–swappable only when two units are installed and working in a module. They are located at the front right side of the module. The primary power supply is the left supply and the optional second power supply installs in the right side of the power bay.

External Components

This section describes the external components of the compute module, which are located in the front and rear panels.

Front Panel Items

This section describes the front panel controls and indicators of the compute module, as shown in Figure 3-5. Note the need for a paper clip to actuate the reset or NMI functions.

Figure 3-5. Front Panel Items


The front panel of the module has the following items:

  • L1 controller display. A liquid crystal display (LCD) that displays status and error messages that the L1 controller generates.


    Note: See the SGI L1 and L2 Controller Software User's Guide (007-3938-00x) for more information on the L1 controller.


  • Power button with LED. Press this button to power on the internal components. Alternatively, you can power on the internal components using an optional system console. The LED illuminates green when the internal components are on.

  • Reset. Actuate this switch (with the end of a paper clip) to reset the internal processors and ASICs. The reset will cause a memory loss. (See the NMI switch information that follows to perform a reset without losing memory.)

  • NMI switch. Actuate the NMI (non-maskable interrupt) switch (with the end of a paper clip) to reset the internal processors and ASICs without losing memory. Register data and memory are stored in a /var/adm/crash file.

  • Service-required LED. This LED illuminates yellow to indicate that an item has failed or is not operating properly, but the compute module is still operating.

  • Failure LED. This LED illuminates red to indicate that a failure has occurred and that the module is down.

  • Drive LEDs. These LEDs illuminate green to indicate drive activity.

Rear Panel Items

This section describes the rear panel connectors, PCI/PCI–X slots, and LEDs of the base module, as shown in Figure 3-6.

Figure 3-6. Rear Panel Items


The rear panel of the compute module has the following items:

  • Power connector. This connector connects the base compute module to an AC power outlet.

  • Serial port 0. This DB9 RS-232/RS-422 serial port connects a serial device to the compute module.

  • L1 console port.  This DB–9 serial port (console and diagnostic port) enables you to connect a system console to the L1 controller on the compute module.

  • L1 port (USB type B). This universal serial bus (USB) type B connector connects the compute module's L1 controller to an optional L2 controller.

  • LINK connector. This NUMAlink 3 connector (labeled NI) connects the base compute module to an expansion compute module (a second module, with or without an IO9 card) or to a NUMAlink module. This connection is made with a NUMAlink 3 cable at 1.6 GB/s in each direction.

    • NUMAlink 3 LED. The NUMAlink 3 connector has 2 LEDs. One LED lights yellow to indicate that the compute module and the expansion compute module or NUMAlink module (router) to which it is connected are powered on. The other LED (located to the right of the NUMAlink 3 connector) lights green when the link between the compute module and the module to which it is connected is established.

  • XIO connector. This Crosstown2 connector (labeled II) connects the base compute module to a PCI expansion module or InfiniteReality graphics pipeline. This connection is made with a NUMAlink 3 cable at 800 MB/s in each direction.

    • XIO connector LEDs. The XIO connector has 2 LEDs. One LED lights yellow to indicate that both the compute module and the PCI expansion module or InfiniteReality graphics pipeline to which the compute module is connected are powered on. The other LED lights green when the compute module link to the PCI expansion module or graphics pipeline is established.

  • PCI/PCI–X slots 1, 2, 3, and 4. 2 of these slots are on one bus, and 2 slots are on another. These 64-bit slots can house 33-MHz and 66-MHz PCI cards, or 66-MHz and 100-MHz PCI–X cards. (See SGI Supportfolio at http://support.sgi.com for an updated list of supported cards.) The bottom-most slot houses an IO9 PCI card.


    Note: If you run PCI and PCI–X cards on the same bus at the same time, the PCI–X card will run in PCI mode. If you run cards of different speeds on the same bus, the higher-speed card will run at the speed of the slower card. For example, if a 100-MHz card is installed in one slot of a bus and a 33-MHz card is installed in the second slot of the same bus, both cards will run at 33 MHz.
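The bus-speed rule in the note above can be sketched as follows. This is an illustrative sketch only, not part of any SGI software; the function name and the (mode, speed) tuple representation are invented for this example.

```python
# Illustrative sketch (not SGI software) of the shared-bus rule: a PCI-X card
# drops to PCI mode if any plain PCI card shares its bus, and every card on a
# bus runs at the speed of the slowest card installed.

def effective_bus_speed(cards):
    """cards: list of (mode, speed_mhz) tuples for the cards on one bus.
    Returns the (mode, speed_mhz) at which all cards on that bus run."""
    if not cards:
        return None
    # If any card on the bus is plain PCI, PCI-X cards fall back to PCI mode.
    mode = "PCI" if any(m == "PCI" for m, _ in cards) else "PCI-X"
    # The bus clock is limited by the slowest card installed.
    speed = min(s for _, s in cards)
    return (mode, speed)

# The example from the note: a 100-MHz card sharing a bus with a 33-MHz card.
print(effective_bus_speed([("PCI-X", 100), ("PCI", 33)]))  # ('PCI', 33)
```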


The factory-installed serial daughtercard provides the following connectors:

  • Two PS/2 ports.

  • Serial ports 2, 3, and 4. These 3 DB9 RS-232/RS-422 serial ports are used to connect serial devices to the compute module.

The factory-installed IO9 card provides the following connectors:

  • RT interrupt input and output. RTO (output) enables the compute module to interrupt an external device. RTI (input) enables an external device to interrupt the compute module.

  • Ethernet port (10/100/1000 MB).  This autonegotiating 10BaseT/100BaseT/1000BaseT twisted-pair Ethernet port connects the compute module to an Ethernet network.

  • SCSI connector. This 68-pin VHDCI external SCSI port, which is internally connected to a second internal SCSI disk drive, enables you to connect SCSI devices to the compute module. See SGI Supportfolio at http://support.sgi.com for an updated list of supported SCSI devices.

The factory-installed USB PCI card provides the following:

  • 4 USB ports. The card provides USB connectors for keyboard/mouse use.

Internal Components and Features

The internal components of the compute module are described in the following sections:

IP53 Motherboard

The IP53 motherboard houses the following components:

  • 2 or 4 MIPS RISC processors (each processor has a secondary (L2) cache).

  • 8 dual inline memory module (DIMM) slots, in which installed DIMMs provide 1 to 8 GB of local memory in bank pairs. See “Dual Inline Memory Modules (DIMMs)” for detailed information on DIMMs.

  • PIC ASIC (application-specific integrated circuit) is the interface between the Bedrock ASIC and the PCI/PCI–X slots.

  • Bedrock ASIC (or hub ASIC) enables communication between the processors, memory, and I/O devices.

  • Serial ID EEPROM contains component information.

  • L1 controller logic monitors and controls the environment of the compute module (for example, fan speed, operating temperature, and system LEDs). See the SGI L1 and L2 Controller Software User's Guide (007-3938-00x) for more information on the L1 controller.

  • 5 VRMs that convert the incoming voltages to the voltage levels required by the components.

  • Light-emitting diodes (LEDs) provide information about the NUMAlink port and the XIO interface connectors as follows:

    • 2 NUMAlink 3 LEDs, controlled by the L1 controller

    • 2 XIO LEDs, controlled by the L1 controller


      Note: Ports and LEDs are described in detail in “Rear Panel Items”.


Dual Inline Memory Modules (DIMMs)

Each compute module has from 1 to 8 GB of local memory, which includes main memory and directory memory for cache coherence.

Local memory is provided by DIMMs, which contain double data rate synchronous dynamic random-access memory (DDR SDRAM) chips, installed in two or more DIMM slots located on the compute module.

These eight DIMM slots are laid out into one group of even–numbered slots 0, 2, 4, and 6 and a second group of odd-numbered slots 1, 3, 5, and 7, as shown in Figure 3-7.

DIMMs are installed two at a time, one per DIMM slot, so that the two DIMMs installed provide local memory for the same pair of banks. Table 3-3 lists the DIMM slots and the corresponding bank pairs to which local memory is provided when DIMMs are installed:

Table 3-3. DIMMs and Bank Pairs

DIMM in Slot Number | Provides Local Memory for Bank Pair Numbers
0 [a] | 0 and 1
1 | 0 and 1
2 | 2 and 3
3 | 2 and 3
4 | 4 and 5
5 | 4 and 5
6 | 6 and 7
7 | 6 and 7

[a] The first two DIMMs must be installed in DIMM slot 0 and DIMM slot 1.
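The slot-to-bank-pair mapping in Table 3-3 reduces to simple arithmetic on the slot number: each even/odd slot pair serves the like-numbered pair of banks. A minimal sketch (illustrative only; the function name is invented for this example):

```python
# Illustrative sketch (not SGI software) of the Table 3-3 mapping: slots 0 and
# 1 serve banks 0 and 1, slots 2 and 3 serve banks 2 and 3, and so on.

def bank_pair(slot):
    """Return the pair of local memory banks served by a DIMM slot (0-7)."""
    if slot not in range(8):
        raise ValueError("the compute module has DIMM slots 0 through 7")
    base = (slot // 2) * 2       # an even/odd slot pair maps to the same banks
    return (base, base + 1)

for slot in range(8):
    print(slot, bank_pair(slot))  # reproduces the rows of Table 3-3
```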

Figure 3-7. Layout of DIMM Slots and Local Memory Banks


IO9 Card

The IO9 card provides I/O interface functions, the I/O connectors to the system backpanel, and the L1 controller functions.

The IO9 card has the following connectors:

  • 1 internal and 1 external 68-pin VHDCI SCSI port connector.

  • 1 10BaseT/100BaseT/1000BaseT auto-selecting Ethernet connector.

  • 1 real-time interrupt output (RTO) port and 1 real-time interrupt input (RTI) port.


    Note: Ports and LEDs are described in detail in “Rear Panel Items”.


SCSI Backplane Board and Disk Drive Options

The SCSI backplane provides a connection point between the internal SCSI interface cable connected to the IO9 board and up to two disk drives. The SCSI backplane supports Ultra3 SCSI LVD disks with a peak transfer rate of 160 MB/s. The chassis accommodates up to two sled-mounted 3.5-inch by 1-inch Ultra3 SCSI LVD drives. The system supports both 10,000-RPM and 15,000-RPM disk drives.

See SGI Supportfolio at http://support.sgi.com for an updated list of supported drives.

System Configuration

This section lists the internal compute module configuration options, such as the number of DIMMs that can be installed in the compute module to increase its local memory.

This section also lists external compute module configuration options that can enhance the performance of the Onyx 350 system. For example, the compute module can connect to a 2U TP900 storage system to expand storage, or it can connect to a PCI expansion module to increase I/O capabilities.

Internal Configurations

PCI and PCI–X cards, disk drives, power supplies, and memory (DIMMs) are the configurable internal components of the compute module.

Processor upgrades can only be installed by trained SGI system support engineers (SSEs).

As a customer, you can configure PCI and PCI-X cards, disk drives, and memory. Chapter 4, “Installing and Removing Customer-replaceable Units,” provides instructions for installing and removing these items to reconfigure your module.


Warning: To prevent personal injury, or damage to your system, only trained SGI system support engineers (SSEs) can service or configure internal components of the compute module that are not specifically listed as serviceable and configurable by customers.


External Configurations

The base compute module can be configured with the following optional items to expand its function:

  • The system expansion compute module, which is interconnected to the base module via a NUMAlink 3 cable, adds processors, memory, and 4 PCI/PCI-X card slots. It may or may not include an IO9 card. (If you combine the base compute module with the system expansion compute module, you can create a single system that includes 4, 6, or 8 processors, with up to 16 GB of local memory, and seven PCI/PCI–X card slots.)

  • The 4U PCI expansion module adds PCI slots, but no processors, no memory, and no IO9 card. There are two versions of the PCI expansion module: one module has 12 PCI slots that support 3.3-V or universal PCI cards, and the other module has 6 PCI slots that support 5-V or universal PCI cards and 6 slots that support 3.3-V or universal PCI cards. For more information about this module, see the PCI Expansion Module User's Guide (5.0-V Support and/or 3.3-V Support), 007-4499-00x.

  • The optional 2U memory and PCI/PCI-X expansion (MPX) module provides extra memory and 4 PCI/PCI-X card slots to your system. See Chapter 6, “Memory and PCI Expansion Module” for details about this module.

  • The TP900 storage module provides additional storage to the system. See SGI Total Performance 900 Storage System User's Guide, 007-4428-00x, for details about this module. The Onyx 350 system supports optional RAID and other storage modules. See “Optional System Components” in Chapter 1 for additional detail.

  • The NUMAlink module connects two or more compute modules. See “NUMAlink Module” in Chapter 1 for more information about this optional module.

  • If your system uses InfiniteReality (IR) graphics, you may expand the number of graphics pipes by adding 1 or more IR graphics modules. See Chapter 5, “InfiniteReality Graphics Module” for more information on the IR graphics module.

The Onyx 350 system can be configured many different ways to satisfy your computing needs. This section shows two sample configurations.

Figure 3-8 shows an Onyx 350 system rackmounted in a 17U rack that includes the following items:

  • A 2U base compute/graphics module with up to 4 processors, 8 GB of local memory, and 3 PCI/PCI–X card slots (the fourth, lowermost slot comes with a factory-installed IO9 PCI card).

  • A 4U PCI expansion module (plus power bay) that adds 12 PCI card slots.

  • A TP900 module that adds optional mass storage capability.

    Figure 3-8. System with One Base Compute Module, One 4U PCI Expansion Module, and One TP900 Module


Figure 3-9 shows an Onyx 350 system rackmounted in a 17U rack that includes the following items:

  • A 2U base compute module with up to 4 processors, 8 GB of local memory, and 3 PCI/PCI–X card slots; the fourth (lowermost) slot comes with a factory-installed IO9 PCI card.

  • A 2U system expansion compute module that adds up to four processors, 8 GB of local memory, and 4 PCI/PCI–X card slots.

  • One 2U NUMAlink module for expanded system connectivity.

  • One 4U PCI expansion module that adds 12 PCI card slots.

  • One MPX expansion module.

    Figure 3-9. System with Base Compute Module, System Expansion Compute Module, MPX module, and 4U PCI Expansion Module


Bandwidth Specifications

Table 3-4 lists the bandwidth characteristics of the Onyx 350 compute module.

Table 3-4. Bandwidth Characteristics of the Compute Module

Characteristic | Peak Bandwidth | Sustainable Bandwidth
LINK channel | 3.2 GB/s full duplex (1.6 GB/s each direction) | ~1420 MB/s each direction
Xtown2 channel | 2.4 GB/s full duplex (1.2 GB/s each direction) | ~1066 MB/s half duplex; ~1744 MB/s full duplex (~872 MB/s each direction)
Main memory | 3200 MB/s | 3200 MB/s
SYSAD | 1600 MB/s | ~1400 MB/s

For additional system technical specifications, see Appendix A, “Technical Specifications and Pinouts”.