Front cover

IBM xSeries 440 Planning and Installation Guide

Describes the technical details of the x440 models
Helps you prepare for and perform an installation
Covers key IBM Director management tools

David Watts
Reza Fanaei Aghdam
Duncan Furniss
Jason King

ibm.com/redbooks

International Technical Support Organization

IBM eServer xSeries 440 Planning and Installation Guide

October 2002

SG24-6196-00

Note: Before using this information and the product it supports, read the information in
“Notices” on page vii.

First Edition (October 2002)
This edition applies to the IBM ^ xSeries 440, machine type 8687.
© Copyright International Business Machines Corporation 2002. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.

Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Chapter 1. Technical description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The x440 product line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 System partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 IBM XA-32 chipset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.1 Intel Xeon Processor MP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.2 Intel Xeon Processor DP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 SMP Expansion Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6 IBM XceL4 Server Accelerator Cache. . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.7 System memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.8 PCI subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.9 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.10 Light Path Diagnostics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.11 Remote Supervisor Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.12 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.13 IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Chapter 2. Positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1 xSeries 440 application solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.1.1 Server consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.1.2 Enterprise applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.1.3 Infrastructure applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.1.4 Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.2 Why choose the x440 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.1 IBM XA-32 chipset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.2 Intel Xeon MP and DP processors . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.2.3 XceL4 Server Accelerator Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.2.4 High-performance memory subsystem . . . . . . . . . . . . . . . . . . . . . . . 46
2.2.5 Active PCI-X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.2.6 XpandOnDemand scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.2.7 System Partition Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48


2.3 The benefits of system partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.4 Server consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.4.1 Types of server consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.4.2 Why consolidate servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.4.3 Benefits from server consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . 57

Chapter 3. Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.1 System hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.1.1 Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.1.2 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.1.3 PCI slot configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.1.4 Broadcom Gigabit Ethernet controller . . . . . . . . . . . . . . . . . . . . . . . . 72
3.2 Cabling and connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.2.1 SMP Expansion Module connectivity . . . . . . . . . . . . . . . . . . . . . . . . 74
3.2.2 Remote Supervisor Adapter connectivity . . . . . . . . . . . . . . . . . . . . . 77
3.2.3 Remote Expansion Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.2.4 Serial connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.3 Storage considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.3.1 xSeries storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.3.2 Disk subsystem performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.3.3 Tape backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.4 Server partitioning and consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.5 Operating system considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.5.1 Windows 2000 Datacenter Server . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.5.2 Microsoft Windows NT 4.0 Enterprise Edition . . . . . . . . . . . . . . . . . . 95
3.5.3 Microsoft Windows 2000 Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.5.4 Microsoft Windows .NET Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.5.5 Novell NetWare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.5.6 Red Hat/SuSE Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.5.7 VMware ESX Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.6 Application considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.6.1 Scalability and performance considerations . . . . . . . . . . . . . . . . . . 100
3.6.2 SMP and server types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.7 Rack installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.8 Power considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.9 Solution Assurance Review. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Chapter 4. Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.1 System BIOS settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.1.1 Updating BIOS and firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.1.2 Enabling memory mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.1.3 Enabling Hyper-Threading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.2 Device drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111


4.3 Operating system installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.3.1 Microsoft Windows 2000 Server and Advanced Server . . . . . . . . . 112
4.3.2 Red Hat Linux installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.3.3 NetWare installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.3.4 VMware ESX Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.4 Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Chapter 5. Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.1 Active PCI Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.1.1 Using Active PCI Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.1.2 Adding adapters to the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.1.3 Analyzing an existing configuration . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.2 System Partition Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
5.3 Process Control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5.3.1 Process alias rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5.3.2 Process execution rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.3.3 Group process execution rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
IBM Redbooks collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179


Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Active Memory™
Active™ PCI-X
Chipkill™
DB2®
Electronic Service Agent™
Enterprise Storage Server™
ESCON®
FlashCopy®
IBM®
Informix®
iSeries™
Memory ProteXion™
Netfinity®

PowerPC®
PowerPC 750™
Predictive Failure Analysis®
pSeries™
Redbooks(logo)™
RETAIN®
S/390®
ServeRAID™
ServerProven®
SP™
SP1®
SP2®
ThinkPad®

Tivoli®
TotalStorage™
Wake on LAN®
WebSphere®
X-Architecture™
XA-32™
XceL4™
XpandOnDemand™
xSeries™
zSeries™

The following terms are trademarks of International Business Machines Corporation and Lotus Development
Corporation in the United States, other countries, or both:
Domino™

Lotus®

Notes®

The following terms are trademarks of other companies:
ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United
States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure
Electronic Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.


Preface
The IBM eServer xSeries 440 is IBM’s flagship industry-standard server and is
the first full implementation of the 32-bit IBM XA-32 chipset, code named
“Summit”, as part of the Enterprise X-Architecture strategy. The x440 provides
new levels of high availability and price performance, and offers scalability from
two-way to 16-way SMP, from 2 GB to 128 GB of memory, and up to 24 PCI slots,
all in a single system image.
This redbook is a comprehensive resource on the technical aspects of the server,
and is divided into five key subject areas:
򐂰 Chapter 1, “Technical description” introduces the server and its subsystems
and describes the key features and how they work.
򐂰 Chapter 2, “Positioning” examines the types of applications that would be
used on a server such as the x440, including server consolidation,
line-of-business applications, and infrastructure applications. It reviews the
features that make the x440 such a powerful system.
򐂰 Chapter 3, “Planning” describes the aspects of planning to purchase and
planning to install the x440. It covers such topics as configuration, operating
system specifics, scalability, and physical site planning.
򐂰 Chapter 4, “Installation” goes through the process of installing Windows 2000,
Red Hat Linux, NetWare, and VMware ESX Server. It describes what BIOS
and driver updates are appropriate and when to install them.
򐂰 Chapter 5, “Management” describes how to use the key IBM Director
extensions designed for the x440: System Partition Manager, Active PCI
Manager, and Process Control.
A partner redbook is Server Consolidation with the IBM eServer xSeries 440
and VMware ESX Server, SG24-6852.

The team that wrote this redbook
This redbook was produced by a team of specialists from around the world
working at the International Technical Support Organization, Raleigh Center.
David Watts is a Consulting IT Specialist at the International Technical Support
Organization in Raleigh. He manages residencies and produces IBM
Redbooks on hardware and software topics related to IBM xSeries systems
and associated client platforms. He has authored over 20 redbooks; his most
recent books include Integrating IBM Director with Enterprise Management
Solutions and Implementing IBM Director Management Solutions. He has a
Bachelor of Engineering degree from the University of Queensland (Australia)
and has worked for IBM for over 13 years. He is an IBM eServer Certified
Specialist for xSeries and an IBM Certified IT Specialist.
Reza Fanaei Aghdam is a Senior IT Specialist working in Zurich, Switzerland.
He has 10 years of experience in computer support, software, and
programming. He has a Bachelor of Computer Sciences degree from the
Fachhochschule Konstanz and a Bachelor of Information Management from the
University of Konstanz. His areas of expertise include xSeries servers, IBM
Director, IBM FAStT solutions, and database programming. He is a Microsoft
MCSE, Microsoft Certified Cluster Specialist, Novell MCNE, Citrix CCA, and an
IBM eServer Certified Expert for xSeries.
Duncan Furniss is an Advisory IT Specialist for IBM Canada, and is the senior
xSeries product specialist for western Canada. He has 14 years of professional
experience with Intel-based hardware, networking, and storage technologies,
more than 11 of them at IBM. His areas of expertise include systems design and
implementation, performance tuning, and systems management. He currently
writes, consults, and presents on these and related topics regularly in the course
of his work. He is an IBM eServer Certified Specialist for xSeries. He was
co-author of the redbook High Availability without Clustering.
Jason King is a Service Engineer working for W J Moncrieff in Perth, Western
Australia. He has seven years of experience working with xSeries and Netfinity
hardware. He is a Microsoft Certified Professional and an IBM eServer
Certified Specialist for xSeries. His areas of expertise include IBM xSeries
servers, Windows NT 4.0, Windows 2000, and IBM Director.


The team (l-r): David, Duncan, Reza, Jason

Thanks to the following people for their contributions to this project:
Alfredo Aldereguia, Lead Engineer, SS16 System Development, Raleigh
Kenny Bain, EMEA Advanced Technical Support, Greenock
Patrick de Broux, IT Consultant, ATS Product Introduction Centre, Hursley
Donn Bullock, Global Brand Manager, Enterprise X-Architecture, Raleigh
Alex Candelaria, Staff Engineer, Enterprise Support Group, Seattle
Michael Cannon, xSeries Sales & Technical Education, Raleigh
Mark Chapman, xSeries Marketing Communications, Raleigh
Henry Chung, Technical Project Manager, Datacenter Offerings, Seattle
Peter Escue, Americas Advanced Technical Support, Dallas
Dottie Gardner, Technical Project Manager, Information Development, Raleigh
Roger Hellman, xSeries Global Product Marketing Manager, Raleigh
Ron Humphrey, Technical Project Manager, Active PCI Manager, Seattle
Koichi Kii, Development Manager, Active PCI Manager, Seattle
Grace Lennil, IBM Center for Microsoft Technologies, Seattle
David A McIntosh, Technical Specialist, xSeries Techline, Greenock
John McAbel, World Wide Cluster Offering Product Manager, Beaverton
Gregg McKnight, Distinguished Engineer, xSeries Performance, Raleigh
Robert Moon, Team Lead, xSeries Techline, Greenock
Michael Parris, WW Technical Support Marketing, Raleigh
Kiron Rakkar, Manager, WebSphere Beta Programs, Raleigh
Paul Shaw, Active PCI Manager Development, Seattle
Gary Turner, Technical Project Manager, System Partition Manager, Seattle


Damon West, Course Developer, xSeries Education, Raleigh
Thanks also to the team that wrote the redbook Server Consolidation with the
IBM eServer xSeries 440 and VMware ESX Server, SG24-6852: Steve Russell,
Keith Olsen, Gabriel Sallah, and Chandrasekhara Seetharaman.

Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbook
dealing with specific products or solutions, while getting hands-on experience
with leading-edge technologies. You'll team with IBM technical professionals,
Business Partners and/or customers.
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you'll develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments
about this or other Redbooks in one of the following ways:
򐂰 Use the online Contact us review redbook form found at:
ibm.com/redbooks

򐂰 Send your comments in an Internet note to:
redbook@us.ibm.com

򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HZ8 Building 662
P.O. Box 12195
Research Triangle Park, NC 27709-2195


Chapter 1. Technical description
The IBM eServer xSeries 440 is the latest IBM top-of-the-range server and is
the first full implementation of the 32-bit IBM XA-32 chipset, code named
“Summit”, as part of the Enterprise X-Architecture strategy. The x440 provides
new levels of high availability and price performance, and offers scalability
beyond a single server.
The following are the key features of the x440:
򐂰 Two-way Intel Xeon processor MP models, upgradable to four-way and
eight-way
򐂰 Two-way Intel Xeon processor DP models, upgradable to four-way Xeon DP
or four-way (and beyond) Xeon MP
򐂰 Ability to connect two x440s together to form a single eight-way (4+4), 12-way
(4+8) or 16-way (8+8) SMP system image
򐂰 Physical system partitioning, controlled by IBM Director and the Remote
Supervisor Adapter, to consolidate servers or set up high-speed clustering
configurations
򐂰 4U rack-dense design
򐂰 32 MB XceL4 Server Accelerator Cache providing an extra level of cache
򐂰 2 GB or 4 GB RAM standard, up to 64 GB total using 2 GB ECC SDRAM
DIMMs


򐂰 Memory enhancement such as memory mirroring, Chipkill, and Memory
ProteXion
򐂰 Six Active PCI-X slots: two 64-bit 133 MHz, two 64-bit 100 MHz, two 64-bit
66 MHz
򐂰 Connectivity to an RXE-100 external PCI-X enclosure for an additional 12
PCI-X slots
򐂰 Integrated dual-channel Ultra160 SCSI controller
򐂰 Two hot-swap 1” drive bays
򐂰 Support for major storage subsystems, including Fibre Channel and
ServeRAID
򐂰 Light Path Diagnostics and the Remote Supervisor Adapter for systems
management
򐂰 Integrated 10/100/1000 Mbps Ethernet controller
The ability to connect multiple systems together and to partition them is the
implementation of the concept of XpandOnDemand.
XpandOnDemand represents the first industry-standard implementation of true
“pay-as-you-grow” servers. New levels of scalability are achieved using a
building-block design that makes growth more cost-effective. These technologies,
powered by the XA-32 chipset, provide scalability from two-way up to 16-way
systems using “scalable enterprise nodes” (each x440 being one such node)
and, optionally, one or more external remote I/O enclosures.
Each scalable enterprise node contains processors, memory, I/O support,
storage and other devices and operates as an independent system. Each node
may run a different operating system from the other nodes, or if desired multiple
nodes can be assigned to one operating system image via system partitioning.
Nodes are attached to one another through dedicated high-speed
interconnections, called SMP Expansion Ports. This offers the flexibility to run
several hardware nodes as either a single complex of nodes or as two or more
smaller units to support multiple operating systems and/or clustered
configurations. The nodes can even be rearranged later into other configurations,
as needed.

1.1 The x440 product line
The models of the x440 are being made available in stages throughout 2002
because the complexity of developing the new IBM XA-32 chipset, formerly
known by its code name “Summit”, has required additional development and
testing for the x440 beyond that required of other products. The additional
testing pertains directly to the complexity of multiple SMP configurations and the
time commitment required for testing the ServerProven list against each of these
configurations.
All of the capabilities of the x440, including 16-way SMP capability and remote
I/O sharing, were announced in March 2002, but as a result of this additional
configuration development and testing, the x440 configurations will be introduced
in multiple phases during 2002 and 2003 as testing is completed.
Important: This document covers the products as of November 2002 in detail,
and only introduces the likely features of the follow-on models.
The models available as of November 2002 are listed in Table 1-1.
Table 1-1 Models available from November 2002
Model      Standard processors        Max SMP   L2 cache   L3 cache   Std memory
8687-1RX   2x 1.4 GHz Intel Xeon MP   8-way     256 KB     512 KB     2 GB (4x 512 MB)
8687-2RX   2x 1.5 GHz Intel Xeon MP   8-way     256 KB     512 KB     2 GB (4x 512 MB)
8687-3RX   2x 1.6 GHz Intel Xeon MP   8-way     256 KB     1 MB       2 GB (4x 512 MB)
8687-4RX   2x 1.5 GHz Intel Xeon MP   8-way     256 KB     1 MB       2 GB (4x 512 MB)
8687-5RX   2x 1.9 GHz Intel Xeon MP   8-way     256 KB     1 MB       2 GB (4x 512 MB)
8687-6RX   4x 1.9 GHz Intel Xeon MP   8-way     256 KB     1 MB       4 GB (4x 1 GB)
8687-7RX   4x 2.0 GHz Intel Xeon MP   8-way     256 KB     2 MB       2 GB (4x 512 MB)
8687-3RY   2x 2.4 GHz Intel Xeon DP   4-way     512 KB     0          2 GB (4x 512 MB)
8687-4RY   4x 2.4 GHz Intel Xeon DP   4-way     512 KB     0          4 GB (8x 512 MB)

The x440 models that have Xeon MP processors installed currently only support
processor configurations of two, four and eight processors. The x440 models that
have Xeon DP processors only support processor configurations of two or four
processors, but can be upgraded to eight Xeon MP processors if desired.
Figure 1-1 on page 4 shows the available single-node configurations and the
CPU and memory options.


Figure 1-1 x440 configurations currently available (a single xSeries 440 with two Xeon DP processors and 2-32 GB, four Xeon DP processors and 4-64 GB, two Xeon MP processors and 2-32 GB, four Xeon MP processors and 2-64 GB, or eight Xeon MP processors and 4-64 GB, plus one RXE expansion connection to an RXE-100 with 6 or 12 PCI-X slots)

The attachment of a single RXE-100 Remote Expansion Enclosure is also
supported, as shown in Figure 1-1. The RXE-100 has six PCI-X slots standard,
upgradable to 12 PCI-X slots, giving the customer up to a total of 12 PCI-X or 18
PCI-X slots respectively.
In addition to the single-node configurations, three additional two-node
configurations are possible:
򐂰 A single 16-way system comprised of two eight-way x440 nodes, as shown in
Figure 1-2 on page 5. This will be available in November 2002.
򐂰 A single 12-way system comprised of an eight-way and a four-way x440, as
shown in Figure 1-3 on page 5. This will be available in early 2003.
򐂰 A single eight-way system comprised of two four-way x440 nodes, as shown
in Figure 1-3 on page 5. This will be available in early 2003.
Each of these configurations can optionally also have an RXE-100 attached (see
Figure 1-2 on page 5 for an example).
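The supported combinations can be summarized programmatically. The following minimal Python sketch is our own illustration, not an IBM configuration tool; the names and structure are assumptions. It enumerates the single-node and two-node complexes described in this section, each optionally attached to one RXE-100.

```python
from itertools import product

# Single-node x440 configurations described above: (CPU type, CPU count).
SINGLE_NODE = [
    ("Xeon DP", 2), ("Xeon DP", 4),
    ("Xeon MP", 2), ("Xeon MP", 4), ("Xeon MP", 8),
]

# Two-node Xeon MP complexes: 8-way (4+4), 12-way (4+8), and 16-way (8+8).
TWO_NODE = [(4, 4), (4, 8), (8, 8)]

# Every configuration can optionally attach one RXE-100, which has
# 6 PCI-X slots standard and is upgradable to 12.
RXE_OPTIONS = (None, 6, 12)

def list_configurations():
    for (cpu_type, cpus), rxe in product(SINGLE_NODE, RXE_OPTIONS):
        rxe_text = f" + RXE-100 with {rxe} PCI-X slots" if rxe else ""
        print(f"Single node: {cpus}x {cpu_type}{rxe_text}")
    for (node1, node2), rxe in product(TWO_NODE, RXE_OPTIONS):
        rxe_text = f" + RXE-100 with {rxe} PCI-X slots" if rxe else ""
        print(f"{node1 + node2}-way complex ({node1}+{node2} Xeon MP){rxe_text}")

if __name__ == "__main__":
    list_configurations()
```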


Figure 1-2 16-way server configuration using two eight-way x440 nodes (each node has eight CPUs and 4-64 GB of memory, joined by SMP expansion connections, with RXE expansion connections to an RXE-100 with 6 or 12 PCI-X slots)

Figure 1-3 Eight-way and 12-way two-node configurations (the eight-way complex joins two four-way nodes, each with 2-32 GB of memory; the 12-way complex joins an eight-way node with 4-64 GB and a four-way node with 2-32 GB, in both cases over SMP expansion connections)


1.2 System partitioning
Partitioning is the ability to divide a system to support multiple operating system
images simultaneously. The benefits of system partitioning include:
򐂰 Hardware consolidation
򐂰 Software migration and coexistence
򐂰 Version control
򐂰 Development, testing and maintenance
򐂰 Workload isolation
򐂰 Resource optimization around a particular application and operating system combination
򐂰 Independent backup and recovery on a partition basis

There are two types of system partitioning: physical partitioning
(hardware-based, but not yet available) and logical partitioning (software-based,
enabled with VMware ESX Server):
򐂰 Logical partitioning
Using logical partitioning, administrators can partition a multinode complex at
the individual processor level (with associated memory, I/O and other required
resources) or even lower (that is, multiple partitions per processor) without
shutting down and restarting the hardware and software.
VMware ESX Server V1.5 supports one to eight partitions per CPU, up to a
maximum total of 64 partitions. For example, in an eight-way server, you can
have between eight and 64 partitions. In V1.5, a partition cannot span
multiple CPUs, but a partition can be allocated a fraction of a CPU, down to
1/8th of a CPU (a small sketch at the end of this section illustrates these limits).
ESX Server virtualizes the resources of the x440 and is the closest that
Intel-based servers have come to date to the LPAR implementation of zSeries
mainframes.
When workload demands change, you can reassign resources from one
logical partition to another without having to shut down and restart the
system. ESX Server does not, however, support hot-adding of hardware
(such as disks and adapters).
For more information on ESX Server, see the redbook Server Consolidation
with the IBM eServer xSeries 440 and VMware ESX Server, SG24-6852 and
3.5.7, “VMware ESX Server” on page 98.
򐂰 Physical partitioning
This form of partitioning is available in 4Q 2002 with the release of System
Partition Manager, a plug-in for IBM Director.


With physical partitioning, a single multinode server complex can
simultaneously run multiple instances of one operating system in separate
partitions, as well as multiple versions of an operating system or even
different types of operating systems. The components of the server (for
example memory, CPUs, and I/O) are physically divided, under the control of
the server’s firmware and IBM Director.
The server complex can have up to two nodes, each capable of running its own
operating system and applications, all running simultaneously. A partition can
also span nodes, even to the point of having every node in the complex serving
one operating system. Each node can be managed independently by IBM
Director.
See 5.2, “System Partition Manager” on page 150 for details.
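To illustrate the logical-partitioning limits quoted earlier for VMware ESX Server V1.5 (one to eight partitions per CPU, no partition spanning CPUs, allocations down to 1/8th of a CPU, and at most 64 partitions in total), here is a minimal Python sketch. The function name and data layout are our own assumptions, not part of ESX Server or IBM Director.

```python
from fractions import Fraction

def check_esx15_partition_plan(cpu_allocations):
    """Check a proposed partition layout against the ESX Server V1.5 limits.

    cpu_allocations maps a physical CPU index to the list of CPU fractions
    allocated to the partitions hosted on that CPU, for example
    {0: [Fraction(1, 2), Fraction(1, 2)]}.
    """
    total = sum(len(parts) for parts in cpu_allocations.values())
    if total > 64:
        return False, "more than 64 partitions in total"
    for cpu, parts in cpu_allocations.items():
        if not 1 <= len(parts) <= 8:
            return False, f"CPU {cpu}: each CPU hosts between 1 and 8 partitions"
        for share in parts:
            # A partition cannot span CPUs and can be as small as 1/8 of a CPU.
            if not Fraction(1, 8) <= Fraction(share) <= 1:
                return False, f"CPU {cpu}: share {share} is outside 1/8..1 of a CPU"
        if sum(parts) > 1:  # assumption: shares of one CPU cannot exceed the CPU
            return False, f"CPU {cpu}: shares add up to more than one CPU"
    return True, "plan fits the V1.5 limits"

# Example: an eight-way node split into two half-CPU partitions per CPU.
plan = {cpu: [Fraction(1, 2), Fraction(1, 2)] for cpu in range(8)}
print(check_esx15_partition_plan(plan))
```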

1.3 IBM XA-32 chipset
The IBM XA-32 chipset is the product name describing the chipset developed
under the code name “Summit” and implemented on the IA-32 platform. A
product of the IBM Microelectronics Division in Austin, Texas, the XA-32 chipset
is fabricated using the latest in copper technology and is composed of the
following components:
򐂰 Memory controllers — one memory controller, code named “Cyclone”, per
four-way located within the SMP Expansion Module
򐂰 Processor/cache controllers — one processor and cache controller, code
named “Twister”, per eight-way located within the SMP Expansion Module
򐂰 PCI bridges — two PCI bridges, code named “Winnipeg”, per x440 located on
the centerplane and the I/O board that control both the PCI-X and Remote I/O
Figure 1-4 on page 8 shows the various IBM XA-32 components in a four-way
x440 configuration.


Figure 1-4 xSeries 440 system block diagram, one SMP Expansion Module (four Xeon MP CPUs on a 400 MHz frontside bus, the processor and cache controller with 32 MB of L4 cache, the memory controller with four-way interleaved 100 MHz SDRAM, 3.2 GBps SMP Expansion Ports, two PCI bridges driving the 64-bit 66/100/133 MHz PCI-X buses, onboard Ultra160 SCSI and Gigabit Ethernet, and a 1 GBps RXE Expansion Port A)

The component that contains the CPUs, processor/cache controller, memory
controller, memory, and cache is called the SMP Expansion Module (or central
electronics complex—CEC). The Xeon MP-based models of the x440 ship with
one SMP Expansion Module with two or four CPUs and 2 GB or 4 GB of RAM.
The Xeon DP-based models have either two CPUs in one SMP Expansion
Module or four CPUs in two SMP Expansion Modules.
Tip: The terms central electronics complex, CEC, and SMP Expansion
Module are used interchangeably in relation to the x440. We use SMP
Expansion Module in this redbook.


The CPUs are connected together with a 100 MHz frontside bus, but supply data
at an effective rate of 400 MHz using the “quad-pump” design of the Intel
NetBurst architecture as described in 1.4.1, “Intel Xeon Processor MP” on
page 13. To ensure the processors are optimally used, the x440 has a 32 MB
XceL4 Server Accelerator Cache, comprised of 200 MHz DDR memory. This L4
system cache services all CPUs in an SMP Expansion Module.
Memory used in the x440 is standard 133 MHz ECC SDRAM DIMMs; however,
the 133 MHz DIMMs are run at 100 MHz (for parts availability reasons). With
2 GB DIMMs, up to 32 GB can be installed using all 16 DIMM sockets. The
memory is four-way interleaved so that the memory subsystem can supply data
fast enough to match the throughput of the CPUs. Four-way interleaving means
that DIMMs must be installed in matched fours and in specific DIMM sockets (see
3.1.2, “Memory” on page 65).
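As a simple illustration of the matched-fours rule, a proposed DIMM population could be checked as shown below. This is our own sketch; the real socket ordering and population rules are described in 3.1.2, “Memory” on page 65.

```python
def check_interleave_groups(dimm_sizes_mb):
    """Check that DIMMs are installed in matched groups of four.

    dimm_sizes_mb lists the DIMM sizes in installation order; four-way
    interleaving requires every group of four to use identical DIMMs.
    """
    if len(dimm_sizes_mb) % 4 != 0:
        return False, "DIMM count is not a multiple of four"
    for start in range(0, len(dimm_sizes_mb), 4):
        group = dimm_sizes_mb[start:start + 4]
        if len(set(group)) != 1:
            return False, f"DIMMs {start + 1}-{start + 4} are not a matched set: {group}"
    return True, "all groups of four are matched"

# Standard 2 GB configuration: four 512 MB DIMMs.
print(check_interleave_groups([512, 512, 512, 512]))
# Invalid mix: three 512 MB DIMMs and one 1 GB DIMM.
print(check_interleave_groups([512, 512, 512, 1024]))
```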
The second SMP Expansion Module can be installed when more than four Xeon
MP processors, or two Xeon DP processors, are required. This also enables the
system to have up to 64 GB of RAM, using 2 GB DIMMs. The block diagram with
two SMP Expansion Modules is shown in Figure 1-5 on page 10.
Note: When Xeon DP processors are used, only two CPUs can be installed in
each SMP Expansion Module. The processors are installed in CPU positions
1 and 4. Positions 2 and 3 must hold air baffles to maintain proper air flow.


Figure 1-5 xSeries 440 system block diagram, two SMP Expansion Modules (each module has four CPUs, its own memory controller, SDRAM, and 32 MB of L4 cache; the modules are linked by 3.2 GBps SMP Expansion Ports, and the two PCI bridges provide RXE Expansion Ports A and B at 1 GBps each)

When two SMP Expansion Modules are installed, they are connected together
using two 3.2 GBps SMP Expansion Ports. The third scalability port is not used in
this single-node eight-way configuration.
The two PCI bridges in the XA-32 chipset provide support for 33, 66, 100, and
133 MHz devices using four PCI-X buses (labeled A-D in Figure 1-5). This is
discussed further in 1.8, “PCI subsystem” on page 23.
The PCI bridge also has a 1 GBps bi-directional Remote Expansion I/O port
(RXE port) for connectivity to the RXE-100 enclosure. This port is labeled “RXE
Expansion Port A” in both Figure 1-4 on page 8 (four-way) and Figure 1-5
(eight-way). The RXE-100 provides up to an additional 12 PCI-X slots. When the
second SMP Expansion Module is installed to form an eight-way system
(Figure 1-5), the second RXE port, labeled “RXE Expansion Port B”, connects to
the memory controller of the second SMP Expansion Module.


As of November 2002, you can connect two x440 servers together to form one
16-way complex. The two x440 nodes are connected together using all three
SMP Expansion Ports as shown in Figure 1-6.

Figure 1-6 16-way configuration using four SMP Expansion Modules (two x440 nodes, each with two four-CPU SMP Expansion Modules; the nodes are connected to each other through all three 3.2 GBps SMP Expansion Ports on each module)

The rear panel of the x440, indicating the location of the SMP Expansion Ports
and RXE Expansion Ports, is shown in Figure 1-7 on page 12.


Figure 1-7 Rear panel of the xSeries 440 (one SMP Expansion Module installed)

1.4 Processors
The x440 models use one of the following processors:
򐂰 Xeon Processor MP (“Gallatin”)
򐂰 Xeon Processor MP (“Foster”)
򐂰 Xeon Processor DP (“Prestonia”)
The Xeon MP models of the x440 come with two or four processors installed in
the standard SMP Expansion Module. Up to four processors are supported in the
standard module and, with the addition of a second SMP Expansion Module, up
to eight processors can be installed in an x440.
The x440 entry-level systems can be ordered with either two Xeon DP
processors in a single SMP Expansion Module or with four Xeon DP processors
in two SMP Expansion Modules. There is no further upgrade beyond four Xeon
DP processors, other than replacing them with Xeon MP processors.
See 3.1.1, “Processors” on page 64 for further discussion about what you should
consider before implementing an x440 solution.


1.4.1 Intel Xeon Processor MP
The Xeon Processor MP (code named “Foster” or “Gallatin”) returns to the ZIF
socket design of the original Pentium processor, instead of the Slot 2 cartridge
design of the Pentium III Xeon processors. This smaller form factor means that
the x440 can have up to eight processors in a 4U node.
The Xeon MP processor has three levels of cache, all of which are on the
processor die:
򐂰 Level 3 cache is equivalent to L2 cache on the Pentium III Xeon. Foster
processors in the x440 models contain either 512 KB or 1 MB of L3 cache.
Gallatin processors contain either 1 MB or 2 MB of L3 cache.
򐂰 Level 2 cache is equivalent to L1 cache on the Pentium III Xeon and is 256 KB
in size. The L2 cache implements the Advanced Transfer Cache technology,
which means L2-to-processor transfers occur across a 256-bit bus in only one
clock cycle.
򐂰 A new level 1 cache, 12 KB in size, is “closest” to the processor and is used to
store micro-operations (that is, decoded executable machine instructions) and
serves those to the processor at rated speed. This additional level of cache
saves decode time on cache hits. There is an additional 8 KB for data related
to those instructions, for a total of 20 KB.
The x440 also implements a Level 4 cache as described in 1.6, “IBM XceL4
Server Accelerator Cache” on page 19.
Intel has also introduced a number of features associated with its newly
announced NetBurst micro-architecture. These are available in the x440,
including:
򐂰 400 MHz frontside bus
The Pentium III Xeon processor has a 100 MHz frontside bus that equates to
a burst throughput of 800 MBps. With protocols such as TCP/IP, this has been
shown to be a bottleneck in high-throughput situations. The Xeon Processor
MP improves on this by using two 100 MHz clocks, out of phase with each
other by 90° and using both edges of each clock to transmit data. This is
shown in Figure 1-8.

Figure 1-8 Quad-pumped frontside bus (two 100 MHz clocks, A and B, out of phase, with data transferred on both edges of each clock)


This increases the performance of the frontside bus without the difficulty of
high-speed clock signal integrity issues. The end result is an effective burst
throughput of 3.2 GBps, which can have a substantial impact, especially on
TCP/IP-based LAN traffic (a worked calculation appears at the end of this section).
򐂰 Hyper-Threading
Hyper-Threading technology enables a single physical processor to execute
two separate code streams (threads) concurrently. To the operating system, a
processor with Hyper-Threading appears as two logical processors, each of
which has its own architectural state - that is, its own data, segment, and
control registers and its own advanced programmable interrupt controller
(APIC).
For example, Figure 1-9 shows a 16-way x440 complex running Datacenter
Server with Hyper-Threading enabled.

Figure 1-9 Datacenter sees 32 processors when Hyper-Threading is enabled on a 16-way configuration


Each logical processor can be individually halted, interrupted, or directed to
execute a specified thread, independently from the other logical processor on
the chip. Unlike a traditional two-way SMP configuration that uses two
separate physical processors, the logical processors share the execution
resources of the processor core, which include the execution engine, the
caches, the system bus interface, and the firmware.
Note: Hyper-Threading is disabled by default on the x440. This is because
of a known bug in Windows 2000 Advanced Server. If Hyper-Threading is
enabled on an eight-way server, then the Windows 2000 Advanced Server
will trap (blue screen) during installation. This problem does not affect other
supported operating systems.
Hyper-Threading technology is designed to improve server performance by
exploiting the multi-threading capability of operating systems, such as
Windows .NET and Linux, and server applications, in such a way as to
increase the use of the on-chip execution resources available on these
processors.
Fewer or slower processors usually yield the best gains from
Hyper-Threading because there is a greater likelihood that the software can
spawn sufficient numbers of threads to keep both paths busy. The following
performance gains are likely:
– Two physical processors: 15-25% performance gain
– Four physical processors: 1-13% gain
– Eight physical processors: 0-5% gain
Tests have found that software often limits SMP scalability, but customers
should expect improved results as software matures. Best-case applications
today are:
– Databases
– Java
– Web servers
– E-mail

Note: Microsoft licensing of the Windows Server operating systems is by
number of processors (four-way for Server, eight-way for Advanced Server,
32-way for Datacenter Server). Therefore, the appearance of twice as many
logical processors can potentially affect the installation of the operating
system. See 1.12, “Operating system support” on page 28 for details.
For more information about Hyper-Threading, see:
http://www.intel.com/technology/hyperthread/


򐂰 Advanced Dynamic Execution
The Pentium III Xeon processor has a 10-stage pipeline. However, the large
number of transistors in each pipeline stage means that the processor is
limited to speeds under 1 GHz, due to latency in the pipeline.
The Xeon Processor MP has a 20-stage pipeline, which can hold up to 126
concurrent instructions in flight and up to 48 reads and 24 writes active in the
pipeline. The lower complexity of each stage also means that future clock
speed increases are possible.
It is important to note, however, that the longer pipeline means that it now
takes more clock cycles to execute the same instruction when compared to
the Pentium III Xeon.
Comparing the Xeon Processor MP with the Pentium III Xeon and current
operating systems (Windows 2000, Linux with 2.4 kernel), good rules of
thumb are:
– 1.5 GHz Xeon Processor MP/512 KB L3 ≈ 5-20% faster than 900 MHz 2
MB L2 Xeon
– 1.6 GHz Xeon Processor MP/1 MB L3 ≈ 15-35% faster than 900 MHz 2
MB L2 Xeon
The next generations of operating systems will likely improve performance of
the MP processor as they take advantage of the NetBurst architecture. These
include Windows .NET and the Linux 2.5/2.6 kernels.
For more information about the features of the Xeon Processor MP, go to:
http://www.intel.com/design/xeon/xeonmp/prodbref
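As a quick check of the frontside-bus figures quoted above (800 MBps burst throughput for the Pentium III Xeon's 100 MHz bus and 3.2 GBps for the quad-pumped Xeon MP bus), the arithmetic is sketched below. The 64-bit (8-byte) data width of the frontside bus is our assumption; it is not stated in the text above.

```python
BUS_WIDTH_BYTES = 8          # 64-bit frontside bus (assumed, see above)
BASE_CLOCK_HZ = 100_000_000  # 100 MHz base clock

# Pentium III Xeon: one transfer per clock cycle.
p3_xeon_mbps = BASE_CLOCK_HZ * BUS_WIDTH_BYTES / 1_000_000
print(f"Pentium III Xeon burst throughput: {p3_xeon_mbps:.0f} MBps")  # 800 MBps

# Xeon MP: two 100 MHz clocks 90 degrees out of phase, both edges used,
# so four transfers per base clock cycle ("quad-pumped").
xeon_mp_gbps = BASE_CLOCK_HZ * 4 * BUS_WIDTH_BYTES / 1_000_000_000
print(f"Xeon MP effective burst throughput: {xeon_mp_gbps:.1f} GBps")  # 3.2 GBps
```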

1.4.2 Intel Xeon Processor DP
The Xeon DP is similar to the Xeon MP and is also based on the Intel NetBurst
micro-architecture. The Xeon DP was designed by Intel to be suitable only in
uniprocessor and two-way SMP processor systems. However, with the use of the
IBM XA-32 chipset, the x440 can have up to four Xeon DP processors installed.
The Xeon DP models of the x440 use 2.4 GHz processors, part 37L3533.
The key differences between the processors are listed in Table 1-2.
Table 1-2 Differences between the Xeon DP and the Xeon MP

Feature                                   Xeon Processor DP   Xeon Processor MP
Maximum CPUs per SMP Expansion Module     Two                 Four
Maximum CPUs per x440 node                Four                Eight
Supported in multi-node configurations    No                  Yes
Core frequency (x440 models)              2.4 GHz             1.4, 1.5, 1.6, 1.9, or 2.0 GHz
Level 2 cache                             512 KB              256 KB
Level 3 cache                             None                512 KB, 1 MB or 2 MB

For more information about the features of the Xeon Processor DP, go to:
http://www.intel.com/design/xeon/prodbref

1.5 SMP Expansion Module
The SMP Expansion Module is the central electronics complex that contains the
processors, memory, L4 system cache, and respective controllers for these
components. The base x440 system includes one SMP Expansion Module. Each
SMP Expansion Module contains slots for up to four Xeon MP processors (or two
Xeon DP processors) and 16 DIMMs.
There are two SMP Expansion Module part numbers for x440 models:
򐂰 32P8340 is used in Xeon MP models. It is “unpopulated”, which means it
does not contain any processors or memory. Any of the supported Xeon MP
processors can be installed in it.
򐂰 71P7919 is used in Xeon DP models. It contains two 2.4 GHz Xeon DP
processors and VRMs, and is used to upgrade a two-way Xeon DP x440 to a
four-way configuration.
71P7919 is also compatible with Xeon MP processors. If you wish to upgrade
your Xeon DP-based x440 to use Xeon MP processors, you can simply
replace the processors and VRMs with supported Xeon MP processors.
Note: Information about the SMP Expansion Modules to be used in
Gallatin-based systems (or existing systems you wish to upgrade to Gallatin
processors) was not available at the time of publication.
The SMP Expansion Module is installed from the top of the server and mounts to
the side of the centerplane using two levers on the top, as shown in Figure 1-10
on page 18. These same levers are used to remove the top of the SMP
Expansion Module when adding additional processors or memory.


Tip: Be careful when removing or installing the SMP Expansion Modules,
because you may damage the centerplane. See tip H176162 for details:
http://www.pc.ibm.com/qtechinfo/MIGR-43675.html

Figure 1-10 SMP Expansion Module (the cover with its two locking levers and see-through hinged doors for DIMM access, the DIMM sockets, the four CPU sockets with VRMs, the XceL4 cache, a carrying handle, and the side that connects to the centerplane)

Each SMP Expansion Module also contains 16 DIMM slots to take the memory
up to a maximum of 64 GB per node (using 2 GB DIMMs) and an additional 32
MB of Level 4 system cache for a maximum of 64 MB per node.
When two SMP Expansion Modules are installed, they are connected together
using two 3.2 GBps SMP Expansion Ports (also known as scalability ports).
Using two connections improves throughput beyond that of one connection and
provides load balancing. The third scalability port is not used in this single-node
eight-way configuration.
Each SMP Expansion Module is also equipped with the following LEDs for Light
Path Diagnostics:
򐂰 Each DIMM
򐂰 Each CPU
򐂰 Each VRM
򐂰 SMP Expansion Module board

1.6 IBM XceL4 Server Accelerator Cache
Integrated into each SMP Expansion Module is 32 MB of high-speed Level 4
cache (see Figure 1-10). This XceL4 Server Accelerator Cache provides the
necessary extra level of cache to alleviate the bottlenecks caused by memory
latency across the scalability port.
Cache memory is two-way interleaved 200 MHz DDR memory and is faster than
standard memory because it is directly connected to the memory controller and
does not have additional latency associated with the large fan-out necessary to
support the 16 DIMM slots.
Initial tests have shown the XceL4 cache has improved overall system
performance up to 20% on various applications.

1.7 System memory
The Xeon MP models of the x440 have 2 GB or 4 GB of RAM standard,
implemented as four PC133 ECC SDRAM DIMMs (four 512 MB or four 1 GB
DIMMs). There are 16 DIMM sockets (two ports of eight) in each of the two SMP
Expansion Modules for a total of 32 sockets. Using 2 GB DIMMs, this means that
each x440 can have up to 64 GB RAM.
See 3.1.2, “Memory” on page 65 for further discussion of how memory is
implemented in the x440 and what you should consider before an x440
installation.
There are a number of advanced features implemented in the x440 memory
subsystem, collectively known as Active Memory:
򐂰 Memory ProteXion
Memory ProteXion, also known as “redundant bit steering”, is the technology
behind using redundant bits in a data packet to provide backup in the event of
a DIMM failure.
Currently, other industry-standard servers use 8 bits of the 72-bit data packets
for ECC functions and the remaining 64 bits for data. However, because the
x440 uses four-way interleaved memory, it needs only 6 bits to perform the
same ECC functions, thus leaving 2 bits free (Figure 1-11 on page 20).


Figure 1-11 Memory ProteXion (a 72-bit DIMM data packet: 64 bits of data, 6 bits of ECC, and 2 spare bits)

In the event that a chip failure on the DIMM is detected by memory scrubbing,
the memory controller can re-route data around that failed chip through the
spare bits (similar to the hot-spare drive of a RAID array). It can do this
automatically without issuing a Predictive Failure Analysis (PFA) or Light Path
Diagnostics alert to the administrator. After the second DIMM failure, PFA and
Light Path Diagnostics alerts would occur on that DIMM as normal.
򐂰 Memory scrubbing
Memory scrubbing is an automatic daily test of all the system memory that
detects and reports memory errors that might be developing before they
cause a server outage.
Memory scrubbing and Memory ProteXion work in conjunction with each
other, but they do not require memory mirroring (as described below) to be
enabled to work properly.
When a bit error is detected, memory scrubbing determines if the error is
recoverable or not. If it is recoverable, Memory ProteXion is enabled and the
data that was stored in the damaged locations is rewritten to a new location.
The error is then reported so that preventative maintenance can be
performed. As long as there are enough good locations to allow the proper
operation of the server, no further action is taken other than recording the
error in the error logs.
If the error is not recoverable, then memory scrubbing sends an error
message to the Light Path Diagnostics, which then turns on the proper lights
and LEDs to guide you to the defective DIMM. If memory mirroring is enabled,
then the mirrored copy of the data in the damaged DIMM is used until the
system is powered down and the DIMM replaced.


򐂰 Memory mirroring
Memory mirroring is roughly equivalent to RAID-1 in disk arrays, in that
memory is divided into two ports and one port is mirrored onto the other (see
Figure 1-12). If 8 GB is installed, then the operating system sees 4 GB once
memory mirroring is enabled (it is disabled in BIOS by default). All mirroring
activities are handled by the hardware without any additional support required
from the operating system.

Figure 1-12 Memory DIMMs are divided into two ports (port 1 and port 2)

When memory mirroring is enabled (see 4.1.2, “Enabling memory mirroring”
on page 108), the data that is written to memory is stored in two locations.
One copy is kept in the port 1 DIMMs, while a second copy is kept in the
port 2 DIMMs. During the execution of the read command, the data is read
from the DIMM with the least amount of reported memory errors through
memory scrubbing.
If memory scrubbing determines the DIMM is damaged beyond use, read and
write operations are redirected to the partner DIMM in the other port. Memory
scrubbing then reports the damaged DIMM and the Light Path Diagnostics
display the error. If memory mirroring is enabled, then the mirrored copy of the
data in the damaged DIMM is used until the system is powered down and the
DIMM replaced.
Certain restrictions exist with respect to placement and size of memory
DIMMs when memory mirroring is enabled. These are discussed in “Memory
mirroring” on page 67.
򐂰 Chipkill memory
Chipkill is integrated into the XA-32 chipset and does not require special
Chipkill DIMMs. Chipkill corrects multiple single-bit errors to keep a DIMM
from failing. When combining Chipkill with Memory ProteXion and Active
Memory, the x440 provides very high reliability in the memory subsystem.
Chipkill memory is approximately 100 times more effective than ECC
technology, providing correction for up to four bits per DIMM (eight bits per
memory controller), whether on a single chip or multiple chips.
If a memory chip error does occur, Chipkill is designed to automatically take
the inoperative memory chip offline while the server keeps running. The
memory controller provides memory protection similar in concept to disk array
striping with parity, writing the memory bits across multiple memory chips on
the DIMM. The controller is able to reconstruct the “missing” bit from the failed
chip and continue working as usual.
Chipkill support is provided in the memory controller and implemented using
standard ECC DIMMs, so it is transparent to the operating system.
In addition, to maintain the highest levels of system availability, if a memory error
is detected during POST or memory configuration, the server can automatically
disable the failing memory bank and continue operating with reduced memory
capacity. You can manually re-enable the memory bank after the problem is
corrected via the Setup menu in BIOS.
Memory mirroring, Chipkill, and Memory ProteXion provide multiple levels of
redundancy to the memory subsystem. Combining Chipkill with Memory ProteXion allows the x440 to tolerate up to two memory chip failures per memory port (8 DIMMs). An eight-way x440 with its four memory ports could sustain up to
eight memory chip failures. Memory mirroring provides additional protection with
the ability to continue operations with memory module failures.
1. The first failure detected by the Chipkill algorithm on each port doesn’t
generate a Light Path Diagnostics error, since Memory ProteXion recovers
from the problem automatically.
2. Each memory port could then sustain a second chip failure without shutting
down.
3. Provided that memory mirroring is enabled, the third chip failure on that port
would send the alert and take the DIMM offline, but keep the system running
out of the redundant memory bank.
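A quick tally, using only the per-port figure quoted above and assuming two memory ports per SMP Expansion Module (consistent with the four ports cited for an eight-way system), gives the total number of chip failures that could be absorbed:

# Back-of-the-envelope tally of tolerated memory chip failures (illustrative).
# Assumes two memory ports per SMP Expansion Module, consistent with the
# four-ports-per-eight-way figure quoted above.
FAILURES_PER_PORT = 2   # first absorbed by Memory ProteXion, second by Chipkill

configs = {
    "four-way (1 SMP Expansion Module, 2 ports)": 2,
    "eight-way (2 SMP Expansion Modules, 4 ports)": 4,
    "16-way (2 chassis, 8 ports)": 8,
}
for name, ports in configs.items():
    print(f"{name}: up to {ports * FAILURES_PER_PORT} chip failures tolerated")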


Note: The ability to hot-replace a failed DIMM or to hot-add additional DIMMs is not currently supported.

1.8 PCI subsystem
As shown in Figure 1-4 on page 8, there are six PCI-X slots internal to the x440:
򐂰 Two 133 MHz slots, which accept 32 or 64-bit, 3.3 V, PCI or PCI-X adapters,
from 33-133 MHz
򐂰 Two 100 MHz slots, which accept 32 or 64-bit, 3.3 V, PCI or PCI-X adapters,
from 33-100 MHz
򐂰 Two 66 MHz slots, which accept 32 or 64-bit, 3.3 V, 33 or 66 MHz, PCI or
PCI-X adapters
See 3.1.3, “PCI slot configuration” on page 68 for details on what adapters are
supported and in what combinations.
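As a simple planning aid, the effective bus speed of a slot is the lower of the slot's maximum speed and the adapter's capability. The following Python sketch illustrates that rule only; the slot numbering and the helper function are hypothetical, and the definitive list of supported adapter combinations is in 3.1.3, "PCI slot configuration" on page 68.

# Hypothetical helper illustrating slot/adapter speed matching (not IBM tooling).
SLOT_MAX_MHZ = {1: 133, 2: 133, 3: 100, 4: 100, 5: 66, 6: 66}   # slot numbering assumed

def effective_bus_mhz(slot, adapter_max_mhz):
    """The bus runs at the lower of the slot ceiling and the adapter capability."""
    return min(SLOT_MAX_MHZ[slot], adapter_max_mhz)

print(effective_bus_mhz(1, 133))   # 133 MHz PCI-X adapter in a 133 MHz slot -> 133
print(effective_bus_mhz(5, 100))   # 100 MHz PCI-X adapter in a 66 MHz slot  -> 66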
The PCI subsystem also supplies these I/O devices:
򐂰 Two Wide Ultra 160 SCSI ports, one internal and one external (Adaptec
AIC-7899 chipset)
򐂰 Gigabit Ethernet port (Broadcom 5700 chipset)
The x440 was the first xSeries server to offer a Gigabit Ethernet controller
integrated standard in the system. The x440 includes a single-port Broadcom
BCM5700 10/100/1000 Base-T MAC (Media Access Controller) on a PCI
64-bit 66 MHz bus.
The BCM5700 supports full and half-duplex performance at all speeds
(10/100/1000 Mbps, auto-negotiated) and includes integrated on-chip
memory for buffering data transmissions to ensure the highest network
performance, and dual onboard RISC processors for advanced packet parsing and backward compatibility with today's 10/100 networks. The Broadcom
controller also includes software support for failover, layer-3 load balancing,
and comprehensive diagnostics.
Category 5 or better Ethernet cabling is required with RJ-45 connectors. If
you plan to implement a Gigabit Ethernet connection, ensure your network
infrastructure is capable of the necessary throughput to match the server’s I/O
capacity.
򐂰 SVGA with 8 MB video memory (S3 Savage4 Pro chipset)
򐂰 Three USB ports (one on front panel, two on rear)
򐂰 Remote Supervisor Adapter (RS-485 ASM interconnect bus, 10/100 Ethernet
and serial ports)


Note: There are no parallel or serial ports on the x440. For serial connections,
use the USB to Serial Adapter, part number 10K3661, as described in 3.2.4,
“Serial connections” on page 83.
With the addition of an RXE-100 Remote Expansion Enclosure, you can connect
an additional six or 12 PCI-X adapters to the x440. See 3.2.3, “Remote
Expansion Enclosure” on page 78 for details.
Note: Currently, only one RXE-100 can be connected to an x440 configuration.
For configurations up to eight-way (that is, a single chassis), connectivity uses one RXE Expansion Port and cable. The dual-chassis 16-way configuration uses
two redundant RXE cables. This is described in detail in 3.2.3, “Remote
Expansion Enclosure” on page 78.

1.9 Redundancy
The x440 has the following redundancy features to maintain high availability:
򐂰 Four hot-swap multi-speed fans
With four hot-swap redundant fans, the x440 has adequate cooling for each of
its major component areas. There are two fans located at the front of the
server that direct air through the SMP Expansion Modules. These fans are
accessible from the top of the server without having to open the system
panels. In the event of a fan failure, the other fan will speed up to continue to
provide adequate cooling until the fan can be hot-swapped by the IT
administrator.
The other two fans are located just behind the power supplies and provide
cooling for the I/O devices. Similar to the SMP Expansion Module fans, these
fans will speed up in the event that one should fail to compensate for the
reduction in air flow. In general, failed fans should be replaced within 24 hours
following failure.
Important: Due to airflow requirements, fans should not be removed for
longer than two minutes. The fan compartments need to be fully populated
even if the fan is defective. Therefore, remove a defective fan only when a
new fan is available for immediate replacement.
򐂰 Two hot-swap power supplies with separate power cords.
Note: For large configurations, redundancy is achieved only when connected
to a 220 V power supply. See 3.8, “Power considerations” on page 103 for
details.


򐂰 Two hot-swap hard disk drive bays. An optional ServeRAID adapter can be
configured to form a RAID-1 disk array for the operating system.
򐂰 The memory subsystem has a number of redundancy features, including
memory mirroring, as described in 1.7, “System memory” on page 19.
The layout of the front panel of the x440, showing the location of the drive bays,
power supplies and fans, is shown in Figure 1-13.
Figure 1-13 Front panel of the xSeries 440, showing the hot-swap fans, power-on light, power and reset buttons, USB port, hot-swap power supplies, hot-swap drive bays, diskette and CD-ROM drives, system-error and information lights (amber), pull-out Light Path Diagnostics panel, SCSI activity light (green), and locator light (blue)

1.10 Light Path Diagnostics
To limit the need to slide the server out of the rack to diagnose problems, a new
Light Path Diagnostics panel has been added to the front of the x440. This panel
can be ejected from the server to view all Light Path Diagnostics-monitored
server subsystems. In the event that maintenance is then required, the customer can slide the server out from the rack and, using the LEDs, find the failed or failing component.
As illustrated in Figure 1-14 on page 26, Light Path Diagnostics is able to monitor
and report on the health of CPUs, main memory, hard disk drives, PCI-X and PCI
slots, fans, power supplies, VRMs, and the internal system temperature.


Figure 1-14 Light Path Diagnostics panel on the x440. The panel LEDs include CPU, MEMORY, DASD, PCI-X BUS, NMI, BOARD, FAN, TEMP, POWER SUPPLY 1 and 2, EVENT LOG, VRM, NON REDUND, and OVER SPEC, plus a REMIND button.

The Light Path Diagnostics on the x440 has three levels:
1. Level 1 is the pop-out panel as shown in Figure 1-14.
2. For further investigation, there are Light Path Diagnostics LEDs visible
through the top of the server. This requires the server to be slid out of the
rack.
3. For the third level of diagnostics, LEDs on the planar indicate the component causing the error.
The pop-out panel (Figure 1-14) also has a Remind button. This places the front
panel system-error LED into remind mode, which means it flashes briefly every 2
seconds. By pressing the button, you acknowledge the failure but indicate that
you will not take immediate action. If a new failure occurs, the system-error LED
will turn on again. The system-error LED remains in the Remind mode until one
of the following situations occurs:
򐂰 All known problems are resolved
򐂰 The system is restarted
򐂰 A new problem occurs, at which time it is illuminated continuously
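The behavior of the system-error LED and the Remind button can be summarized as a small state machine. The Python sketch below is illustrative pseudologic only (the class and method names are invented); the actual behavior is implemented by the service processor.

class SystemErrorLED:
    """Illustrative model of the front-panel system-error LED states."""
    def __init__(self):
        self.state = "off"            # "off", "on", or "remind" (brief flash every 2 s)

    def fault_detected(self):
        self.state = "on"             # a new problem always lights the LED continuously

    def remind_pressed(self):
        if self.state == "on":
            self.state = "remind"     # failure acknowledged, periodic reminder remains

    def problems_resolved(self):
        self.state = "off"

    def system_restarted(self):
        self.state = "off"

led = SystemErrorLED()
led.fault_detected()
led.remind_pressed()
led.fault_detected()
print(led.state)                      # "on" again after the new failure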


1.11 Remote Supervisor Adapter
The x440 includes a Remote Supervisor Adapter (RSA), which is positioned
horizontally in a dedicated PCI slot beneath the PCI-X adapter area of the
system.

Figure 1-15 Remote Supervisor Adapter connectors at the rear of the x440: external power supply, ASM interconnect (RS-485) port, error LED (amber), power LED (green), 10/100 Ethernet port, and management COM port

The Remote Supervisor Adapter offers the following capabilities:
򐂰 In-band and out-of-band remote server access and alerting through IBM
Director
򐂰 Full Web browser support with no other software required
򐂰 Enhanced security features
򐂰 Graphics/text console redirection for remote control
򐂰 Windows NT and 2000 blue screen capture
򐂰 Dedicated 10/100 Ethernet access port
򐂰 ASM interconnect bus for connection to other service processors
򐂰 Serial dial in/out
򐂰 E-mail, pager and SNMP alerting
򐂰 Event log
򐂰 Predictive Failure Analysis on memory, power, hard drives, and CPUs
򐂰 Temperature and voltage monitoring with settable threshold
򐂰 Light Path Diagnostics
򐂰 Automatic Server Restart (ASR) for operating system and POST
򐂰 Wake on LAN
򐂰 Remote firmware update
򐂰 LAN access
򐂰 Alert forwarding
See the IBM Redbook Implementing IBM Director Management Solutions,
SG24-6188 for more information on the Remote Supervisor Adapter.


In addition to these functions, the Remote Supervisor Adapter is an integral
component of the two-node x440 configurations. With the two-node 16-way
configuration, the adapters are used in the following way:
򐂰 The adapters in both systems are each assigned an IP address (on the same
subnetwork)
򐂰 The adapters are connected via their Ethernet ports, either with a cross-over
cable, or on a hub or switch, as shown in Figure 3-6 on page 76.
򐂰 One adapter is configured as the primary, and the other is configured as the
secondary.
򐂰 Pressing the power button on either x440 will cause the adapters to power up
both nodes.
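Because both adapters must be addressed on the same subnetwork, a quick check such as the following can be useful during planning. The addresses shown are examples only, not recommended values.

# Verify that two Remote Supervisor Adapter addresses share one subnetwork.
# The addresses below are made up for illustration.
import ipaddress

primary   = ipaddress.ip_interface("192.168.70.101/24")
secondary = ipaddress.ip_interface("192.168.70.102/24")

if primary.network == secondary.network:
    print(f"OK: both adapters are on {primary.network}")
else:
    print("Error: the two adapters must be on the same subnetwork")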

1.12 Operating system support
In line with the overall message of providing application flexibility to meet the
varying needs of our enterprise customers, the x440 is optimized for numerous
operating system and application solutions. Table 1-3 on page 29 lists the
supported operating systems for the x440. For the latest operating system
support information, go to:
http://www.pc.ibm.com/us/compat/nos/matrix.shtml
See 3.5, “Operating system considerations” on page 90 for further information on
operating system support on the x440.
Note: Windows 2000 Datacenter Server and VMware ESX Server are the only
operating systems currently supported on the 16-way x440 fixed configuration.
In the column titled Hyper-Threading Support in Table 1-3 on page 29:
򐂰 None indicates the operating system does not recognize the logical
processors that Hyper-Threading enables.
򐂰 Yes indicates that the operating system recognizes the logical processors and
can execute threads on them but is not optimized for Hyper-Threading.
򐂰 Optimized indicates that the operating system recognizes the logical
processors and the operating system code has been designed to fully take
advantage of the technology.


Table 1-3 x440 operating system support

Description                       Release  SMP support (note 1)                 Hyper-Threading support
Windows 2000 Server               SP2/3    Supports up to four-way              Yes
Windows 2000 Advanced Server      SP2/3    Supports up to eight-way             Yes
Windows 2000 Datacenter Server    SP3      Supports up to 32-way (note 2)       Yes
Windows NT Enterprise Edition     4.0      Only supports four-way on the x440;  None
                                           hot-plug PCI not supported
Windows .NET Server               1Q/03    Supports up to two-way               Optimized
Windows .NET Enterprise Server    1Q/03    Supports up to eight-way             Optimized
Windows .NET Datacenter Server    1Q/03    Supports up to 32-way (note 2)       Optimized
NetWare                           6.0      Supports up to 32-way (notes 2, 3)   Yes
Red Hat Linux Advanced Server     2.1      Supports up to eight-way (note 4)    Yes
SuSE Linux Enterprise             8.0      Supports up to eight-way (note 4)    Yes
VMware ESX Server                 1.5      Supports up to 16-way; up to one     None
                                           processor per VM (note 5)

Notes to Table 1-3:
1. While operating systems may support eight-way or larger systems, scalability
is a function of both the operating system and the application/workload. Few
applications are designed to take advantage of larger SMP systems.
2. x440 configurations with 16 processors and Hyper-Threading enabled are seen as 32 processors under Windows 2000 Datacenter and Windows .NET. Licensing of processors in Windows 2000 is based on physical and logical processors combined, whereas Windows .NET licensing is based on physical processors only (a short worked example follows these notes).
3. NetWare notes:
– NetWare 5.1 is currently not supported, but it should still install. See
RETAIN tip H176163 for details on a known shutdown problem:
http://www.pc.ibm.com/qtechinfo/MIGR-43679.html


– With NetWare 6.0, the server may show extreme CPU utilization values
(for example, 13000%). This will be fixed with NetWare 6.0 Support Pack
2. See RETAIN tip H176060 at:
http://www.pc.ibm.com/qtechinfo/MIGR-43532.html
– Once supported, a multi-chassis configuration must be fully assembled
before installing NetWare. Novell doesn’t currently support adding chassis
after NetWare is installed.
4. Ongoing work will improve both Linux and key application scalability.
Currently, the general recommendation is to keep system size limited to
eight-way and below, and 16 GB and below. Work on scalability beyond
eight-way is in progress, and is likely to become available in early to
mid-2003.
5. VMware ESX Server 1.5 allows eight virtual machines per processor.
However, a virtual machine (VM) can consist of no more than one processor.
16-way support will require Version 1.5.1.
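The licensing difference mentioned in note 2 can be illustrated with a short calculation. This is a count only, not a licensing tool:

# Processor counts for a 16-way x440 with Hyper-Threading enabled (see note 2 above).
physical_cpus = 16
logical_cpus = physical_cpus * 2           # Hyper-Threading: two logical per physical

windows_2000_license_count = logical_cpus  # counts every processor the OS sees
windows_net_license_count = physical_cpus  # counts physical processors only

print(f"Operating system sees {logical_cpus} processors")
print(f"Windows 2000 licensing counts {windows_2000_license_count}; "
      f"Windows .NET counts {windows_net_license_count}")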

1.13 IBM Director
IBM Director is designed to manage all platforms in the Intel environment and
support a variety of operating systems.
IBM Director 3.1 supports IBM Enterprise X-Architecture capabilities, including
Remote I/O via the IBM RXE-100 Remote Expansion Enclosure and the new
Real Time Diagnostics feature of the x440. CIM-related enhancements include:
򐂰 CIM instrumentation for Linux
򐂰 Mass configuration of client CIM properties — Saves time by setting up and
configuring multiple systems as a group, rather than having to configure each
system individually
򐂰 Hardware instrumentation using CIM — Enables RAID and systems
management hardware information and alerts to be passed up to higher-level
management packages as part of the IBM Director upward integration
modules (UIMs)
IBM Director 4.1 will support VMware ESX Server both at the VMware console
level and at the guest operating system level.
IBM Director includes server extensions that help administrators configure, deploy, manage, and maintain servers easily and effectively. IBM Director
Extensions include the following:
򐂰 System Partition Manager — System Partition Manager provides a
graphical interface for creating static hardware partitions. It allows an


administrator to configure a specific server (while it is offline) from a remote
system, prior to starting the operating system.
See 5.2, “System Partition Manager” on page 150 for more information.
򐂰 Active PCI Manager — Active PCI Manager helps optimize I/O performance
by matching the PCI-X bus and card characteristics and offering guidance on
the best slots in which to install PCI and PCI-X adapters.
See 5.1, “Active PCI Manager” on page 130 for details.
򐂰 Capacity Manager — Capacity Manager monitors critical server resources
such as processor utilization, disk capacity, memory usage and network
traffic. Using advanced artificial intelligence, it identifies bottlenecks for an
individual system, a group of systems, or a cluster, and recommends
upgrades to prevent diminished performance or downtime. Capacity Manager
can even identify latent bottlenecks and make recommendations for
preventive action. For example, Capacity Manager can predict hard disk drive
and memory shortages that might cause problems.
Because Capacity Manager features can help predict problems before they
occur, the administrator can perform proactive planning and schedule service
and upgrades before potential problems degrade performance.
Capacity Manager will be updated to support partitioning in the next release
of IBM Director, planned for the second half of 2002.
򐂰 Cluster Manager — Cluster Manager allows an administrator to easily
identify, configure, and manage clustered servers using one graphical tool.
Administrators can be alerted via pager or e-mail about cluster events in
hardware, the operating system, and Microsoft Cluster Service (MSCS).
Alternatively, Cluster Manager can trigger recovery programs or others
automatically.
򐂰 Management Processor Assistant — The Management Processor
Assistant (MPA) task, previously named the Advanced System Management
task, lets the administrator monitor critical subsystems as well as restart and
troubleshoot servers, even if a server has suffered a fatal error or is powered
off. This utility works in concert with the IBM family of systems management
processors and adapters described previously. IBM Director 3.1 added
management support for the RXE-100 Remote I/O unit.
򐂰 Rack Manager — Rack Manager offers a drag-and-drop interface for easily
configuring and monitoring rack components using a realistic visual
representation of the rack and its components. It also provides detailed health
status information for the rack and its elements. IBM Director 3.1 added the
ability to drag-and-drop objects between racks.
򐂰 RAID Manager — RAID Manager lets an administrator configure, monitor,
and manage ServeRAID subsystems without taking the server offline. IBM
Director 3.1 includes field replaceable unit (FRU) number reporting in alerts for RAID components and hard disk drives. This reduces labor and service
costs by providing replacement part information in the alert message so that
the correct part can be obtained for the service call.
򐂰 Software Rejuvenation — In networked servers, software often exhibits an
increasing failure rate over time, due to programming errors, data corruption,
numerical error accumulation, etc. These errors can spawn threads or
processes that are never terminated, or they can result in memory leaks or file
systems that fill up over time. These effects constitute a phenomenon known
as “software aging”, which can lead to unplanned server outages. Advanced
IBM analytical techniques allow IBM Director Software Rejuvenation to
monitor trends and predict system outages based on the experience of
system outages on a given server. Alerts of this sort act as Predictive Failure
Analysis for software, giving an administrator the opportunity to schedule
servicing (rejuvenation) at a convenient time in advance of an actual failure
and avoid costly downtime.
Software Rejuvenation can be scheduled to reset all or part of the software
system with no need for operator intervention. When Software Rejuvenation
reinitializes a server, the server’s software failure rate returns to its initial lower
level because resources have been freed up and the cumulative effects of
numerical errors have been removed.
When Software Rejuvenation is invoked within a clustered environment,
cluster management failover services (such as Microsoft Cluster Services and
Microsoft Datacenter Server) may be used to stop the offending subsystem
and restart it on the same or another node in the cluster in a controlled
manner. In a clustered environment, xSeries servers can be set to fail over to
another server, then be reset by IBM Director without downtime.
IBM Director 3.1 includes a Trend Viewer feature to graphically monitor the
software aging process and an “application culprit” list that identifies the
applications most likely to be causing the aging.
򐂰 System Availability — System Availability accurately measures
uptime/downtime for individual servers or groups of servers, and provides a
variety of graphical views of this information. This enables users to track the
improvements in their server availability in order to verify the benefits of the
systems management processes and tools. IBM Director 3.1 includes the
ability to distinguish between planned versus unplanned outages.
򐂰 Electronic Service Agent — Electronic Service Agent enables the Director
server to contact IBM automatically in the event of a fault condition. Data
gathered by IBM Director that is relevant to the fault is included in the
message, in most cases allowing IBM service to respond to the condition
without the need for additional details. Once IBM has been notified of the
event, the course of action is the same as if a service call was placed
manually. Electronic Service Agent support requires registering the systems with IBM, including providing a contact name and phone number, and is
available for systems covered under warranty or maintenance agreements.
Electronic Service Agent currently requires the use of an analog phone line
and modem. Access via VPN may be possible in future releases.
See the IBM Redbook Implementing IBM Director Management Solutions,
SG24-6188 for details on IBM Director and its plug-ins.


Chapter 2. Positioning

In this chapter we discuss topics that help you understand how the x440 can be
useful to your business and what is the best configuration to use. The topics
covered are:
򐂰 xSeries 440 application solutions
򐂰 Why choose the x440
򐂰 The benefits of system partitioning
򐂰 Server consolidation


2.1 xSeries 440 application solutions
The x440 is an ideal platform for customers running mission-critical applications.
There are a number of ways the x440 can be deployed in specific application
solution environments. These include:
򐂰 Server consolidation
򐂰 Enterprise applications
򐂰 Infrastructure applications
򐂰 Clustering

2.1.1 Server consolidation
Server consolidation is a process of centralizing business computing workloads
to reduce cost, complexity, network traffic, management overhead and, in
general, to simplify the existing IT infrastructure and provide a foundation for new
solution investment and implementation.
Server consolidation is discussed in detail in 2.4, “Server consolidation” on
page 51.
Server consolidation solutions can be divided into two groups: those where no
more than four-way SMP is needed, and those that will take advantage of more
CPUs.
򐂰 Four-way configurations
The four-way configurations would most likely be good candidates for
traditional messaging/collaboration environments such as Microsoft
Exchange and Lotus Domino. These applications do not scale well beyond a
four-way SMP configuration. It is an optimal platform for customers who
intend to migrate from Exchange 5.5 to Exchange 2000 using new features of
Exchange 2000 such as the support for more databases. Many customers
have distributed Exchange and Lotus Domino sites, which is costly and
difficult to manage. Here, the x440 can be a very attractive platform to
consolidate distributed sites into a central site.
Many ISPs are running different Internet applications and mail systems on
several servers. In most cases, they run applications on several servers to get
better I/O. A four-way x440 server connected to an RXE-100 fulfills this requirement, and ISPs can continue servicing their customers by consolidating to an x440 server.
Although many applications such as file, print, and terminal servers do not
scale well beyond two processors, the four-way x440 can be a good platform
on which to consolidate those distributed applications. For example, using


VMware many file and print servers that are distributed around the enterprise
can be consolidated to a four-way x440 server, reducing the TCO.
Using logical partitioning with four-way configurations can produce a one-box
cluster solution for small-to-medium-sized businesses (SMB) that need to
protect their mission-critical applications and files. With this solution, SMB
customers can reduce their total cost of ownership and save money.
In addition, a four-way x440 can be a good platform for light ERP solutions
such as Navision.
򐂰 Eight-way and 16-way configurations:
Eight-way and 16-way x440 configurations are ideal for customers who want to
consolidate their enterprise applications (ERP, CRM, and SCM) or roll out
new enterprise applications. These configurations offer computing power,
high availability, and reliability, which are the main requirements when running
enterprise applications. The goal is to help customers to control their
expenses while establishing an environment that is easier to manage
because of fewer nodes.
The eight-way and 16-way configurations are solid platforms to be used for
consolidating database applications such as DB2, SQL Server, and Oracle.
For instance, a single database that spans multiple servers can be
consolidated to an eight-way x440 server, or multiple databases on multiple servers can be consolidated to a 16-way x440 complex.
Many customers have multiple databases distributed on multiple sites and
they are planning to migrate to new database versions. This could be a very
costly and time-intensive process. The migration process needs to be well
planned and tested without any interruption of the business process. The
eight-way or 16-way can be an optimal platform for these customers. For
example, you can consolidate the distributed databases on multiple sites to a
16-way x440. Using logical partitioning on x440, you can build, test and
deploy many virtual databases on one physical server.
The main reasons to consolidate database applications are:
– Migration from older database versions to new versions, gaining the advantages in availability, reliability, and performance.
– Support for more databases. For instance, SQL Server 2000 can support
up to 32,767 open databases.
– Reducing the management costs of distributed database sites by
consolidating to an easy-to-manage central site.
In addition, using logical partitioning with eight-way and 16-way configurations
can produce a powerful server solution that is capable of hosting multiple
applications.


2.1.2 Enterprise applications
Because enterprise applications such as ERP, SCM, CRM and BI work with the
most critical data of a business, x440 with its high-availability features is an ideal
server for these applications.
򐂰 Enterprise Resource Planning
Enterprise Resource Planning (ERP) is an industry term for the broad set of
activities supported by multi-module application software that helps a
manufacturer or other business manage the important parts of its business,
including product planning, parts purchasing, maintaining inventories,
interacting with suppliers, providing customer service, and tracking orders.
ERP can also include application modules for the finance and human
resources aspects of a business. Typically, an ERP system uses or is
integrated with a relational database system.
The key operation areas of the x440 for ERP applications are:
– As an application server and as a database server with two-way servers
such as the x330 acting as Web servers.
– As an application server front-end to a pSeries or zSeries database
server, due to the fact that ERP applications involve integration across
heterogeneous environments.
– Using partitionable x440 servers to deploy ERP applications within a
single large-scale server, which could be an attractive solution for SMB
customers offering them new levels of manageability as it relates to their
ERP implementation.
Key server attributes for ERP applications are availability, scalability, and
performance. The x440, with its Enterprise X-Architecture technology such as
XpandOnDemand capability, Active Memory, and XceL4 server accelerator
cache, is a robust basis to build and implement successful ERP solutions.
Key ERP software vendors include SAP, Oracle, PeopleSoft, Microsoft, JD
Edwards, Baan/Invensys, and Navision.
򐂰 Supply chain management
Supply chain management (SCM) is the oversight of materials, information,
and finances as they move in a process from supplier to manufacturer to
wholesaler to retailer to consumer. Supply chain management involves
coordinating and integrating these flows both within and among companies.
The x440 is a preferred platform for SCM management applications. The
x440 offers a range of leading technologies that will help to deliver the uptime
required for business-critical applications at the lowest price/performance
ratio. The x440 covers all high-availability features for customers looking for
servers to power their SCM solutions. Also, the x440 can be considered as an application server or in a heterogeneous environment as a front-end to a
pSeries or zSeries database server.
Key SCM software vendors include i2 Technologies, SAP, International
Business Systems (IBS), JD Edwards, and PeopleSoft.
򐂰 Customer relationship management
Customer relationship management (CRM) is an information-industry term for
methodologies, software, and usually Internet capabilities that help an
enterprise manage customer relationships in an organized way.
With the Intel Xeon Processor MP and the IBM XceL4 cache, the x440
provides a performance-based foundation upon which customers can build
and deploy CRM solutions. The x440 will most likely be implemented as an
application server and/or a database server. In addition, the x440's
partitioning capabilities will help to build a partitioned CRM environment,
allowing customers to maximize server utilization while simplifying overall
management of the deployment.
Key CRM software vendors include: Siebel Systems, Baan/Invensys, Onyx,
PeopleSoft, and SAP.
򐂰 Business Intelligence
Business intelligence (BI) is a broad category of applications and
technologies for gathering, storing, analyzing, and providing access to data to
help enterprise users make better business decisions. BI applications include
the activities of decision-support systems, query and reporting, online
analytical processing (OLAP), statistical analysis, forecasting, and data
mining.
The recent move of BI solutions into smaller enterprises has led to the strong
positioning of Windows on Intel processor-based servers within this market.
The x440 brings scalability and performance to handle compute-intensive BI
applications. A highlight of the x440 is its XceL4 cache, which helps speed up the data-intensive BI applications that help companies increase the productivity of their employees.
Key BI software vendors include SAS, Cognos, Business Objects, Hyperion,
and Crystal Decisions.


2.1.3 Infrastructure applications
Some of the infrastructure applications are database, messaging/collaboration,
and e-business applications. The x440 can be recommended for these three
areas as follows:
򐂰 Database applications:
Four-way and eight-way configurations can be used as database servers, application servers, or combined database and application servers, providing an extremely scalable platform with room to scale to additional
nodes. These configurations require an external storage enclosure or SAN,
depending on the size of the database, which is driven by the number of
users.
The 16-way configuration can deliver a highly reliable and capable platform
for customers who need to run multiple instances of databases that can scale
beyond eight processors.
Key database software vendors include IBM (DB2), Microsoft (SQL Server),
and Oracle.
򐂰 Messaging/collaboration:
The four-way x440 with its high-availability features is a good platform for
messaging/collaboration applications. Even though there are some scalability
limits for Microsoft Exchange 2000 (which does not scale well above four
processors), the x440 can be seen as an ideal server for Exchange 2000
deployments.
Another possible operation area for the x440 in the messaging/collaboration
arena is the utilization of partitioning, allowing customers to maximize server
resources while improving overall manageability.
Key messaging/collaboration software vendors include Lotus (Domino) and
Microsoft (Exchange).
򐂰 e-business:
e-business is the use of Internet technologies to improve and transform key
business processes.
This includes Web-enabling core processes to strengthen customer service
operations, streamlining supply chains and reaching existing and new
customers. In order to achieve these goals, e-business requires a highly
scalable, reliable, and secure server platform.
The x440 is a strong candidate for an application integration server that
integrates the back-end data with the servers containing end-user or client
programs. This involves data transformation, process flow, and other
capabilities, thus allowing companies to integrate applications and other data sources. These types of servers benefit from the processing power offered by
the x440.
Key e-business software vendors include IBM (WebSphere) and BEA.

2.1.4 Clustering
A cluster is a group of independent computers, also known as nodes, that are
linked together to provide highly available resources (such as file shares) for a
network. Each node that is a member of the cluster has both its own individual
disk storage and access to a common disk subsystem.
When one node in the cluster fails, the remaining node or nodes assume
responsibility for the resources that the failed node was running. This allows the
users to continue to access those resources while the failed node is out of
operation.
In addition, the x440 in conjunction with VMware offers clustering, which can be seen
as another key solution for server consolidation. For example, a two-node IIS
cluster and a two-node file server cluster can be consolidated into a single x440
server. This helps customers to save costs, facilitate cluster management, and
improve cluster performance through high-speed SMP Expansion Ports.
The x440 with its high-availability features is an optimal platform to protect
mission-critical applications. The x440 offers two types of clustering for server
consolidation purposes:
򐂰 One-box cluster
This provides simple clustering to deal with software crashes or administrative
errors. The cluster consists of multiple virtual machines (VMs) on a single
physical machine. It supports shared disks without any shared SCSI
hardware. It supports the heartbeat network without any extra network
adapters.


Figure 2-1 One-box cluster running VMware with virtual shared storage: a two-node cluster (nodes A1 and A2) inside a single x440 server

Using VMware to set up a one-box cluster in this way provides the following benefits:
– Much lower cost than the duplicate systems required for traditional clustering
– Protects against all OS and application faults
– Reduces management tasks
򐂰 Cluster across multiple systems
This type of cluster also uses virtual machines. The virtual disks are stored on
real shared disks, so all virtual machines can access them. Using this type of
cluster, you can protect your mission-critical applications in a cost-effective
way. For example, you can set up a cluster to protect your Web server
applications and you can configure a second cluster to protect your file server.
You can consolidate four clusters of two machines each to two physical
machines with four virtual machines each. This provides protection from both
hardware and software failures.


Figure 2-2 Four two-node clusters (nodes A1/A2, B1/B2, C1/C2, and D1/D2) on two x440 servers running VMware with shared storage

Dual-chassis eight-way configurations can be used as clustered database
servers and/or application servers in an ERP/CRM/SCM environment,
delivering high performance, high availability, and reliability, which are key
requirements of enterprise applications. This configuration requires an
external storage enclosure or SAN, depending on the size of the database,
which is driven by the number of users.

2.2 Why choose the x440
There are some good reasons to choose the x440 as your mission-critical Intel
platform. In this section we describe the major benefits of the x440.

2.2.1 IBM XA-32 chipset
The IBM XA-32 chipset contains advanced core logic, which determines how the
various parts of a system (microprocessors, system cache, main memory, I/O,
etc.) interact.


This chipset is built on IBM's advanced 0.13-micron copper technology, which produces faster chips that consume less power and generate less heat. Servers built with this chipset therefore run faster, have lower power costs, and require less cooling, which increases reliability and reduces TCO.
The XA-32 has the following features:
򐂰 Support for up to 16-way SMP with Xeon MP processors and up to four-way
SMP with Xeon DP processors.
򐂰 Support for scalability ports that let you expand the x440 server from two-way to four-way to eight-way and, by connecting two x440s together, to 16-way.
򐂰 32-64 MB of Level 4 cache (levels 1-3 are internal to the Xeon MP
processors), using IBM XceL4 Server Accelerator Cache, to maximize
performance, especially for eight-way and 16-way configurations.
򐂰 Two Remote I/O buses per node to connect an RXE-100 external PCI-X
enclosure.
򐂰 Memory mirroring and up to 6.4 GBps memory bandwidth.
򐂰 Up to 16 GB of main memory per SMP Expansion Module using 1 GB DIMMs
(and 32 GB of RAM with 2 GB DIMMs once they are available).
򐂰 Six PCI-X buses, two for integrated devices, four to internal PCI-X slots.

2.2.2 Intel Xeon MP and DP processors
Many of the x440 models use the Xeon Processor MP, Intel's latest microprocessor for high-end servers. It has the following key features:
򐂰 400 MHz front-side bus providing an effective burst throughput of 3.2 GBps, compared to the 800 MBps available with a 100 MHz bus. This provides high performance, especially with TCP/IP.
򐂰 Hyper-Threading creates two logical processors that share resources in one
physical processor. A processor with Hyper-Threading can execute multiple
threads, delivering a performance improvement in servers running software
that has been optimized to use Hyper-Threading:
– On a four-way x440, the benefit can be as much as 20%
– On an eight-way, the benefit can be as much as 10%
Figure 2-3 on page 45 shows that two physical processors will outperform
one processor with Hyper-Threading enabled.
Customers should expect improved results as more applications become Hyper-Threading aware. Best-case applications today are databases, Java
applications, Web servers, and e-mail.


Figure 2-3 Comparing processor performance with and without Hyper-Threading (one-way and two-way configurations, showing the physical processor baseline and the Hyper-Threading delta)

򐂰 The three-level cache architecture of the Xeon MP processor delivers the following benefits compared to the Pentium III Xeon processor:
– Higher throughput: peak bandwidth of 51.2 GBps compared to 28.8 GBps for the Pentium III Xeon processor.
– Improved average cache hit rates due to a larger cache line size: 128 bytes compared to 32 bytes for the Pentium III Xeon processor.
򐂰 Advanced Dynamic Execution
The Pentium III Xeon processor has a 10-stage pipeline. However, the large
number of transistors in each pipeline stage means that the processor is
limited to speeds under 1 GHz due to latency in the pipeline.
The Xeon Processor MP has a 20-stage pipeline, which can hold up to 126
concurrent instructions inflight and up to 48 reads and 24 writes active in the
pipeline. Faster raw execution results in higher transaction rates and faster
response times for Web and database servers.
Intel reports that the Xeon MP processor supports 36% more users and can process 40% more orders in an e-business environment than the Pentium III Xeon processor.
The Xeon DP is similar to the Xeon MP and is also based on the Intel NetBurst
micro-architecture. The Xeon DP was designed by Intel to only support two-way
SMP. However, with the use of the IBM XA-32 chipset, the x440 can have up to
four Xeon DP processors installed.


x440s with Xeon DP processors are a good platform for customers who are
looking for better price/performance platforms but still maintain high levels of
scalability that the x440 provides.
Lab tests using standard transaction processing benchmark conditions have
shown that the comparative performance of the Xeon DP and Xeon MP x440s is
approximately the following:
򐂰 Two-way 1.6 GHz Xeon MP (1 MB L3 cache) = 1.0
򐂰 Two-way 2.4 GHz Xeon DP (0 MB L3 cache) = 1.10
򐂰 Four-way 1.6 GHz Xeon MP (1 MB L3 cache) = 1.70
򐂰 Four-way 2.4 GHz Xeon DP (0 MB L3 cache) = 1.65

2.2.3 XceL4 Server Accelerator Cache
The XceL4 Server Accelerator Cache (L4 cache) is 32 MB of PC200-compliant
DDR-SDRAM using a 64-bit 400 MHz bus with 3.2 GBps throughput.
32 MB of high-performance ECC L4 cache memory per four-way SMP Expansion Module speeds up your most complex applications by reducing
memory latency and increasing memory bandwidth. The more high-speed cache
memory there is, the more often the processor finds the data it needs and the
less often it has to access main memory.
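The 3.2 GBps figure quoted above follows directly from the bus width and clock rate; a quick worked check:

# Worked check of the XceL4 cache bandwidth: a 64-bit (8-byte) bus at 400 MHz.
bus_width_bytes = 64 // 8
effective_clock_hz = 400e6
bandwidth_gbps = bus_width_bytes * effective_clock_hz / 1e9
print(f"{bandwidth_gbps:.1f} GBps")   # 3.2 GBps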
The XceL4 Server Accelerator Cache provides the following benefits:
򐂰 Delivers up to 20% more performance for transaction-intensive workloads.
򐂰 Minimizes processor and I/O memory contention, delivering full PCI-X bandwidth to network and storage devices.
򐂰 Is designed to provide zero wait-state memory access, with up to a 3X performance increase over typical main memory fetches.

2.2.4 High-performance memory subsystem
The x440 memory subsystem provides multiple levels of redundancy, combining
memory mirroring, Chipkill, Memory ProteXion, and memory scrubbing.
Combining Chipkill with Memory ProteXion means that up to two failed memory
chips (“chipkills”) per memory port on an x440 can be tolerated. A 16-way x440
with its eight memory ports could sustain up to 16 failed chips.
The first chipkill on each port would not even generate a Light Path error,
because Memory ProteXion would provide the first layer of protection. Each
memory port could then sustain a second chipkill without shutting down.
Provided that Active Memory with memory mirroring is enabled, the third chipkill on that port would send the alert and take down the DIMM, but keep the system
running out of the redundant memory bank.
To maintain throughput to the processors, the x440 memory subsystem improves
performance by the use of four-way interleaving. Interleaving improves memory
performance because multiple 64-bit objects can be transferred into the memory
controller in a single operation. This improves the memory performance by
reducing the latency time.
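Conceptually, four-way interleaving spreads consecutive 64-bit (8-byte) transfers across four memory resources so that sequential accesses overlap. The mapping below is a simplified sketch, not the actual x440 controller algorithm:

# Simplified illustration of four-way interleaving (not the actual controller mapping).
NUM_WAYS = 4
LINE_BYTES = 8      # one 64-bit object

def interleave_target(address):
    return (address // LINE_BYTES) % NUM_WAYS

for address in range(0, 64, LINE_BYTES):
    print(f"address 0x{address:02x} -> interleave way {interleave_target(address)}")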
For more information regarding an x440 memory subsystem, refer to 1.7,
“System memory” on page 19.

2.2.5 Active PCI-X
PCI-X is a new PCI bus specification and is now available on the xSeries 440. It
was developed to satisfy the increased requirements of I/O adapters such as
Gigabit Ethernet, Fibre Channel and Ultra 3 SCSI. PCI-X is fully compatible with
standard PCI devices.
PCI-X provides a new generation of capabilities for the PCI bus, including more
efficient data transfers, more adapters per bus segment, and faster bus speeds
for server systems. PCI-X enhances the PCI standard by doubling the throughput
capability and providing new adapter-performance options while maintaining
compatibility with PCI adapters.
PCI-X allows all current 66 MHz PCI adapters, either 32-bit or 64-bit, to operate
normally on the PCI-X bus. PCI-X adapters take advantage of the new 100 MHz
and 133 MHz bus speeds, which allow a single 64-bit adapter to move as much
as 1 GB of data per second.
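The 1 GB per second figure follows from the bus width and clock rate; as a rough check for a 64-bit bus at 133 MHz:

# Worked check of peak PCI-X throughput: a 64-bit (8-byte) bus at 133 MHz.
bytes_per_cycle = 64 // 8
clock_hz = 133e6
throughput_gb_per_s = bytes_per_cycle * clock_hz / 1e9
print(f"{throughput_gb_per_s:.2f} GB/s")   # about 1.06 GB/s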
Additionally, PCI-X supports twice as many 66 MHz/64-bit adapters in a single
bus as PCI. Active PCI-X also increases total server availability by letting you add
or replace Active PCI and Active PCI-X cards without having to shut down your
xSeries servers.

2.2.6 XpandOnDemand scalability
XpandOnDemand scalability represents an industry-standard implementation of
true “pay as you grow” scalability. New levels of scalability are achieved with the
Enterprise X-Architecture platform using enhanced, high-performance SMP
building blocks that allow effective scalability beyond four-way SMP.


The modular scalability feature of XpandOnDemand offers the following benefits:
򐂰 Performance scalability through the SMP Expansion Module
SMP Expansion Modules can be easily added at any time to increase the
operational capacity of a node. By adding a second SMP Expansion Module,
a system can take advantage of more processors, memory and Level 4 cache
to increase overall system performance for managing more database users
on a network or processing more transactions faster.
򐂰 Performance scalability through multi-node SMP
Enterprise X-Architecture technology powers this industry-standard server
building block. By linking two x440 nodes together, a customer can assemble
a modular SMP system with increased performance.
򐂰 I/O scalability through the RXE-100 Remote Expansion Enclosure
Adding additional PCI-X slots is achieved by connecting an RXE-100 Remote
Expansion Enclosure to the server.

2.2.7 System Partition Manager
System Partition Manager is designed for easily managing multi-node
configurations, allowing the customer to build complexes of four-way and
eight-way nodes up to 16-way SMP, define and activate/deactivate partitions, and
enable automatic re-partitioning of hardware under the control of Director Event
Action plans.
Another feature of System Partition Manager is chassis failure recovery. If the
operating system crashes in a multi-node partition due to a failure of one of the
chassis, System Partition Manager can generate an alert event to IBM Director,
notifying the administrator to manually reconfigure the partition or initiating
additional events to automatically reconfigure the multi-node partition and thus
restart the chassis in that partition.
For this to occur, the system administrator would have created IBM Director
action plans to define what action SPM must take when a chassis fails.
Customers must consider such things as boot device attachment, data storage
attachment, and other topology issues when configuring the complex and
creating the action plans.
System Partition Manager uses the network link to the onboard systems
management processor or adapter to establish the relationships among nodes.
These relationships are maintained in a persistent database and can be recalled
and activated at any time using the graphical interface.


2.3 The benefits of system partitioning
System partitioning is virtualization of system resources, including processor,
memory, I/O, and storage so that all concurrent users appear to have access to
the system, although each user is actually segmented and protected from the
actions of other users. If one virtual partition freezes up, it would not affect the
others.
System partitioning offers the ability to divide a system so that it can
simultaneously support multiple operating system images. Among the benefits of
system partitioning are:
򐂰 Server hardware consolidation
򐂰 High availability
򐂰 Software migration and coexistence
򐂰 Version control
򐂰 Development
򐂰 Testing and maintenance
򐂰 Better protection from viruses and software crashes
򐂰 Workload isolation
򐂰 Independent backup and recovery on a partition basis

System resources, including processor, memory, I/O and storage are virtualized
so that all concurrent programs appear to have complete access to the system. If
one virtual partition were to lock up, it would not affect the others.
Here are just a few of the ways that system partitioning can help you to improve
IT efficiency:
򐂰 Server hardware consolidation — Consolidate many underused,
underpowered, and unnecessary servers into a few productive ones. Reduce
the number of current servers and buy fewer servers in the future.
򐂰 Increased server utilization — Divide a processor into multiple partitions
rather than wasting an entire processor on one low-throughput application.
򐂰 Simplified server management — Manage fewer servers centrally versus
many of them individually in multiple locations. Have fewer servers, cables,
operating systems, and applications to deal with.
򐂰 Low-cost clustering/failover — Create clusters of partitions among hardware
nodes. Have several different servers fail over to multiple partitions in one
server.
򐂰 Simplified application deployment — Once you have tested and qualified a
specific hardware platform for use with a particular operating system and
application combination, you can deploy software images on multiple
partitions, rather than having to requalify the software on another hardware
platform.


Two types of system partitioning are:
򐂰 Physical partitioning
With physical partitioning, a single server consisting of two nodes, such as the
x440, can run multiple instances of an operating system in separate
partitions. It can also run multiple versions of an operating system or even
different types of operating systems.
This means that a server can continue to run an operating system in one
node while you install and test another version of that operating system, or a
different operating system entirely in another node on that server without
having to take the entire server offline.
Physical partitioning includes two different types:
– Static partitioning, which can be implemented using IBM System Partition
Manager, requires the nodes being adjusted to be taken offline. The
remaining nodes in the server are unaffected and continue to operate
normally. Static partitioning is performed on node or system boundaries.
This means that a partition must have the hardware to function
independently. Static partitioning also means that one node can't be
subdivided into multiple partitions, but a partition can consist of multiple
nodes.
– Dynamic partitioning has the same hardware boundaries as static
partitioning, but it permits hardware reconfiguring while the partition's
operating system is still running.
򐂰 Logical partitioning
Servers using VMware ESX Server will be able to reconfigure a system
partitioned at the individual processor level, without shutting down and
restarting the virtual server. When workload demands change, you can also
reassign resources from one logical partition to another by restarting the
server.
If you intend to consolidate servers, system partitioning offers many benefits:
򐂰 Multiple operating systems previously run on multiple servers could all be
running simultaneously on one server in one location.
򐂰 System partitioning enables you to set up different cluster types. Clustering
delivers high availability, because multiple servers can be connected together
with one server backing up the other. In the event that one of the servers
requires maintenance or service, the second server can support the users
and workload while corrective action is performed and the offline server is
brought back online.


򐂰 Using IBM technology such as memory mirroring, Chipkill Memory, Memory
ProteXion and system partitioning, customers can implement high-availability
cluster solutions.
򐂰 Scalable clusters provide customers with industry-leading scalability at a
system level, as well as load balancing to maximize performance and the
support received by users accessing the system.

2.4 Server consolidation
Server consolidation means combining the functions performed by many servers into fewer servers to reduce cost, complexity, network traffic, and
management overhead, and to increase the efficiency of systems management,
security, and resource utilization.
Server consolidation is complex and needs a methodical approach because of the
nature of the problem:
򐂰 Large numbers of servers are involved.
򐂰 Servers from different vendors, of different sizes, with different configurations.
򐂰 Software ranges from widely used and well-known to locally developed and poorly understood.
򐂰 Business services being provided will vary greatly in volume and type.
򐂰 The servers being consolidated may provide essential business functionality that must be protected from disruption.
򐂰 Consolidation must take place without limiting an organization's future ability to adjust the size, scope, and direction of its business initiatives.

2.4.1 Types of server consolidation
One of the most important things to remember is that there are no “off-the-shelf”
solutions for server consolidation. Every organization requires a unique solution
that will match its unique infrastructure and business model.
There are four general types of server consolidation, offering a wide range of business value through varying degrees of solution complexity and investment:
򐂰 Centralization
򐂰 Physical consolidation
򐂰 Data integration
򐂰 Application integration


These are summarized in Table 2-1 and described in detail below.
Table 2-1 Server consolidation strategies

Type of consolidation     Definition                      Potential benefits
Centralization            Relocate to fewer sites         Reduction in administration costs;
                                                          increased reliability and availability;
                                                          lower operation costs; improved
                                                          security and management
Physical Consolidation    Replace with larger servers     Reduced hardware and software costs;
                                                          improved processor utilization; reduced
                                                          facilities costs (space, power, A/C);
                                                          lower operations costs; improved
                                                          manageability
Data Integration          Combine data from multiple      Reduced storage management costs;
                          sources into a single           improved resource utilization; reduction
                          repository                      in administration costs; improved
                                                          backup/recovery capabilities; enhanced
                                                          data access and integrity
Application Integration   Consolidation of multiple       Reduction in administration costs;
                          applications onto one server    increased reliability and availability;
                          platform                        reduced facilities costs (space, power,
                                                          A/C); lower operation costs; scalability

򐂰 Centralization
Server consolidation means different things to different people. As shown in
Figure 2-4, in its simplest form, servers are physically moved to a common
location. Because this simplifies access for the IT staff, it helps reduce
operations support costs, improve security, and ensure uniform systems
management. This is an important predecessor to future consolidation
activities.

Figure 2-4 Centralization: servers from sites such as London, Toronto, Sydney, and Los Angeles are relocated to fewer sites such as Zurich and Hong Kong

Centralization involves relocating existing servers to fewer sites, for example,
taking 20 servers scattered over three floors in your building and moving them to a single server room, or moving 200 servers originally installed across 20
locations to three data centers.
– Relocating existing servers to one or fewer IT sites
Centralization, or data center consolidation, may be a first step for an
organization after a merger. The resulting entity typically does not want to
attempt merging applications immediately; however, it will collocate the
systems as a first step.
For both servers and storage systems, two subcategories of centralization
are defined:
• Virtual centralization, which is achieved mainly through the network
• Physical centralization, where hardware is physically moved to a smaller
number of locations

Centralization is often the initial step a company takes toward controlling
costs through consolidation. It’s also generally the first step taken toward
rationalizing the architecture after a merger or acquisition.
By simply relocating existing servers to a smaller number of IT sites,
economies of scale in operations can provide simplified management and
cost improvements.
– Virtual centralization or remote management
You can begin centralization in small steps. With virtual centralization or
remote management, physically dispersed servers or storage systems are
logically centralized and controlled through the network. Hardware
remains physically distributed, but is brought under a common umbrella of
systems management and network management tools. Operations costs
can therefore be reduced, and system availability can be improved.
– Physical centralization or server relocation
Existing servers or storage systems are physically relocated to one or
fewer IT sites. Because this simplifies access for the IT staff, it helps
reduce operations support costs, improves security, and ensures uniform
systems management. This is a step in the right direction, but the payback
is relatively low. However, it is an important predecessor to future
consolidation activities.
򐂰 Physical consolidation
Physical consolidation is the replacement of a number of smaller systems
with fewer, more powerful systems. This consolidation has several
advantages:
– It improves availability because there are fewer points of failure.
– It can reduce the cost and complexity of system communications.
– It simplifies operations.


With its Enterprise X-Architecture enabled features, the x440 server offers
flexibility, availability, and scalability to handle customer requirements for
consolidating distributed workloads onto a single powerful and highly
available platform to achieve total cost of ownership (TCO) savings.
– Reducing the number of servers by replacing many small servers
with fewer large servers
Physical consolidation may be implemented on a site, department, or
enterprise basis. For example, many x220 file/print servers can be
consolidated onto newer, much faster, more reliable x440 servers, or older
servers with high hardware maintenance costs can be consolidated or
replaced by newer, much faster, cheaper-to-maintain x440 servers.
– Physical server consolidation
The number of separate hardware platforms and operating system
instances within a consolidation site may vary considerably by customer.
Typically, some reduction in the number of distinct servers is
accomplished when gathering distributed systems into a central
installation or when a number of small servers are replaced with larger
servers of the same platform. Based on the enterprise’s platform, four
physical server consolidation cases can be considered.
• Case 1: Small servers from one platform to server(s) on the same platform
• Case 2: Small servers from different platforms to servers on different
platforms (platform source and target are the same)
• Case 3: Small servers from one platform to server(s) on a different platform
• Case 4: Small servers from different platforms to server(s) on a different
platform (platforms’ source and target are not the same)

Cases 1 and 2 are physical server consolidation, and there is no logical
work to do. For cases 3 and 4, a platform migration has to be planned, and
applications and data have to be ported from one platform to another. The
objective of the physical server consolidation phase is not to share
applications or data but to have an application that was running on one
platform run on a new platform. Therefore, this operation has to be
differentiated from application or data integration.
Physical consolidation can be divided into two subcategories, namely
server consolidation and storage consolidation.
This can take place within the same architecture -- for example, several
two-way servers replaced with one 16-way x440 server or many
uniprocessor servers moving to several multiprocessor x440 servers.


With x440 capabilities such as system partitioning, you can migrate and
consolidate workloads across systems for improvements in systems
management and resource utilization.
This approach is typically appropriate for implementations of key packaged
applications such as SAP, PeopleSoft, and Siebel, where minimal integration
with other applications and data is required. LAN file/print servers using
Windows 2000 or Novell NetWare solutions represent another opportunity
area for consolidation activities and savings.
Storage consolidation is combining data from different sources (same or
disparate types) into a single repository and format. This means that storage
is viewed as an enterprise resource, where centralized disk space is used to
supply the storage for the servers of the enterprise.
Additional benefits can be gained through data integration and application
integration. While these are often more complex projects that require
extensive analysis, planning and implementation, they can provide significant
return-on-investment.
򐂰 Data integration
Data Integration involves physically combining data from different sources
across the enterprise into a single repository and format. The result is that the
merged data can reside on fewer servers and more centralized and consistent
storage devices, greatly lowering the total costs.
When all corporate data resides on the same system, consolidation allows
high levels of security and data integrity that are nearly impossible to achieve
in a distributed environment. In addition, data sharing throughout the
enterprise is vastly simplified.
The data can be file data, such as Windows 2000, Novell, or Linux file shares,
consolidated onto a single network operating system. Also, multiple types of
databases, such as DB2, Informix, Oracle, and Sybase, can be converged to
fewer database architectures.
In many client/server infrastructures, centralizing LAN data can bring dramatic
improvements in data transfer speed. New enhancements in communications
hardware will expand the high-speed connectivity options to server platforms
of all types.
There can be two kinds of data integration:
– Data integration from several servers, consolidated into a single repository
– Data integration from several repositories on one server, consolidated into
a single repository
Depending on the type of application integration selected, data integration
can be performed separately or together with application integration.


򐂰 Application integration
Application integration is the combining of multiple, similar applications, such
as Web servers, onto one consolidated server.
Application integration is also the combining of different application workload
types within a single server/system and migrating an application or data to a
new platform in order to collocate the application and data.
It reduces administration, operation, and facilities costs and increases
reliability and availability.
The main objective of application integration is to migrate applications from
one or several locations to a single location. Based on the consolidation
platform, this migration can take different forms:
– The migration may not bring any additional costs beyond that of relocating
the application on a new server.
– The migration may imply that application programs have to be recompiled
in order to run on the new platform.
– The migration may imply that application programs have to be redesigned
and rewritten in order to run on the consolidation platform.
As with physical server consolidation, application integration has several
cases:
– Application integration can combine different application workload types
within a single server or system.
– Distributed systems that do not run identical applications and system
software may have to be integrated into a consolidation server running a
different operating system.
From another point of view, consolidation takes one of three basic approaches:
򐂰 Logical
Logical consolidation brings all server resources to the same level so that
they can be viewed logically as a single unified environment.
In logical consolidation, actual systems are still distributed, while
administrative procedures and processes are standardized company-wide.
This kind of consolidation is relatively easy and safe to implement, but it
carries the least potential for significant returns. Cost savings come from
better asset management and opportunities to deploy high-quality, consistent
administrative practices across the enterprise.
򐂰 Physical
Physical consolidation does pretty much what it says: systems are relocated
to a single server site. The number of servers you have to manage remains
the same, and cost savings come from better staff utilization, higher service
levels, simplified backups and restores, and better asset management and
security.
򐂰 Rational
In combination, or rational, consolidation, the company's distributed
applications and services are combined onto fewer servers. It is a
considerably more complex undertaking, but the potential rewards are
greater. Cost savings here can range from 25 to 75 percent, resulting from better
asset utilization and elimination of unnecessary systems, reduced staffing,
lower maintenance costs, and fewer operating environments to support.

2.4.2 Why consolidate servers
IT managers are feeling pressure to reduce costs, maintain or improve service
levels, and maintain or improve the availability of systems that become ever more
critical to daily operations.
Users want new applications, which are often delayed or inadequate because of
limitations in the IT infrastructure. IT needs to provide a cost-effective and
reliable service, which is made difficult by constantly changing applications.
Many organizations are realizing that, as the number of servers increases, the
cost and operational complexity also grow. In many cases, there are concerns
about whether multiple distributed servers can provide the application
availability, hours of service, responsiveness, and ability to grow with the
requirements of the business. These characteristics are increasingly demanded
by business applications. To reduce these costs, many customers are
attempting to consolidate their servers into a more manageable central location.
The main objectives of server consolidation are:
򐂰 Recentralizing servers
򐂰 Merging workloads onto a single large server
򐂰 Consolidating architectures
򐂰 Optimizing the IT infrastructure

2.4.3 Benefits from server consolidation
The main benefits of server consolidation are:
򐂰 Single point of control
Rapidly growing firms, especially those growing through mergers and
acquisitions, frequently find that disparate distributed systems are so
unwieldy to manage that they risk losing control, which could constrain
further corporate growth.


A single point of control allows enterprises to:
– Reduce or eliminate department operational costs
– Reduce some software licenses
– Reduce number of systems, disk storage costs
– Reduce maintenance charges
– Avoid multiple copies of the same application on distributed systems
– Reduce owner operational costs
– Offer better availability of service
– Improve systems management
– Have better version control management
– Have better software distribution
– Reduce risk and increase security

򐂰 Giving users better services
With a consolidated infrastructure, end users can count on round-the-clock
service, seven days a week. The response time is much better than with an
overly distributed environment, and the data is more easily accessible while
being highly protected. The control procedures are simpler, while security
becomes even higher. And information sharing is improved, giving end users
increased data consistency. The availability of service is improved mainly due
to a reduction in the time needed to communicate between clients and
servers in a single location.
򐂰 Regaining flexibility
The standardization of procedures, releases, and servers also makes it easier
to install new application software, for example, Internet and intranet,
electronic commerce, and so on. In today’s fast moving environment,
computing resource consolidation enables a trouble-free upgrade of the
information system and less costly adaptation to organization or environment
changes. Enterprises can react more quickly to market changes, since
storage is readily available and can easily be reallocated.
򐂰 Avoid floor space constraints
While a small server may easily fit into a closet, as computing demands
increase, enterprises find that suitable floor space is hard to find for
proliferating small servers. The solution is a central site outfitted with
appropriate power, cooling, access to communications links, and so on, and
populated with more powerful systems, each giving more performance in the
same footprint.
򐂰 Reduction of the Total Cost of Ownership (TCO)
There are several costs associated with server consolidation, including:
– Hardware costs — new servers and infrastructure, upgrades
– Software costs — fewer software licenses are required with fewer servers
– Disruption costs — migration, change management


򐂰 Manageability and availability
Server consolidation can help you improve manageability and availability of IT
systems in the following ways:
– Enterprise management - Integrated operations allows for consistent
management of all facilities and IT services.
– Consistent performance - Providing consistent response time at peak load
periods is very important.
– Dependability - Commonly cited problems of distributed environments
include frequency of outages and excessive requirements for manual
intervention by the IT staff.
In addition, it provides the following benefits:
– It is easier to enforce consistent user policies in a consolidated
environment.
– Fewer servers lead to a simpler network structure that is easier to
manage.
– Reorganization following mergers or acquisitions is easier in a
well-controlled environment.
– Consolidation encourages standardization of tools, processes, and
technologies to provide a stable and consistent application platform.
Server consolidation can help you improve data access and protection in the
following ways:
– Network technology - The growth of networking and network speeds is
enabling the centralization of IT networks today and will continue and
expand into the future.
– Fragmentation and duplication of data - This is a core issue in most
organizations with large numbers of distributed servers.
– Physical security - Consolidation of servers in a central data center can
restrict unwanted access and ensure a more secure environment.
– Integrity, local backup and recovery - Enterprises are concerned about the
dangers of business disruption, customer lawsuits, and regulatory action
in the event of severe data loss, and they need to implement effective
disaster recovery procedures.
Server consolidation can help you leverage existing investments in the
following ways:
– Expand existing servers - Add new capabilities to the existing installation
rather than to deploy new dedicated servers.
– Optimization of capacity utilization - In order to manage performance and
maintain acceptable and consistent response times, enterprises typically
run at 50-60% utilization. Excess or underutilized capacity on one server
cannot be shared with workloads of other servers in a distributed
environment.
– Optimization of skilled resources - Under the distributed alternative,
systems management responsibilities are often only part-time, extra-duty
assignments such that a critical skill level is rarely achieved. Furthermore,
since other departments may employ disparate architectures and
applications, there is little opportunity to benefit from the experiences of
others.
򐂰 Scalability and workload growth
Server consolidation can help you handle scalability and workload growth
issues in the following ways:
– True scalability - Server consolidation provides the ability to deal with peak
usage without crashing or seriously degrading performance. It also
provides an upgrade path without degradation in response, excessively
complex forms of database partitioning, or other problems.
– Granular upgrades - Server consolidation provides the ability to quickly
grow the number of users, the number of applications, or the size of an
application when needed, without major disruptions to the current
production environment.
򐂰 Service level
Most companies spend a large portion of their IT budget on services. They need services for
hardware, software, and infrastructure maintenance. Server consolidation can
help you to reduce the increasing service costs in the following ways:
– Delivery of a specified service level is costly if servers are uncontrolled.
– Management of servers as corporate assets is easier when they are
centralized.
– Application deployment is quicker and easier when not spread over a large
number of servers.
– Staff time freed from server maintenance tasks can be used for activities
more directly related to business needs.
򐂰 Business continuity
Almost all enterprises need to run their business without interruption.
Business interruption can be very costly and it influences the productivity of
your business. Server consolidation can help you to run your business without
interruption in the following ways:
– Consolidating IT resources can help you ensure that critical business
information and processes are accessible and shared across the
enterprise.


– Implementing critical new solutions that may enable a competitive edge is
easier.
򐂰 Reduced technical complexity
Three-tier logical architectures tend, in practice, to become five-tier
architectures (client, local server, central server, gateway, and enterprise
server). Server consolidation can simplify this technical complexity by reducing
the true number of tiers in a purported three-tier architecture through the
reduction or elimination of central servers and gateways.


Chapter 3. Planning
In this chapter we discuss topics you need to consider before you finalize the
configuration of your x440 system and before you begin implementing the
system. The topics covered are:
򐂰 System hardware
򐂰 Cabling and connectivity
򐂰 Storage considerations
򐂰 Server partitioning and consolidation
򐂰 Operating system considerations
򐂰 Application considerations
򐂰 Rack installation
򐂰 Power considerations
򐂰 Solution Assurance Review


3.1 System hardware
The x440 provides a scalable and flexible hardware platform. There are a
number of important aspects of the system hardware to consider when planning
your configuration. These are discussed in this section.
Tip: For the latest hints and tips on the x440, review the document Hints, Tips,
and Frequently Asked Questions for the xSeries 440 Quick Reference,
available from:
http://www.pc.ibm.com/qtechinfo/MIGR-43876.html

3.1.1 Processors
There are currently two processor types available with the x440 system:
򐂰 Xeon DP models can be ordered with either two Xeon DP processors in a
single SMP Expansion Module or with four Xeon DP processors in two SMP
Expansion Modules. There is no further upgrade beyond four Xeon DP
processors, other than replacing them with Xeon MP processors.
򐂰 Xeon MP models come with two Xeon MP processors installed in the
standard SMP Expansion Module. Up to four Xeon MP processors are
supported in the standard SMP Expansion Module. Using the optional second
SMP Expansion Module, part number 32P8340, up to eight processors can
be installed in an x440.
Processors are available as options:
򐂰 Xeon Processor MP 2.0 GHz 2 MB L3 Cache, 59P5173 (“Gallatin”)
򐂰 Xeon Processor MP 1.9 GHz 1 MB L3 Cache, 59P5172 (“Gallatin”)
򐂰 Xeon Processor MP 1.5 GHz 1 MB L3 Cache, 59P5171 (“Gallatin”)
򐂰 Xeon Processor MP 1.6 GHz 1 MB L3 Cache, 32P8707 (“Foster”)
򐂰 Xeon Processor MP 1.5 GHz 512 KB L3 Cache, 32P8706 (“Foster”)
򐂰 Xeon Processor MP 1.4 GHz 512 KB L3 Cache, 32P8705 (“Foster”)
򐂰 Xeon Processor DP 2.4 GHz 512 KB L2 Cache, 37L3533 (“Prestonia”)

Key processor configuration rules:
򐂰 All CPUs used in a single-server (that is, two, four or eight-way) or
multi-server (eight, 12 or 16-way) configuration must be the same type,
speed, and L2/L3 cache size.


򐂰 For servers with Xeon MP processors:
– Ensure you order sufficient processors to maintain a supported
configuration of two, four, or eight CPUs. Other quantities of CPUs (3, 5, 6,
or 7) are not supported.
– The standard SMP Expansion Module must have four processors installed
before the second one can be installed and used.
– Use part number 32P8340 for the second SMP Expansion Module. This
module is “unpopulated” (that is, it does not have any CPUs or memory
installed in it).
– The second SMP Expansion Module is supported only with four Xeon MP
processors. Consequently, if you install the second one, the system must
have eight CPUs after the installation.
– All Xeon MP processors must be identical for 16-way configurations.
򐂰 For servers with Xeon DP processors:
– Each SMP Expansion Module must have two processors installed, and
those processors must be installed in CPU sockets 1 and 4.
– The standard SMP Expansion Module must have two Xeon DP processors
installed before the second one can be installed and used.
– Use part number 71P7919 for the second SMP Expansion Module. This
part number includes two 2.4 GHz Xeon DP processors.
– You can upgrade a Xeon DP model to have Xeon MP processors, but all
Xeon DP CPUs must be removed. You cannot mix Xeon MP and Xeon DP
processors in the same x440 system.
See 1.5, “SMP Expansion Module” on page 17 for more information on the SMP
Expansion Modules.
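
These configuration rules can be checked before an order is finalized. The following is a minimal sketch in Python, written for this guide as an illustration only (the function and its names are not part of any IBM tool); it flags unsupported processor quantities and mixed processor types:

    # Illustrative sketch: check an intended x440 processor configuration
    # against the rules described above.
    def check_cpu_config(cpu_type, cpus_per_chassis, identical=True):
        """cpu_type is 'MP' or 'DP'; cpus_per_chassis lists the CPU count in each chassis."""
        supported = {"MP": {2, 4, 8}, "DP": {2, 4}}
        problems = []
        if not identical:
            problems.append("All CPUs must be the same type, speed, and cache size.")
        for count in cpus_per_chassis:
            if count not in supported[cpu_type]:
                problems.append("%d Xeon %s CPUs in one chassis is not a supported quantity."
                                % (count, cpu_type))
        return problems

    # Example: a chassis ordered with six Xeon MP processors is flagged.
    print(check_cpu_config("MP", [6]))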

3.1.2 Memory
The 16 sockets on each SMP Expansion Module are divided into two ports, and
each port contains two banks:
򐂰 Port 1:
– Bank 1 = DIMM connectors 1, 3, 5, 7
– Bank 3 = DIMM connectors 2, 4, 6, 8
򐂰 Port 2:
– Bank 2 = DIMM connectors 9, 11, 13, 15
– Bank 4 = DIMM connectors 10, 12, 14, 16


Physically, the banks occupy alternating sockets, as shown in Figure 3-1 on
page 66.

Figure 3-1 DIMM sockets on the x440 SMP Expansion Module

Key memory configuration rules:
򐂰 Because the x440 uses four-way interleaving, memory DIMMs must be
installed in banks (four DIMMs). Supported DIMMs are:
– 512 MB DIMMs, part number 33L3324
– 1 GB DIMMs, part number 31P8300
– 2 GB DIMMs, part number 31P8840
򐂰 Memory DIMMs of different sizes can be used in the same SMP Expansion
Module, but all four DIMMs in a bank must be the same size.
򐂰 If you want to install more than 32 GB of RAM, you must use two SMP
Expansion Modules. This in turn means that a certain number of CPUs must
also be installed. In Xeon MP-based systems, eight processors must be
installed and in Xeon DP-based systems, four processors must be installed.
򐂰 Four 512 MB or four 1 GB DIMMs are standard in the Xeon MP models, and
the Xeon DP models have eight 512 MB DIMMs standard (see Table 1-1 on
page 3). If you wish to install more than 26 GB in the standard SMP
Expansion Module, you will need to remove the 512 MB DIMMs and fully
populate the module with 2 GB DIMMs.


Memory mirroring
As discussed in 1.7, “System memory” on page 19, memory mirroring is
supported by the x440 for increased fault tolerance and high levels of availability.
Key configuration rules relating to memory mirroring:
򐂰 Memory mirroring must be enabled in the BIOS (it is disabled by default). See
4.1.2, “Enabling memory mirroring” on page 108 for details.
򐂰 Enabling memory mirroring halves the amount of memory available to the
operating system.
򐂰 Both ports in an SMP Expansion Module must have the same total amount of
memory. Partial mirroring is not supported.
򐂰 When using memory mirroring, all of the DIMMs in an individual memory port
(that is in both banks) must be the same size and clock speed (all memory
must be 133 MHz DIMMs). DIMM sizes in one port can be different from
DIMM sizes in the other port, but the total amount of memory in Port 1 must
be equal to the total memory in Port 2.
Important: While memory mirroring is disabled, DIMMs in one bank may
be a different size from DIMMs in the second bank of the same port. This
configuration is not supported if memory mirroring is enabled.
򐂰 The ability to hot-replace a failed DIMM or hot-add additional DIMMs is
currently not supported.
򐂰 SMP Expansion Modules are individually configured for memory mirroring in
the BIOS. This means that as well as full memory mirroring, you can also
enable memory mirroring only in one SMP Expansion Module. IBM
recommends against this.
򐂰 Memory mirroring does not work across SMP Expansion Modules. You
cannot set up four 512 MB DIMMs in the bottom SMP Expansion Module to
be mirrored by four 512 MB DIMMs in the top SMP Expansion Module.
Memory mirroring only operates across ports in the same SMP Expansion
Module.
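
The port-balance requirement can also be expressed as a simple check. The following Python sketch is purely illustrative (the data layout is an assumption made for this example, not an IBM utility); it verifies that all DIMMs within a port are the same size and that both ports of an SMP Expansion Module hold the same total amount of memory:

    # Illustrative sketch: validate the memory mirroring rules for one
    # SMP Expansion Module. Each port is given as a list of DIMM sizes in MB.
    def check_mirroring(port1_dimms, port2_dimms):
        problems = []
        for name, dimms in (("Port 1", port1_dimms), ("Port 2", port2_dimms)):
            if len(set(dimms)) > 1:
                problems.append(name + ": all DIMMs in a port must be the same size.")
        if sum(port1_dimms) != sum(port2_dimms):
            problems.append("Port 1 and Port 2 must hold the same total amount of memory.")
        return problems

    # Example: four 1 GB DIMMs in Port 1 mirrored by four 512 MB DIMMs in
    # Port 2 fails, because the port totals differ.
    print(check_mirroring([1024] * 4, [512] * 4))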

Memory performance considerations
From a performance perspective, you should attempt to balance memory
between SMP Expansion Modules. This is more important than maximizing
memory bandwidth to a module. Make sure each SMP Expansion Module has
the same amount of memory. Then, if possible, make sure each module has
eight DIMMs installed. For performance reasons, consider the following:
򐂰 When installing eight DIMMs, install four in bank 1 (sockets 1, 3, 5, and 7) and
four in bank 2 (sockets 9, 11, 13, and 15).


򐂰 When installing DIMMs, try to evenly divide the amount of RAM available
between the two ports.
For example, if you have 12 DIMMs (eight 512 MB DIMMs and four 1 GB
DIMMs for a total of 8 GB), install all eight 512 MB DIMMs (4 GB) in one port
and the four 1 GB DIMMs (also 4 GB) in the other port. This will give you
better performance than mixing four 512 MB DIMMs and four 1 GB DIMMs (6
GB total) in one port and four 512 MB DIMMs (2 GB) in the other port.
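
One way to apply this advice is to treat each bank of four identical DIMMs as a unit and place banks so that the two ports end up with as close to the same capacity as possible. The short Python sketch below is only an illustration of that reasoning, under the assumption that banks are assigned whole and that each port holds at most two banks; it is not an IBM configuration tool:

    # Illustrative sketch: balance whole banks of DIMMs across the two ports
    # of an SMP Expansion Module. Each bank is the capacity (in GB) of its
    # four identical DIMMs; a port holds at most two banks.
    def balance_banks(bank_sizes_gb):
        ports = {"Port 1": [], "Port 2": []}
        for bank in sorted(bank_sizes_gb, reverse=True):
            # Put the next-largest bank into the port that currently has less memory.
            open_ports = [p for p in ports if len(ports[p]) < 2]
            target = min(open_ports, key=lambda p: sum(ports[p]))
            ports[target].append(bank)
        return ports

    # The 12-DIMM example above: two banks of 512 MB DIMMs (2 GB each) and one
    # bank of 1 GB DIMMs (4 GB) give 4 GB per port.
    print(balance_banks([2, 2, 4]))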

Additional memory considerations
An x440 system with two SMP Expansion Modules installed currently supports a
maximum of 64 GB of memory, using 2 GB DIMMs. To enable your operating
system to address this amount of memory, there may be certain operating
system configuration modifications required.
For example, to enable Windows 2000 Advanced Server and Datacenter Server
to access physical memory over 4 GB, the /PAE switch is required in the boot.ini
file. For detailed information on the /PAE switch and the /3GB switch, refer to
Microsoft Knowledge Base Article Q283037 at:
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q283037
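
As an illustration of the kind of change involved, the /PAE switch is simply appended to the operating system entry in the boot.ini file. The ARC path and description shown below are examples only; use the entry that already exists on your system and follow the Microsoft article above for your configuration:

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Advanced Server" /PAE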

3.1.3 PCI slot configuration
As shown in Figure 3-2 on page 69, there are six PCI-X slots internal to the x440.
These six slots are implemented using four PCI buses, labeled A-D in Figure 3-2
on page 69:
򐂰 Bus A (slot 1 and slot 2): Two 64-bit 66 MHz slots
򐂰 Bus B (slot 3 and slot 4): Two 64-bit 100 MHz slots (133 MHz if only one slot is
occupied)
򐂰 Bus C (slot 5): One 64-bit 133 MHz slot
򐂰 Bus D (slot 6): One 64-bit 133 MHz slot


Figure 3-2 x440 block diagram showing the internal PCI-X slots

These slots can accept adapters rated at speeds ranging from 33 MHz to 133
MHz. When deciding which adapters to put in which slots, we recommend you
use the Active PCI Manager wizard to help you determine the best slots to use.
See 5.1, “Active PCI Manager” on page 130 for details.
You should also consider the following:
򐂰 Each adapter has a maximum rated speed and each bus also has a
maximum rated speed.
򐂰 Installed adapters in a single bus will operate at the slowest of three speeds:
– The rated speed of adapter 1
– The rated speed of adapter 2 (if the bus the adapter is installed in has two
slots)
– The rated speed of the bus
򐂰 Bus B supports one adapter at up to 133 MHz or two adapters at up to 100
MHz.
򐂰 32-bit adapters can be installed in any of the slots and will run in 32-bit mode.
32-bit and 64-bit adapters can coexist in 64-bit slots in the same bus. The
32-bit adapters will run in 32-bit mode, and the 64-bit adapters will run in
64-bit mode.


Tip: Take the time to understand these rules and to select the best slots for
your adapters. Incorrect choices can result in a loss of PCI adapter
performance.
As extreme configuration examples, you could configure either of the following:
򐂰 Six 33 MHz PCI adapters, all operating at 33 MHz.
򐂰 Six 133 MHz PCI-X adapters, with two operating at 133 MHz (buses C and
D), two at 100 MHz (bus B) and two at 66 MHz (bus A).
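
In other words, the effective speed of a bus is the lowest of the bus's rated speed and the rated speeds of the adapters installed in it. The following short Python sketch, written for this guide as an illustration only, shows that calculation for the internal buses (bus B is modeled here at its two-slot speed of 100 MHz):

    # Illustrative sketch: adapters sharing a bus all run at the slowest of the
    # bus's rated speed and each installed adapter's rated speed (in MHz).
    BUS_SPEED_MHZ = {"A": 66, "B": 100, "C": 133, "D": 133}

    def effective_speed(bus, adapter_speeds_mhz):
        if not adapter_speeds_mhz:
            return None                      # empty bus
        return min([BUS_SPEED_MHZ[bus]] + adapter_speeds_mhz)

    # Example: a 133 MHz PCI-X adapter sharing bus B with a 33 MHz PCI adapter
    # drags both adapters down to 33 MHz.
    print(effective_speed("B", [133, 33]))   # prints 33
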
Important: A PCI-X and a PCI adapter can be installed in slots on the same
bus. However, those two adapters will both operate in PCI mode.
In addition, if you have a PCI-X adapter installed, you cannot hot-add a PCI
adapter to the same bus. This is because with just the PCI-X adapter installed,
the bus is running in PCI-X mode, and you cannot hot-add a PCI adapter into
a bus that is in PCI-X mode.
Table 3-1 summarizes the supported adapter speeds. Take into account the
speed reductions when there are two adapters installed in a bus, as described
above.
Table 3-1 Supported adapter speeds in each slot

Slot   Bus   Width (bits)   Supported adapter speed (MHz)
1      A     32 or 64       33 or 66
2      A     32 or 64       33 or 66
3      B     32 or 64       33, 66, or 100 (133 as long as no adapter is in slot 4)
4      B     32 or 64       33, 66, or 100 (133 as long as no adapter is in slot 3)
5      C     32 or 64       33, 66, 100 or 133
6      D     32 or 64       33, 66, 100 or 133

The physical location of these slots in the server is shown in Figure 3-3 on
page 71.


Figure 3-3 PCI-X slots in the x440

Other configuration information:
򐂰 The x440 server supports connection to the RXE-100.
Refer to 3.2.3, “Remote Expansion Enclosure” on page 78 for more
information.
򐂰 Video adapters are not supported.
򐂰 The PCI slots support 3.3 V adapters only.
Important: 5 V adapters are not supported.
򐂰 The ServeRAID 4H adapter is not supported for internal drives because the
adapter is too high to fit in the 4U server when a cable is attached to its
internal connector.
򐂰 Do not install a ServeRAID card in slot 1. This is because there is little space
between the top of the adapter and the cover when the covers are closed.
This could damage the SCSI cable. See tip H176217 at
http://www.pc.ibm.com/qtechinfo/MIGR-43804.html for details.
򐂰 The x440 comes with an additional pre-installed cable to enable the
ServeRAID adapter to connect to the internal drives.


Tip: The pre-installed cable for the ServeRAID adapter is disconnected at
both ends. To use it, disconnect the smaller SCSI cable from the hard drive
backplane. Then connect the ServeRAID cable to the hard drive backplane
and to the ServeRAID card itself.
򐂰 Some long adapters have extension handles or brackets installed. Before
installing the adapter, you must remove the extension handle or bracket.
򐂰 The system scans PCI-X slots to assign system resources. The system
attempts to start the first device found. The search order is:
a. CD-ROM
b. Disk drives
c. Integrated SCSI devices
d. x440 PCI-X slots (in the order 1, 2, 6, 5, 3, 4)
e. Integrated Ethernet controller

If an RXE-100 is attached, the order is:
a. CD-ROM
b. Disk drives
c. Integrated SCSI devices
d. x440 PCI-X slots (1, 2, 6, 5, 3, 4)
e. RXE-100 slots (A5, A6, A3, A4, A1, A2, B6, B5, B3, B4, B1, B2)
f. Integrated Ethernet controller

Active PCI Manager
Active PCI Manager is an IBM Director extension that helps manage PCI and
PCI-X adapters in supported xSeries servers. It includes an analyze function that
will help you to plan and optimize the PCI and PCI-X adapter placement in the
x440 and Remote Expansion Enclosure (RXE-100). For a detailed discussion on
Active PCI Manager, refer to 5.1, “Active PCI Manager” on page 130.

3.1.4 Broadcom Gigabit Ethernet controller
The x440 is the first xSeries server to offer a Gigabit Ethernet controller
integrated standard in the system. The x440 includes a single-port Broadcom
BCM5700 10/100/1000 BASE-T MAC (Media Access Controller) on a PCI 64-bit
66 MHz bus. The BCM5700 supports full and half-duplex performance at all
speeds (10/100/1000 Mbps, auto negotiated) and includes integrated on-chip
memory for buffering data transmissions, and dual onboard RISC processors for
advanced packet parsing and backwards compatibility with 10/100 devices. The
Broadcom controller also includes software support for failover, layer-3 load
balancing, and comprehensive diagnostics.


Category 5 or better Ethernet cabling is required with RJ-45 connectors. If you
plan to implement a Gigabit Ethernet connection, ensure your network
infrastructure is capable of the necessary throughput to match the server’s I/O
capacity.
You will need to provide Ethernet cables for the onboard 10/100/1000 Ethernet
controller.

Adapter teaming
The Broadcom controller is capable of participating in an adapter team for the
purposes of failover, load balancing, and port trunking. The choice of adapters to
team with the onboard controller depends on whether you have a copper-only
network or a mixed copper/fiber network. Our recommendations are:
򐂰 If you have a copper Gigabit environment, use the Broadcom-based
NetXtreme 1000T Ethernet adapter, part 31P6301. Alternatively, use the Intel
PRO/1000 XT Server adapter, part 22P6801. Note that the 22P6801 is only
supported in specific slots — see the following for details:
http://www.pc.ibm.com/us/compat/x440/ibm_22P6801.html
򐂰 If you have a mixed fiber/copper Gigabit server switch network, use the
Broadcom-based 22P7801, NetXtreme 1000 SX Fiber Ethernet adapter.
You can also team the onboard Gigabit card with 10/100 cards such as 06P3601
and 22P4901, but this is not a recommended configuration. You can also team with
the older Gigabit fiber card, 06P3701.
Adapter teaming and failover work by using software, in addition to the adapter
driver, to provide the failover functionality.
When installing the on-board Broadcom controller in an adapter team with an
Intel-based Gigabit controller, we recommend you install the Broadcom controller
driver, then the Broadcom Advanced Server Program (BASP) software and
finally the driver for the Intel-based controller. Only install a single adapter
teaming package. Do not use the Intel advanced teaming software.
Detailed instructions for installing the individual driver and failover packages are
available with the driver software.
For the latest network adapter drivers and software for the x440 server, go to the
x440 driver matrix:
http://www.pc.ibm.com/qtechinfo/MIGR-39747.html
For details about compatibility, see the ServerProven LAN adapter page:
http://www.pc.ibm.com/us/compat/lan/matrix.html


3.2 Cabling and connectivity
There are a number of unique factors to consider when cabling the x440 server:
򐂰 SMP Expansion Module connectivity
򐂰 Remote Supervisor Adapter connectivity
򐂰 RXE-100 connectivity
򐂰 Serial connectivity

We discuss each of these in this section.
The rear panel of the x440 showing the locations of cable connectors is shown in
Figure 3-4. For port locations on the Remote Supervisor Adapter, refer to
Figure 3-7 on page 77.

Figure 3-4 Rear Panel of the x440 (single SMP enclosure installed)

3.2.1 SMP Expansion Module connectivity
As standard the x440 ships with a single SMP enclosure installed. When the
CPU slots in the first enclosure are fully populated, the second SMP Expansion
Module can be added. For detailed instructions on installing the second SMP
Expansion Module, refer to Chapter 2, “Installing Options”, in the IBM ^
xSeries 440 Installation Guide, which is available from:
http://www.pc.ibm.com/qtechinfo/MIGR-42328.html


When a second SMP Expansion Module is installed in the x440, an additional
three SMP Expansion Ports are made available on the rear of the chassis, giving
a total of six. The SMP Expansion Module option includes two cables that are
used to connect the modules together, as shown in Figure 3-5.

Figure 3-5 SMP Expansion Ports with two SMP Expansion Modules installed

Note: The two 10-inch scalability cables used to connect the two SMP Expansion
Modules in a single x440 are included with the SMP Expansion module.
With single-x440 configurations, these ports are used to connect the two internal
SMP Expansion Modules together. Only four ports are used (two cables). The
other two ports are not connected.
When connecting two x440 nodes together to form a 16-way configuration, these
ports are cabled together as shown in Figure 3-6 on page 76.


Figure 3-6 Connecting the two x440s together in a 16-way configuration

The two x440s are connected through the scalability port on each SMP
Expansion Module and require the installation of four 3.5 m Remote I/O cables
(part number 31P6102) to complete the configuration.
Tip: These four additional cables are the same as the ones used to connect
the RXE-100 Remote Expansion Enclosure.
Key points relating to SMP Expansion Module cabling:
򐂰 The SMP Expansion Module ports cannot currently be used as high-speed
interconnects for clustering purposes.
򐂰 The connections do not offer redundancy. If a connection is lost, the server
will shut down or restart depending on your system’s configuration.


򐂰 In 16-way configurations, the Ethernet port on the Remote Supervisor
Adapter in one system is connected to the Ethernet port of the adapter in the
other system. This connection is used during system startup and shutdown.
We recommend either of the following connections:
– Connecting the two using a crossover cable, as shown in Figure 3-6 on
page 76.
– Connecting the two over an isolated LAN segment using a switch or hub.
Connecting over a LAN segment will enable you to maintain Ethernet
connectivity directly to the Remote Supervisor Adapter for out-of-band
management.
Tip: We recommend that you assign static IP addresses to the Remote
Supervisor Adapters on both servers.

3.2.2 Remote Supervisor Adapter connectivity
The x440 features an integrated Remote Supervisor Adapter (RSA). For detailed
information on functionality and configuration of the RSA refer to section 9.5
“Remote Supervisor Adapter” in the redbook Implementing IBM Director
Management Solutions, SG24-6188. That redbook describes the Remote
Supervisor Adapter as a separate adapter; however, the functionality and location
of the ports are consistent with the integrated version of the RSA in the x440.

Figure 3-7 Remote Supervisor Adapter Connectors (external power supply, error LED, power LED, ASM interconnect RS-485 port, 10/100 Ethernet port, and management COM port)

The following RSA connections need to be considered when cabling the x440
(see Figure 3-7):
򐂰 External power supply connector. This connector allows the RSA to be
connected to its own independent power source. This external power supply
is not included with the x440 and will need to be ordered as an option (order a
ThinkPad 56W AC Adapter with a suitable power cord for your
country/region).


If this power supply is not used, the RSA will draw power from the server as
long as the server is connected to a functioning power source.
򐂰 9-pin Serial port, which supports systems management functions through null
modem or modem connections.
򐂰 Ethernet port, which provides system management functions over the LAN.
As described in 3.2.1, “SMP Expansion Module connectivity” on page 74, in
two-chassis configurations (such as the 16-way), the Ethernet ports of the two
servers must be connected together either using a crossover cable or via a
100 Mbps Ethernet switch. The use of a switch is recommended if you also
wish to perform out-of-band management activities.
򐂰 Advanced Systems Management (ASM) RS-485 Interconnect port to facilitate
advanced systems management connections to other servers.
For detailed instructions on cabling ASM interconnect networks, refer to
section 9.11 “ASM Interconnect” in the redbook Integrating IBM Director with
Enterprise Management Solutions, SG24-5388.
Note: The x440 does not include the necessary dongle to connect the
Remote Supervisor Adapter to an ASM interconnect bus using the RS-485
port on the adapter. Consequently, you will need the Advanced System
Management Interconnect Cable Kit (part number 03K9309) for connection to
an ASM interconnect network.

3.2.3 Remote Expansion Enclosure
The RXE-100 can be connected to the x440 to provide an additional six or 12
PCI-X slots to the server. Currently, only one RXE-100 is supported per x440
server or per 16-way two-node configuration.
The RXE-100 has six 133 MHz 64-bit PCI-X slots as standard and can accept
adapters with speeds ranging from 33 MHz to 133 MHz. With the optional six-slot
expansion kit (part number 31P5998) installed, the RXE-100 has 12 slots. Each
set of six adapter slots is divided into three buses of two slots each, as shown in
Figure 3-8 on page 79.
Note: When connecting the RXE-100 to a single x440 configuration, the
RXE-100 can have six or 12 PCI-X slots. When connecting the RXE-100 to a
16-way two-node configuration, the RXE-100 must have 12 slots.


Figure 3-8 RXE-100 PCI-X expansion board (6 slots)

For each of the three buses (A, B, C), one of the following can be installed:
򐂰 One 64-bit 3.3 V PCI-X 133 MHz adapter (in the odd-numbered slot), running
at up to 133 MHz
򐂰 Two 64-bit 3.3 V PCI-X 133 MHz adapters running at up to 100 MHz
򐂰 Two 64-bit 3.3 V PCI or PCI-X, 33 or 66 MHz adapters
Note: The PCI slots support 3.3 V adapters only. 5 V adapters are not
supported.
Like the x440, these slots can accept adapters rated at speeds ranging from 33
MHz to 133 MHz. When deciding which adapters to put in which slots, consider
the following:
򐂰 Each adapter has a maximum rated speed and each bus also has a
maximum rated speed.


򐂰 Installed adapters will operate at the slowest of three speeds:
– The rated speed of adapter 1 in the bus
– The rated speed of adapter 2 in the bus
– The rated speed of the bus
򐂰 32-bit adapters can be installed in any of the slots and will run in 32-bit mode.
32-bit and 64-bit adapters can coexist in 64-bit slots in the same bus. The
32-bit adapters will run in 32-bit mode, and the 64-bit adapters will run in
64-bit mode.
򐂰 When installing a 133 MHz PCI-X adapter, it must be installed in the first or
odd-numbered slot in the bus (that is in slots 1, 3 or 5).
򐂰 Like the x440, a PCI-X and a PCI adapter can be installed in slots on the
same bus in the RXE-100. However, these two adapters will both operate in
PCI mode.
In addition, if you have a PCI-X adapter installed, you cannot hot-add a PCI
adapter to the same bus. This is because with just the PCI-X adapter
installed, the bus is running in PCI-X mode, and you cannot hot-add a PCI
adapter into a bus that is in PCI-X mode.
򐂰 With Windows NT 4.0 Enterprise Edition, certain token-ring adapters do not
work in some slots in the RXE-100. See RETAIN tip H175383 for more
information:
http://www.pc.ibm.com/qtechinfo/MIGR-42139.html

Connecting the RXE-100
There are two types of cables used to connect the RXE-100 to the x440:
򐂰 Remote I/O cable, for data
This cable connects from the x440 RXE Expansion Port A to the RXE-100 as
shown in Figure 3-9 on page 81. Two lengths are available:
– 3.5 m Remote I/O cable kit (part number 31P6102)
– 8 m Remote I/O cable kit (part number 31P6103)


Figure 3-9 Connecting the RXE-100 to the x440

With single-node configurations (that is only one x440 node in a two, four or
eight-way configuration), only one RXE-100 can be connected using one
Remote I/O cable as shown in Figure 3-9. In this configuration, all 12 slots in
the RXE-100 are available to the system. The use of two cables (for example,
for redundancy or performance) is currently not supported.
The RXE-100 ships with a 3.5 m Remote I/O cable to connect the unit to the
x440. This cable is long enough when the devices are in the same rack as
each other. For installation in an adjacent rack, use the optional 8 m Remote
I/O cable kit.
In the 16-way configuration (that is two x440 nodes), only one RXE-100 can
be connected as shown in Figure 3-10 on page 82. Three Remote I/O cables
are used — two to connect the x440s to the RXE-100 and one to connect the
two x440s together.


Figure 3-10 Connecting an RXE-100 to a 16-way x440 configuration

All 12 slots in the RXE-100 are available to the operating system, with six
slots being accessed over each cable. If one cable connection fails, all 12
slots are accessed over the surviving cable connection. It is not currently
supported to have each x440 node of a 16-way configuration connected to a
separate RXE-100.
One 3.5 m Remote I/O cable ships with the RXE-100. The other two must be
ordered separately. Use either the 3.5 m or the 8 m Remote I/O cable.
򐂰 Interconnect management cable, for remote I/O management
The RXE-100 also includes a 3.5 m interconnect management cable (an
Ethernet cable), which in single-node configurations is used to connect the
RXE Management Port on the x440 to the RXE Management A (In) Port on
the RXE-100, as shown in Figure 3-9 on page 81.
Two lengths are available:
– 3.5 m interconnect management cable kit (part number 31P6087)
– 8 m interconnect management cable kit (part number 31P6088)


If the RXE-100 has the second set of six PCI slots installed, use the short
interconnect management cable (supplied with the PCI slot option kit) to
connect Management A (out) Port to Management B (in) Port (see Figure 3-9
on page 81).
Important: In the publication IBM RXE-100 Remote Expansion Enclosure
Installation Guide, the section entitled “Attaching the enclosure to an
xSeries 440 server” does not include instructions to connect the ports
Management A (out) and Management B (in) on the RXE-100. Our testing
in the lab indicates that this additional cable is necessary.
The 8 m interconnect management cable is suitable for inter-rack
configurations.
For 16-way configurations, the management ports must be connected as
shown in Figure 3-10 on page 82. An additional cable will need to be ordered.
Important: Power to the RXE-100 is controlled by the x440, via the
interconnect management cable and under the control of the Remote
Supervisor Adapter.

3.2.4 Serial connections
The x440 does not have an external serial port. If a serial port is required (for
example, for UPS remote management), then a USB-to-serial adapter is
required, such as the Belkin USB to Serial Adapter (part number 10K3661).
Restriction: IBM USB Serial/Parallel Adapter (part number 22P5298) is not
supported in the x440.
It is also possible to configure the serial port on the Remote Supervisor Adapter
to be sharable between the alerting functions of the adapter and the operating
system. However, we recommend that you use a separate serial port.

3.3 Storage considerations
When you are planning the storage configuration to accompany the x440, there
are important performance and sizing issues that need to be considered.
The two internal hot-swap 1” drive bays will typically be used for operating
system installation. We recommend these drives be configured as a two-drive
RAID-1 array to provide a higher degree of system availability. Drives up to
15,000 RPM and the converged tray design are supported. To configure RAID-1,
a ServeRAID adapter is required. The ServeRAID-4Mx and ServeRAID-4Lx can
be used for connection to the hot-swap backplane of the internal drive bays.
Important: The ServeRAID-4H is supported in the x440 when used for
external storage enclosure connectivity only, because the adapter is too high
for the 4U chassis when the internal SCSI connector is in use.
Note: The x440 has two cables pre-installed for use with the internal drive bays,
but one is not connected. The shorter cable is initially connected from the
onboard SCSI to the drive backplane. When you install a ServeRAID adapter for
use with the internal drive bays, you will need to disconnect this cable and
connect both ends of the longer cable. See “Cabling a ServeRAID adapter” in
Chapter 2 of the IBM ^ xSeries 440 Installation Guide for details.
Typically the x440 will be attached to an external disk enclosure for data storage
requirements. Some of the supported IBM storage options include:
򐂰 SCSI RAID adapters and storage enclosures
򐂰 Fibre Channel adapters and Storage Area Networks (SANs)
򐂰 Network Attached Storage (NAS)
򐂰 SCSI over IP (iSCSI)
򐂰 IBM Enterprise Storage Server (ESS)
򐂰 ESCON connectivity to a zSeries server

3.3.1 xSeries storage solutions
This section discusses some of the available xSeries storage solutions and
related technologies, as well as tape backup and performance considerations.

ServeRAID with external storage enclosures
The current ServeRAID-4 family of adapters includes the ServeRAID 4H, 4Mx
and 4Lx. These 64-bit, Active PCI controllers include advanced features such as
Logical Drive Migration, nine RAID levels including RAID 1E, 1E0 and 5E, as well
as adapter and cluster failover.
򐂰 ServeRAID-4H features four Ultra160 SCSI channels, 128 MB of removable
battery-backed ECC cache memory, and an IBM PowerPC 750 processor
onboard. Up to 56 Ultra160 and Ultra2 SCSI devices are supported. (Using
73.4 GB hard disk drives produces 4.11 TB capacity per adapter.)
򐂰 ServeRAID-4Mx features two Ultra160 SCSI channels, 64 MB of
battery-backed ECC cache memory, and an Intel i80303 processor. Up to 28
Ultra160 and Ultra2 SCSI devices are supported.

84

IBM ^ xSeries 440 Planning and Installation Guide

򐂰 ServeRAID-4Lx features one Ultra160 SCSI channel, 32 MB of ECC cache
memory, and an Intel i80303 processor. Up to 14 Ultra160 and Ultra2 SCSI
devices are supported.
Each ServeRAID adapter supports up to 14 drives (and 160 MB per second
throughput) per channel (for an aggregate of up to 56 drives and 640 MBps
for the 4-channel ServeRAID-4H adapter, for example). Multiple adapters can
be installed as needs and available slots dictate.
򐂰 The EXP300 storage expansion unit has a maximum 1 TB of disk storage (14
73.4 GB drives) in a 3U package, allowing up to 14 expansion units to be
used in a standard 42U rack (meaning that a full rack of EXP300 units can
hold an amazing 14 TB). The EXP300 provides Predictive Failure Analysis
(PFA) on key components, including hot-swap fans, hard drives and
redundant power supplies. The EXP300 is optimized for Ultra160 SCSI, with a
sustained data transfer rate of 160 MBps.
For more information on IBM SCSI RAID storage solutions go to:
http://ibm.com/pc/ww/eserver/xseries/scsi_raid.html

IBM Fibre Array Storage Technology
The IBM Fibre Array Storage Technology (FAStT) family of Fibre Channel storage
solutions is designed for high-availability, high-capacity requirements. FAStT
solutions can support transfers over distances up to 10 km (6.2 miles) at rates of
up to 200 MBps.
The FAStT Storage Server is a RAID controller device that contains Fibre
Channel (FC) interfaces to connect the host systems and the disk drive
enclosures. The Storage Server provides high system availability through the use
of hot-swappable and redundant components. We briefly discuss the following
three products:
򐂰 The IBM TotalStorage FAStT200 Storage Server
򐂰 The IBM TotalStorage FAStT500 Storage Server
򐂰 The IBM TotalStorage FAStT700 Storage Server

The IBM TotalStorage FAStT200 Storage Server
The FAStT200 Storage Server is a 3U rack-mountable Fibre Channel RAID
controller and disk drive enclosure. It targets the entry and midrange segment of
the FC storage market. A typical use of the FAStT200 would be in a two-node
cluster environment with up to 30 Fibre Channel disk drives attached to the
Storage Server.
Two models are available:
򐂰 The FAStT200 Storage Server, with a single RAID controller.


򐂰 The FAStT200 High Availability (HA) Storage Server, which contains two
RAID controllers and can therefore provide higher availability.
Both models feature hot-swap and redundant power supplies and fans and you
can install up to 10 slim-line or half-high FC disk drives. If you need to connect
more than 10 disks, you can use the EXP500 FC storage expansion enclosures.
Each EXP500 can accommodate 10 additional disk drives, and up to five
EXP500s are supported on the FAStT200. This means that the maximum
supported number of disk drives is 60.
The use of hot-swappable and redundant components provides high availability
for the FAStT200 Storage Server. A fan or a power supply failure will not cause
downtime and such faults can be fixed while the system remains operational. The
same is true for a disk drive failure if fault-tolerant RAID levels are used. With two
RAID controller units and proper cabling, a RAID controller or path failure will not
cause loss of access to data.
Each RAID controller has one host and one drive FC connection. The FAStT200
HA model can use the two host and drive connections to provide redundant
connection to the host adapters and to EXP500 enclosures. Each RAID
controller unit also contains 128 MB of battery-backup cache.
Tip: The FAStT200 ships with IBM FAStT Storage Manager 7.10. This version
is not supported on the x440. See the following for details:
http://www.pc.ibm.com/qtechinfo/MIGR-41745.html
Download the latest version from http://www.pc.ibm.com/support.

The IBM TotalStorage FAStT500 Storage Server
The FAStT500 Storage Server is a 4U rack-mountable Fibre Channel RAID
controller device. It provides the levels of performance, availability, and
expandability needed to satisfy high-end storage requirements. You would
typically use the FAStT500 Storage Server in advanced cluster environments
and possibly with heterogeneous operating systems running on the host
systems. Another application would be where multiple servers are being
consolidated onto one or more x440 systems and there is a requirement to
centralize storage for these systems.
The FAStT500 Storage Server features two RAID controller units, redundant
power supplies, and fans. All these components are hot-swappable, which
ensures excellent system availability. You use the EXP500 external storage
expansion enclosures to install the FC disk drives and you can connect up to 22
EXP500 enclosures to the FAStT500. This means a total of up to 220 disk drives.


The enclosures can be connected in a fully redundant manner, which provides a
very high level of availability. On the host-side FC connections, you can use up to
four mini-hubs, which allows you to establish up to eight host connections without
needing an external hub or a switch. For performance and availability, each RAID
controller unit contains 256 MB of battery-backed cache, and this amount can be
further expanded.

The IBM TotalStorage FAStT700 Storage Server
The FAStT700 Storage Server is the newest addition to the FAStT range of
products. As with the FAStT500 Storage Server, you would typically implement
the FAStT700 Storage Server in high-end cluster and server consolidation
environments, or where multiple servers are being consolidated onto a smaller
number of x440 systems.
It is the same physical size as the FAStT500, but with new, higher-performance
controllers. These new controllers are 2 Gbps and connect via mini-hubs to the
new FAStT FC-2 Host Bus Adapter (HBA) and the new 2109 F16 Fibre Channel
switch to give a full 2 Gbps fabric.
Like the FAStT500, it attaches to up to 220 FC disks via 22 EXP500 expansion
units or up to 224 FC disks via 16 EXP700 expansion units to provide scalability
for easy growth (18 GB up to 16 TB using 73 GB drives). To avoid single points of
failure, it also features dual hot-swappable RAID controllers, dual redundant FC
disk loops, write cache mirroring, redundant hot-swappable power supplies and
fans, and dual AC line cords.
Using the new FAStT Storage Manager Version 8.21, it supports FlashCopy,
Dynamic Volume Expansion, and Remote Mirroring, with controller-based support
for up to 64 storage partitions. RAID levels 0, 1, 3, 5, and 10 are supported, and
for performance it includes a total of 2 GB of battery-backed cache (1 GB per
controller).
Note: The FAStT700 is currently the only certified storage solution for the
x440 in the 16-way and Microsoft clustered configurations. To check the
Microsoft Hardware Compatibility List (HCL) for updates to certified solutions,
refer to http://www.microsoft.com/hcl.
Additional information on the entire range of FAStT storage solutions can be
found at:
http://www.storage.ibm.com/hardsoft/disk/fastt/index.html

Enterprise Storage Server (ESS)
ESS provides integrated caching and RAID support for the attached disk devices.
ESS can be configured in a variety of ways to provide scalability in capacity and
performance. One ESS can support in excess of 28 TB and can utilize 2 Gbps
Fibre Channel connectivity.
Redundancy within ESS provides continuous availability. It is packaged in one or
more enclosures, each with dual line cords and redundant power. The redundant
power system allows ESS to continue normal operation when one of the line
cords is deactivated.
ESS provides an image of a set of logical disk devices to attached servers. The
logical devices are configured to emulate disk device types that are compatible
with the attached servers. The logical devices access a logical volume that is
implemented using multiple disk drives. This allows ESS to connect to all IBM
servers, from zSeries to iSeries, pSeries, and xSeries, directly or through a
SAN, thus helping the x440 fit into a heterogeneous environment containing a
variety of server architectures. ESS offers several choices of host I/O interface
attachment methods, including SCSI and Fibre Channel for xSeries.
For more information on the ESS go to:
http://www.storage.ibm.com/hardsoft/products/ess/index.html

3.3.2 Disk subsystem performance
Because of the processing capacity of the x440, a poorly designed storage
subsystem could become a bottleneck, seriously impacting overall system
performance. You should implement a disk subsystem that is able to efficiently
process the potentially massive number of disk I/O requests generated by the
processor subsystem.
Assuming that you will be implementing a RAID storage configuration, the rule of
thumb is that more physical disks will improve the throughput of your disk
subsystem and consequently overall system performance. In almost all
applications, adding disks to your RAID configuration will continue to improve
performance until another system component becomes a bottleneck. A storage
solution with too few physical disks will become a bottleneck for the entire
system.
You will need to carefully analyze your storage capacity requirements, your
application requirements, and your host requirements before you finalize your
storage solution.
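
As a rough illustration of this sizing exercise, the following sketch estimates how many physical disks a RAID-5 array would need to sustain a given host I/O load. It is a minimal Python example, not an IBM sizing tool; the per-disk IOPS figure, the RAID-5 write penalty, and the example workload are illustrative assumptions only.

# Minimal sizing sketch: estimate the physical disks needed for a RAID-5 array.
# The per-disk IOPS and write-penalty values below are assumptions, not measured
# figures for any particular drive or controller.
def disks_required(target_iops, read_fraction, per_disk_iops=120, write_penalty=4):
    reads = target_iops * read_fraction
    writes = target_iops * (1 - read_fraction)
    backend_iops = reads + writes * write_penalty   # each host write costs several disk I/Os
    return int(-(-backend_iops // per_disk_iops))   # ceiling division

# Hypothetical workload: 3000 host IOPS, 70% reads
print(disks_required(3000, 0.70))                   # -> 48 disks

Even a rough estimate like this makes the point: the processor subsystem of the x440 can easily generate more I/O than a small number of spindles can service.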

A detailed discussion on performance tuning disk subsystems in xSeries servers
is available in the redbook Tuning IBM ^ xSeries Servers for
Performance, SG24-5287.

3.3.3 Tape backup
As with your disk subsystem, you need to carefully analyze backup requirements
before a tape solution is selected. Considerations when selecting a backup
solution should include:
򐂰 Currently implemented backup solutions
If you are consolidating a number of servers onto a single x440 solution, for
example, you may want to take the opportunity to move from differing and
distributed tape technologies (such as DDS and DLT) and consolidate those
into a single, high-performance, automated solution. An example is the IBM
Ultrium Autoloader.
򐂰 Current and projected capacity requirements
Select a solution that has the ability to scale as capacity requirements
increase.
򐂰 Performance requirements
You need to consider the backup window available, as well as the amount of
data being backed up when determining what your backup performance
requirements will be. It is also important to consider the need for quick access
to data committed to tape when selecting a solution.
򐂰 Connection requirements
Will the tape solution be connected to an existing SAN fabric and if so, will this
require additional fabric hardware?
򐂰 Hardware and software compatibility
If you implement a new tape solution, you need to ensure that current backup
and management software is still suitable. Disaster recovery procedures may
also need to be revised.
IBM offers a full range of high-performance, high-capacity and automated tape
solutions for xSeries servers. For detailed information on these products, go to:
http://ibm.com/pc/ww/eserver/xseries/tape.html
Note: The x440 and RXE-100 support 3.3 Volt PCI adapters only. Make sure
any SCSI adapters you use to connect your tape subsystem are 3.3 V or dual
voltage adapters.

The following redbooks discuss IBM tape solutions in greater detail:
򐂰 Netfinity Tape Solutions, SG24-5218
򐂰 The IBM LTO Ultrium Tape Libraries Guide, SG24-5946

3.4 Server partitioning and consolidation
The concepts of server partitioning and consolidation are discussed in detail in
Chapter 2, “Positioning” on page 35.
Implementing a server consolidation solution using VMware and the x440
introduces a number of important and unique issues that you need to consider
during the planning phase of your project.
In particular the x440 configuration needs to be carefully sized to meet the
resource requirements of the VMware operating system, the guest operating
systems, and the applications being deployed.
A detailed discussion on planning, sizing, and implementing VMware solutions
on the x440 can be found in the redbook Server Consolidation with the IBM
^ xSeries 440 and VMware ESX Server, SG24-6852.

3.5 Operating system considerations
In line with the overall message of providing application flexibility to meet the
varying needs of our enterprise customers, the x440 is optimized for numerous
operating system and application solutions. For the latest operating system
support information, go to:
http://www.pc.ibm.com/us/compat/nos/matrix.shtml
As described in 1.4.1, “Intel Xeon Processor MP” on page 13, Hyper-Threading
technology allows a single physical processor to appear to the operating system
and applications as two logical processors. The logical processors share the core
processing engine of the physical processor but are able to execute code
streams concurrently.
Operating systems must be “Hyper-Threading aware” before they can “see” the
additional processors. When they are, they will “see” twice as many CPUs as
there really are (see Figure 4-8 on page 118 for an example).
Simply enabling Hyper-Threading may not guarantee improved overall system
performance, however. In order to benefit from enabling Hyper-Threading, the
operating system and server applications need to be capable of detecting the
additional logical processors and spawning multiple threads, which can exploit
the additional processing power.
As well as considering whether the operating system you are installing supports
Hyper-Threading, there may be licensing implications to consider before enabling
Hyper-Threading technology.
For a more detailed discussion on Hyper-Threading technology, refer to:
http://www.intel.com/eBusiness/products/server/processor/xeon/wp020901_sum.htm
Table 3-2 lists the supported operating systems for the x440 and the level of
support for Hyper-Threading technology provided by the operating system.
In the column titled Hyper-Threading Support:
򐂰 None indicates the operating system does not recognize the logical
processors that Hyper-Threading enables.
򐂰 Yes indicates that the operating system recognizes the logical processors and
can execute threads on them but is not optimized for Hyper-Threading.
򐂰 Optimized indicates that the operating system recognizes the logical
processors and the operating system code has been designed to fully take
advantage of the technology.
Table 3-2 x440 operating system support

Description                       Release   SMP support (1)                        Hyper-Threading support
Windows 2000 Server               SP2/3     Supports up to four-way                Yes
Windows 2000 Advanced Server      SP2/3     Supports up to eight-way               Yes
Windows 2000 Datacenter Server    SP3       Supports up to 32-way (2)              Yes
Windows NT Enterprise Edition     4.0       Only supports four-way on the x440;    None
                                            hot-plug PCI not supported
Windows .NET Server               1Q/03     Supports up to two-way                 Optimized
Windows .NET Enterprise Server    1Q/03     Supports up to eight-way               Optimized
Windows .NET Datacenter Server    1Q/03     Supports up to 32-way (2)              Optimized
NetWare                           6.0       Supports up to 32-way (2)(3)           Yes
Red Hat Linux Advanced Server     2.1       Supports up to eight-way (4)           Yes
SuSE Linux Enterprise             8.0       Supports up to eight-way (4)           Yes
VMware ESX Server                 1.5       Supports up to 16-way; up to one       None
                                            processor per VM (5)

Notes to Table 3-2:
1. While operating systems may support eight-way or larger systems, scalability
is a function of both the operating system and the application/workload. Few
applications are designed to take advantage of larger SMP systems.
2. x440 configurations with 16 processors and Hyper-Threading enabled are
seen as 32 processors under Windows 2000 Datacenter and Windows .NET.
Licensing of processors in Windows 2000 is based on physical and logical
processors combined, whereas Windows .NET licensing is based on physical
processors.
3. NetWare notes:
– NetWare 5.1 is currently not supported, but it should still install. See
RETAIN tip H176163 for details on a known shutdown problem:
http://www.pc.ibm.com/qtechinfo/MIGR-43679.html
– With NetWare 6.0, the server may show extreme CPU utilization values
(for example, 13000%). This will be fixed with NetWare 6.0 Support Pack
2. See RETAIN tip H176060 at:
http://www.pc.ibm.com/qtechinfo/MIGR-43532.html
– Once supported, a multi-chassis configuration must be fully assembled
before installing NetWare. Novell does not currently support adding chassis
after NetWare is installed.
4. Ongoing work will improve both Linux and key application scalability.
Currently, the general recommendation is to keep system size limited to
eight-way and below, and 16 GB and below. Work on scalability beyond
eight-way is in progress, and is likely to become available in early to
mid-2003.
5. VMware ESX Server 1.5 allows eight virtual machines per processor.
However, a virtual machine (VM) can consist of no more than one processor.
16-way support will require Version 1.5.1.

3.5.1 Windows 2000 Datacenter Server
Windows 2000 Datacenter Server is a highly scalable network operating system
designed for mission-critical enterprise-wide applications. High-volume online
transaction processing, large-scale data warehousing, and scientific simulations
are some of the applications that Datacenter is optimized for.
Datacenter Server features include:
򐂰 Support for 32 processors
򐂰 Support for up to 32 GB of memory
򐂰 Four-node cluster support
򐂰 Support for Network Load Balancing (NLB) with a maximum of 32 nodes
򐂰 Support for WinSock Direct high-speed inter-process communications

Datacenter Server will initially be supported on four-way and eight-way x440
configurations in one, two, three, and four cluster node configurations and on a
16-way configuration in a one cluster node configuration.
The IBM Datacenter Solution includes Microsoft-certified hardware, the Windows
Datacenter Server operating system, and a set of standard and optional
services, the latest of these being a Software Update Subscription, to provide
updates designed to keep the Datacenter Solution at the latest levels.
The subscription for an eight-way configuration is 4816-ABX, and for 16-way,
4816-ADX.
The Software Update Subscription provides periodic updates to the Windows
2000 Datacenter operating system, which you license for a period of one year.
This subscription also includes IBM updates to firmware and device drivers
certified by Microsoft for use with the Datacenter solution. IBM builds, tests, and
provides the complete certified package of these components.
The complete Datacenter Solution Program can be found at:
http://www.pc.ibm.com/ww/eserver/xseries/windows/datacenter.html

Datacenter eight-way configuration
Two installation options are currently available for Datacenter Server on the x440.
򐂰 Option 1: Factory installation. Table 3-3 shows the available x440 eight-way
Datacenter models.

Table 3-3 Factory configurable eight-way Datacenter Server models

Model      Standard processors        Max SMP   L2 cache   L3 cache   Std memory
8687-1AX   2x 1.4 GHz Intel Xeon MP   8-way     256 KB     512 KB     0 GB(*)
8687-2AX   2x 1.5 GHz Intel Xeon MP   8-way     256 KB     512 KB     0 GB(*)
8687-3AX   2x 1.6 GHz Intel Xeon MP   8-way     256 KB     1 MB       0 GB(*)

(*) Memory/storage and options for Datacenter models will be included as defined by customer
requirements through the Special Bid process.

Note: All AX models are to be ordered via an IBM Special Bid process, and all
Datacenter solutions require an IBM Solutions Assurance Review, to ensure
compliance with the Datacenter certified solution.
򐂰 Option 2: Software-only preload kit.
This CD-ROM kit allows a customer to install Datacenter Server on an
existing x440 configuration that is already certified for use with Datacenter
Server.
The eight-way preload kit, 4816-1BX, contains:
– Windows 2000 Datacenter Server
– License entitlement for up to eight processors
– Device drivers and firmware updates, which have been tested to support
the Datacenter solution on certified xSeries 440 configurations
– System documentation and recovery CD-ROMs
Customers should also purchase the annual Software Update Subscription for
eight-way Datacenter Server, 4816-ABX.

Datacenter 16-way configuration
The 16-way configurations consist of two eight-way Datacenter Server
systems as described in Table 3-3 on page 94.
Note: The x440 16-way fixed configuration for Datacenter Server will initially
only be supported in a stand-alone configuration, not in a cluster. Check for
updates to the Microsoft Hardware Compatibility List (HCL) at:
http://www.microsoft.com/hcl/default.asp

Datacenter 16-way fixed configurations can be achieved in two ways:
򐂰 New Datacenter configuration
Customers can purchase the complete solution directly from IBM through the
Special Bid program, or through the EXAct Business Partner program.
When using the Special Bid program, IBM will perform an expert Solutions
Assurance Review. If the solution is obtained through a Business Partner,
they will perform this review.
򐂰 Datacenter Upgrade Path
Upgrades to x440 16-way fixed configuration Datacenter can be
accomplished in two ways:
– Option 1: Customers who already have Windows 2000 Advanced Server
on an 8687-1RX, 2RX, or 3RX can upgrade to Datacenter by doing the
following:
• Upgrade the system to meet the Datacenter HCL
• Add a second RX model (also meeting Datacenter HCL criteria) or AX model
• Add the 1-16 processor License Kit, model number 4816-1DU
• Add the Software Update Subscription, model number 4816-ADX
• Add RAM so the total memory is 8 GB (4 GB in each system)
– Option 2: Customers who already own an eight-way Datacenter
configuration (Table 3-3 on page 94) can upgrade to 16-way by doing the
following:
• Add a second RX model (also meeting Datacenter HCL criteria) or AX model
• Add the 9-16 processor License Kit, model number 4816-12U
• Add the Software Update Subscription, model number 4816-ADX
• Add RAM so the total memory is 8 GB (4 GB in each system)

Notes:
򐂰 All processors must be identical for 16-way configurations.
򐂰 A minimum of 8 GB of RAM must be installed in each x440.
All x440 16-way Datacenter Server configurations also require a Support Line
contract or maintenance agreement.

3.5.2 Microsoft Windows NT 4.0 Enterprise Edition
Hyper-Threading is not supported by Windows NT 4.0 EE, which has not been
enhanced to exploit Intel's ACPI. This operating system is therefore unable to
recognize logical processors, which are simply ignored. The eight logical
processors in an x440 are viewed exactly the same as eight processors in an
x370.
There are no plans from Microsoft to patch Windows NT 4.0 Enterprise Edition to
support Hyper-Threading technology.
A custom Hardware Abstraction Layer (HAL) must be installed during the
operating system installation and is available from IBM. The HAL is required to
support the Active PCI-X slots in the server and the RXE-100. Download it from:
http://www.pc.ibm.com/qtechinfo/MIGR-42067.html

3.5.3 Microsoft Windows 2000 Server
The members of the Windows 2000 server family support Hyper-Threading, but
they have not been optimized for it. They use a custom Hardware Abstraction
Layer (HAL), which is based on Service Pack 2. This HAL should be installed
during the operating system installation. It is available from IBM at:
http://www.pc.ibm.com/qtechinfo/MIGR-42325.html
Refer to “Installing the x440 Windows 2000 custom HAL” on page 112 for
detailed instructions on installing the HAL.
From a licensing point of view, logical processors as provided by
Hyper-Threading are counted against the Windows licensing limit. Windows will
first count physical processors and, if the license permits more processors, then
logical processors will be counted. For example, in a four-way x440, Windows
2000 Server will count four physical processors, then stop, because Server is
limited to four processors. In the same x440, Advanced Server will count eight
processors (four physical and four logical) because the license permits up to
eight processors.
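
The counting rule can be illustrated with a short sketch (this is illustrative Python, not Microsoft code; the license limits are simply the edition limits quoted above):

# Physical processors are counted first; logical (Hyper-Threading) processors
# are counted only if the edition's license limit still has room.
def usable_processors(physical, logical_per_physical, license_limit):
    used = min(physical, license_limit)
    remaining = license_limit - used
    extra_logical = physical * (logical_per_physical - 1)
    return used + min(extra_logical, remaining)

# Four-way x440 with Hyper-Threading (two logical processors per physical):
print(usable_processors(4, 2, 4))   # Windows 2000 Server (limit 4)          -> 4
print(usable_processors(4, 2, 8))   # Windows 2000 Advanced Server (limit 8) -> 8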
Eight-way systems: If the server has eight CPUs installed, then you must
ensure Hyper-Threading is disabled in BIOS before installing Windows 2000
Advanced Server. You can re-enable Hyper-Threading after Service Pack 2 or
later is installed. If you do not disable Hyper-Threading during installation, a
blue screen trap will occur.
Windows 2000 Advanced Server supports a maximum of eight CPUs, so there
is no performance benefit to be gained from enabling Hyper-Threading on an
eight-way Advanced Server system.

3.5.4 Microsoft Windows .NET Server
Microsoft is adding “NUMA enhanced” CPU-affinity code and Hyper-Threading
optimization to maximize the performance of XpandOnDemand NUMA-based
servers such as the x440.
Windows .NET Server operating systems will understand the concept of physical
processors versus logical processors. In the case of Windows .NET Server, only
physical processors will count against the license limit. Although a Windows
.NET Standard Server (limited to two processors) running on a two-way system
with Hyper-Threading enabled will recognize and utilize the processing
capabilities of four logical processors, only two physical processors will be
counted for licensing purposes.

3.5.5 Novell NetWare
IBM is working with Novell's NetWare technology team (NetWare V6.0 and
Modesto 32/64-bit) and its one Net strategy to take full advantage of the
capabilities of IBM Enterprise X-Architecture technology. Novell will support high
scalability (32 processors x 32 nodes per cluster), large memory (64 GB), Active
PCI-X hot swap/hot-add, the Xeon Processor MP (including Hyper-Threading
and xAPIC), non-uniform memory addressing (NUMA), Remote I/O, and system
partitioning (static and dynamic) in all forms. Support for many of these
technologies exists today, with others to be added in future support packs or
operating system updates.
Refer to 4.3.3, “NetWare installation” on page 121 for more detailed information
on installing NetWare on the x440.
Important: Once supported, multi-node configurations must be fully assembled
before installing NetWare. Novell does not currently support adding chassis
after NetWare is installed.

3.5.6 Red Hat/SuSE Linux
Red Hat Linux Advanced Server 2.1 and SuSE Linux Professional 8.0 are
architected to exploit the capabilities of the x440 server, including the support for
Hyper-Threading technology. The most recent version of the Linux 2.4.x kernel
scales effectively to four-way and eight-way, although results vary depending on
the workload and the scalability of the application.

The xSeries development team, the IBM Linux Technology Center, and other
parts of IBM are working with the Linux community, Red Hat, SuSE, and other
Linux alliance partners to develop advanced Enterprise X-Architecture features
in upcoming releases of Linux. These features will include memory optimization,
additional hot-swap/hot-add capabilities, dynamic partitioning, and additional
scalability improvements.
Refer to 4.3.2, “Red Hat Linux installation” on page 119 for more detailed
information on installing Linux on the x440.

3.5.7 VMware ESX Server
VMware ESX Server is virtual machine software for consolidating and
partitioning servers. It is a cost-effective, highly scalable virtual machine platform
with advanced resource management capabilities. VMware ESX Server is used
to minimize the total cost of ownership (TCO) of server infrastructure by
maximizing server manageability, flexibility, and efficiency across the enterprise.
There are three 16-way ESX Server models currently (or soon to be) available:
Table 3-4 New models for the 16-way fixed configurations running VMware ESX Server

Model      Standard processors         L2 cache   L3 cache   Std memory
8687-4RX   16 x 1.4 GHz Intel Xeon MP  256 KB     512 KB     8 GB(*)
8687-5RX   16 x 1.5 GHz Intel Xeon MP  256 KB     512 KB     8 GB(*)
8687-6RX   16 x 1.6 GHz Intel Xeon MP  256 KB     1 MB       8 GB(*)

(*) Extra memory/storage and options will be included as defined by customer requirements through the
Special Bid process.

These models also have a ServeRAID-4Mx adapter installed as standard.
ESX Server works by letting you transform physical computers into a pool of
logical computing resources. Physical servers are partitioned into secure virtual
servers. Operating systems and applications are isolated in these multiple virtual
servers that reside on a single piece of hardware. These resources can then be
distributed to any operating system or application as needed.
ESX Server provides dynamic logical partitioning. It runs directly on the hardware
to partition and isolate server resources, using advanced resource management
controls to let you remotely manage, automate, and standardize these server
resources.

Dynamic logical partitioning involves:
򐂰 Partitioning server resources
The ESX Server acts as the host operating system, provides dynamic logical
partitions to hold other operating systems, and virtualizes most system
resources, including processors, memory, network capacity, and disk
controllers.
򐂰 Isolating server resources
With ESX Server, each hosted operating system thinks it owns the entire
computer, yet it sees only the resources that the administrator (through ESX
Server) assigns to it. As shown in Figure 3-11 on page 99, ESX Server
resides between the hardware and the various operating systems and
applications. Partitions can be administered remotely, even down to the BIOS
level, just as individual servers are.

Figure 3-11 ESX Server resides between the server hardware and server resources
(The figure shows partitions running NetWare, SuSE Linux Enterprise, Red Hat Advanced
Server, Windows NT EE, and Windows 2000 on the virtual layer provided by VMware ESX
Server, which runs directly on the physical server hardware.)

򐂰 Managing server resources
The ESX Server’s advanced resource management controls allow you to
guarantee service levels. CPU capacity can be allotted on a time-share basis.
Memory can be assigned dynamically based on partition workloads and
defined minimums. If the allocated amount is insufficient in one partition, ESX
Server can temporarily borrow memory from one partition and lend it to
another, and then restore it to the original partition when needed. Network
sharing is determined by token allocation or consumption based on the
average or maximum bandwidth requirements for a partition.

Currently ESX Server does not support Hyper-Threading technology. For support
information including supported servers, see:
http://www.pc.ibm.com/ww/eserver/xseries/vmware.html
For more information on VMware, refer to the redbook Server Consolidation with
the IBM ^ xSeries 440 and VMware ESX Server, SG24-6852.

3.6 Application considerations
As well as operating systems, there are enterprise applications currently
available that are licensed on a per-processor basis. You should be aware that
enabling Hyper-Threading or adding physical processors on systems running
these applications may have licensing implications for your organization. This will
need to be considered in the planning phase of your deployment and, if required,
additional licenses purchased prior to enabling Hyper-Threading.
Microsoft has stated that current server products licensed on a per-processor
basis will require one license per physical processor. For example, on a two-way
system with Hyper-Threading enabled and running Microsoft SQL Server, a
two-processor license is required, even though the application may process
threads on four logical processors.
Performance benefits versus licensing costs may be a consideration before
enabling Hyper-Threading or adding processors and may require testing to
confirm that there will be a substantial benefit to application performance. In most
applications there will be a performance gain as processors are added; however,
this gain does not generally remain linear with the continued addition of
processors. The performance improvements seen will depend largely on
application scalability, which is discussed in more detail in the next section.

3.6.1 Scalability and performance considerations
Adding processors improves server performance because software instruction
execution can be shared among the additional processors. However, this
requires software to detect the additional CPUs and generate additional work in
the form of threads or processes, which execute on the additional processors.
This does not happen automatically. The operating system provides a platform
that enables the capability of multiprocessing, but it is up to the application to
generate the additional threads and processes to execute on all processors. This
is referred to as application scalability.

Having faster machines in the server hardware space means more parallelism
(more processors, larger memory, larger disk arrays, additional PCI buses, and
so on). The obvious case of software that does not scale is DOS. Run DOS on a
server with eight CPUs, 16 GB of memory, equipped with 250 15K RPM disks in
RAID arrays, and you get the same performance as if you have one CPU, one
disk, and 640 KB of memory. Obviously, the server isn’t slow. The problem is that
the software (in this case DOS) does not scale.
Software scalability is a complex subject, one most people don't consider until it
is too late. Often people purchase new high-performance SMP servers expecting
huge performance gains with old applications, only to learn the bottleneck is in
the server application. In this case there is little they can do to efficiently utilize
the new SMP server until the application is modified.
A scalable application makes use of greater amounts of memory, generates
scalable I/O requests as the number of disks in a disk array increases, and will
utilize multiple LAN adapters when a single LAN adapter limits bandwidth. In
addition, a scalable application has to detect the number of installed processors
and spawn additional threads as the number of processors increases to keep all
processors busy.
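
The following minimal Python sketch shows the shape of that behavior: the application asks the operating system how many (logical) processors it can see and spreads CPU-bound work across all of them. The work function is a placeholder, not a real server workload.

import os
from multiprocessing import Pool

def work(chunk):
    # placeholder for real, CPU-bound processing
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    cpus = os.cpu_count() or 1                       # logical processors seen by the OS
    chunks = [range(n, 1_000_000, cpus) for n in range(cpus)]
    with Pool(processes=cpus) as pool:               # one worker per logical processor
        total = sum(pool.map(work, chunks))
    print(cpus, "workers, total =", total)

An application written this way automatically uses more processors as they are added (or as Hyper-Threading doubles the logical count); one that hard-codes a small, fixed number of threads does not.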
Hyper-Threading increases the number of logical processors and demands that
the software spawn additional threads to run at maximum efficiency. However,
most applications don't yet do this. This is why Hyper-Threading performs well
with two-way and four-way SMP systems, because many applications already
generate sufficient threads to keep four physical/logical CPUs busy. But at
eight-way and 16-way, the applications have to do even more than they do today
to efficiently utilize Hyper-Threading. All of these things must be engineered into
the server application and operating system. In general, the only applications that
scale past four-way are database applications.

3.6.2 SMP and server types
SMP has a direct relationship to the type of application server being used.
Adding a processor to a file server does not significantly improve performance,
whereas it can result in a very high performance gain for an application server.
As you can see from Figure 3-12, a file/print server benefits only marginally from
the addition of a second processor and can actually degrade performance when
the third and fourth processors are added. However, with a database or
application server, the addition of one to three processors makes a marked
improvement in processing power.

Figure 3-12 Effect of adding processors under file/print and application environments
(Relative performance versus number of processors: the file/print serving environment
scales roughly 100, 118, 116, and 114 percent for one to four processors, while the
database/application environment scales roughly 100, 175, 230, and 280 percent.)

SMP will not provide a linear improvement in processing power as additional
processors are added. You might achieve a 70-90% performance increase from
the second processor, but each additional processor after the second will provide
a diminishing return on investment as other system bottlenecks come into play.
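
A toy model makes the shape of this curve easy to see. The 80% scaling factor below is an assumption chosen only to fall inside the 70-90% range mentioned above; it is not a measured x440 result.

# Illustrative diminishing-returns model: each added processor contributes a
# fixed fraction of the previous processor's contribution.
def relative_performance(processors, scale_factor=0.8):
    perf, increment = 100.0, 100.0
    for _ in range(processors - 1):
        increment *= scale_factor
        perf += increment
    return round(perf)

for n in range(1, 5):
    print(n, relative_performance(n))   # 1:100, 2:180, 3:244, 4:295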
For more information regarding multiprocessor systems and performance tuning
xSeries servers, refer to the redbook Tuning IBM ^ xSeries Servers for
Performance, SG24-5287.
For performance results of four-way and eight-way x440s running SAP and
TPC-C benchmarks, see:
http://www.pc.ibm.com/ww/eserver/xseries/benchmarks/series.html

3.7 Rack installation
The x440 is 4U in height and is intended for use as a rack-drawer server. Due to
power distribution considerations, it is recommended that no more than eight 4U
x440 chassis be installed in a single 42U rack, leaving 10U available for RXE-100
Remote I/O enclosures, disk or tape storage, or other devices.
The x440 is 27.5 inches deep and is designed to be installed in a 19-inch rack
cabinet that accepts 28-inch deep devices, such as the NetBAY42 ER, NetBAY42 SR, or
NetBAY25 SR. Although the x440 system is rack optimized, it may be converted
into a tower by installing it in a NetBAY11 SR Standard Rack Cabinet. The
NetBAY11 rack supports shipment of fully configured xSeries 440 and other
rack-optimized xSeries servers.
Installation considerations include the following:
򐂰 The system is not designed to run vertically, and therefore must always be run
in a horizontal position.

򐂰 For thermal considerations, the x440 must be installed with perforated doors
on both front and back. Do not install the x440 in a rack with a glass front
door.
򐂰 Although installation is supported in non-enterprise racks, it is not
recommended, since cable management then becomes an issue.
򐂰 The maximum weight of the system, depending on your configuration, is 50
kg (110 lb.). Therefore, this system requires two people to install it in a rack.
If you use a non-IBM rack, the cabinet must meet the EIA-310-D standards with a
depth of at least 28 inches. Also, adequate space (approximately two inches for
the front bezel and one inch for air flow) must be maintained from the slide
assembly to the front door of the rack cabinet to allow sufficient space for the
door to close and provide adequate air flow.
Make sure all the cables attached to the x440 are long enough to permit the
server to be slid out of the rack. This would include the normal cables such as
power, network, and fiber cables, but also includes the Remote I/O cable for
connecting to the RXE-100 and the SMP expansion cables for connecting to
another x440. See “Remote Expansion Enclosure” on page 78 for RXE-100
cabling information.
Since the x440 is rack optimized, the IBM xSeries rack configurator should be
used to ensure correct placement. The configurator can be downloaded from:
򐂰 For EMEA: http://www.pc.ibm.com/europe/configurators/
򐂰 For USA:
http://www.pc.ibm.com/us/eserver/xseries/library/configtools.html
򐂰 For other countries or regions:
a. Go to http://www.ibm.com
b. Click Select a country
c. Select your country
d. Click Products and Services
e. Click Intel-based servers
f. Click Tools
g. Scroll down to find the Rack Configurator section

3.8 Power considerations
The x440 ships with two redundant, hot-swappable power supplies that produce
1050 W each at 220 V, or 550 W each at 110 V. When the x440 is populated with
more than two processors, memory, and adapters, the power supplies may not
be redundant if they are connected to a 110 V power source.

Therefore, IBM recommends that the x440 be connected to a 220 V power
source to ensure power supply redundancy for large configurations.
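
The redundancy rule is simple: the pair of supplies is redundant only if one supply alone can carry the whole configured load. The sketch below illustrates this; the 700 W configured load is a hypothetical figure for a heavily populated x440, not a published measurement.

# Toy check of power-supply redundancy (the configured load is an assumed,
# illustrative value; the per-supply ratings are those quoted above).
def redundant(load_watts, per_supply_watts):
    return load_watts <= per_supply_watts

config_load = 700                      # hypothetical draw of a heavily configured x440
print(redundant(config_load, 550))     # at 110 V (550 W per supply)  -> False
print(redundant(config_load, 1050))    # at 220 V (1050 W per supply) -> True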
Tip: If power is not redundant, the Nonredundant LED will be lit in the Light
Path Diagnostics panel (see Figure 1-14 on page 26).
Two system power-cord connectors are available on the back of the x440, one for
each of the power supplies. Connect each of these power connectors to separate
power circuits to ensure availability if one circuit should fail.
The x440 ships with two 2.8 m/9 ft. IEC 320-C13 to IEC 320-C14 power cables
for intra-rack power distribution. Models shipped in the US also include two 2.8
m/9 ft. IEC 320-C13 to NEMA 6-15P power cords for attachment to high-voltage
power sources.

3.9 Solution Assurance Review
Some level of Solution Assurance Review (SAR) should be performed for all IBM
solutions. The level of SAR (self, peer, or expert) should match the complexity of
the solution. For example, simpler solutions may need only a self review.
However, a combination of the customer environment risk combined with the
complexity of the solution may require that an expert level SAR take place,
facilitated by a Quality Assurance practitioner and supported by a team of
technical experts.
Note: The EXAct Business Partners themselves may be required to perform the
Solutions Assurance Review.
If a solution contains four or more Enterprise X-Architecture servers (currently
x360 and x440), then an expert SAR is mandatory.
For further information on what is required, refer to the above Solution Assurance
Web sites. EMEA and Americas information is available. Procedures for Asia
Pacific countries are currently in development.

Trigger Tool
The SAR Trigger Tool provides a recommendation on the level of quality/Solution
Assurance that will be required. It is available from one of the URLs above.
The three levels are:
򐂰 Expert
– For technically challenging, high-risk solutions

– Process dictates expert personnel's participation
– Formal, rigorous
򐂰 Peer
– For low-to-medium-risk solutions
– Informal, inexpensive
򐂰 Self
– For low-risk solutions
– Informal, inexpensive

eSAR — Electronic Solution Assurance Review
There is also an eSAR tool available to help you establish whether you require an
expert review. This is available from:
򐂰 For IBM employees: http://w3.ibm.com/support/assure, then click eSAR
򐂰 For Business Partners: http://www.ibm.com/support/assure/esar

Chapter 4. Installation
In this chapter we describe procedures specific to the installation of Windows
2000, Linux, Novell NetWare and VMware operating systems on the x440
platform. Topics covered include:
򐂰 System BIOS settings
򐂰 Device drivers
򐂰 Operating system installation
򐂰 Additional Information

Prior to commencing installation, you need to download the latest firmware and
drivers. These are all available from the x440 driver matrix at:
http://www.pc.ibm.com/qtechinfo/MIGR-39747.html

4.1 System BIOS settings
This section describes system BIOS settings that you may need to configure
prior to installing an operating system.

4.1.1 Updating BIOS and firmware
We recommend you check the BIOS and firmware levels on the items listed
below and update to the most current revision, as part of your installation
procedure for the x440:
򐂰 System BIOS
򐂰 Remote Supervisor Adapter firmware and BIOS
򐂰 Onboard diagnostics
򐂰 Additional devices if installed, such as ServeRAID adapters and FAStT Fibre
Channel host adapters.

The latest BIOS and firmware code can be found at:
http://www.pc.ibm.com/qtechinfo/MIGR-39747.html
Follow the installation instructions provided with each package.

4.1.2 Enabling memory mirroring
Memory mirroring (part of IBM’s Active Memory technology) provides an
additional level of fault tolerance to the memory subsystem. For detailed
information and guidelines on memory mirroring, see 3.1.2, “Memory” on
page 65.
To enable memory mirroring on the x440, perform the following steps:
1. Press F1 when prompted during system startup to enter the System
Configuration Utility.
2. From the main menu select Advanced Setup -> Memory Settings ->
Memory Mirroring Settings. Figure 4-1 on page 109 appears.
If you have two SMP Expansion Modules installed, you will see two options:
– CEC 1, which corresponds to the bottom SMP Expansion Module
– CEC 2, which corresponds to the top SMP Expansion Module

Figure 4-1 Enabling memory mirroring

3. Select the SMP Expansion Module you want memory mirroring enabled on
and press the right arrow key to change the value to Enabled.
Tip: In a server with two SMP Expansion Modules installed, it is possible to
enable memory mirroring in only one SMP Expansion Module; however,
we do not recommend this configuration.
4. Exit the System Configuration Utility and save any changes.
5. Once memory mirroring is enabled, you will see the following message during
POST:
Active Memory(tm) Mirroring enabled on CEC1

4.1.3 Enabling Hyper-Threading
Hyper-Threading technology allows a single processor to execute two separate
instruction threads concurrently, effectively operating as two separate logical
processors. The installed operating system sees these logical processors as two
separate physical processors. This is demonstrated in Figure 4-8 on page 118.
There are a number of important factors to consider before enabling
Hyper-Threading. These considerations are discussed in detail in 3.5, “Operating
system considerations” on page 90. It is important to fully understand the
configuration rules, performance issues, and potential licensing implications
related to Hyper-Threading before you proceed.
Eight-way systems: Hyper-Threading is disabled by default on the x440. This
is because of a known bug in Windows 2000 Advanced Server. If
Hyper-Threading is enabled on an eight-way server, then the Windows 2000
Advanced Server will trap (blue screen) during installation. This problem does
not affect other supported operating systems. You can enable
Hyper-Threading on an eight-way Advanced Server system after Service Pack
2 or later has been installed.
Windows 2000 Advanced Server supports a maximum eight processors.
There is no performance benefit to be gained from enabling Hyper-Threading
on an eight-way Advanced Server system.
To enable Hyper-Threading on the x440, do the following:
1. Press F1 during system startup to enter the System Configuration Utility.
2. From the main menu, select Advanced Setup -> CPU Options. Figure 4-2
appears.

Figure 4-2 Enabling Hyper-Threading

3. Change the Hyper-Threading Technology setting to Enabled.
4. Exit the System Configuration Utility, saving changes.

Important: The procedures and menu items described above may change as
BIOS levels are revised.

4.2 Device drivers
Specific device drivers are available for the x440 that may not be included with
the base operating system.
Drivers that you should obtain separately from the operating system include:
򐂰 Broadcom Ethernet controller
򐂰 Active PCI-X controller
򐂰 Remote Supervisor Adapter management processor
򐂰 S3 Savage4 LT video controllers
򐂰 ServeRAID

The required drivers as well as the latest versions of BIOS, service processor
firmware, and diagnostics firmware are available from:
http://www.pc.ibm.com/qtechinfo/MIGR-4JTS2T.html
If you are implementing Microsoft Cluster Service (MSCS), check the Microsoft
Cluster Service Hardware Compatibility List (HCL) to confirm that hardware,
drivers, and firmware have been Microsoft certified. The HCL can be found at:
http://www.microsoft.com/hcl/default.asp

4.3 Operating system installation
This topic will discuss the installation of the following operating systems on the
x440 hardware platform:
򐂰 Windows 2000 Server and Advanced Server
򐂰 Red Hat Linux
򐂰 Novell NetWare
In the following discussions we assume that disk subsystems, such as RAID
arrays, for both operating system installations and data storage have been
configured.

4.3.1 Microsoft Windows 2000 Server and Advanced Server
Eight-way systems: If the server has eight CPUs installed, then you must
ensure Hyper-Threading is disabled in BIOS before installing Windows 2000
Advanced Server. You can re-enable Hyper-Threading after Service Pack 2 or
later is installed. If you do not disable Hyper-Threading during installation, a
blue screen trap will occur.
Windows 2000 Advanced Server supports a maximum eight processors.
There is no performance benefit to be gained from enabling Hyper-Threading
on an eight-way Advanced Server system.
Before commencing the installation of Windows 2000 Server or Advanced
Server, ensure that BIOS and firmware levels have been updated to the latest
levels. Refer to “Updating BIOS and firmware” on page 108.
Refer to 4.2, “Device drivers” on page 111 for information on device drivers
required for the installation of Windows 2000 Server and Advanced Server.
If you are connecting an RXE-100 enclosure to the system, complete the
operating system installation before connecting the RXE-100. For more
information on connecting the RXE-100, refer back to 3.2.3, “Remote Expansion
Enclosure” on page 78.

Installing the x440 Windows 2000 custom HAL
When installing Microsoft Windows 2000 Server, Advanced Server or Datacenter
Server on the x440 server, a customized Hardware Abstraction Layer package is
required. The customized HAL enables Windows 2000 to operate with the
extended CPU and PCI configurations that are possible in the x440 system.
The custom HAL for Windows 2000 Server and Advanced Server can be
downloaded from:
http://www.pc.ibm.com/qtechinfo/MIGR-42325.html
When the file has been downloaded, run the executable and extract the files to a
formatted diskette. The HAL for Windows 2000 Datacenter Server is supplied on
the CD-ROMs.
To install Windows 2000, do the following:
1. Power on the x440 with bootable Windows 2000 Server or Advanced Server
CD in the CD-ROM drive.
2. When prompted, press any key to boot from the CD.

3. When you see the following message, immediately press F5:
Setup is inspecting your computer’s hardware configuration...
If you plan to install the operating system on disks attached to a ServeRAID
controller, press F6 at this point so that you will be prompted to install the
ServeRAID driver as well.
4. When you are prompted to select the Computer Type, as shown in Figure 4-3,
select Other and press Enter.

Figure 4-3 Installing IBM custom HAL

5. Insert the xSeries 440 Windows 2000 HAL Support Disk (the one you
downloaded and created) into the diskette drive and press Enter.
6. Select eServer xSeries 440 (Windows 2000 HAL), as shown in Figure 4-4
on page 114 and press Enter. The necessary files will be copied from the
diskette.

Figure 4-4 Selecting computer type from custom HAL disk

7. If you pressed F6 to add mass storage device drivers, such as for a
ServeRAID controller, you will now be prompted to insert the driver disk.
8. Continue the Windows 2000 installation and configuration as normal. You will
be prompted to insert the Windows 2000 HAL Support Disk (and the
ServeRAID driver disk if required) a second time during the text mode portion
of setup.

Installing the Windows 2000 Service Pack
Once the Windows 2000 installation is complete, install the latest Windows 2000
service pack.
During the installation, you should select No when prompted to replace the
HAL.DLL file, as shown in Figure 4-5 on page 115.

Figure 4-5 Windows 2000 Server Service Pack 2 installation

Installing additional drivers
After Windows 2000 is installed, Device Manager will report a number of
unknown devices, as shown in Figure 4-6 on page 116. These unknown devices
correspond to the following components:
򐂰 Broadcom Gigabit Ethernet controller
򐂰 IBM Remote Supervisor Adapter
򐂰 IBM Active PCI controller

Figure 4-6 Device Manager after the Windows 2000 Server installation

When the Windows 2000 Server installation has completed, you should perform
the updates listed below. The order of these updates is not critical, but we
recommend that you install the latest Windows 2000 Server Service Pack prior to
installing the device drivers.
1. Install or update the ServeRAID adapter driver.
Important: ServeRAID device drivers should be at the same level as the
installed ServeRAID firmware and BIOS.
2. Install the driver for the onboard Broadcom NetXtreme Gigabit Ethernet
adapter.
To perform this update, open Device Manager, right-click Ethernet Controller
under Other devices, select Properties, select the Drivers tab, and then click the
Update Driver button. Follow the steps in the Upgrade Device Driver wizard,
using the downloaded driver files extracted to a floppy disk.

3. Install the driver for the Remote Supervisor Adapter.
4. Install the Active PCI device driver.
5. Update the S3 Savage4 LT video controller driver.
6. If you have other devices installed, such as FAStT Fibre Channel adapters,
update or add the drivers for these.
Follow the installation instructions supplied with each individual driver package.
Figure 4-7 shows Device Manager after the Active PCI, RSA and Broadcom
Gigabit Ethernet drivers have been installed.

Figure 4-7 Windows 2000 Device Manager after device driver updates

Enabling Hyper-Threading on a Windows 2000 system
For instructions on enabling Hyper-Threading in BIOS refer to 4.1.3, “Enabling
Hyper-Threading” on page 109. Refer to 3.5, “Operating system considerations”
on page 90 for a discussion of the performance and planning issues related to
Hyper-Threading technology.

Once you enable Hyper-Threading, the Performance tab in Task Manager will
show twice the number of CPUs installed as there are physical processors.
Figure 4-8 shows the windows before and after enabling Hyper-Threading on a
two-way x440.
Figure 4-8 Before and after enabling Hyper-Threading on a two-way x440
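
If you prefer to check from a script rather than Task Manager, the logical processor count that the operating system reports can also be read directly. This assumes a Python interpreter is available on the server, which is not part of the standard x440 setup.

import os
print("logical processors visible to the OS:", os.cpu_count())
# e.g. 2 before enabling Hyper-Threading on a two-way x440, 4 afterwards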

Attaching the RXE-100 Remote Expansion Enclosure
Once Windows 2000 is installed, if you haven’t done so already, you can attach
the RXE-100. Follow these steps:
1. Once the installation of Windows 2000 Server or Windows 2000 Advanced
Server is complete, shut down the operating system and power off the server.
2. Connect the RXE-100 to the x440 as described in “Connecting the RXE-100”
on page 80.
3. If power is connected to the RXE-100, remove it for 10 to 20 seconds.
4. Re-apply power to the RXE-100.
5. Power on the server.
The enclosure will power on automatically as the server is started.

If the additional PCI slots are not visible, confirm that the custom HAL has
been installed, as follows:
a. Open Device Manager
b. Expand the Computer entry
c. Right-click ACPI Multiprocessor Node and select Properties
d. Click the Driver tab

The Driver Provider is listed as IBM, as shown in Figure 4-9, which indicates
that the IBM custom HAL has been installed.

Figure 4-9 HAL.DLL driver file details

4.3.2 Red Hat Linux installation
In this section we discuss the installation of Red Hat Linux Advanced Server
Version 2.1 on the x440 server.
Before installing Red Hat Linux, update firmware and BIOS levels as discussed
in 4.1.1, “Updating BIOS and firmware” on page 108.
Tip: If you have an RXE-100, disconnect it before you begin. You should
reconnect it after you finish the Red Hat installation.
1. Start the server and insert the Advanced Server CD 1. The Welcome screen
appears.
2. Type linux apic at the shell prompt below the Welcome screen to load
support for APIC.

3. Proceed with the installation as normal. Refer to documentation on the Red
Hat Linux CD or the following PDF:
https://www.redhat.com/docs/manuals/advserver/RHLAS-2.1-Manual/pdf/rhl-ig-as-x86-en-21.pdf
When you have completed the installation, do the following to install the kernel
source for the summit (IBM XA-32 chipset) kernel:
1. Log in as root
2. Insert the Red Hat Linux Advanced Server Version 2.1 CD 1 and mount the
CD (if it is not auto-mounted) with the following command:
mount /mnt/cdrom
3. Install the summit kernel RPM with the following command:
rpm -ivh /mnt/cdrom/RedHat/RPMS/kernel-summit-2.4.9-e.3.i686.rpm
Note: If you experience a system hang when booting the default Advanced
Server kernel to install the summit kernel, reset the server, press F1 to enter
the Setup Utility and select Load Default Settings. Exit from the Setup Utility,
saving changes.
If you are compiling updated drivers on the summit kernel, you must install the
kernel source and apply the summit kernel patch prior to compiling the driver, as
follows:
Note: If you have previously updated the summit kernel from Red Hat Network
(RHN), follow these steps, substituting the appropriate kernel version where
necessary. For example, 2.4.9-e.3 would be changed to 2.4.9-e.8 for the
2.4.9-e.8 summit kernel.
1. When installing the kernel, do one of the following:
– If you later wish to add additional features and support to the kernel by
recompiling it, install the kernel-source RPM from either the Advanced
Server CD 2 or from the Red Hat Network site (http://rhn.redhat.com) if
you have previously updated the kernel, by running:
rpm -ivh kernel-source-2.4.9-e.3.rpm
– If you don’t need to recompile the kernel, install the kernel SRPM from
either the Advanced Server CD 4 or from the Red Hat Network site
(http://rhn.redhat.com) if you have previously updated the kernel, by
running:
rpm -ivh kernel-2.4.9-e.3.src.rpm

2. Run the following commands to update additional patches to the summit
kernel:
cd /usr/src/linux-2.4.9-e.3
patch -pl  C:\IBMXSBD.TXT

Adding support for the RXE-100
To add support for RXE-100, a new NBI.NLM file must be obtained by going to
http://support.novell.com/filefinder/ and searching for NBIUP1.EXE. This
file contains a new NetWare Bus Interface driver, NBI.NLM. The new driver adds
support for the x440 when the RXE-100 is installed.
Before you begin:
򐂰 Attach the RXE-100 only after the NetWare installation is complete.
򐂰 The NetWare Support Pack must be installed before installing the new
NBI.NLM. Otherwise, installing the support pack will overwrite the new
NBI.NLM.
Perform the following operation:
1. Bring the server down to a DOS prompt.
2. Go to the C:\NWSERVER directory and rename NBI.NLM to NBI.OLD if it
exists.

3. There are two NBI.NLM files and one NBIUP1.TXT file in NBIUP1.EXE (one
for NW5.1 in the nw51 directory and one for NW6.0 in the nw6 directory).
Copy the new NBI.NLM associated with NetWare 6 into the C:\NWSERVER
directory.
4. Restart the server
5. To verify that the installation was successful, load NCMCON.NLM. If this
shows more than six PCI slots, the installation was successful.
Note: Reapplying the NetWare Support Pack overwrites the new NBI.NLM. If
this happens, repeat the above steps.

Enabling Hyper-Threading
Although NetWare 6 supports 32 processors, Hyper-Threading is not
enabled by default. To enable Hyper-Threading, follow the instructions
described in this section.
1. Enable Hyper-Threading in BIOS. Refer to 4.1.3, “Enabling Hyper-Threading”
on page 109 for how to enable Hyper-Threading.
2. Install NetWare 6 Support Pack 1.
3. Replace the current PSM that does not support Hyper-Threading with
ACPIDRV.PSM, which does.
This can be done in one of two ways depending on your current level of
NetWare 6 deployment.
– Method 1:
For a current installation of NetWare 6, apply NW6SP1 if not done
previously. Copy ACPIDRV.PSM from the DRIVERS directory in the
NetWare Startup directory on the DOS partition into the NetWare Startup
directory.
Edit STARTUP.NCF to load ACPIDRV.PSM instead of the current .PSM
(typically MPS14.PSM) and reset the server.
– Method 2:
For a clean installation, the ideal approach would be to obtain a NetWare 6
Overlay CD (available on the Novell Support Product Updates area for
NetWare 6), which incorporates NW6SP1.
During the install, using the NetWare 6 Overlay CD, Hyper-Threading
support is already available. Modification of the .PSM that will be used is
all that is needed. This is done when the install presents the device drivers
(Platform Support Module, Hot-Plug Support Module, and Storage
Adapters) that will be installed. Choose to modify the PSM selection,
delete the default .PSM (usually MPS14.PSM), and insert the
ACPIDRV.PSM (the selections will display when the Insert key is pressed).
This will enable Hyper-Threading from the start for this particular
installation. If an Overlay CD cannot be obtained, perform the previously
outlined method for enabling Hyper-Threading on an existing installation.
4. Restart server.
Tip: You can display the number of processors with the DISPLAY
PROCESSOR command on the system console to ensure that
Hyper-Threading is enabled.

4.3.4 VMware ESX Server
For VMware ESX Server installation instructions, see the redbook Server
Consolidation with the IBM ^ xSeries 440 and VMware ESX Server,
SG24-6852.

4.4 Additional Information
For additional information on the installation of the x440, refer to the following
product publications posted on the Web:
򐂰 IBM ^ xSeries 440 Installation Guide
򐂰 IBM ^ xSeries 440 Option Installation Guide
򐂰 IBM ^ xSeries 440 User’s Guide
򐂰 IBM ^ xSeries 440 Troubleshooting Guide
򐂰 IBM ^ xSeries 440 Hardware Maintenance Manual
򐂰 IBM Remote Expansion Enclosure Installation Guide

These product publications can be downloaded in PDF format from:
http://www.pc.ibm.com/support


Chapter 5. Management
IBM Director is the systems management software provided with IBM ^ xSeries servers. This chapter covers three IBM-developed plug-ins to IBM Director that are of particular relevance to the x440:
򐂰 5.1, “Active PCI Manager” on page 130
Active PCI Manager guides you when installing new PCI adapters and can analyze an existing PCI configuration and suggest changes to improve performance. It works with both the x440 internal slots and RXE-100 slots.
򐂰 5.2, “System Partition Manager” on page 150
System Partition Manager is used to physically partition an x440 complex (for example, a 16-way) so that each partition behaves as an independent server.
򐂰 5.3, “Process Control” on page 155
Process Control can help improve performance both for the server overall and for application users in general. Application vendors have no incentive to impose restrictions on their software: one application may have its priority set unnecessarily high, to the detriment of all other applications, or two applications may each try to use all available memory, causing contention. These kinds of ill-behaved applications make it virtually impossible to run many applications concurrently on a conventional server. By preventing resource-intensive applications from dominating server resources, Process Control provides the means for application consolidation.


5.1 Active PCI Manager
Active PCI Manager is used to plan the addition of PCI and PCI-X adapters to the
x440 and RXE-100 expansion enclosures, to best use the bus architecture of the
systems. It can also analyze existing adapter arrangements to determine if the
configuration is optimal, and if not, suggest alternative configurations. If required,
it then graphically assists with changing the adapter placement.
Note: In this chapter, the images are taken from a pre-release version of IBM
Director 3.1 service pack 1 (Version 3.1.1).

Figure 5-1 Active PCI Manager revision

Active PCI Manager 3.1.1 will be a plug-in for IBM Director 3.1.1, just as the current version, 1.1, is a plug-in for IBM Director 3.1. It will be available for download from:
http://www.pc.ibm.com/support
Like other IBM Director plug-ins, Active PCI Manager has three components:
򐂰 Agent component, installed on the x440
򐂰 Server component, installed on the IBM Director Server
򐂰 Console component, installed on each IBM Director console
Note: All Active PCI Manager functions are performed from the IBM Director
console.
The agent has the following prerequisites:
򐂰 Windows 2000 Server or Advanced Server with Service Pack 2


򐂰 IBM Active PCI Software Version 5.0.2.0 or later. This software contains the
following components:
– IBM Active PCI filter driver
– IBM Active PCI alert driver
– IBM Active PCI alert service
IBM Active PCI Software and the user’s guide are available from:
http://www.pc.ibm.com/qtechinfo/MIGR-4J2QEQ.html
Once installed, Active PCI Manager appears as a task in the Director console, as
shown in Figure 5-2.

Figure 5-2 Active PCI Manager task on IBM Director main console

Active PCI Manager is started by dragging the icon from the Task frame and dropping it on the icon of the system in the Group Contents frame, or by right-clicking the system in the Group Contents frame and selecting Active PCI Manager from the list.


5.1.1 Using Active PCI Manager
The Active PCI Manager interface is shown in Figure 5-3. The details of your
system configuration are available in three ways by clicking the appropriate tab:
򐂰 Slot view, Figure 5-3
򐂰 Tree view, Figure 5-5 on page 135
򐂰 Table view, Figure 5-6 on page 136

Figure 5-3 x440 slot view

In the slot view, we see a graphical representation of our ServeRAID adapter in
slot 2 of the x440 and the other slots empty. The orange bar above the adapter
indicates that the adapter present in that slot is hot-swappable.
The tree pane in this view (lower left) shows there is an RXE-100 attached to the
server. By selecting the RXE-100, the graphical view changes to show the six or
12 slots in that enclosure.
The details pane in this view (lower right) includes information about the currently
selected item in the tree pane or the currently selected slot. In Figure 5-3, the
server is selected and the details pane shows information about the system and
its slots, including:
򐂰 Chassis Number: The x440 is assigned chassis number 0. In multi-chassis
x440 configurations, each chassis will be identified by a unique number.


򐂰 First Slot Label, Last Slot Label: The first and last slot labels in the server are
identified. When the RXE-100 is selected, the slot labels are A1 through A6
and B1 through B6. The labels correspond to the names of the slots as
printed on the chassis.
򐂰 I/O Drawer: This value is only relevant when an RXE-100 is selected. The
number is unique and the first drawer will be number 0.
򐂰 Unit Name, Unit Type: The unit name is IBM eServer xSeries 440 and the unit
type is Chassis. When you select the RXE-100, the unit name is IBM
RXE-100 and the unit type is I/O Drawer.
Selecting a slot results in the details of that slot appearing in the bottom right
pane, as shown in Figure 5-4.

Figure 5-4 Slot 3 details


For information about the attributes, consult the online help. However, some
explanation is warranted here:
򐂰 Current Speed, Max Slot Speed: As described in 1.8, “PCI subsystem” on page 23, the adapters installed in this slot and in other slots on the same bus dictate the speed at which the slot runs. Consequently, the current speed of the slot may not be the maximum possible speed. The “X” in the Current Speed value indicates the slot is running in PCI-X mode (as opposed to running in PCI mode).
Some RXE-100 slots may indicate “NS” for the slot speed (see Figure 5-6 on page 136 for an example). NS stands for “not set”. These are even-numbered slots whose odd-numbered partner slot is running at 133 MHz: if both slots in a pair are used, the maximum speed of both slots is 100 MHz. (A short sketch of this rule follows Table 5-1 below.)
򐂰 PME Signal: PME (power management events) is a hardware signal that
some PCI slots can supply. Most PCI cards do not implement PME and the
x440 and RXE-100 slots do not support PME.
򐂰 Bus Number: a system-assigned number for each bus. Useful for identifying
slots that are on the same bus, but the bus number does not correlate to the
slot numbers.
򐂰 Low Profile: The slot only supports cards that have a half-height end plate.
See the PCI Special Interest Group specifications of low-profile PCI adapters
at http://www.pcisig.com/data/specifications/lowp_ecn.pdf.
򐂰 LED Status: Table 5-1 shows the possible LED status messages and their
meanings:
Table 5-1 LED status messages

Message: OK (no error)
Meaning: None of the other conditions exist.
Recommended action: None.

Message: Hot eject successful
Meaning: Adapter removal completed without error.
Recommended action: None.

Message: Bus speed mismatch
Meaning: A second adapter was hot-added but its rated speed does not match the current speed of the bus. The new adapter is held inoperative.
Recommended action: Either move one of the adapters to an empty bus or to a bus with an adapter of the same speed, or reboot the server so that the bus slows down and both adapters can operate.

Message: Power fault on card in slot
Meaning: The adapter has a short-circuit or other problem preventing normal operation.
Recommended action: Remove the adapter and have it repaired or replaced.

Message: Surprise removal occurred
Meaning: Operating system activity to the slot was not stopped before power to the adapter was removed.
Recommended action: This should be avoided. Use the Unplug or Eject Hardware wizard to disable the adapter before removing it.

Message: Slot disabled at current speed
Meaning: A second 133 MHz adapter was hot-added to a two-slot bus that already had a 133 MHz adapter installed. The new adapter is held inoperative.
Recommended action: Restart the server. The bus will run at 100 MHz and both adapters will be operative.

Message: Too many adapters on bus
Meaning: Two PCI-X 133 MHz adapters are installed in a slot pair.
Recommended action: Move one of the adapters to an empty bus.

Message: Bus connection error
Meaning: The system has detected a hardware fault.
Recommended action: Remove the adapter. If this does not correct the condition, have the system serviced.
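
The bus-speed behavior behind the “NS” value and the “Slot disabled at current speed” and “Too many adapters on bus” conditions can be summarized in a few lines of code. The following Python sketch is purely illustrative (it is not taken from Active PCI Manager); it only encodes the rule stated above that a single adapter can run at up to 133 MHz, while two adapters sharing a PCI-X bus segment are limited to 100 MHz:

    def effective_bus_speed(adapter_speeds_mhz):
        # Return the speed a two-slot PCI-X bus segment runs at, given the rated
        # speeds (in MHz) of the adapters installed in it.
        # Illustrative sketch of the rule described above, not firmware logic.
        if not adapter_speeds_mhz:
            return None                              # empty segment: speed not yet set
        if len(adapter_speeds_mhz) == 1:
            return min(adapter_speeds_mhz[0], 133)   # one adapter can run at up to 133 MHz
        # Two adapters sharing the segment: the bus cannot exceed 100 MHz and is
        # further limited by the slowest adapter.
        return min(min(adapter_speeds_mhz), 100)

    print(effective_bus_speed([133]))        # 133 -> the empty partner slot reports "NS"
    print(effective_bus_speed([133, 133]))   # 100 -> both adapters limited to 100 MHz after a restart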

The tree view, shown in Figure 5-5, lets you look at all of the slots in the system at once, in an Explorer-like hierarchy.

Figure 5-5 x440 and RXE-100 tree view

In Figure 5-5, the x440 slot 2 is selected and the attributes of the slot and the
adapter installed in it are shown on the right. For information about the attributes,
see the discussion following Figure 5-4 on page 133 and the online help.


The table view, Figure 5-6, provides a summary of all of the slot and adapter
information in one frame.

Figure 5-6 x440 and RXE-100 table view

Not visible to the left of the information shown is a column that shows the chassis
that the slots are associated with. The columns in this view are similar to the
attributes in the tree view and slot view.

5.1.2 Adding adapters to the system
The process of adding an adapter to an x440 using Active PCI Manager is
straightforward. As an example, let’s add two adapters to our system:
򐂰 IBM Gigabit Ethernet SX Adapter
򐂰 QLogic QLA2300 PCI Fibre Channel Adapter
Our configuration has a ServeRAID adapter installed in slot 2 of the x440 and an
RXE-100 attached. The operating system is installed on the ServeRAID attached
disks.
Note: The x440 and RXE-100 support only 3.3 V PCI and PCI-X adapters. 5 V adapters are not supported.


The process of adding an adapter depends on whether the adapter is on the list of known adapters or whether you have to specify the adapter’s characteristics.

Adding a known adapter
The process to add a known adapter, such as the IBM Gigabit Ethernet SX
Adapter, is as follows:
1. Select Tools -> Add Card Wizard. You are presented with Figure 5-7.

Figure 5-7 Add Card Wizard

The bottom two check boxes are selected, indicating the following default settings:
– Active PCI Manager will suggest only slots with hot-plug support (all of the slots in the x440 and RXE-100 support hot-plug).
– Active PCI Manager will suggest only slots that do not require a server restart.
The Add Card Wizard, when finding the best slot in which to place the new adapter, may find a slot that requires the bus to be reset in order for the adapter to work. An example would be a 133 MHz PCI-X adapter being added to a bus that already has a 133 MHz PCI-X adapter installed. Since the hardware only supports two adapters on one bus running at most at 100 MHz, the bus must be reset to run at the lower speed. Resetting the bus requires that the operating system be rebooted.
With this check box unchecked, the wizard may suggest the adapter be added to a 133 MHz bus if that is the best slot, and will ask the user to restart the system. With the box checked, the wizard will not suggest the slot on a 133 MHz bus even though it is the best slot, but will try to find a slot that does not require a server restart.
Limiting the wizard this way means it may fail to find a slot. If it does find a slot, the adapter will not require the system to be restarted, but it may not be running at the best speed or in the best mode. (A small sketch at the end of this procedure illustrates the trade-off.)
2. To add the IBM Gigabit Ethernet SX Adapter, select it from the list. The
characteristics of the adapter are listed on the right side, as shown in
Figure 5-8.

Figure 5-8 Selecting the IBM Gigabit Ethernet SX Adapter

3. To insert the adapter, click Begin. Active PCI Manager analyzes the available
slots to find the one that provides the best performance for this adapter. While
this is running, the following message is displayed:
Please wait while the system is being analyzed. This may take
several minutes.


4. Upon completion of the analysis, Active PCI Manager recommends the best
slot to insert the adapter (if one is available with the right criteria), as shown in
Figure 5-9 on page 139. In our case it was slot 3 in the x440.

Figure 5-9 Hot Add adapter to slot 3

5. Click Finish to close the wizard.
6. From the slot view, select the slot that the Add Card Wizard recommended,
then right-click the slot and click Blink Slot, as shown in Figure 5-10.

Figure 5-10 Blink Slot


Lock options: There are lock options for a slot (see Figure 5-10). These
will lock a slot, all of the slots on the bus that a slot is a member of, or the
unit (the physical chassis or I/O expansion enclosure that a slot is a
member of).
Locking slots excludes them from being processed by:
򐂰 Add Card Wizard
򐂰 Performance optimizer, described in 5.1.3, “Analyzing an existing
configuration” on page 144.
Note: The table view does not provide access to the lock function.
The LED on the slot itself will blink as does the graphic of the slot, as shown
in Figure 5-11.

Figure 5-11 Slot 3 with the LED on

7. To hot add the adapter into the slot, open the cover on the x440 or RXE-100.
Push the orange tab above the adapter slot in the direction of the arrow,
toward the rear of the x440 or RXE-100, and swing the black adapter retaining
arm so that it is vertical. This will allow us to remove the blank filler plate and
add the adapter.
8. Once the adapter is installed, return the black adapter retaining arm to the
closed position. The orange tab will click back into place, locking the adapter
retaining arm down.
Important: Be sure to close all adapter retaining tabs before closing the
cover on the x440 or RXE-100, or the cover will not close properly and
forcing it will likely break the adapter retaining arm.


9. After the adapter has been added, the slot view window is dynamically
updated, as seen in Figure 5-12.

Figure 5-12 IBM Gigabit Ethernet Adapter installed in slot 3
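
The trade-off made by the “no server restart” check box described in step 1 above can be pictured as a simple filter-then-rank selection. The following Python fragment is a rough sketch of that idea only; the field names and the ranking are invented for illustration and do not represent the Add Card Wizard’s actual algorithm:

    def suggest_slot(candidate_slots, require_no_restart=True):
        # Pick a slot for a new adapter, optionally excluding slots whose bus
        # would have to be reset (forcing an operating system restart).
        # Illustrative sketch only -- not the Add Card Wizard's real logic.
        usable = [s for s in candidate_slots
                  if not (require_no_restart and s["needs_bus_reset"])]
        if not usable:
            return None                                     # the wizard may fail to find a slot
        # Prefer the fastest remaining slot, which may not be the overall best slot.
        return max(usable, key=lambda s: s["speed_mhz"])["label"]

    slots = [
        {"label": "Slot 3", "speed_mhz": 133, "needs_bus_reset": True},
        {"label": "Slot 5", "speed_mhz": 100, "needs_bus_reset": False},
    ]
    print(suggest_slot(slots))                              # Slot 5: no restart needed
    print(suggest_slot(slots, require_no_restart=False))    # Slot 3: faster, but requires a restart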

Adding an unlisted adapter
To add an adapter whose characteristics are not known to the wizard, such as
the QLogic QLA23xx PCI Fibre Channel Adapter, do the following:
1. Select Tools -> Add Card Wizard. Leave the default selection, Adapter Not
Listed, highlighted, and click Next. Figure 5-13 on page 142 is displayed.


Figure 5-13 Adapter identification screen

There are two ways of specifying the characteristics of an unknown adapter:
– Manually select the attributes from the pull-down menus in the left pane in
Figure 5-13.
– Click the Adapter Keying button to cycle through drawings of adapters in
the right pane in Figure 5-13, and select the appropriate adapter from the
shape and location of the PCI connector.
The shape of the PCI connector indicates the voltage support (3.3 V or 5 V
or both) and the bus width (64-bit or 32-bit), as shown in Figure 5-14.

Figure 5-14 The meaning of the slots in the PCI connector (the figure labels the 3.3 V keying slot, the 5 V keying slot, and the 64-bit slot and connector)


Table 5-2 shows you the possible choices when you click Adapter Keying.
As you click the button, both the graphic and the bus width and voltage
pull-down menu entries change.
Note: Clicking the Adapter Keying button only changes the bus width
and voltage pull-down menus. The other fields (maximum speed, PME,
low profile, and half length) must be set manually.
Table 5-2 Adapter attributes and associated graphics (the drawings of the corresponding connector keying are not reproduced here)
򐂰 Bus width: 64-bit; Voltage: dual (3.3 V and 5 V)
򐂰 Bus width: 64-bit; Voltage: 3.3 V
򐂰 Bus width: 32-bit; Voltage: dual (3.3 V and 5 V)
򐂰 Bus width: 64-bit; Voltage: 5 V (Note: 5 V adapters are not supported in the x440 and RXE-100)
򐂰 Bus width: 32-bit; Voltage: 5 V (Note: 5 V adapters are not supported in the x440 and RXE-100)

2. Review the documentation that is included with the adapter to determine its characteristics. In our case, they are:
– PCI-X 66 MHz
– 64-bit
– Dual voltage
– No PME signal
– Full height (that is, not low profile)
– Half length


3. Enter the remaining values into the pull-down menus. We enter our
parameters and the following message is displayed:
HotAdd your adapter to slot 5, location in the IBM eServer xSeries
440
4. Blink the slot as described in step 6 on page 139.
5. Hot add the adapter as described in steps 7 and 8 on page 140.
6. Active PCI Manager updates its display to reflect the addition of the adapter,
and we see the operating system recognize the addition of the adapter, as
shown in Figure 5-15.

Figure 5-15 Windows finds new hardware

7. Supply the device driver for the adapter as usual to complete the installation.
Active PCI Manager is not required to hot add an adapter, but using the blink slot tool reduces the likelihood of error, and using the Add Card Wizard correctly helps ensure that the slot selected results in an optimal configuration.

5.1.3 Analyzing an existing configuration
Active PCI Manager can be used to analyze an existing configuration and assist
with the optimization of that configuration, if required. We have intentionally
re-arranged the adapters described in the previous section to illustrate this
capability. Figure 5-16 on page 145 shows the configuration we are starting with.


Figure 5-16 Incorrectly arranged adapters

To analyze the adapter configuration, click Tools -> Analyze. Once the analysis
is complete, the Optimization Steps window appears, as shown in Figure 5-17 on
page 146. It includes three sections:
򐂰 Observations — problems identified
򐂰 Suggested Adapter Arrangement — adapter layout after recommended
changes are made
򐂰 Recommended Actions — recommended changes
In our case a performance problem was discovered.
Note: The optimizer reports that both slots 3 and 6 contain boot devices. These are potential boot devices, not necessarily actual boot devices.
IBM does not support booting from a Fibre Channel device. However, Active
PCI Manager recognizes the QLogic adapter as a potential boot device. Active
PCI Manager does not make recommendations that include moving bootable
devices, since this may alter the order in which bootable devices are
enumerated during startup, resulting in a system that will not boot from its
intended boot device (or at all, if no other bootable devices exist).


Figure 5-17 Performance analysis

In the Recommended Actions, follow the instructions to move adapters. In our
example, the recommendation is to move an adapter from slot 4 to slot 1. The
steps we followed are as follows:
1. Click the link on the line Eject and Remove the adapter. Figure 5-18 on
page 147 appears indicating with an arrow the adapter to remove.


The optimizer prompts you to remove the adapter from slot 4.

Figure 5-18 Removing an adapter from slot 4

Tip: The two links in Figure 5-17 merely show you which slot to insert the
adapter into or remove the adapter from.
2. Stop the adapter in the operating system. Double-click the Unplug or Eject Hardware icon in the system tray. Figure 5-19 on page 148 appears.


Figure 5-19 Stop adapter use by the operating system

3. Select the adapter you want to stop and click Stop.
4. Confirm that you want to stop the device by clicking OK.
5. Once the operating system confirms the adapter has been stopped, use the
Slot Blink action (see step 6 on page 139) to indicate which adapter to
remove.
6. Remove the adapter.
7. Active PCI Manager then shows that the adapter has been removed, as shown in Figure 5-20 on page 149, by adding an indicator to the bottom of the graphic and changing the LED status to “Hot eject successful”.


Indicators that the hot-eject was successful

Figure 5-20 Hot eject successful

8. Close the adapter retainer on slot 4. The slot is then reported as empty, as
shown in Figure 5-21.

Figure 5-21 Slot 4 is empty


9. Back at the Optimization Steps window, shown in Figure 5-17 on page 146,
click the link to hot add the adapter.
10. The slot view appears with an arrow showing you which slot to insert the adapter into, as shown in Figure 5-22.

The optimizer shows you which slot to insert the adapter into.

Figure 5-22 Hot Add

11. Insert the adapter and close the latches.
12. Once all the recommended actions are complete, restart the server and re-run the optimizer to confirm that there are no performance issues.
By using Active PCI Manager, we are assured that our I/O subsystem is configured for optimal performance. We can use Active PCI Manager to assist us in configuring our system, or to reconfigure our system after it has been set up.

5.2 System Partition Manager
System Partition Manager is a tool that lets you partition multi-chassis x440 complexes into independent operating environments. For example, if you have a 16-way complex consisting of two eight-way x440s, you can use System Partition Manager to split the 16-way into two eight-ways. The advantages of using this tool over simply uncabling the 16-way are that it does not require any hardware changes and that resources such as PCI slots in an RXE-100 can be shared between the two partitions.
System Partition Manager is an IBM Director plug-in utility that IBM will be releasing with IBM Director 4.1. The agent component will initially be supported on the following operating systems:
򐂰 Windows 2000
򐂰 Windows .NET
򐂰 Red Hat Linux Advanced Server 2.1
򐂰 VMware ESX Server
Components will also be installed on the IBM Director server and all IBM Director consoles.
System Partition Manager uses the following terms:
򐂰 Node — Functionally equivalent to an x440. Also referred to by Active PCI
Manager as a chassis.
򐂰 Remote I/O Enclosure — An RXE-100 for example.
򐂰 Partition — A node or group of nodes configured as a single server, with their
attached Remote I/O Enclosures (if present). The smallest partition is a single
x440 server.
򐂰 Complex — A group of up to four nodes and their attached Remote
Expansion Enclosures. A complex can be subdivided into partitions, or can be
one big partition.
Table 5-3 shows examples of partitions.
Table 5-3 Examples of valid partitions using System Partition Manager
򐂰 One x440 with two or four processors. This configuration can only be used as one system and cannot be split into smaller partitions.
򐂰 One x440 with eight processors. This configuration can only be used as one system and cannot be split into smaller partitions.
򐂰 Two x440s with eight processors. This configuration can be partitioned in one of two ways:
– One partition of eight processors
– Two partitions each with four processors
򐂰 Two x440s, one with eight processors, one with four processors. This configuration can be partitioned in one of two ways:
– One partition of 12 processors
– Two partitions, one with four and one with eight processors
򐂰 Two x440s with 16 processors. This configuration can be partitioned in one of two ways:
– One partition of 16 processors
– Two partitions each with eight processors

Connecting one RXE-100 with 12 slots to two different nodes, referred to as
twin-tailing, is supported (see Figure 3-10 on page 82).
Although such a configuration is not currently supported on the x440, the maximum configuration that System Partition Manager itself supports is four nodes with four processors each and four expansion enclosures. The expansion enclosures are connected one to each node, or twin-tailed between two nodes. This makes possible a partition of 16 processors, 256 GB of memory, and 72 PCI-X adapter slots (four nodes with six slots each, plus four enclosures with 12 slots each).
Each partition that System Partition Manager creates will be very similar to the
stand-alone servers we use today. Once they are defined, they can:
򐂰 Be powered on and off individually
򐂰 Be capable of supporting one instance of an operating system, or in the case
of VMware, one host operating system
򐂰 Have a single, contiguous memory space and access to all associated
adapters
The following is a list of characteristics that are in the initial release of System
Partition Manager:
򐂰 The IBM Director administrator specifies which nodes (chassis) and Remote
I/O Enclosures are members of a complex, and assigns a name to the
complex.
򐂰 System Partition Manager is used to define the partitions. A partition of two or
more nodes will have one designated as the primary node.
򐂰 The CD-ROM and diskette drives are active on the primary node. The other
nodes have their CD-ROM and diskette drives disabled. The other nodes can
operate in “headless” mode, which means the server operates without a
console (keyboard, video, mouse) attached.


򐂰 System Partition Manager-specific events can be used to generate alerts
using IBM Director event action plans. Actions will also include the ability to
modify the configuration of complexes, restart partitions, and other activities.
Figure 5-23 shows the IBM Director console with the managed objects
introduced by System Partition Manager.
Note: Figure 5-23 also shows some Tivoli-specific icons that are not normally
found in an IBM Director console. They can be ignored for the purposes of
introducing System Partition Manager.

New objects created by System Partition Manager
New System Partition Manager tasks

Figure 5-23 System Partition Manager in IBM Director

Figure 5-24 on page 154 shows Partition Assistant, the tool used to define and
manage partitions.


Figure 5-24 Partition Assistant

The tree frame in Figure 5-24 shows the different nodes and external enclosures
configured. The icons next to the external enclosure names indicate how they are
allocated:
򐂰 One icon indicates that the first (left, or A) six PCI slots in the enclosure have been allocated to this node.
򐂰 Another icon indicates that the second (right, or B) six PCI slots in the enclosure have been allocated to this node.
򐂰 A third icon indicates that all 12 PCI slots in the enclosure have been allocated to this node.

In Figure 5-24, Enclosure 3 is connected to both Node 1 and Node 2. The A slots
are allocated to Node 1 and the B slots are allocated to Node 2.
Right-clicking a partition in Partition Assistant brings up a menu with the following
options:
򐂰 Delete — Deletes a partition.


򐂰 Insert Node — Associates a node with a partition.
򐂰 Set Primary Node — Determines which node will be the primary node.
򐂰 Boot — Boots the partition.
򐂰 Power Off — Powers off the partition.
򐂰 Assign — Writes the partition configuration to the nodes in a partition.
Inserting a node and setting a Primary Node do not actually take place until
these attributes are assigned to a partition.
򐂰 Power on Hold — Powers on all nodes and enclosures in a partition, but
does not boot the partition.
򐂰 Power on Release — Boots a partition that has previously been powered on
but has not booted as a result of the Power on Hold action.
򐂰 Refresh — Forces an update of the display of a partition in the Partition
Assistant window.

5.3 Process Control
Process Control is software for organizing and managing processes and system
resources on systems running Windows 2000. Process Control was developed
by IBM and built into Windows 2000 Datacenter Server. For xSeries customers,
Process Control is also available for use on:
򐂰 Windows 2000 Server
򐂰 Windows 2000 Advanced Server
The Process Control software and user’s guide are available for download from:
http://www.pc.ibm.com/qtechinfo/MIGR-40610.html
Process Control provides the following capabilities:
򐂰 Manage two or more processes as a group
򐂰 Server consolidation by the use of CPU affinity and memory constraints
򐂰 Secure servers from unauthorized applications or processes
򐂰 Resource utilization reporting and billing support

Process Control is designed to complement Windows 2000 Task Manager and
System Monitor, but not replace them. Configuring Process Control requires
administrator privileges on the server.
To uniquely identify processes, they are assigned aliases. This is achieved by
creating process alias rules. Once assigned aliases, processes can be arranged
into groups and rules applied equally to all members of the group.


The three main types of rules in Process Control are shown in Figure 5-25. They
are:
򐂰 Process alias rules
򐂰 Process execution rules
򐂰 Process group execution rules

Figure 5-25 Process Control rule types

5.3.1 Process alias rules
Windows 2000 uses the process ID associated with each process image name to
identify a process (see Figure 5-26). Process IDs change with each invocation of
a process and may be reused by the system. However, the use of image name
and process IDs can cause identification problems in applications such as Task
Manager when multiple processes have the same image name.

Figure 5-26 Task Manager, showing image names and process IDs


It is, however, possible to differentiate two or more processes with the same image name by creating a process alias rule, provided they are executed from different directories.
Creating a process alias rule in Process Control consists of two steps:
1. Identifying a process or group of processes
2. Giving the process or group of processes an alias name
To create a process alias rule, highlight the default process alias rule, and select
Action -> Insert Rule as shown in Figure 5-27.

Figure 5-27 Creating a process alias rule

Figure 5-28 on page 158 shows how the process alias is assigned to any
process originating in the IBM HTTP Server directory.


Figure 5-28 Assigning a process alias

Process Control identifies one or many processes based on one of the three
following character-matching techniques. Select the method you want to use.
򐂰 Subdirectory name identifies processes based on a directory match.
򐂰 Image name identifies processes based on the process image name.
򐂰 String name identifies processes based on image names, directory matches,
or a combination of an image name and a directory.
Wildcards and environment variables can be used in match strings: an asterisk (*) matches all remaining characters, and a question mark (?) matches any single character in that position.
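
If you want to experiment with match strings before entering them, Python’s fnmatch module uses very similar * and ? semantics. The paths and image names below are examples only; they are not taken from the Process Control documentation:

    from fnmatch import fnmatch

    # '*' matches all remaining characters; '?' matches exactly one character.
    print(fnmatch(r"C:\Program Files\IBM HTTP Server\Apache.exe",
                  r"C:\Program Files\IBM HTTP Server\*"))     # True
    print(fnmatch("guildftpd.exe", "guildftp?.exe"))          # True: '?' matches the single 'd'
    print(fnmatch("guildftpdaemon.exe", "guildftp?.exe"))     # False: '?' matches only one character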
Tip: Process aliases can consist of any characters except backslash (\),
comma (,), or double quotation mark (").
Similarly, we create a process alias rule for all of the processes that are
generated from software in the GuildFTPd directory, as shown in Figure 5-29 on
page 159.


Figure 5-29 Assigning an alias to all processes from the GuildFTPd directory

Once the two alias rules have been created, they are listed as shown in
Figure 5-30.

Figure 5-30 List of process alias rules


5.3.2 Process execution rules
Now that the processes have been identified by aliases through the use of
process alias rules, the next step is to define process execution rules. First
identify the process, using the aliases you assigned in the previous step, as
shown in Figure 5-31.

Figure 5-31 Identify the alias for process execution rule

Click Next, and Figure 5-32 on page 161 appears with the option of creating a
process group.


Figure 5-32 Creating a group process alias

We recommend creating a process group, even for one process. Here we are
naming a new process group web services by typing in its name. You can add
processes to an existing group by selecting it from the pull-down menu. Click
Next.
If you check the box Execute within a process group, the fields in the next
three windows are all greyed out. This is because you change these settings via
group rules, as described in the next section.

5.3.3 Group process execution rules
Group process execution rules allow you to control the behavior of a group of
processes. We recommend you use groups to aid in process management. To
set the rules, right-click the process group and click Properties, as shown in
Figure 5-33 on page 162.


Figure 5-33 Selecting the process group properties

The Properties window for the group appears (Figure 5-34).

Figure 5-34 Process group general properties

From the General tab, we are able to see the process count limit. Here we have set it to 10. This means that no more than 10 processes can be started from our group of Web services, which prevents overuse of these services from affecting other processes running on this server. The next tab is the Affinity tab, shown in Figure 5-35.

Figure 5-35 Processor affinity

Assigning processor affinity means that the processes in the group execute only on the specified processor(s). In our example, all of our Web services execute on processor 3 (the fourth processor, since processors are numbered starting with 0). The ability to determine which processors different groups of processes execute on is a variation of physical partitioning (PPAR).
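
Process Control presents affinity as a set of check boxes, but Windows itself represents processor affinity as a bitmask in which bit n corresponds to processor n. The following Python lines are included only to illustrate that numbering; they show why “processor 3” is the fourth processor:

    def affinity_mask(processors):
        # Build a Windows-style CPU affinity bitmask from zero-based processor numbers.
        mask = 0
        for cpu in processors:
            mask |= 1 << cpu
        return mask

    print(bin(affinity_mask([3])))      # 0b1000: only processor 3 (the fourth processor)
    print(bin(affinity_mask([0, 1])))   # 0b11:   processors 0 and 1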
The next tab is the Priority tab, as shown in Figure 5-36 on page 164.


Figure 5-36 Process group priority

Group process priority has the effect of assigning the same process priority to every process in the group. This overrides the priority setting assigned by the application vendor, and ensures that every process in the group is granted equal priority. This prevents any process in the group from consuming a disproportionately large or small number of CPU cycles in relation to other processes granted the same priority.
Attention: Realtime priority (the highest priority class) should be used with
great care. It is possible to create a process or process group that does not
relinquish control of the CPU(s) long enough for Windows 2000 to perform
other important work.
If realtime priority is to be used, you should be sure that the process or group
using it cannot consume all the CPUs simultaneously and that other important
tasks can run on CPUs that are not potentially blocked by the realtime tasks.
To determine what priority processes have natively, run the process, open Task Manager, select the Processes tab, select View -> Select Columns, and select Base Priority, as shown in Figure 5-37 on page 165.


Figure 5-37 Configuring Task Manager to show base priority

In all likelihood, the processes you are interested in have a base priority of
Normal, like most of the ones shown in Figure 5-38.

Figure 5-38 Priorities of running processes

Process group priority is rather a coarse setting. Much finer control can be
achieved by setting the scheduling class, which is the next tab in the Properties
window, as shown in Figure 5-39 on page 166.


Figure 5-39 Process scheduling class

The process scheduling class determines how much CPU time processes with
the same priority are allocated. The next tab in the Properties window is the
Memory tab, as shown in Figure 5-40 on page 167.


Figure 5-40 Memory limits for processes in groups

This tab lets you set three parameters, which are configured in increments of 1,024 KB (that is, 1 MB):
򐂰 The first one configures working set limits (minimum and maximum). The
working set is the physical memory allocated to the process group. If the
process group’s memory requirement exceeds the maximum size of the
working set, the balance will be satisfied by paging to disk, even if there is
available memory on the system.
򐂰 The second field defines an upper limit for the memory, physical and virtual,
each process can use.
򐂰 The third field sets the total amount of memory that can be used by the entire
group of processes.
These settings collectively can prevent any single process or the process group
from depriving the rest of the system of needed memory resources, either
physical memory or pagefile space.
The next tab on the Properties window, the Time tab, is relevant to batch
processes. It is shown in Figure 5-41 on page 168.


Figure 5-41 Per process and group time limits

There are two options: the first is to limit the amount of time a process is allowed
to run, and the second is the amount of time the entire group can use.
This is not elapsed time, but the time the processes actually spend executing on
the server. If these limits are exceeded, Process Control can be configured to
either terminate the processes or allow them to continue but generate an event in
the event log.
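
The difference between elapsed time and the time a process actually spends executing can be seen with any CPU-time counter. The short Python sketch below is unrelated to Process Control itself; it is included only to illustrate the kind of measurement the limits are based on:

    import time

    start_wall = time.time()
    start_cpu = time.process_time()

    time.sleep(2)                     # consumes elapsed (wall-clock) time but almost no CPU time
    total = sum(range(10_000_000))    # consumes CPU time

    print(f"Elapsed time: {time.time() - start_wall:.2f} s")
    print(f"CPU time:     {time.process_time() - start_cpu:.2f} s")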
Note: Process Control does not attempt to shut down a process gracefully. It
terminates a process immediately. This may cause data corruption or other
undesired results with some applications.
IBM Director can be configured to execute an event action plan based on any entry made into the Windows event log by Process Control. To do this, create a Simple Event Filter and select ProcCon as the event type (in the Event Type tab, Figure 5-42 on page 169, uncheck Any, and expand Windows NT Event Log -> System -> ProcCon).


To select Process Control events, expand Windows NT Event Log -> System -> ProcCon.

Figure 5-42 Selecting Process Control as the event type in IBM Director

Further details about creating event action plans in IBM Director can be found in
several sources, such as the redbook Implementing IBM Director Management
Solutions, SG24-6188.
The last tab in the Process Control Properties window is the Advanced tab, as
shown in Figure 5-43 on page 170.


Figure 5-43 Process Control advanced controls

The options in this tab are as follows:
򐂰 End process group when no processes in the group
This is useful to reset the counters for batch processes that are time limited.
Deselecting this option allows for accumulating usage statistics for billing
purposes. This can be accomplished using the Elapsed Time counter in the
Performance tool, as shown in Figure 5-44 on page 171.


Figure 5-44 Performance monitor counter

򐂰 Die on unhandled exceptions
This option suppresses any windows that may pop up as a result of a process failing, which helps the failing process terminate fully and release its committed resources.
This should be used on systems that have other management mechanisms in
place to alert an administrator that a process has terminated abnormally.
򐂰 Silent breakaway
This causes any process started by the executing process group to execute
outside of the process group. This implies that the newly created process will
not be subject to the Process Control rules of the parent process. It is,
however, now a candidate for Process Control, according to the master list of
Process Control rules, and will be subject to the rules on the next
management sweep.
By default, Process Control checks once a minute for processes that fit the
rules. This parameter is called Process Scan Interval, and is configured from
the Process Control Properties window (right-click Process Control (Local)
in Figure 5-25 on page 156 and click Properties then click the Service tab).


Specify here how often you want Process Control to check for new processes and apply the process alias rules.

Figure 5-45 Process Control scan interval

The other configurable entry, Request timeout interval, is used to determine
how long the service will wait for commands from a remote server. If the
timeout value is exceeded, control will pass back to the local console.
򐂰 Breakaway OK
This parameter allows a child process to break away from the process group. If this box is not checked, a child process that tries to break away from the group will not be permitted to execute. This prevents processes from creating other processes that are not managed under the group’s rules.
This may prevent the spread of Trojan horse viruses that enter a system attached to other processes and then attempt to run as services, or under another user’s credentials.
Process Control enables a very strict security method. It is possible to grant
execution privileges only to specific subdirectories. Make sure these rules are
moved to the top of the list of execution rules, because the rules are parsed in
top-down order. Then create a rule for subdirectory names that match * and
apply a process count of 0. This will prevent all processes that are initiated from
any other subdirectories from executing.
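
Because the rules are parsed in top-down order, the lockdown just described amounts to a first-match evaluation with a deny-all rule at the bottom. The following Python sketch shows the idea only; the directory paths are hypothetical and this is not Process Control’s internal implementation:

    from fnmatch import fnmatch

    # Rules are evaluated top-down; the first match wins.
    rules = [
        {"match": r"C:\Program Files\IBM HTTP Server\*", "process_count_limit": 10},
        {"match": r"C:\GuildFTPd\*",                     "process_count_limit": 10},
        {"match": "*",                                   "process_count_limit": 0},   # catch-all: deny
    ]

    def allowed(image_path):
        # Return True if the first matching rule permits any processes at all.
        for rule in rules:
            if fnmatch(image_path, rule["match"]):
                return rule["process_count_limit"] > 0
        return False

    print(allowed(r"C:\Program Files\IBM HTTP Server\Apache.exe"))   # True
    print(allowed(r"C:\Temp\unknown.exe"))                           # False: blocked by the '*' rule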


Abbreviations and acronyms
AC  alternating current
ACPI  advanced configuration and power interface
APIC  advanced programmable interrupt controller
ASM  Advanced System Management
ASR  Automatic Server Restart
ATS  Advanced Technical Support
BASP  Broadcom Advanced Server Program
BI  business intelligence
BIOS  basic input/output system
CCA  Citrix Certified Administrator
CD-ROM  compact disk read only memory
CEC  central electronic complex
CIM  Common Information Model
CPU  central processing unit
CRM  customer relationship management
DDR  double data rate
DDS  Digital Data Storage
DIMM  dual inline memory module
DLT  Digital Linear Tape
DOS  disk operating system
DP  dual processor
ECC  error checking and correcting
EE  Enterprise Edition
EMEA  Europe, Middle East, Africa
ER  enterprise rack
ERP  enterprise resource planning
ESCON  enterprise systems connection
ESS  Enterprise Storage Server
FC  Fibre Channel
FRU  field replaceable unit
GA  general availability
GB  gigabyte
GBps  gigabytes per second
Gbps  gigabits per second
HA  high availability
HAL  Hardware Abstraction Layer
HBA  host bus adapter
HCL  hardware compatibility list
HTTP  Hypertext Transfer Protocol
I/O  input/output
IBM  International Business Machines Corporation
IBS  International Business Systems
ID  identification
IEC  International Engineering Consortium
IIS  Internet Information Services
IP  Internet Protocol
ISP  Internet service provider
IT  information technology
KB  kilobyte
LAN  local area network
LED  light emitting diode
LPAR  logical partition
LTO  Linear Tape Open
MAC  media access control
MB  megabyte
MCNE  Master Certified Novell Engineer
MCSE  Microsoft Certified Systems Engineer
MP  multiprocessor
MPA  Management Processor Assistant
MSCS  Microsoft Cluster Service
NAS  Network Attached Storage
NBI  NetWare Bus Interface
NC  North Carolina
NEMA  National Electrical Manufacturers Association
NLB  Network Load Balancing
NS  not set
NUMA  non-uniform memory access
OLAP  online analytical processing
OS  operating system
PAE  physical address extension
PCI  peripheral component interconnect
PFA  Predictive Failure Analysis
PME  power management events
POST  power-on self test
PPAR  physical partitioning
PSM  platform support module
RAID  redundant array of independent disks
RAM  random access memory
RETAIN  remote technical assistance information network
RISC  reduced instruction set computing
ROM  read only memory
RPM  revolutions per minute; Red Hat Package Manager
RSA  Remote Supervisor Adapter
SAN  storage area network
SAR  Solution Assurance Review
SCM  supply chain management
SCSI  small computer system interface
SDRAM  synchronous dynamic random access memory
SMB  small-to-medium-sized businesses
SMP  symmetric multiprocessing
SNMP  Simple Network Management Protocol
SPM  System Partition Manager
SQL  structured query language
SR  standard rack
SRPM  Source Red Hat Package Manager
SUS  Software Update Subscription
SVGA  super video graphics array
TB  terabyte
TCO  total cost of ownership
TCP/IP  Transmission Control Protocol/Internet Protocol
TPC-C  Transaction Processing Council-C
UPS  uninterruptable power supply
URL  universal resource locator
USA  United States of America
USB  universal serial bus
VM  virtual machine
VPN  virtual private network
VRM  voltage regulator module
WW  worldwide
ZIF  zero insertion force

Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.

IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks”
on page 178.
򐂰 Netfinity Tape Solutions, SG24-5218
򐂰 Tuning IBM ^ xSeries Servers for Performance, SG24-5287
򐂰 Integrating IBM Director with Enterprise Management Solutions, SG24-5388
򐂰 S/390 Server Consolidation - A Guide for IT Managers, SG24-5600
򐂰 The IBM LTO Ultrium Tape Libraries Guide, SG24-5946
򐂰 Implementing IBM Director Management Solutions, SG24-6188
򐂰 Server Consolidation with the IBM ^ xSeries 440 and VMware ESX
Server, SG24-6852

Referenced Web sites
These Web sites are also relevant as further information sources:

IBM sites
򐂰 http://www.pc.ibm.com/support
Main support site for xSeries
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-39747.html
x440 device driver matrix
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-40610.html
Process Control User Guide
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-42067.html
x440 HAL for Windows NT 4.0
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-42317.html
x440 Advanced System Management driver


򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-42328.html
IBM ^ xSeries 440 Installation Guide in PDF format
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-42423.html
x440 hot-plug driver for NetWare
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-43369.html
Broadcom Ethernet driver for NetWare
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-43532.html
CPU utilization fix for NetWare
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-43675.html
Tip about avoiding damaging the centerplane when removing an SMP
Expansion Module
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-43679.html
Tip about NetWare 5.1 automatically rebooting after a shutdown
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-43876.html
Document Hints, Tips, and Frequently Asked Questions for the xSeries 440
Quick Reference.
򐂰 http://www.pc.ibm.com/qtechinfo/MIGR-4J2QEQ.html
ActivePCI Software package
򐂰 http://www.pc.ibm.com/us/compat/nos/matrix.shtml
ServerProven page for information about operating system compatibility
򐂰 http://www.pc.ibm.com/us/compat/lan/matrix.html
ServerProven page for information about LAN adapter compatibility
򐂰 http://www.storage.ibm.com/hardsoft/disk/fastt/index.html
Information on the Fibre Array Storage Technology (FAStT) family of products
򐂰 http://ibm.com/pc/ww/eserver/xseries/scsi_raid.html
Information on SCSI products from IBM
򐂰 http://ibm.com/pc/ww/eserver/xseries/tape.html
Information on tape products from IBM
򐂰 http://w3.ibm.com/support/assure
Solution Assurance (accessible by IBM employees only)
򐂰 http://www.ibm.com/support/assure/esar
eSAR (electronic Solution Assurance Review)
򐂰 http://www.pc.ibm.com/europe/configurators
Configuration tools for EMEA and Asia-Pacific countries
򐂰 http://www.pc.ibm.com/us/eserver/xseries/library/configtools.html
Configuration tools for the US
򐂰 http://www.pc.ibm.com/ww/eserver/xseries/benchmarks/series.html
Benchmark results for xSeries systems


򐂰 http://www.pc.ibm.com/ww/eserver/xseries/vmware.html
Information about VMware on xSeries
򐂰 http://www.pc.ibm.com/ww/eserver/xseries/windows/datacenter.html
Information about the Datacenter Server offerings from IBM

Intel sites
򐂰 http://www.intel.com/design/xeon/prodbref
Information on the Xeon DP processor
򐂰 http://www.intel.com/design/xeon/xeonmp/prodbref
Information on the Xeon MP processor
򐂰 http://www.intel.com/eBusiness/products/server/processor/xeon/wp020901_sum.htm
Whitepaper: Intel Xeon Processor Family for Servers with Hyper-Threading
Technology
򐂰 http://www.intel.com/technology/hyperthread
Information on Hyper-Threading

Microsoft sites
򐂰 http://support.microsoft.com/default.aspx?scid=kb;en-us;Q283037
Use of the PAE switch to enable more than 4 GB of RAM

Novell sites
򐂰 http://support.novell.com
Main Novell support site
򐂰 http://support.novell.com/filefinder
Search for software updates

Red Hat sites
򐂰 http://rhn.redhat.com
Red Hat Network
򐂰 https://www.redhat.com/docs/manuals/advserver/RHLAS-2.1-Manual/pdf/rhl-ig-as-x86-en-21.pdf
Red Hat Linux Advanced Server Installation Guide

PCI Special Interest Group sites
򐂰 http://www.pcisig.com/data/specifications/lowp_ecn.pdf
Specifications of low-profile PCI adapters


How to get IBM Redbooks
You can order hardcopy Redbooks, as well as view, download, or search for
Redbooks at the following Web site:
ibm.com/redbooks

You can also download additional materials (code samples or diskette/CD-ROM
images) from that site.

IBM Redbooks collections
Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the
Redbooks Web site for information about all the CD-ROMs offered, as well as
updates and formats.


Index
Numerics
03K9309, ASM Interconnect Cable Kit 78
10K3661, Belkin USB to Serial Adapter 83
20-stage pipeline 16
220 V power 103
22P5298, IBM USB Serial/Parallel Adapter 83
22P6801, NetXtreme 1000 T Ethernet adapter 73
22P7801, NetXtreme 1000 SX Fiber Ethernet 73
31P5998, RXE-100 6-slot Expansion Kit 78
31P6087, 3.5 m management cable kit 82
31P6088, 8 m management cable kit 82
31P6102, 3.5 m Remote I/O cable kit 76, 80
31P6103, 8 m Remote I/O cable kit 80
31P6301, NetXtreme 1000T Ethernet adapter 73
31P8300, 1 GB DIMM 66
31P8840, 2 GB DIMM 66
32-bit PCI adapters 69
32P8340, SMP Expansion Module 17, 64–65
32P8705, 1.4 GHz Xeon Processor MP 64
32P8706, 1.5 GHz Xeon Processor MP 64
32P8707, 1.6 GHz Xeon Processor MP 64
33L3324, 512 MB DIMM 66
37L3533, 2.4 GHz Xeon Processor DP 16, 64
4816-12U, 9-16-way license kit 95
4816-1BX, Datacenter eight-way preload kit 94
4816-1DU, 1-16-way license kit 95
4816-ABX, Datacenter eight-way subscription 93
4816-ADX, Datacenter 16-way subscription 93
59P5171, 1.5 GHz Xeon Processor MP 64
59P5172, 1.9 GHz Xeon Processor MP 64
59P5173, 2.0 GHz Xeon Processor MP 64
71P7919, SMP Expansion Module 17, 65
8687-1AX 94
8687-1RX 3
8687-2AX 94
8687-2RX 3
8687-3AX 94
8687-3RX 3
8687-3RY 3
8687-4RX 3, 98
8687-4RY 3
8687-5RX 3, 98
8687-6RX 3, 98


8687-7RX 3

A
Active Memory 19
mirroring 21, 108
Active PCI driver
NetWare 122, 124
Windows 2000 116
Active PCI Manager 130–150
adapter keying 142
Add Card Wizard 137
adding adapters 136
analysis of existing adapters 144
Blink Slot command 139
boot devices 145
bus number 134
chassis number 132
configuration optimization 144
current speed 134
download 130
first slot label 133
inserting adapters 136
keying 142
LED status 134
lock options 140
low profile 134
max slot speed 134
optimization 144
performance analysis 146
PME signal 134
slot view 132
starting 131
stopping the adapter 148
table view 136
tree view 135
unit name 133
unlisted adapters 141
user’s guide 131
using 132
wizard 137
Active PCI-X 47
adapter speeds 70
adapter teaming 73


Advanced Dynamic Execution 16, 45
affinity 163
air flow, rack 103
application integration 51
application scalability 100
availability 59

B
Baan 38–39
backup solutions 89
BI 39
BIOS
Hyper-Threading 109
memory mirroring 67, 108
updates 108
block diagrams 9
Broadcom 23
adapter teaming 73
Broadcom driver
NetWare 122
Windows 2000 116
Broadcom Gigabit Ethernet controller 23, 72
business intelligence 39
Business Objects 39

C
cabling 103
cable management in racks 103
Ethernet 23, 73
internal SCSI cables 71
RXE-100 80
cache 13, 19, 45
Capacity Manager 31
CEC 8, 17
centralization 51
Chipkill memory 22, 46
CIM instrumentation 30
Cluster Manager 31
clustering 41, 49
virtual clusters 42
VMware 41
Cognos 39
collaboration 40
Computer Type 113
configuration rules
See rules
configuration tools
rack configurator 103


consolidation 36, 49
CPU affinity 163
CPU utilization, NetWare 122
CPUs 12, 64
CRM 39
Crystal Decisions 39
custom HAL 112, 114, 119
customer relationship management 39
cyclone chip 7

D
data integration 51
database applications 37, 40
Datacenter Server 92, 94–95
eight-way preload kit 94
features 93
models 94
Software Update Subscription 93
DB2 37, 40
DDR memory 19
device drivers 111
diagnostics 108
DIMMs 19
layout 65
maximum 18
standard memory 9
Domino 36
doors on racks 103
drive bays 2, 25, 83
dynamic partitioning 50

E
Electronic Service Agent 32
enabling 108
enterprise applications 37
enterprise node 2
Enterprise Storage Server 88
Enterprise X-Architecture 1
ERP 38
eSAR 105
Ethernet, Gigabit 2, 23, 72
adapter teaming 73
Event Log
Process Control 168
Exchange 36, 40


F
fans 24–25
fault tolerance 67
features 1, 43
Fibre Channel storage options 85
Foster 12–13, 64
front panel 25
frontside bus 9, 13

G
Gallatin 12–13, 64
SMP Expansion Module support 17
Gigabit Ethernet 23, 72

H
HAL 112
Hardware Compatibility List 111
hardware configuration 12
hardware partitioning 7
HCL 111
hot-plug PCI
See Active PCI
hot-swap fans 24
Hyperion 39
Hyper-Threading 101
disabled 15, 110
enabling 109
Linux 121
NetWare 127
operating system support 90
optimized 91
positioning 44
technology 14
Windows 2000 117

I
IBM Director 30, 129
Active PCI Manager 130
event actions plans 168
System Partition Manager 150
IBM XA-32 chipset 7, 43
installation 102, 107–128
Hyper-Threading 109
Linux 119
memory mirroring 108
NetWare 121
VMware ESX Server 128
Windows 2000 Server 112
Intel PRO/1000 XT 73
Intel Xeon Processor DP 16–17
positioning 45
Intel Xeon Processor MP 13–16
positioning 44
interleaving 9
Invensys 38

J
JD Edwards 38

L
LEDs, Light Path Diagnostics 26
Level 3 cache 13
Level 4 cache 19
licensing 100
Light Path Diagnostics 25
SMP Expansion Module 18
Linux
compiling drivers 120
Hyper-Threading 121
installation 119
RXE-100 support 121
summit kernel 120
logical partitioning 6, 50
logical processors 14

M
management 129
Management Processor Assistant 31
memory 19, 65
Active Memory 19
Chipkill memory 22
disabling during POST 22
memory mirroring 21, 67, 108
Memory ProteXion 19, 46
memory scrubbing 20, 46
performance 67
ports 65
standard DIMMs 3
messaging 40
Microsoft Cluster Service 111
migration 37
models 2–3
Datacenter eight-way 93
VMware ESX Server 98


N
Navision 38
NBI.NLM 122
NetBAY racks 102
NetBurst architecture 9
NetWare 29, 91, 97, 121–128
Active PCI driver 122, 124
Broadcom driver 122
Bus Interface driver 122
Hyper-Threading support 127
Remote Supervisor Adapter 122
RXE-100 support 126
NetXtreme 73
node 2
Novell NetWare
See NetWare

O
OLAP 39
one Net (NetWare) 97
one-box clustering 41
online analytical processing 39
Onyx 39
operating systems 28, 90
device drivers 111
Linux 97, 119
NetWare 97, 121
VMware ESX Server 98, 128
Windows .NET 97
Windows 2000 96, 112
Windows NT 4.0 EE 95
Oracle 37–38, 40

P
parallel ports 24
partial mirroring 67
partitioning 6
benefits 49
PCI bridges 10
PCI scan order 72
PCI slots 23
RXE-100 78
ServeRAID 4H 71
PCI-X 47
See also RXE-100
PCI-X and PCI coexistence 70
PCI-X slots in the x440 68
PeopleSoft 38–39


performance
application scalability 100
disk subsystem 88
memory configuration 67
physical consolidation 51
physical partitioning 7
pipeline 16
planning 63–105
ports 65
positioning 35–61
power redundancy 103
power supplies 24–25
Prestonia 12, 64
ProcCon 168
Process Control 155–172
advanced options 170
affinity 163
alias rules 156
breakaway 171–172
check for new processes 171
CPU affinity 163
creating a rule 157
download 155
ending a task 168
event action plans 168
execute within a process group 161
execution rules 160–161
group process execution rules 161
group properties 162
image name 157
logging 168
memory limits 167
new processes 171
priority 164
process alias rules 156
process execution rules 160
process group 161
process ID 156
processor affinity 163
scheduling class 166
silent breakaway 171
Task Manager 157
time to run 168
unhandled exceptions 171
wildcard characters in image names 158
processor affinity 163
processors 12, 64
product line 2
product publications 128
publications 128

Q
quad-pump design 9

R
rack configurator 103
rack installation 102
Rack Manager 31
RAID 84
RAID Manager 31
rear panel 11
Red Hat Linux 29, 91, 97
See also Linux
Redbooks Web site 178
Contact us xii
redundancy 24
power 103
Remote Expansion I/O port 10
Remote I/O cable 103
Remote Supervisor Adapter
16-way configurations 28
connectivity 77
driver
NetWare 122
Windows 2000 116
features 27
firmware 108
RS-485 23
rules
memory 66
memory mirroring 67
PCI slots 69
processors 64
RXE-100 79
RXE Expansion Port 10
RXE-100 78
cabling 80
Linux support 121
NetWare 126
slots 79
Windows 2000 Server installation 112

S
S3 driver 117
SAP R/3 38–39
SAS 39

scalability ports 18
scan order, PCI 72
SCM 38
scrubbing, memory 20, 46
SCSI internal cabling 71
SCSI ports 23
SDRAM 9, 19
serial port, not in x440 83
serial ports 24
server consolidation 36, 49, 51–61
application integration 56
availability 59
benefits 57
business continuity 60
centralization 52
data integration 55
flexibility 58
logical consolidation 56
physical consolidation 53
rational consolidation 57
service level agreements 60
single point of control 58
TCO 58
types 51
why consolidate? 57
ServeRAID 84
driver
NetWare 124
Windows 2000 114, 116
ServeRAID 4H support 71
shark 88
Siebel 39
single box clustering 41
SMP expansion cable 103
SMP Expansion Module 17, 65
16-way configuration 75
connectivity 75
LEDs 18
memory 65
part numbers 17
processors 65
SMP Expansion Ports 10, 18
software installation 107–128
Linux 119
NetWare 121
Windows 2000 112
software partitioning 6
Software Rejuvenation 32
Software Update Subscription 93
Solution Assurance Review (SAR) 104
SQL Server 37, 40
standard memory 3
static partitioning 50
storage options 83
Summit chipset 7
supply chain management 38
support
memory configurations 66
operating systems 28, 90
SuSE 29, 91, 97
System Availability 32
system management 129
System Partition Manager 150–155
CD-ROM drive 152
complex 151
diskette drive 152
headless mode 152
IBM Director tasks 153
maximum configuration 152
node 151
operating system support 151
partition 151
Partition Assistant 153
partition options 151
positioning 48
twin-tailing 152
system partitioning 6, 49
benefits 50

T
tape backup 89
Task Manager 14, 118
TCO 58
teaming 73
TotalStorage solutions 85
Trigger Tool 104
twin-tailing 152
twister chip 7

U
Ultra 160 SCSI ports 23
USB ports 23
USB to serial adapter 83

V
vertical position not supported 102
video controller 23
video PCI adapters 71
virtual clustering 42
VMware ESX Server 6, 29, 92, 98, 128
clustering 41
models 98

W
WebSphere 41
weight 103
Windows .NET 29, 91, 97
Windows 2000
Advanced Server Hyper-Threading bug 15, 110
Hyper-Threading, effect of 15
ServeRAID driver 114
support 29, 91
Windows 2000 Datacenter Server 92–95
See also Datacenter Server
Windows 2000 Server 96
Broadcom driver 116
Computer Type 113
custom HAL 112, 114, 119
Device Manager 116
Hyper-Threading 117
installation 112
RXE-100 112, 118
ServeRAID driver 116
Service Pack 114
Windows NT 4.0 EE 95
winnipeg chip 7

X
XA-32 chipset 7
X-Architecture 1
XceL4 Server Accelerator Cache 19
positioning 46
Xeon Processor DP 16–17
positioning 45
Xeon Processor MP 13–16, 64
compare with Pentium III 16
phase 1 models 3, 94, 98
positioning 44
XpandOnDemand 2, 47

Z
ZIF socket 13


Back cover

IBM xSeries 440 Planning and Installation Guide

Describes the technical details of the x440 models

Helps you prepare for and perform an installation

Covers key IBM Director management tools

The IBM ^ xSeries 440 is IBM’s flagship industry standard server and is the first full implementation of the 32-bit IBM XA-32 chipset, code named “Summit”, as part of the Enterprise X-Architecture strategy. The x440 provides new levels of high availability and price performance, and offers scalability from two-way to 16-way SMP, from 2 GB to 128 GB of memory, and up to 24 PCI slots, all in a single system image.

This IBM Redbook is a comprehensive resource on the technical aspects of the server, and is divided into five key subject areas:

- Chapter 1, “Technical description” introduces the server and describes the key features and how they work.
- Chapter 2, “Positioning” examines the types of applications that would be used on the x440.
- Chapter 3, “Planning” describes the aspects of planning to purchase and planning to install the x440.
- Chapter 4, “Installation” goes through the process of installing key operating systems.
- Chapter 5, “Management” describes how to use the key IBM Director extensions designed for the x440.

A partner redbook is Server Consolidation with the IBM ^ xSeries 440 and VMware ESX Server, SG24-6852.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks
SG24-6196-00

ISBN 0738427330


