IBM Netezza 7.0 and Later

IBM Netezza System Administrator’s Guide

20282-20 Rev. 1

Revised: October 9, 2012

Note: Before using this information and the product that it supports, read the information in “Notices and Trademarks” on page E-1.

© Copyright IBM Corporation 2001, 2012.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Preface
1 Administration Overview
Administrator’s Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Administration Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Initial System Setup and Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Netezza Software Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Managing the External Network Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
Managing Domain Name Service (DNS) Updates . . . . . . . . . . . . . . . . . . . . . . . . 1-5
Setting up Remote Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Administration Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Other Netezza Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8

2 Installing the Netezza Client Software
Client Software Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Installing the Netezza CLI Client on a Linux/UNIX System . . . . . . . . . . . . . . . . . . . . . 2-3
Installing on Linux/UNIX Clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Setting the Path for Netezza CLI Client Commands . . . . . . . . . . . . . . . . . . . . . . . 2-5
Removing the CLI Clients from UNIX Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Installing the Netezza Tools on a Windows Client . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Installation Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Installing the Netezza Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Environment Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7
Removing the IBM Netezza Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7
Installing the Web Admin Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7
Installing the RPM and Shared Library Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8
Installing the Web Admin Server and Application Files . . . . . . . . . . . . . . . . . . . . 2-8
Upgrading the Web Admin Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10
Removing the Web Admin Interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10
Contents of the WebAdmin Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10
Installing the Netezza SSL Site Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
Clients and Unicode Characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
Client Timeout Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-12
Netezza Port Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
Changing the Default Port Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
Specifying Non-Default NPS Port Numbers for Clients . . . . . . . . . . . . . . . . . . . 2-14
Creating Encrypted Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-15
Using Stored Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16

3 Using the Netezza Administration Interfaces
Netezza CLI Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
Summary of Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Command Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Netezza CLI Command Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Using the Netezza Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
Specifying Identifiers in Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
SQL Command Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
nzsql Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
NzAdmin Tool Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11
Client Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11
Starting the NzAdmin Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
Logging In to NzAdmin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
Connecting to the Netezza System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
Displaying System Components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Interpreting the Color Status Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Main Menu Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
Using the NzAdmin Tool Hyperlinks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
Administration Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Setting Automatic Refresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Controlling NzAdmin Session Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Web Admin Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Using the Web Admin Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
Understanding the Web Admin Page Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20

4 Managing Netezza HA Systems
Linux-HA and DRBD Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Differences with the Previous Netezza HA Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Linux-HA Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Heartbeat Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
CIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Important Information about Host 1 and Host 2 . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Managing Failover Timers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Netezza Cluster Management Scripts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Identifying the Active and Standby Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Monitoring the Cluster and Resource Group Status . . . . . . . . . . . . . . . . . . . . . . . 4-6
nps Resource Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Failover Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Relocate to the Standby Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Safe Manual Control of the Hosts (And Heartbeat) . . . . . . . . . . . . . . . . . . . . . . . 4-9
Transition to Maintenance (Non-Heartbeat) Mode . . . . . . . . . . . . . . . . . . . . . . . 4-10
Transitioning from Maintenance to Clustering Mode . . . . . . . . . . . . . . . . . . . . . 4-11
Cluster Manager Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
Logging and Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
DRBD Administration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
Monitoring DRBD Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Sample DRBD Status Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Split-Brain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Administration Reference and Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
IP Address Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Forcing Heartbeat to Shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Shutting Down Heartbeat on Both Nodes without Causing Relocate . . . . . . . . . . 4-17
Restarting Heartbeat after Maintenance Network Issues . . . . . . . . . . . . . . . . . . 4-17
Resolving Configuration Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
Fixed a Problem, but crm_mon Still Shows Failed Items . . . . . . . . . . . . . . . . . . 4-18
Output From crm_mon Does Not Show the nps Resource Group . . . . . . . . . . . . . 4-18
Linux Users and Groups Required for HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Checking for User Sessions and Activity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19

5 Managing the Netezza Hardware
Netezza Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
Displaying Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Hardware Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
Hardware IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Hardware Location. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Hardware Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Hardware States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Data Slices, Data Partitions, and Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
IBM Netezza 100/1000 Storage Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
IBM Netezza C1000 Storage Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
System Resource Balance Recovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Hardware Management Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
Callhome File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Displaying Hardware Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Managing Hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Managing SPUs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Managing Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Managing Data Slices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Displaying Data Slice Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Monitor Data Slice Status. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Regenerate a Data Slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21
Rebalance Data Slices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23
Displaying the Active Path Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Handling Transactions during Failover and Regeneration . . . . . . . . . . . . . . . . . . 5-25
Automatic Query and Load Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Power Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
PDU and Circuit Breakers Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Powering On the IBM Netezza 1000 and IBM PureData System for Analytics N1001 . . . 5-28
Powering Off the IBM Netezza 1000 or IBM PureData System for Analytics N1001 . . . 5-29
Powering on an IBM Netezza C1000 System . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Powering off an IBM Netezza C1000 System . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
NEC InfoFrame DWH PDU and Circuit Breakers Overview . . . . . . . . . . . . . . . . . 5-32
Powering On the NEC InfoFrame DWH Appliance . . . . . . . . . . . . . . . . . . . . . . . 5-33
Powering Off an NEC InfoFrame DWH Appliance . . . . . . . . . . . . . . . . . . . . . . . 5-34

6 Managing the Netezza Server
Software Revision Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
Displaying the Netezza Software Revision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
Displaying the Software Revision Levels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
System States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
Displaying the Current System State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
System States Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
Waiting for a System State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
Managing the System State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6
Start the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6
Stop the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Pause the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Resume the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Take the System Offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Restart the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Overview of the Netezza System Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
System States during Netezza Start-Up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-10
System Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11
System Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Backup and Restore Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Bootserver Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Client Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Database Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Event Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-14
Flow Communications Retransmit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Host Statistics Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Load Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Postgres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Session Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
SPU Cores Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
Startup Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
Statistics Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
The nzDbosSpill File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18
Display Configuration Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18
Changing the System Registry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-19

7 Managing Event Rules
Template Event Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Managing Event Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6
Copying a Template Event to Create an Event Rule . . . . . . . . . . . . . . . . . . . . . . . 7-7
Copying and Modifying a User-Defined Event Rule . . . . . . . . . . . . . . . . . . . . . . . 7-7
Generating an Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7
Deleting an Event Rule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Disabling an Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Adding an Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Specifying the Event Match Criteria. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Specifying the Event Rule Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
Specifying the Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
The sendMail.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
Aggregating Event E-mail Messages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-16
Creating a Custom Event Rule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-18
Template Event Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
Specifying System State Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
Hardware Service Requested . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-20
Hardware Needs Attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-21
Hardware Path Down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Hardware Restarted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
Specifying Disk Space Threshold Notification. . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
Specifying Runaway Query Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
Monitoring the System State. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-27
Monitoring for Disk Predictive Failure Errors. . . . . . . . . . . . . . . . . . . . . . . . . . . 7-28
Monitoring for ECC Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
Monitoring Regeneration Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
Monitoring Disk Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30
Monitoring Hardware Temperature. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
Monitoring System Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-33
Query History Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-34
Monitoring SPU Cores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Monitoring Voltage Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Monitoring Transaction Limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-38
Switch Port Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-39
Reachability and Availability Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-39
Event Types Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
Network Interface State Change Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
Topology Imbalance Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
S-Blade CPU Core Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-41
Displaying Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-41

8 Establishing Security and Access Control
Netezza Database Users and Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
Develop an Access Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Default Netezza Groups and Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Choosing a User Authentication Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Configuring Password Content Controls and Expiration . . . . . . . . . . . . . . . . . . . . 8-4
Creating Netezza Database Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
Altering Netezza Database Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Deleting Netezza Database Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Creating Netezza Database Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Altering Netezza Database Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Deleting Netezza Database Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Security Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Administrator Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Object Privileges on Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Object Privileges by Class. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11
Scope of Object Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11
Revoking Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
Privileges by Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
Indirect Object Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
Always Available Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
Creating an Administrative User Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
Logon Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Local Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
LDAP Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Commands Related to Authentication Methods. . . . . . . . . . . . . . . . . . . . . . . . . 8-19
Passwords and Logons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20
Netezza Client Encryption and Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-22
Configuring the SSL Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-22
Configuring the Netezza Host Authentication for Clients . . . . . . . . . . . . . . . . . . 8-23
Commands Related to Netezza Client Connection Methods . . . . . . . . . . . . . . . . 8-26
Setting User and Group Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26
Specifying User Rowset Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-27
Specifying Query Timeout Limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Specifying Session Timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Specifying Session Priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Logging Netezza SQL Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Logging Netezza SQL Information on the Server . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Logging Netezza SQL Information on the Client . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Group Public Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-31

9 Managing User Content on the Netezza Appliance
Creating Databases and User Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1
Understanding Table Size and Storage Space . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
Best Practices for Disk Space Usage in Tables . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
Database and Table Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
Accessing Rows in Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
Understanding Transaction IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
Creating Distribution Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
Selecting a Distribution Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Criteria for Selecting Distribution Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Choosing a Distribution Key for a Subset Table. . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Distribution Keys and Collocated Joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-7
Dynamic Redistribution or Broadcasts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-7
Verifying Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-7
Avoiding Data Skew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-8
Specifying Distribution Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9
Viewing Data Skew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9
Using Clustered Base Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-11
Organizing Keys and Zone Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-12
Selecting Organizing Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-12
Reorganizing the Table Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-13
Copying Clustered Base Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-14
Updating Database Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-14
Maintaining Table Statistics Automatically. . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
Running the GENERATE STATISTICS Command . . . . . . . . . . . . . . . . . . . . . . . 9-16
Just in Time Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-16
Zone Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-17
Grooming Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-18
GROOM and the nzreclaim Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-19
Identifying Clustered Base Tables that Require Grooming . . . . . . . . . . . . . . . . . 9-19
About the Organization Percentage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Groom and Backup Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Managing Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Using the nzsession Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-22
Running Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-23
Transaction Control and Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-23
Transactions Per System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-23
Transaction Concurrency and Isolation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-24
Concurrent Transaction Serialization and Queueing, Implicit Transactions. . . . . . 9-24
Concurrent Transaction Serialization and Queueing, Explicit Transactions . . . . . . 9-25
Netezza Optimizer and Query Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-26
Execution Plans. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-26
Displaying Plan Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-27
Analyzing Query Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-28
Viewing Query Status and History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-28

10 Backing Up and Restoring Databases
General Information on Backup and Restore Methods . . . . . . . . . . . . . . . . . . . . . . . 10-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Database Completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Portability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Compression in Backups and Restores. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
Multi-Stream Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
Special Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-5
Upgrade/Downgrade Concerns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-6
Compressed Unload and Reload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-6
Encryption Key Management in Backup and Restore . . . . . . . . . . . . . . . . . . . . . 10-6
Filesystem Connector for Backup and Recovery . . . . . . . . . . . . . . . . . . . . . . . . 10-7
Third-Party Backup and Recovery Solutions Support . . . . . . . . . . . . . . . . . . . . . 10-8
Host Backup and Restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-8
Creating a Host Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-8
Restoring the Host Data Directory and Catalog . . . . . . . . . . . . . . . . . . . . . . . . . 10-9
Using the nzbackup Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-10
The nzbackup Command Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-11
Specifying Backup Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-14
nzbackup Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-15
Backup Archive Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-17
Incremental Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-17
Backup History Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-19
Backing Up and Restoring Users, Groups, and Permissions . . . . . . . . . . . . . . . 10-20
Using the nzrestore Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-22
The nzrestore Command Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-23
Specifying Restore Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-27
nzrestore Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-28
Maintaining Database Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-29
Restoring Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-29
Understanding Incremental Restoration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-31
Using the Symantec NetBackup Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-33
Installing the Symantec NetBackup License. . . . . . . . . . . . . . . . . . . . . . . . . . 10-33
Configuring NetBackup for a Netezza Client . . . . . . . . . . . . . . . . . . . . . . . . . . 10-34
Integrating Symantec NetBackup to Netezza . . . . . . . . . . . . . . . . . . . . . . . . . 10-35
Procedures for Backing Up and Restoring Using Symantec NetBackup . . . . . . . 10-39
Using the IBM Tivoli Storage Manager Connector . . . . . . . . . . . . . . . . . . . . . . . . . 10-41
About the Tivoli Backup Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-41
Configuring the Netezza Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-42
Configuring the Tivoli Storage Manager Server . . . . . . . . . . . . . . . . . . . . . . . . 10-46
Special Considerations for Large Databases . . . . . . . . . . . . . . . . . . . . . . . . . . 10-52
Running nzbackup and nzrestore with the TSM Connector . . . . . . . . . . . . . . . . 10-54
Host Backup and Restore to the TSM Server . . . . . . . . . . . . . . . . . . . . . . . . . 10-55
Backing up and Restoring Data Using the TSM Interfaces . . . . . . . . . . . . . . . . 10-56
Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-57
Using the EMC NetWorker Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-59
Preparing your System for EMC NetWorker Integration . . . . . . . . . . . . . . . . . . 10-59
NetWorker Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-60
NetWorker Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-60
NetWorker Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-65

11 Query History Collection and Reporting
Query History Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
Query History and Audit History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
Planning Query History Monitoring Needs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
Planning the History Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-3
Planning Query History Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-5
Enabling History Collection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-6
Managing Access to the History Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-7
Query History Loading Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-7
History Batch Directory Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-9
Configuring the Loader Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-9
Query History Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-11
Disabling History Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-11
Changing the Owner of a History Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-11
Changing Query History Configuration Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-12
Displaying Query History Configuration Settings . . . . . . . . . . . . . . . . . . . . . . . . . . 11-12
Dropping History Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-13
Query History Event Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-14
Managing History Configurations Using NzAdmin . . . . . . . . . . . . . . . . . . . . . . . . . 11-14
Query History Views and User Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15
Query History and Audit History Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15
_v_querystatus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
_v_planstatus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
$v_hist_queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-18
$v_hist_successful_queries and $v_hist_unsuccessful_queries. . . . . . . . . . . . . 11-19
$v_hist_incomplete_queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-19
$v_hist_table_access_stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-20
$v_hist_column_access_stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-20
$v_hist_log_events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-21
$hist_version. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
$hist_nps_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
$hist_log_entry_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-23
$hist_failed_authentication_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . 11-23
$hist_session_prolog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 11-24
$hist_session_epilog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 11-26
$hist_query_prolog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-27
$hist_query_epilog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-28
$hist_query_overflow_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 11-29
$hist_service_$SCHEMA_VERSION. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-30
$hist_state_change_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-31
$hist_table_access_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-32
$hist_column_access_$SCHEMA_VERSION. . . . . . . . . . . . . . . . . . . . . . . . . . 11-33
$hist_plan_prolog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-34
$hist_plan_epilog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-36
History Table Helper Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-36
FORMAT_QUERY_STATUS () . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-37
FORMAT_PLAN_STATUS () . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-37
FORMAT_TABLE_ACCESS() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-37
FORMAT_COLUMN_ACCESS() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-38
Example Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-38

12 Managing Workloads on the Netezza Appliance
Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-1
Service Level Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-1
WLM Feature Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-2
Resource Sharing Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-2
Concurrent Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-3
Managing Short Query Bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-4
Managing GRA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-6
Resource Percentages and System Resources. . . . . . . . . . . . . . . . . . . . . . . . . . 12-6
Assigning Users to Resource Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-7
Resource Groups Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-7
GRA Allocations Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-9
Resource Allocations for the Admin User . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-10
Allocations for Multiple Jobs in the Same Group. . . . . . . . . . . . . . . . . . . . . . . 12-11
Priority and GRA Resource Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-12
Guaranteed Resource Allocation Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-13
Tracking GRA Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-14
Monitoring Resource Utilization and Compliance . . . . . . . . . . . . . . . . . . . . . . 12-15
Managing PQE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-19
Netezza Priority Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-20
Managing the Gate Keeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-21

13 Displaying Netezza Statistics
Netezza Stats Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-1
Database Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-2
DBMS Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-3
Host CPU Table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-3
Host File System Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-4
Host Interface Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-4
Host Management Channel Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-6
Host Network Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-7
Host Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-8
Hardware Management Channel Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-9
Per Table Per Data Slice Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-10
Query Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-10
Query History Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-11
SPU Partition Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-12
SPU Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-13
System Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-13
Table Table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-14
Displaying System Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-15
The nzstats Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-15
To display table types and fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-15
To display a specific table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-15

14 Managing the MantraVM Service
Mantra Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-1
MantraVM Hostname and IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-2
MantraVM and High Availability Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-2
MantraVM Users and Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-2
MantraVM Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-2
Mantra Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-3
Starting and Stopping the MantraVM Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-3
Starting the MantraVM Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-3
Stopping the MantraVM Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-3
Displaying the Status of the MantraVM Service. . . . . . . . . . . . . . . . . . . . . . . . . 14-4
Managing the MantraVM Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-4
Displaying the MantraVM Service Configuration . . . . . . . . . . . . . . . . . . . . . . . . 14-4
Displaying the MantraVM Service Version. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-5
Enabling the MantraVM Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-5
Disabling the MantraVM Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-5
Setting the MantraVM IP Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-6
Reconfiguring the MantraVM IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-6
Configuring the MantraVM Monitoring Interfaces . . . . . . . . . . . . . . . . . . . . . . . 14-7
Displaying the MantraVM Monitoring Interfaces . . . . . . . . . . . . . . . . . . . . . . . . 14-8
Accessing the Mantra Web Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-8
Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-9
Double-Byte Character Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-9
Event Throttling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-9
/nz Partition is Full . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-9
Mantra Inactivity Timeout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-10

Appendix A: Netezza CLI
Summary of Command Line Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Command Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-4
Commands without Special Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-6
Exit Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-6
Netezza CLI Command Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-6
nzbackup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
nzcontents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
nzconvert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
nzds. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-9
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-11
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-11
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-11
nzevent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-12
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-12
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-12
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-13
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-16
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-17
nzhistcleanupdb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-17
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-17
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-18
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-18
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-19
nzhistcreatedb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-20
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-20
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-20
Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-21
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-22
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-22
nzhostbackup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-22
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-23
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-23
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-23
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-24
nzhostrestore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-24
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-24
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-25
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-25
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-26
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-26
nzhw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-26
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-27
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-27
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-30
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-30
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-31
nzload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
nzpassword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-34
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-34
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-35
nzreclaim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-35
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-35
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-37
nzrestore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-37
nzrev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-37
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-37
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-38
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-38
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-38
nzsession . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-39
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-39
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-39
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-40
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-41
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-43
nzspupart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-43


Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-44
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-44
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-44
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-46
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-46
nzstart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-48
nzstate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-48
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-48
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-49
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-49
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-50
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-50
nzstats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-50
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-50
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-51
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-51
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-52
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-53
nzstop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-53
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-53
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
nzsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-55
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-55
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-55
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-56
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-56
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-57
Customer Service Troubleshooting Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-58
nzconvertsyscase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-59
nzdumpschema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-61


nzinitsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-62
nzlogmerge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-62

Appendix B: Linux Host Administration Reference
Managing Linux Accounts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1
Setting Up Linux User Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1
Modifying Linux User Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Deleting Linux User Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Changing Linux Account Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Managing Linux Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Adding Linux Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Modifying Linux Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-3
Deleting Linux Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-3
Managing the Linux Host System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-3
Hostname and IP Address Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-3
Rebooting the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-3
Reformatting the Host Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-4
Fixing System Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-4
Viewing System Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-4
Stopping Errant Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-5
Changing the System Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-5
Determining the Kernel Release Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-6
System Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-6
Displaying Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-6
Finding Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-6
Displaying File Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-6
Finding Netezza Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-7
Timing Command Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-7
Setting Default Command Line Editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-7
Miscellaneous Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-7

Appendix C: Netezza User and System Views
User Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-1
System Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-3

Appendix D: System Configuration File Settings
System Startup Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-1
System Manager Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-3
Other Host Processes Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-6


SPU Configuration Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-10

Appendix E: Notices and Trademarks
Notices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E-1
Trademarks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E-3
Electronic Emission Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E-4
Regulatory and Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E-7

Glossary of Database and System Terms
Index


Tables
Table 2-1: Netezza Supported Platforms . . . 2-2
Table 2-2: Sample UNIX CD/DVD Mount Commands . . . 2-4
Table 2-3: Environment Variables . . . 2-7
Table 2-4: Directory Structure . . . 2-11
Table 2-5: Netezza Port Numbers for Database Access . . . 2-13
Table 3-1: Command Line Summary . . . 3-2
Table 3-2: CLI Command Locations . . . 3-4
Table 3-3: nzsql Command Options . . . 3-7
Table 3-4: nzsql Internal Slash Commands . . . 3-10
Table 3-5: Color Indicators . . . 3-15
Table 3-6: Main Menu Commands . . . 3-15
Table 3-7: Automatic Refresh . . . 3-18
Table 4-1: HA Tasks and Commands (Old Design and New Design) . . . 4-2
Table 4-2: Cluster Management Scripts . . . 4-5
Table 4-3: HA IP Addresses . . . 4-17
Table 5-1: Key Netezza Hardware Components to Monitor . . . 5-1
Table 5-2: Hardware Description Types . . . 5-4
Table 5-3: Hardware Roles . . . 5-7
Table 5-4: Hardware States . . . 5-9
Table 5-5: Data Slice Status . . . 5-21
Table 5-6: System States and Transactions . . . 5-25
Table 6-1: Netezza Software Revision Numbering . . . 6-2
Table 6-2: Common System States . . . 6-3
Table 6-3: System States Reference . . . 6-4
Table 6-4: Netezza Processes . . . 6-8
Table 6-5: Error Categories . . . 6-11
Table 7-1: Template Event Rules . . . 7-2
Table 7-2: Netezza Template Event Rules . . . 7-4
Table 7-3: Event Types . . . 7-9
Table 7-4: Event Argument Expression Syntax . . . 7-13
Table 7-5: Notification Substitution Tags . . . 7-14
Table 7-6: Notification Syntax . . . 7-15
Table 7-7: System State Changes . . . 7-19
Table 7-8: Hardware Service Requested Event Rule . . . 7-20
Table 7-9: Hardware Needs Attention Event Rule . . . 7-22
Table 7-10: Hardware Path Down Event Rule . . . 7-23
Table 7-11: Hardware Restarted Event Rule . . . 7-24
Table 7-12: Disk Space Event Rules . . . 7-25
Table 7-13: Threshold and States . . . 7-26
Table 7-14: Runaway Query Event Rule . . . 7-27
Table 7-15: SCSI Predictive Failure Event Rule . . . 7-28
Table 7-16: ECC Error Event Rule . . . 7-29
Table 7-17: Regen Fault Event Rule . . . 7-30
Table 7-18: SCSI Disk Error Event Rule . . . 7-31
Table 7-19: Thermal Fault Event Rule . . . 7-32
Table 7-20: Sys Heat Threshold Event Rule . . . 7-33
Table 7-21: histCaptureEvent Rule . . . 7-34
Table 7-22: histLoadEvent Rule . . . 7-35
Table 7-23: SPU Core Event Rule . . . 7-37
Table 7-24: Voltage Fault Event Rule . . . 7-37
Table 7-25: Transaction Limit Event Rule . . . 7-39
Table 8-1: Administrator Privileges . . . 8-9
Table 8-2: Object Privileges . . . 8-10
Table 8-3: Netezza SQL Commands for Displaying Privileges . . . 8-13
Table 8-4: Privileges by Object . . . 8-14
Table 8-5: Indirect Object Privileges . . . 8-15
Table 8-6: Authentication-Related Commands . . . 8-19
Table 8-7: Client Connection-Related Commands . . . 8-26
Table 8-8: User and Group Settings . . . 8-26
Table 8-9: Public Views . . . 8-31
Table 8-10: System Views . . . 8-32
Table 9-1: Data Type Disk Usage . . . 9-2
Table 9-2: Table Skew . . . 9-10
Table 9-3: Database Information . . . 9-14
Table 9-4: Generate Statistics Syntax . . . 9-15
Table 9-5: Automatic Statistics . . . 9-16
Table 9-6: cbts_needing_groom Input Options . . . 9-20
Table 9-7: The 64th read/write Transaction Queueing . . . 9-25
Table 9-8: The _v_qrystat View . . . 9-29
Table 9-9: The _v_qryhist View . . . 9-29
Table 10-1: Choosing a Backup and Restore Method . . . 10-2
Table 10-2: Backup/Restore Commands and Content . . . 10-3
Table 10-3: Retaining Specials . . . 10-5
Table 10-4: The nzbackup Command Options . . . 10-11
Table 10-5: Environment Settings . . . 10-14
Table 10-6: Backup History Source . . . 10-20
Table 10-7: Backup and Restore Behavior . . . 10-21
Table 10-8: The nzrestore Command Options . . . 10-23
Table 10-9: Environment Settings . . . 10-27
Table 10-10: Backup History Target . . . 10-31
Table 10-11: Restore History Source . . . 10-33
Table 10-12: NetBackup Policy Settings . . . 10-34
Table 11-1: History Loader Settings and Behavior . . . 11-9
Table 11-2: _v_querystatus . . . 11-16
Table 11-3: _v_planstatus . . . 11-16
Table 11-4: $v_hist_queries View . . . 11-18
Table 11-5: $v_hist_incomplete_queries View . . . 11-19
Table 11-6: $v_hist_table_access_stats View . . . 11-20
Table 11-7: $v_hist_column_access_stats View . . . 11-20
Table 11-8: $v_hist_log_events View . . . 11-21
Table 11-9: $hist_version . . . 11-22
Table 11-10: $hist_nps_$SCHEMA_VERSION . . . 11-22
Table 11-11: $hist_log_entry_$SCHEMA_VERSION . . . 11-23
Table 11-12: $hist_failed_authentication_$SCHEMA_VERSION . . . 11-23
Table 11-13: $hist_session_prolog_$SCHEMA_VERSION . . . 11-24
Table 11-14: $hist_session_epilog_$SCHEMA_VERSION . . . 11-26
Table 11-15: $hist_query_prolog_$SCHEMA_VERSION . . . 11-27
Table 11-16: $hist_query_epilog_$SCHEMA_VERSION . . . 11-28
Table 11-17: $hist_query_overflow_$SCHEMA_VERSION . . . 11-29
Table 11-18: $hist_service_$SCHEMA_VERSION . . . 11-30
Table 11-19: $hist_state_change_$SCHEMA_VERSION . . . 11-31
Table 11-20: $hist_table_access_$SCHEMA_VERSION . . . 11-32
Table 11-21: $hist_column_access_$SCHEMA_VERSION . . . 11-33
Table 11-22: $hist_plan_prolog_$SCHEMA_VERSION . . . 11-34
Table 11-23: $hist_plan_epilog_$SCHEMA_VERSION . . . 11-36
Table 12-1: Workload Management Feature Summary . . . 12-2
Table 12-2: Short Query Bias Registry Settings . . . 12-5
Table 12-3: Sample Resource Sharing Groups . . . 12-7
Table 12-4: Assigning Resources to Active RSGs . . . 12-9
Table 12-5: Guaranteed Resource Allocation Settings . . . 12-13
Table 12-6: GRA Compliance Registry Settings . . . 12-14
Table 12-7: GRA Report Settings . . . 12-16
Table 12-8: Netezza Priorities . . . 12-20
Table 12-9: Gate Keeper Registry Settings . . . 12-22
Table 13-1: Netezza Groups and Tables . . . 13-1
Table 13-2: Database Table . . . 13-2
Table 13-3: DBMS Group . . . 13-3
Table 13-4: Host CPU Table . . . 13-3
Table 13-5: Host File System Table . . . 13-4
Table 13-6: Host Interfaces Table . . . 13-4
Table 13-7: Host Management Channel Table . . . 13-6
Table 13-8: Host Network Table . . . 13-7
Table 13-9: Host Table . . . 13-8
Table 13-10: Hardware Management Channel Table . . . 13-9
Table 13-11: Per Table Data Slice Table . . . 13-10
Table 13-12: Query Table . . . 13-10
Table 13-13: Query History Table . . . 13-11
Table 13-14: SPU Partition Table . . . 13-12
Table 13-15: SPU Table . . . 13-13
Table 13-16: System Group . . . 13-13
Table 13-17: Table Table . . . 13-14
Table A-1: Command Line Summary . . . A-1
Table A-2: Administrator Privileges . . . A-4
Table A-3: Object Privileges . . . A-5
Table A-4: nzds Input Options . . . A-9
Table A-5: nzds Options . . . A-11
Table A-6: nzevent Input Options . . . A-12
Table A-7: nzevent Options . . . A-13
Table A-8: nzhistcleanupdb Input Options . . . A-18
Table A-9: nzhistcreatedb Input Options . . . A-20
Table A-10: nzhistcreatedb Output Messages . . . A-21
Table A-11: nzhostbackup Input Options . . . A-23
Table A-12: nzhostrestore Input Options . . . A-25
Table A-13: nzhostrestore Options . . . A-25
Table A-14: nzhw Input Options . . . A-27
Table A-15: nzhw Options . . . A-30
Table A-16: nzpassword Input Options . . . A-33
Table A-17: nzpassword Options . . . A-34
Table A-18: nzreclaim Input Options . . . A-36
Table A-19: nzreclaim Options . . . A-36
Table A-20: nzrev Input Options . . . A-38
Table A-21: nzsession Input Options . . . A-39
Table A-22: nzsession Options . . . A-40
Table A-23: Session Information . . . A-42
Table A-24: nzspupart Inputs . . . A-44
Table A-25: nzspupart Options . . . A-44
Table A-26: nzstart Inputs . . . A-47
Table A-27: nzstate Inputs . . . A-49
Table A-28: nzstate Options . . . A-49
Table A-29: nzstats Inputs . . . A-51
Table A-30: nzstats Options . . . A-51
Table A-31: nzstop Inputs . . . A-54
Table A-32: nzstop Options . . . A-54
Table A-33: nzsystem Inputs . . . A-55
Table A-34: nzsystem Options . . . A-56
Table A-35: Diagnostic Commands . . . A-58
Table A-36: nzconvertsyscase Input Options . . . A-60
Table A-37: nzdumpschema Inputs . . . A-61
Table A-38: nzlogmerge Options . . . A-62
Table C-1: User Views . . . C-1
Table C-2: System Views . . . C-3
Table D-1: Startup Configuration Options . . . D-1
Table D-2: System Manager Configuration Options . . . D-3
Table D-3: Host Processes . . . D-6
Table D-4: SPU Configuration Options . . . D-10

Figures
Figure 3-1: Sample Run Command Window . . . 3-12
Figure 3-2: Login Dialog Box . . . 3-13
Figure 3-3: Netezza Revision Warning Window . . . 3-13
Figure 3-4: NzAdmin Main System Window . . . 3-14
Figure 3-5: NzAdmin Hyperlink Support . . . 3-17
Figure 3-6: Preferences Dialog Box . . . 3-18
Figure 3-7: Connection Error Window . . . 3-19
Figure 3-8: Navigation Pane . . . 3-21
Figure 3-9: Status Pane . . . 3-21
Figure 3-10: System Summary Page . . . 3-22
Figure 5-1: Sample nzhw show Output . . . 5-3
Figure 5-2: Sample nzhw show Output (IBM Netezza C1000 Systems) . . . 5-4
Figure 5-3: IBM Netezza Full-Rack System Components and Locations . . . 5-6
Figure 5-4: IBM Netezza C1000 System Components and Locations . . . 5-7
Figure 5-5: SPUs, Disks, Data Slices, and Data Partitions . . . 5-11
Figure 5-6: Netezza C1000 SPU and Storage Representation . . . 5-12
Figure 5-7: Balanced and Unbalanced Disk Topologies . . . 5-13
Figure 5-8: Netezza 1001-6 and N1001-005 and Larger PDUs and Circuit Breakers . . . 5-27
Figure 5-9: IBM Netezza 1000-3 and IBM PureData System for Analytics N1001-002 PDUs and Circuit Breakers . . . 5-28
Figure 5-10: NEC InfoFrame DWH ZA100 PDUs and Circuit Breakers . . . 5-33
Figure 7-1: Alerts Window . . . 7-42
Figure 9-1: Record Distribution Window . . . 9-8
Figure 9-2: Table Skew Window . . . 9-11
Figure 9-3: Organizing Tables with CBTs . . . 9-11
Figure 10-1: Database Backups Timeline . . . 10-18
Figure 11-1: Query History Staging and Loading Areas . . . 11-8
Figure 12-1: SQB Queuing and Priority . . . 12-4
Figure 12-2: GRA Usage Sharing . . . 12-8
Figure 12-3: Impacts of the Admin User on GRA . . . 12-10
Figure 12-4: Multiple Jobs in a Group Share the Group’s Resources . . . 12-11
Figure 12-5: GRA and Priority . . . 12-13
Figure 12-6: Resource Allocation Performance Window . . . 12-17
Figure 12-7: Resource Allocation Performance History Window . . . 12-18
Figure 12-8: Resource Allocation Performance Graph . . . 12-19
Figure 12-9: Using PQE to Control Job Concurrency by Runtime and Priority . . . 12-21
Figure 12-10: Gate Keeper Default Normal Work Queue . . . 12-23
Figure 12-11: Gate Keeper Time-Based Normal Queues and Registry Settings . . . 12-24
Figure 14-1: Mantra and MantraVM Service . . . 14-1

Preface
The IBM® Netezza® data warehouse appliance is a high performance, integrated database
appliance that provides unparalleled performance, extensive scaling, high reliability, and
ease of use. The Netezza appliance uses a unique architecture that combines current
trends in processor, network, and software technologies to deliver a very high performance
system for large enterprise customers.

Audience for This Guide
The IBM Netezza System Administrator’s Guide is written for system administrators and
database administrators. In some customer environments, these roles could be the
responsibility of one person or of several administrators.
To use this guide, you should be familiar with Netezza concepts and user interfaces, as
described in the IBM Netezza Getting Started Tips. You should be comfortable using
command-line interfaces, Linux operating system utilities, and Windows-based
administration interfaces, and with installing software on client systems that access the
Netezza appliance.

Purpose of This Guide
The IBM Netezza System Administrator’s Guide describes the tasks, concepts, and
interfaces for managing the Netezza appliance and databases. This guide describes tasks
such as the following:

 Installing Netezza clients
 Managing the Netezza appliance
 Managing Netezza system processes
 Managing users, groups, and access security
 Managing the database and database objects
 Backing up and restoring data

Symbols and Conventions
This guide uses the following typographical conventions:

 Italics for emphasis on terms and user-defined values such as user input
 Upper case for SQL commands; for example, INSERT, DELETE
 Bold for command line input; for example, nzsystem stop

If You Need Help
If you are having trouble using the Netezza appliance, you should:
1. Retry the action, carefully following the instructions given for that task in the
documentation.
2. Go to the IBM Support Portal at http://www.ibm.com/support. Log in using your IBM
ID and password. You can search the Support Portal for solutions. To submit a support
request, click the Service Requests & PMRs tab.
3. If you have an active service contract maintenance agreement with IBM, you can
contact customer support teams by telephone. For individual countries, visit the
Technical Support section of the IBM Directory of worldwide contacts
(http://www14.software.ibm.com/webapp/set2/sas/f/handbook/contacts.html#phone).

Comments on the Documentation
We welcome any questions, comments, or suggestions that you have for the IBM Netezza
documentation. Please send us an e-mail message at netezza-doc@wwpdl.vnet.ibm.com
and include the following information:

 The name and version of the manual that you are using
 Any comments that you have about the manual
 Your name, address, and phone number

We appreciate your comments on the documentation.

CHAPTER 1

Administration Overview
What’s in this chapter
 Administrator’s Roles
 Administration Tasks
 Initial System Setup and Information
 Administration Interfaces
 Other Netezza Documentation

This chapter provides an introduction and overview to the tasks involved in administering
an IBM® Netezza® data warehouse appliance.

Administrator’s Roles
Netezza administration tasks typically fall into two categories:

 System administration — managing the hardware, configuration settings, system status, access, disk space, usage, upgrades, and other tasks
 Database administration — managing the user databases and their content, loading data, backing up data, restoring data, controlling access to data and permissions

In some customer environments, one person could serve as both the system and database
administrator, performing the tasks when needed. In other environments, multiple people
may share these responsibilities, or they may own specific tasks or responsibilities. You can
develop the administrative model that works best for your environment.
In addition to the administrator roles, there are also database user roles. A database user is
someone who has access to one or more databases and has permission to run queries on
the data stored within those databases. In general, database users have access permissions
to one or more user databases, and they have permission to perform certain types of tasks
as well as to create or manage certain types of objects (tables, synonyms, and so forth)
within those databases.

Administration Tasks
The administration tasks generally fall into these categories:

 Deploying and installing Netezza clients
 Managing the Netezza appliance
 Managing system notifications and events
 Managing Netezza users and groups
 Database management
 Loading data (described in the IBM Netezza Data Loading Guide)
 Database backup and restore
 Query history
 Workload management

This guide describes these tasks and how to perform them using the various Netezza
administration UIs.

Initial System Setup and Information
A factory-configured and installed Netezza system includes the following components:

 A Netezza data warehouse appliance with pre-installed Netezza software.
 A preconfigured Linux operating system (with Netezza modifications) on one or both
system hosts. Netezza high-availability (HA) models have two hosts, while non-HA
models have one host.
 A virtual server environment to run the Mantra compliance application.
 Several preconfigured Linux users and groups, which should not be modified or
deleted:
    The nz user is the default Netezza system administrator account. The Linux user is
   named nz with a default password of nz. The Netezza software runs as this user,
   and you can access the system using a command shell or remote access software
   as the nz user.
    Netezza HA systems also require a Linux user (hacluster) and two Linux groups
   (hacluster and haclient), which are added automatically to the host during the
   Heartbeat RPM installation. For more information, see “Linux Users and Groups
   Required for HA” on page 4-19.
    The MantraVM service uses the mantravm user and mantravm group, which are
   automatically added to the host during the MantraVM installation. For more
   information, see “MantraVM Users and Groups” on page 14-2.
 A Netezza database user named admin (with a default password of password). The
admin user is the database super-user, and has full access to all system functions and
objects at all times. You cannot delete the admin user. You use the admin account to
start creating user databases and additional database user groups and accounts to
which you can assign appropriate permissions and access, as shown in the example
after this list.
 A preconfigured database group named public. All database users are automatically
placed in the group public and therefore inherit all of its privileges. The group public
has default access privileges to selected system views, such as lists of available databases, tables, and views. You cannot delete the group public.
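
For example, the following sketch shows how you might use the admin account with the
nzsql command to begin creating a database and a database user. The database name,
user name, and password here are hypothetical placeholders; the -c option runs a single
SQL command:

[nz@nzhost1 ~]$ nzsql -u admin -pw password -d system -c "CREATE DATABASE sales_db"
[nz@nzhost1 ~]$ nzsql -u admin -pw password -d system -c "CREATE USER report_user WITH PASSWORD 'temp123'"

After creating users and groups, you assign them appropriate permissions and access as
described later in this guide.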


Netezza Support and Sales representatives will work with you to install and initially configure the Netezza system in your customer environment. Typically, the initial rollout consists
of installing the system in your data center, and then performing some configuration steps
to set the system’s hostname and IP address, connect the system to your network, and
make it accessible to users. They will also work with you to perform initial studies of
system usage and query performance, and may recommend other configuration settings or
administration practices to improve the performance of, and access to, the Netezza system
for your users.

Netezza Software Directories
The Netezza software is installed in several directories on the Netezza host as follows:

 The /nz directory is the Netezza host software installation directory.
 The /export/home/nz directory is the home directory for the nz user.
 The Linux operating system boot directories.

The following sections describe these directories and their contents.

Host Software Directory
The Netezza host installation directory contains the following software directories and files:

 /nz — The root of the Netezza software install tree. On a production host, the default
software installation directory is /nz. If you are a Linux user connected to the Netezza
host, include /nz/kit/bin and /nz/kit/bin/adm in your PATH (see the example in “nz
User’s Home Directory”).
 /nz/data-> — A link to the current data directory.
 /nz/kit-> — A link to the current kit of executables. The kit link points to the current
software revision in use.
 /nz/data.<rev>/ — System catalog and other host-side database files.
 /nz/kit.<rev>/ — The set of optimized executables and support files needed to run the
product. Note that <rev> represents the revision of the software.
 /nz/mantravm — The MantraVM service configuration files and executables.
 /nz/tmp/ — Netezza temporary files.
 /nzscratch — A location for Netezza internal files. This location is not mirrored. The
/nzscratch/tmp directory is the default temporary files directory, specified by the
NZ_TMP_DIR variable. It holds files created and used by the transaction manager and
other processes. The contents of NZ_TMP_DIR are deleted when the Netezza software
starts and when the Netezza system restarts. As a best practice, do not store large files
in /nzscratch or its subdirectories; if /nzscratch runs out of space, Netezza processes
could fail.

The data Directory
The /nz/data directory contains the following subdirectories:

 data.<rev>/base — Contains system tables, catalog information, and subdirectories for
the databases. Each database you create has its own subdirectory whose name
matches the database’s object ID value. For example, base/1/ is the system database,
base/2/ is the master_db database, and base/nnn is an end-user database, where nnn
is the object ID of the database (see the sample listing after this section).
 data.<rev>/cache — Contains copies of compiled code that was dynamically generated
on the host, cross-compiled to run on the SPUs, and then downloaded to the SPUs for
execution. The copies are saved to eliminate extra steps and overhead when running
similar queries.
 data.<rev>/config — Contains configuration files such as the callHome.txt,
sendMail.cfg, and system.cfg files. The callHome.txt file is the callhome attachment
file; sendMail.cfg contains the configuration parameters for the sendmail program;
system.cfg is the system’s configuration registry, which allows you to control and tune
the system. Other files may exist in this directory if the Netezza system uses options
such as LDAP authentication and other applications.
 data.<rev>/plans — Contains copies of the most recent execution plans for reference.
The system stores the execution plan for each query in a separate file with a .pln
extension, which includes the following information:
    The original SQL that was submitted.
    The plan itself, describing how the various tables and columns are to be accessed,
   when joins, sorts, and aggregations are performed, and so on.
    Whether the system was able to reuse a cached (already compiled) version of the
   code.

The system also generates a separate C program (.cpp file) to process each snippet of
each plan. The system compares this code against the files in /nz/data/cache to determine
whether the compilation step can be skipped.
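
For example, a hypothetical listing of the base directory might look like the following,
where 1 is the system database, 2 is the master_db database, and 200001 is a user
database; the object ID values vary from system to system:

[nz@nzhost1 ~]$ ls /nz/data/base
1  2  200001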
The kit Directory
The kit directory contains the following subdirectories:

 kit.<rev>/ — Top-level directory for the release <rev> (for example, kit.6.0).
 kit.<rev>/bin/ — All user-level CLI programs.
 kit.<rev>/bin/adm — Internal CLI programs.
 kit.<rev>/log/<component>/ — Component log files, one subdirectory per component,
each containing a file per day of log information for up to seven days. The information
in the logs includes when the process started, when the process exited or completed,
and any error conditions.
 kit.<rev>/sbin — Internal host and utility programs not intended to be run directly by
users. These programs are not specifically prefixed (for example, clientmgr).
 kit.<rev>/share/ — Postgres-specific files.
 kit.<rev>/sys/ — System configuration files, startup.cfg, and some subdirectories (init,
include, strings).
 kit.<rev>/sys/init/ — Files used for system initialization.

nz User’s Home Directory
The host software runs under a preconfigured Linux user named nz. The home directory for
the nz user is /export/home/nz. The default shell configuration file, in addition to standard
UNIX specifications, adds /nz/kit/bin to the PATH environment variable so that user nz can
automatically locate CLI commands.
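
For other Linux accounts that you use to administer the Netezza host, a minimal sketch of
the equivalent PATH setup for a bash profile follows; the profile file to edit depends on
your shell configuration:

# Add the Netezza CLI directories to the command search path (bash syntax)
export PATH=$PATH:/nz/kit/bin:/nz/kit/bin/adm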


Linux Boot Directories
To ensure that the system starts the Netezza software when it boots, Netezza places some
entries in the init.d directory — a standard system facility for starting applications. As a
best practice, never modify the Linux operating system boot directories or files unless you
are directed to do so by Netezza Support or by documented Netezza procedures. Changes
to these files can impact the operation of the host.

Managing the External Network Connections
During the onsite installation of the Netezza system, Netezza installation engineers will
work with you to configure your system using the site survey information prepared for your
environment. The initial setup process includes steps to configure the external network
connections (that is, the hostname and IP address information) of your Netezza system.
If you need to change the hostname or IP address information, do not use the general Linux
procedures to change this information. Contact Netezza Support for assistance to ensure
that the changes are made using Netezza’s procedures and are propagated to the high
availability configuration and related services.

Managing Domain Name Service (DNS) Updates
The Netezza server uses a domain name service (DNS) server to provide name resolution to
devices such as S-Blades within the system. This allows SPUs to have a DNS name (such
as spu0103) as well as an IP address.
To change the DNS settings for your system, use the nzresolv service to manage the DNS
updates. The nzresolv service updates the resolv.conf information on the Netezza host; for
highly-available Netezza systems (such as the IBM Netezza 1000, C1000, or IBM PureData System for Analytics N1001 systems), the nzresolv service updates the information on
both hosts. (You can log in to either host to perform the DNS updates.) You must be able to
log in as the root user to update the resolv.conf information; any Linux user such as nz can
display the DNS information using the show option.
Note: Do not manually edit the /etc/resolv.conf* files, even as the root user. Use the nzresolv service to update the files and to ensure that the information is maintained correctly
on the host(s).

Displaying the DNS Information
To display the current DNS information for the system:
1. Log in to the active host as a Linux user such as nz.
2. Enter the following command:
[nz@nzhost1 ~]$ service nzresolv show

Sample output follows:
search yourcompany.com
nameserver 1.2.3.4
nameserver 1.2.5.6


Changing DNS Information
You update the DNS information using the nzresolv service. You can edit the DNS information in a text editor, read it from a file, or enter it at the command line. Any changes that
you make take effect immediately (and on both hosts, for HA systems). The DNS server
uses the changes for subsequent DNS lookup requests.
To change the DNS information:
1. Log in to either host as root.
2. Enter the following command:
[root@nzhost1 ~]# service nzresolv update

Note: If you use the service command to edit the DNS information, you must use vi as
the text editor tool, as shown in these examples. However, if you prefer to use a different text editor, you can set the $EDITOR environment variable and use the
/etc/init.d/nzresolv update command to edit the files using your editor of choice, as
shown in the example after this procedure.
3. A text editor opens with the system’s DNS information:
# !!! All lines starting '# !!!' will be removed.
# !!!
search yourcompany.com
nameserver 1.2.3.4
nameserver 1.2.5.6

4. Enter, delete, or change the information as required. When you finish, you can save
your changes and exit (or exit without saving the changes). For example, type one of
the following commands:
 :wq to save the changes and exit.
 :q to exit the file.
 :q! to exit without saving any changes you made in the file.

Use caution before changing the DNS information; incorrect changes can impact the operation of the Netezza system. Review any changes with the DNS administrator at your site to
ensure that the changes are correct.
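
For example, the following sketch uses the $EDITOR variable to edit the DNS information
with a different editor; nano is a hypothetical choice and must be installed on the host:

[root@nzhost1 ~]# export EDITOR=nano
[root@nzhost1 ~]# /etc/init.d/nzresolv update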
Overwriting DNS Information with a Text File — To change the DNS information by reading
the information from an existing text file:
1. Log in to either host as root.
2. Create a text file with your DNS information. The text file should have a format similar
to the following:
search yourcompany.com
nameserver 1.2.3.4
nameserver 1.2.5.6

3. Enter the following command, where file is the fully qualified pathname to the text file:
[root@nzhost1 ~]# service nzresolv update file
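
For example, the following sketch creates a hypothetical file named /tmp/dns.txt and then
loads it with the nzresolv service; the search domain and nameserver addresses are
placeholders:

[root@nzhost1 ~]# cat > /tmp/dns.txt <<EOF
search yourcompany.com
nameserver 1.2.3.4
nameserver 1.2.5.6
EOF
[root@nzhost1 ~]# service nzresolv update /tmp/dns.txt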

Appending DNS Information from the Command Prompt — To change the DNS information
by entering the information from the command prompt:
1. Log in to either host as root.
2. Enter the following command (note the dash character at the end of the command):
[root@nzhost1 ~]# service nzresolv update -

The command prompt proceeds to a new line where you can enter the DNS information. Enter the complete DNS information, because the text that you type replaces the
existing information in the resolv.conf file.
3. After you finish typing the DNS information, type one of the following commands:
 Control-D to save the information that you entered and exit the editor.
 Control-C to exit without saving any changes.

Setting up Remote Access
Netezza systems are typically installed in a data center which is often highly secured from
user access and sometimes located in a geographically separate location. Thus, you may
need to set up remote access to the Netezza so that your users can connect to the system
through the corporate network. Common ways to remotely log onto another system through
a shell (Telnet, rlogin or rsh) do not encrypt data that is sent over the connection between
the client and the server. Consequently, the type of remote access you choose depends
upon the security considerations at your site. Telnet is the least secure and SSH (Secure
SHell) is the most secure.
If you allow remote access through Telnet, rlogin, or rsh, you can more easily manage this
access through the xinetd daemon (Extended Internet Services). The xinetd daemon starts
programs that provide Internet services. This daemon uses a configuration file,
/etc/xinetd.conf, to specify services to start. Use this file to enable or disable remote access
services according to the policy at your site.
If you use SSH, it does not use xinetd, but rather its own configuration files. For more information, see the Red Hat documentation.
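
For example, to disable Telnet access through xinetd, you can set the disable attribute in
the service entry and reload the daemon. The following is a minimal sketch; the
per-service file /etc/xinetd.d/telnet and its other attributes depend on your Linux
distribution:

# Fragment of /etc/xinetd.d/telnet
service telnet
{
        disable = yes
}

[root@nzhost1 ~]# service xinetd reload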

Administration Interfaces
Netezza offers several interfaces for performing the various system and database management tasks:

 Netezza commands (nz* commands) are installed in the /nz/kit/bin directory on the
Netezza host. For many of the nz* commands, you must be able to log on to the
Netezza system to access and run those commands. In most cases, users log in as the
default nz user account, but you may have created other Linux user accounts on your
system. Some commands require you to specify a database user account, password,
and database to ensure that you have permissions to perform the task.
 The Netezza CLI client kits package a subset of the nz* commands that can be run
from Windows and UNIX client systems. The client commands may also require you to
specify a database user account, password, and database to ensure that you have database administrative and object permissions to perform the task.
 SQL commands. The SQL commands support administration tasks and queries within a
SQL database session. You can run the SQL commands from the Netezza nzsql command interpreter or through SQL APIs such as ODBC, JDBC, and the OLE DB Provider.
You must have a database user account to run the SQL commands, with appropriate
permissions for the queries and tasks that you perform.
 NzAdmin tool. NzAdmin is a Netezza interface that runs on Windows client workstations to manage Netezza systems.
 Web Admin. Web Admin is a Web browser client that users can access on the Netezza
system or a compatible Linux server to manage their Netezza systems.
 Netezza Performance Portal. The Netezza Performance Portal is a Web browser client
that provides detailed monitoring capabilities for your Netezza systems. You can use
the portal to answer questions about system usage, workload, capacity planning, and
overall query performance.

The nz* commands are installed and available on the Netezza system, but it is more common for users to install Netezza client applications on client workstations. Netezza
supports a variety of Windows and UNIX client operating systems. Chapter 2, “Installing
the Netezza Client Software,” describes the Netezza clients and how to install them.
Chapter 3, “Using the Netezza Administration Interfaces,” describes how to get started
using the administration interfaces.
The client interfaces provide you with different ways to perform similar tasks. While most
users tend to use the nz* commands or SQL commands to perform tasks, you can use any
combination of the client interfaces, depending upon the task, your workstation environment, or your interface preferences.
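
As a simple illustration of specifying a database user account with an nz* command, the
following hypothetical example runs the nzsession command as the admin database user
to list the current sessions; the password shown is the factory default:

[nz@nzhost1 ~]$ nzsession show -u admin -pw password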

Other Netezza Documentation
The Netezza documentation set contains other documents that may help you in your
day-to-day use of the Netezza system and features:

 IBM Netezza Database User’s Guide — describes the Netezza SQL commands and how
to use them to create queries, as well as how to create and manage database objects
 IBM Netezza Data Loading Guide — describes how to load data into a Netezza system
 IBM Netezza ODBC, JDBC and OLE DB Installation and Configuration Guide —
describes how to configure data connectivity clients to connect to your Netezza system
and run queries through the supported drivers
 IBM Netezza Advanced Security Administrator’s Guide — describes how to manage
multi-level security, audit logging and history, and authentication within the Netezza
database
 IBM Netezza Getting Started Tips — provides a high-level overview of Netezza appliances and concepts for the new user, plus an overview of the documentation set
 IBM Netezza Software Upgrade Guide — describes how to upgrade the Netezza
software
 IBM Netezza Release Notes — describes new features and changes in a Netezza software release, as well as a summary of known issues and fixes for customer-reported
issues

There are several Netezza documents that offer more specialized information about features or tasks. For more information, see the IBM Netezza Getting Started Tips guide.


CHAPTER 2

Installing the Netezza Client Software
What’s in this chapter
 Client Software Packages
 Installing the Netezza CLI Client on a Linux/UNIX System
 Installing the Netezza Tools on a Windows Client
 Installing the Web Admin Interface
 Clients and Unicode Characters
 Client Timeout Controls
 Netezza Port Numbers
 Creating Encrypted Passwords
 Using Stored Passwords

In most cases, the only applications that Netezza administrators or users need to install are
the client applications to access the Netezza system. Netezza provides client software that
runs on a variety of systems such as Windows, Linux, Solaris, AIX, and HP-UX systems. For
a description of the client applications, see “Administration Interfaces” on page 1-7.
This chapter describes how to install the Netezza CLI clients, NzAdmin tool, and Web
Admin interface. Note that the instructions to install and use the Netezza Performance Portal are in the IBM Netezza Performance Portal User’s Guide, which is available with the
software kit for that interface.
Note: This chapter does not describe how to install the Netezza system software or how to
upgrade the Netezza host software. Typically, Netezza Support works with you for any situations that might require software reinstallations, and the steps to upgrade a Netezza system
are described in the IBM Netezza Software Upgrade Guide.
If your users or their business reporting applications access the Netezza system through
ODBC, JDBC, or OLE-DB Provider APIs, see the IBM Netezza ODBC, JDBC and OLE DB
Installation and Configuration Guide for detailed instructions on the installation and setup
of these data connectivity clients.


Client Software Packages
If you have access to IBM Passport Advantage or the IBM Fix Central downloads area, you
can obtain the Netezza client software. You must have support accounts with permission to
download the IBM Netezza software from these locations.
To access Passport Advantage, go to http://www-01.ibm.com/software/howtobuy/passportadvantage/pao_customers.htm.
To access Fix Central, go to http://www-933.ibm.com/support/fixcentral/.
The client packages include:

 The IBM Netezza Client Components — there are client packages for the supported client operating systems. The UNIX clients include interface software such as the CLI and the ODBC/JDBC drivers.

 The IBM Netezza Client Components — Windows package contains the interface software such as NzAdmin, some nz* commands, the ODBC/JDBC drivers, and the OLE-DB Provider.

Table 2-1 lists the supported operating systems and revisions for the Netezza CLI clients.
Table 2-1: Netezza Supported Platforms

Operating System                                       32-bit        64-bit
Windows
  Windows 2003, 2008, XP, Vista, 7                     Intel/AMD     Intel/AMD
Linux
  Red Hat LAS Linux 4.0, 5.2, 5.3, 5.5, 6.1            Intel/AMD     Intel/AMD
  SUSE Linux Enterprise Server 8 and 9                 Intel/AMD     Intel/AMD
  SUSE Linux Enterprise Server 10 and 11, and
  Red Hat Enterprise Linux 5.x                         IBM System z  IBM System z
UNIX
  Oracle Solaris 8, 9, 10                              SPARC         SPARC
  Oracle Solaris 10                                    x86           x86
  HP-UX 11i versions 1.6 and 2 (B.11.22 and B.11.23)   Itanium       Itanium
  IBM AIX 6.1 with 5.0.2.1 C++ runtime libraries       PowerPC       PowerPC

Note: The Netezza client kits are designed to run on the vendor’s proprietary hardware architecture. For example, the AIX, HP-UX, and Solaris clients are intended for each vendor’s proprietary RISC architecture. The Linux client is intended for Red Hat or SUSE on the 32-bit Intel architecture.

Installing the Netezza CLI Client on a Linux/UNIX System
The Netezza UNIX clients contain a tarfile of the client software for a platform and an
unpack script. You use the unpack script to install the client nz* commands and their necessary files to the UNIX client system. Table 2-1 lists the supported UNIX client operating
systems.
Note: If you plan to install the Netezza client on Red Hat Linux or SUSE Linux clients, note that the client system must have the libssl.so.4 and libcrypto.so.4 packages installed before you install the Netezza client. These libraries can be obtained from the package repositories of the operating system vendor. For the instructions to obtain and install the libraries, contact your operating system administrator or see the information available on the web site for your client operating system.

Installing on Linux/UNIX Clients
For Netezza clients, the process to install the CLI is the same across the supported Linux
and UNIX platforms. To install the clients:
1. Insert the IBM Netezza Client Components DVD into your client system’s DVD drive.
Note: Make sure that you use the client release that matches the Netezza software
release of your Netezza system. As a best practice, do not use Netezza clients to manage Netezza systems that have a different Netezza release.
Note: If you have downloaded the client package (nz-*client-version.archive) to a directory on your client system, change to that directory and use a command such as tar -xzf nz-*client-version.tar.z to untar the package. Proceed to step 5 to run the unpack command.
2. Log in as a root or superuser account.
3. Depending upon the auto-mounter settings, you may need to mount the media drive.
For example, on Linux the command is similar to:


mount /media/cdrom
or
mount /media/cdrecorder

Table 2-2 describes other common mount commands for the supported UNIX clients. If
you encounter any problems mounting or accessing the media drive on your client system, refer to your operating system documentation or command man pages.
Table 2-2: Sample UNIX CD/DVD Mount Commands

Platform  Command
Solaris   mount -o ro -F hsfs /dev/dsk/c0t1d0s2 /tmp/cdrom
HP-UX     To mount the disk:
            pfs_mountd &
            pfsd &
            pfs_mount /dev/dsk/c0t0d0 /cdrom
          Export the library path, where the pathname is the location of the
          nz files. Note that the location of the Netezza client files is
          /usr/local/nz or the location you choose to install them.
            export SHLIB_PATH=//bin/lib
AIX       mount -v cdrfs -r /dev/cd0 /cdrom

4. To change to the mount point, use the cd command and specify the mount pathname
that you used in step 3. This guide uses the term /mountPoint to refer to the applicable
CD/DVD mount point location on your system, as used in step 3.
cd /mountPoint

5. Navigate to the directory where the unpack command resides and run the unpack command as follows:
./unpack

Note: On some UNIX systems such as Red Hat 5.3, the auto-mounter settings may not
provide execute permissions by default. If the unpack command returns a “permission
denied” error, you can copy the installation files from the disk to a local directory and
run the unpack command from that local directory.
Note: For installations on Linux, be sure to use the unpack in the linux directory, not
the linux64 directory (which contains only the executable for the 64-bit ODBC driver).
Note: On an HP-UX 11i client, /bin/sh may not be available. You can use the command
form sh ./unpack to unpack the client.
6. The unpack program checks the client system to ensure that it supports the CLI package and prompts you for an installation location. The default is /usr/local/nz for Linux,
but you can install the CLI tools to any location on the client. The program prompts you
to create the directory if it does not exist. Sample command output follows:
------------------------------------------------------------------
IBM Netezza -- NPS Linux Client 7.0
(C) Copyright IBM Corp. 2002, 2012 All Rights Reserved.
------------------------------------------------------------------
Validating package checksum ... ok
Where should the NPS Linux Client be unpacked? [/usr/local/nz]
Directory '/usr/local/nz' does not exist; create it (y/n)? [y] Enter
0%    25%    50%    75%    100%
|||||||||||||||||||||||||||||||||||||||||||||||||||
Unpacking complete.

After the installation completes, the Netezza CLI commands will be installed to the specified destination directory. In addition, the installer stores copies of the software licenses in
the /opt/nz/licenses directory.

Setting the Path for Netezza CLI Client Commands
You can run most of the CLI commands from the Netezza client systems, except for nzstart
and nzstop which run only on the host Netezza system. For more information about the CLI
commands and their locations, see “Command Locations” on page 3-4.
To run the CLI commands on Solaris, you must include /usr/local/lib in your environment variable LD_LIBRARY_PATH. Additionally, to use the ODBC driver on Linux, Solaris, or HP-UX, you must include /usr/local/nz/lib, or the directory path to nz/lib where you installed the Netezza CLI tools.
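For example, assuming the default installation location of /usr/local/nz on a Linux or Solaris client, you might add lines such as the following to your shell startup file (adjust the paths if you installed the tools elsewhere):
export PATH=$PATH:/usr/local/nz/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/local/nz/lib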

Removing the CLI Clients from UNIX Systems
To remove the client CLI kits from a UNIX system, change to the directory where you
installed the clients (for example, /usr/local/nz) and manually delete the nz commands.

Installing the Netezza Tools on a Windows Client
The IBM Netezza Client Components — Windows package contains the nzsetup.exe command, which installs the IBM Netezza Windows client tools. The installation program installs the NzAdmin tool, several nz* command line executables and libraries, online help files, and Netezza guides in PDF format.

Installation Requirements
The installation package requires a computer system running a supported Windows operating system such as Windows 2003, XP (32- and 64-bit), Vista (32-bit), 2008 (32- and 64-bit), or Windows 7 (32- and 64-bit). The client system must also have either a CD/DVD drive or a network connection.
Note: If you will be using or viewing object names that use UTF-8 encoded characters, your Windows client systems require the Microsoft universal font to display the characters within the NzAdmin tool. The Arial Unicode MS font is installed by default on Windows XP systems, but you may need to install it manually on other Windows platforms such as Windows 2003. For more information, see the Microsoft support article at http://office.microsoft.com/en-us/help/hp052558401033.aspx.


Installing the Netezza Tools
To install the Netezza tools on Windows:
1. Insert the IBM Netezza Client Components — Windows in your media drive and navigate to the admin directory.
Note: If you have downloaded the client package (nzsetup.exe) to a directory on your
client system, change to that directory.
2. Double-click or run nzsetup.exe.
This is a standard installation program that consists of a series of steps in which you
select and enter information used to configure the installation. You can cancel the
installation at any time.
The installation program displays a license agreement, which you must accept to install the
client tools. It also allows you to specify the following information:


 Destination folder — You can use the default installation folder or specify an alternative location. The default folder is C:\Program Files\IBM Netezza Tools. If you choose a different folder, the installation program creates the folder if one does not exist.

 Setup type — Select the type of installation: typical, minimal, or custom.

   Typical — Installs the nzadmin program, the help file, the documentation, and the console utilities, including the loader.

   Minimal — Installs the nzadmin program and help files.

   Custom — Displays a screen where you can select to install any combination of the administration application, console applications, or documentation.

After you complete the selections and review the installation options, the client installer
creates the Netezza Tools folder, which has several subfolders. You cannot change the subfolder names or locations.


 Bin — Executables and support files

 Doc — Copies of the Netezza user guides and an Acrobat Index to search the doc set

 Help — Application help files

 jre — Java runtime environment files for the Netezza tools

 sys — Application string files

 Uninstall Netezza Tools — Files to remove Netezza tools from the client system

The installation program displays a dialog when it completes, and on some systems, it
could prompt you to reboot the system before you use the application.
The installer stores copies of the software licenses in the installation directory, which is
usually C:\Program Files\IBM Netezza Tools (unless you specified a different location).
The installation program adds the Netezza commands to the Windows Start > Programs
menu. The program group is IBM Netezza and it has the suboptions IBM Netezza Administrator and Documentation. The IBM Netezza Administrator command starts the NzAdmin
tool. The Documentation command lists the PDFs of the installed documentation.
Note: To use the commands in the bin directory, you must open a Windows command line
prompt (a DOS prompt).


Environment Variables
Table 2-3 lists the operating system environment variables that the installation tool adds
for the Netezza console applications.
Table 2-3: Environment Variables

Variable  Operation  Setting
PATH      append     <installation directory>\bin
NZ_DIR    set        Installation directory (for example, C:\Program Files\IBM Netezza Tools)
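As a quick check after the installation completes, you can confirm the NZ_DIR variable from a new Windows command prompt; the output shown assumes the default installation folder:
C:\> echo %NZ_DIR%
C:\Program Files\IBM Netezza Tools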

Removing the IBM Netezza Tools
You can remove or uninstall the Windows tools using the Windows Add or Remove Programs
interface in the Control Panel. The uninstallation program removes all folders, files, menu
commands, and environment variables. The registry entries created by other Netezza applications, however, are not removed.
To remove the IBM Netezza Tools from a Windows client:
1. Click Start > Settings > Control Panel > Add or Remove Programs. (Note that the menu
options can vary with each Windows operating system type.)
2. Select IBM Netezza Tools, then click Remove or Uninstall. The removal usually completes in a few minutes. Wait for the removal to complete.
3. Using the File Explorer, check the installation location (which is usually c:\Program
Files\IBM Netezza Tools). If the Windows client was the only installed Netezza software, you can delete the IBM Netezza Tools folder to completely remove the
application.

Installing the Web Admin Interface
The Netezza Web Admin interface is a web-based software package that lets you monitor
and administer a Netezza system using a web browser on client systems. The package consists of web server software and the web page files. Web Admin supports the following
browser applications:


 Internet Explorer 7 and later versions

 Firefox 3 and later versions

Typically, you install the Web Admin package on the Netezza host system. If you have a
high availability Netezza system, you can perform the installation instructions on both the
active and standby hosts so that Web Admin is available following a cluster migration or
failover.
If you would like to offload the Web Admin interface from the Netezza host, you could also
install it on a Linux Red Hat Enterprise version 5.x or 6.x system which has access to the
Netezza server. The Web Admin client requires certain Red Hat RPMs that can vary for
each Red Hat OS version, so the output could differ based on the version. If any required
packages are missing, the unpack script prompts you to cancel the installation so that you
can install the missing packages. If you continue the installation, the unpack script


attempts to install the packages using the yum command. The yum command must be correctly configured to retrieve packages from your configured repositories. (Contact your Red
Hat administrator for questions about yum package management and package sources/
repositories in your environment.)
For more information about the Web Admin interface, see “Using the Web Admin Application” on page 3-20.

Installing the RPM and Shared Library Files
The Web Admin server package consists of standard RPM files that are consistent with the
Linux Advanced Server system currently installed on Netezza host machines. Netezza provides an additional shared library that connects to a Netezza system from the web server.
The installation script does the following:


 Prompts for a directory into which to install the web files. The default location is /usr/local/nzWebAdmin.

 Installs any required RPMs and copies the shared library to the proper location. If an RPM file is already installed, the installation script displays a message and proceeds to the next installation step.

 Creates an SSL site certificate, which is used when connecting to the Web Admin server through secure sockets layer (SSL) protocols.

The installation script takes a conservative approach when installing the RPM set and
libpq.so library file. It does not alter or overwrite RPM packages or other files that exist on
the target system. Therefore, the script looks for any of the packages on the system, and, if
they exist, it skips that RPM or file, and moves on to the next.

Installing the Web Admin Server and Application Files
To install the Web Admin server and its application files:
1. On the Netezza host or another Linux system, insert the IBM Netezza Client Components — Linux/UNIX into the media drive.
Note: If you have downloaded the Web Admin package (webadmin.package.tar.z) to a
directory such as /tmp on your Linux system, change to that directory and use a command such as tar -xzf webadmin.package.tar.z to untar the package. Proceed to Step 5.
2. Log in as a root or superuser account.
3. Mount the disk using a command similar to the following:
mount /media/cdrom

or
mount /media/cdrecorder

If you are not sure which command to use, run the ls /media command to see which
pathname (cdrom or cdrecorder) appears.
4. To change to the mount point, use the cd command and specify the mount pathname
that you used in step 3. This guide uses the term /mountPoint to refer to the applicable
CD/DVD mount point location on your system, as used in step 3.
cd /mountPoint/webadmin


5. Run the unpack command to add the software files to the system:
[root@nzhost1 ~]# ./unpack

The unpack script installs the software files for the Web Admin interface. During the
unpack process, you may be prompted for instructions to remove existing Web services
RPM packages, to choose whether to use SSL security for Web connections, and other
tasks. This sample output uses Enter to show that the user pressed the Enter key for these
types of prompts. Sample command output follows:
----------------------------------------------------------------------
IBM Netezza -- NPS Web Admin 7.0
(C) Copyright IBM Corp. 2002, 2012 All Rights Reserved.
----------------------------------------------------------------------
Validating package checksum ... ok
Directory '/usr/local/nzWebAdmin' does not exist; create it (y/n)? [y] Enter
*********************************************************************
Unpacking WebAdmin files into: /usr/local/nzWebAdmin
*********************************************************************
0%    25%    50%    75%    100%
|||||||||||||||||||||||||||||||||||||||||||||||||||
Installing web services RPMs ...
Preparing...                #################################### [100%]
   1:apr                    #################################### [100%]
Preparing...                #################################### [100%]
   1:apr-util               #################################### [100%]
Preparing...                #################################### [100%]
package curl-7.15.5-2.el5.i386 is already installed
Preparing...                #################################### [100%]
   1:distcache              #################################### [100%]
Preparing...                #################################### [100%]
package expat-1.95.8-8.2.1.i386 is already installed
Preparing...                #################################### [100%]
   1:freetype               #################################### [100%]
Preparing...                #################################### [100%]
   1:gmp                    #################################### [100%]
[Output abbreviated for documentation...]
   1:postgresql-libs        #################################### [100%]
Preparing...                #################################### [100%]
   1:unixODBC               #################################### [100%]
Do you want to support SSL only ? (y/n)? [y] Enter
**********************************************************************
Previous odbc configuration moved to /etc/odbcinst.ini.30724
**********************************************************************
Starting httpd:                                            [  OK  ]
Unpacking complete.


The unpacking process automatically starts the Web Admin server. If you need to stop the
Web Admin server at any time, log in as root or a superuser account and use the following
command:
service httpd stop

To start the Web Admin server, log in as root or a superuser account and use the following
command:
service httpd start

Upgrading the Web Admin Interface
If you have installed an existing Web Admin client from a prior release, you can upgrade it
to a new version by removing the old Web Admin client (described in the next section) and
installing the new version.
Note: If you have both the Web Admin interface and the Netezza Performance Portal
installed on the same system and you want to upgrade the Web Admin interface, you must
remove the portal product first and then remove the Web Admin interface. You can then
install the new Web Admin client followed by the portal client software. For the instructions
to install and remove the portal software, see the IBM Netezza Performance Portal User’s
Guide.
To install the new Web Admin client, follow the steps described in the section “Installing
the Web Admin Server and Application Files” on page 2-8.

Removing the Web Admin Interface
You can remove or uninstall the Web Admin interface to remove it from the Linux system
entirely, or if you are planning to upgrade the client to a new version.
To remove the Web Admin interface from your Linux system:
1. Log in to the Linux system as the root user.
2. Change to the /usr/local/nzWebAdmin directory.
3. Run the following command to remove the software:
./uninstallwebclient

Note: During the removal, if you encounter errors that the httpd service failed to start, run
the ldconfig command and restart the httpd service (service httpd start).

Contents of the WebAdmin Directory
During the web server client installation, the installation script copies the software, documents, help files, RPM files, and scripts to the directory specified during the installation
(the default is /usr/local/nzWebAdmin).
This directory hierarchy must be maintained for the Web Admin interface and online help
to operate properly.


Table 2-4 lists the directory structure.
Table 2-4: Directory Structure

Directory                      Contents
/usr/local/nzWebAdmin/Admin    Web Admin software
/usr/local/nzWebAdmin/lib      libpq.so file
/usr/local/nzWebAdmin/RPMs/    LAS4 and RHEL5 subdirectories that contain
                               packages for the Linux operating system
/var/www/error                 Contains error message files for the web server
/var/www/icons                 Image files

Installing the Netezza SSL Site Certificate
When you access the Web Admin URL in a browser (https://hostname/admin.html), the
browser displays a warning message for the authentication certificate. The browser offers
you the option to permanently store the Netezza site certificate, which will suppress the
warning each time the site is accessed.
After you choose to install the certificate, a warning message should no longer appear when
connecting to the Web Admin interface.
Note: The hostname entered in the web address must match the name stored in the site
certificate. The hostname is detected by the setup script and is used when generating the
SSL certificate.

Clients and Unicode Characters
If you create object names that use characters outside the 7-bit ASCII character range, note that the nzsql command, the ODBC, JDBC, and OLE-DB drivers, the NzAdmin tool, and the Web Admin interface all support entering and displaying those characters. On Windows systems, users must ensure that they have appropriate fonts loaded to support their character sets of choice.
Netezza commands that display object names such as nzload, nzbackup, and nzsession
can also display non-ASCII characters, but they must operate on a UTF-8 terminal or DOS
window to display characters correctly.
For UNIX clients, make sure that the terminal window in which you run these nz commands uses a UTF-8 locale. Note, however, that the output in the terminal window may not align correctly.
Typically, Windows clients require two setup steps.
Note: This procedure is a general recommendation based on common practices. If you
encounter any difficulty with Windows client setup, refer to Microsoft Support to obtain the
setup steps for your specific platform and fonts.
1. Set the command prompt to use an appropriate True Type font that contains the
required glyphs. To select a font:
a. Select Start > Programs > Accessories.


b. Right-click Command Prompt and then select Properties from the pop-up menu.
The Command Prompt Properties dialog box appears.
c. Select the Font tab. In the Font list, the True Type fixed width font(s) are controlled
by the registry setting HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Console\TrueTypeFont.
On a standard US system, the font is Lucida Console (which does not contain UTF-8-mapped glyphs for Kanji). On a Japanese system, the font is MS Gothic, which contains those glyphs.
2. In a DOS command prompt window, change the code page to UTF-8 by entering the
following command:
chcp 65001

As an alternative to these DOS setup steps, the input/output from the DOS clients can be
piped from/to nzconvert and converted to a native code page, such as 932 for Japanese.
On a Windows system, the fonts that you use for your display must meet the Microsoft requirements outlined on the Support site at http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q247815.

Client Timeout Controls
In some customer environments where users connect over VPNs to the Netezza appliance,
users may encounter issues where active SQL sessions time out due to VPN/TCP connection settings in the customer environment. For these environments, Netezza has added TCP
KEEPALIVE packet support with the following new settings in the /nz/data/postgresql.conf
file:

 tcp_keepidle: The number of seconds between keepalive messages sent on an otherwise idle connection. A value of 0 uses the system default (7200 seconds). If users report SQL client session disconnects, set this parameter to the recommended value of 900.

 tcp_keepinterval: The number of seconds to wait for a keepalive response before retransmitting the message. A value of 0 uses the system default (75 seconds).

 tcp_keepcount: The number of retransmission attempts that must occur before the connection is considered dead. A value of 0 uses the system default (9 attempts).

After you define (or modify) these settings in the postgresql.conf file, you must restart the
Netezza software to apply the changes.
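For example, to apply the recommended keepalive value for environments that report session disconnects, the relevant lines in /nz/data/postgresql.conf might look like the following sketch (the interval and count values shown simply keep the system defaults):
tcp_keepidle = 900
tcp_keepinterval = 0
tcp_keepcount = 0
Then restart the Netezza software as the nz user:
[nz@nzhost ~]$ nzstop
[nz@nzhost ~]$ nzstart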


Netezza Port Numbers
The Netezza system uses the following port numbers or environment variables for the CLI commands and the NzAdmin tool. Table 2-5 lists the default ports and corresponding environment variables:

Table 2-5: Netezza Port Numbers for Database Access

Port  Environment Variable  Description
5480  NZ_DBMS_PORT          The postgres port for the nzsql command, NzAdmin
                            tool, ODBC, and JDBC.
5481  NZ_CLIENT_MGR_PORT    The port for the CLI and NzAdmin tool messaging.
5482  NZ_LOAD_MGR_PORT      (Prior to Release 3.1, this port handled loads. As of
                            Release 3.1, this port is not required.)
5483  NZ_BNR_MGR_PORT       The port for the nzbackup and nzrestore commands.

Note: Netezza personnel, if granted access for remote service, use port 22 for SSH, and
ports 20 and 21 for FTP.

Changing the Default Port Numbers
For security or port conflict reasons, you can change one or more default port numbers for
the Netezza database access.
Be very careful when changing the port numbers for the Netezza database access. Errors
could severely impact the operation of the Netezza system. If you are not familiar with editing resource shell files or changing environment variables, contact Netezza Support for
assistance.
Before you begin, make sure that you choose a port number that is not already in use. To
check the port number, you can review the /etc/services file to see if the port number is
already specified for another process. You can also use the netstat | grep port command to
see if the designated port is in use.
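For example, to confirm that a candidate port such as 5486 (an arbitrary illustration) is free before you assign it:
[nz@nzhost ~]$ grep 5486 /etc/services
[nz@nzhost ~]$ netstat -an | grep 5486
If neither command returns output, no registered service or active connection is using that port.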
To change the default port numbers for your Netezza system:
1. Log in to the Netezza host as the nz user.
2. Change to the /nz/kit/sys/init directory.
3. Create a backup of the current nzinitrc.sh file:
[nz@nzhost init]$ cp nzinitrc.sh nzinitrc.sh.backup

4. Review the nzinitrc.sh file to see if the Netezza port(s) listed in Table 2-5 that you want
to change are already present in the file. For example, you may find a section that looks
similar to the following, or you might find these variables defined separately within the
nzinitrc.sh file.
# Application Port Numbers
# ------------------------
# To change the application-level port numbers, uncomment the following lines,
# and then change the numbers to their new values. Note that these new values
# will need to be set on clients as well.
#
# NZ_DBMS_PORT=5480;        export NZ_DBMS_PORT
# NZ_CLIENT_MGR_PORT=5481;  export NZ_CLIENT_MGR_PORT
# NZ_LOAD_MGR_PORT=5482;    export NZ_LOAD_MGR_PORT
# NZ_BNR_MGR_PORT=5483;     export NZ_BNR_MGR_PORT
# NZ_RECLAIM_MGR_PORT=5484; export NZ_RECLAIM_MGR_PORT

If you do not find your variable(s) in the file, you can edit the file to define each variable and its new port definition. To define a variable in the nzinitrc.sh file, use the format NZ_DBMS_PORT=value; export NZ_DBMS_PORT as shown above.
Note: As a hint, you can append the contents of the nzinitrc.sh.sample file to the nzinitrc.sh file to create an editable section of variable definitions. You must be able to log in to the Netezza host as the root user; then, change to the /nz/kit/sys/init directory and run the following command:
[nz@nzhost init]$ cat nzinitrc.sh.backup nzinitrc.sh.sample > nzinitrc.sh
5. Using a text editor, edit the nzinitrc.sh file. For each port that you want to change, remove the comment symbol (#) from the definition line and specify the new port number. For example, to change the NZ_DBMS_PORT variable value to 5486:
NZ_DBMS_PORT=5486;          export NZ_DBMS_PORT
# NZ_CLIENT_MGR_PORT=5481;  export NZ_CLIENT_MGR_PORT
# NZ_LOAD_MGR_PORT=5482;    export NZ_LOAD_MGR_PORT
# NZ_BNR_MGR_PORT=5483;     export NZ_BNR_MGR_PORT
# NZ_RECLAIM_MGR_PORT=5484; export NZ_RECLAIM_MGR_PORT

6. Review your changes carefully to make sure that they are correct and save the file.
Note: If you change the default port numbers, some of the Netezza CLI commands may
no longer work. For example, if you change the NZ_DBMS_PORT or NZ_CLIENT_MGR_
PORT value, commands such as nzds, nzstate, and others could fail because they
expect the default port value. To avoid this problem, copy the custom port variable definitions in the nzinitrc.sh file to the /export/home/nz/.bashrc file. You can edit the
.bashrc file using any text editor.
7. To place the new port value(s) into effect, stop and start the Netezza server using the
following commands:
[nz@nzhost init]$ nzstop
[nz@nzhost init]$ nzstart

Specifying Non-Default NPS Port Numbers for Clients
If your Netezza system uses non-default port numbers, your client users must specify the
port number when they connect using commands such as nzsql, nzload, or using clients
such as NzAdmin. For example, if you change the NZ_DBMS_PORT number from the
default of 5480, your client users need to specify the new port value, otherwise their commands will return an error that they could not connect to the database server at port 5480.


Some Netezza commands such as nzsql and nzload have a -port option that allows the user
to specify the DB access port. In addition, users could create local definitions of the environment variables to specify the new port number.
For example, on Windows clients, users could create an NZ_DBMS_PORT user environment
variable in the System Properties > Environment Variables dialog to specify the non-default
port of the Netezza system. For clients such as NzAdmin, the environment variable is the
only way to specify a non-default database port for a target Netezza system. For many systems, the variable name and value take effect immediately and are used the next time you
start NzAdmin. When you start NzAdmin and connect to a system, if you receive an error
that you cannot connect to the Netezza database and the reported port number is incorrect,
check the variable name and value to confirm that they are correct. You may need to reboot
the client system for the variable to take effect.
For a Linux system, you could define a session-level variable using a command similar to
the following:
$ NZ_DBMS_PORT=5486; export NZ_DBMS_PORT

For the instructions to define environment variables on your Windows, Linux, or UNIX client, refer to the operating system documentation for your client.
If a client user connects to multiple Netezza hosts that each use different port numbers,
those users may need to use the -port option on the commands as an override, or change
the environment variable’s value on the client before they connect to each Netezza host.
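For example, assuming NZ_DBMS_PORT was changed to 5486 as in the earlier example, a client user could override the default port on the command line (the host name and credentials here are placeholders):
$ nzsql -host nps1 -port 5486 -d mydb -u user -pw password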

Creating Encrypted Passwords
Database user accounts must be authenticated during access requests to the Netezza database. For user accounts that use local authentication, Netezza stores the password in
encrypted form in the system catalog. For more information on encrypting passwords on the
host and the client, see the IBM Netezza Advanced Security Administrator’s Guide.
Note: Local authentication requires a password for every account. If you use LDAP authentication, a password is optional. During LDAP authentication, Netezza uses the services of
an LDAP server in your environment to validate and verify Netezza database users. For more
information on authentication, refer to “Logon Authentication” on page 8-17.

 When using the Netezza CLI commands, the clear-text password must be entered on the command line. Note that you can set the environment variable NZ_PASSWORD to avoid typing the password on the command line, but the variable is stored in clear text with the other environment variables.

 To avoid displaying the password on the command line, in scripts, or in the environment variables, you can use the nzpassword command to create a locally stored encrypted password.

Note: You cannot use stored passwords with ODBC or JDBC.
The nzpassword command syntax is:
nzpassword add -u user -pw password -host hostname

Where:

 The user name is the Netezza database user’s name in the Netezza system catalog. If you do not specify the user name on the command line, the nzpassword command uses the environment variable NZ_USER.

 The password is the Netezza database user’s password in the Netezza system catalog or the password specified in the environment variable NZ_PASSWORD. If you do not supply a password on the command line or in the environment variable, the system prompts you for a password.

 The hostname is the Netezza host. If you do not specify the hostname on the command line, the nzpassword command uses the environment variable NZ_HOST. You can create encrypted passwords for any number of user name/host pairs.

When you use the nzpassword add command to cache the password, note that quotation
marks are not required around the user name or password values. You should only qualify
the user name or password with a surrounding set of single-quote double-quote pairs (for
example, '"Bob"') in cases where the value is case-sensitive. If you specify quoted or
unquoted names or passwords in nzpassword or other nz commands, you must use the
same quoting style in all cases.
If you qualify a case-insensitive user name with quotes (for example '"netezza"'), the command may still complete successfully, but this is not recommended and not guaranteed to
work in all command cases.
After you type the nzpassword command, the system sends the encrypted password to the
Netezza host where it is compared against the user name/password in the system catalog.

 If the information matches, Netezza stores the encrypted information in a local password cache and displays no additional message.

   On Linux and Solaris, the password cache is the file .nzpassword in the user’s home directory. Note that the system creates this file without access permissions for other users, and refuses to honor a password cache whose permissions allow other users access.

   On Windows, the password cache is stored in the registry.

 If the information does not match, Netezza displays a message indicating that the authentication request failed. Netezza also logs all verification attempts.

 If the database administrator changed a user password in the system catalog, the existing nzpassword entries are invalid.
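A typical caching session might look like the following sketch; the user name, password, and host name are placeholders. Because nzpassword displays no message on success, you can follow it with a command such as nzstate to confirm that the cached password works:
$ nzpassword add -u admin -pw password -host nps1
$ nzstate -host nps1 -u admin
System state is 'Online'.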

Using Stored Passwords
If client users use the nzpassword command to store database user passwords on a client
system, they can supply only a database user name and host on the command line. Users
can also continue to enter a password on the command line if displaying clear-text passwords is not a concern for security.
If you supply a password on the command line, it takes precedence over the environment
variable NZ_PASSWORD. If the environment variable is not set, the system checks the
locally stored password file. If there is no password in this file and you are using the nzsql
command, the system prompts you for a password, otherwise the authentication request
fails.


In all cases — using the -pw option on the command line, using the NZ_PASSWORD environment variable, or using the locally stored password stored through the nzpassword
command — the Netezza compares the password against the entry in the system catalog
for local authentication or against the LDAP account definition. The authentication protocol
is the same, and the Netezza never sends clear-text passwords over the network.
In Release 6.0.x, note that the encryption used for locally encrypted passwords has changed. In prior releases, Netezza used the Blowfish encryption routines; Release 6.0 and later use the Advanced Encryption Standard (AES-256). When you cache a password using a release 6.0 client, the password is saved in AES-256 format unless there is an existing password file in Blowfish format. In that case, new stored passwords will be saved in Blowfish format.
If you upgrade to a Release 6.0.x or later client, the client can support passwords in either
the Blowfish format or the AES-256 format. If you want to convert your existing password
file to the AES-256 encryption format, you can use the nzpassword resetkey command to
update the file. If you want to convert your password file from the AES-256 format to the
Blowfish format, use the nzpassword resetkey -none command.
Older clients, such as those for Release 5.0.x and those earlier than Release 4.6.6, do not
support AES-256 format passwords. If your password file is in AES-256 format, the older
client commands will prompt for a password, which can cause automated scripts to hang.
Also, if you use an older client to add a cached password to or delete a cached password
from an AES-256 format file, you could corrupt the AES-256 password file and lose the
cached passwords. If you typically run multiple releases of Netezza clients, you should use
the Blowfish format for your cached passwords.
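For example, to convert an existing password cache from one format to the other using the resetkey options described above:
$ nzpassword resetkey          # convert the password file to AES-256 format
$ nzpassword resetkey -none    # or convert it back to Blowfish format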


CHAPTER 3

Using the Netezza Administration Interfaces
What’s in this chapter
 Netezza CLI Overview
 SQL Command Overview
 NzAdmin Tool Overview
 Web Admin Overview

This chapter provides a high-level description of the Netezza administration interfaces,
such as the command line interface, NzAdmin, Web Admin interface, and the SQL
commands. This chapter describes how to access and use these interfaces. For information
about the Netezza Performance Portal, see the IBM Netezza Performance Portal User’s
Guide, which is available with the software kit for that interface.
Note: In general, the Netezza CLI commands are used most often to perform the various
administration tasks. Many of the tasks can also be performed using SQL commands or the
interactive interfaces. Throughout this guide, the primary task descriptions use the CLI
commands and reference other ways to perform the same task.

Netezza CLI Overview
You can use the Netezza command line interface (CLI) to manage the Netezza software, hardware, and databases. Netezza Support may also ask you to run specific low-level diagnostic commands using the CLI to investigate problems. Throughout this guide, the Netezza CLI commands are referred to as nz* commands.
The majority of the nz* commands reside on the Netezza host system. A few commands are
included with the Netezza client kits, and some additional nz* commands are available in
optional Support toolkits and other packages. This guide describes the default host and client nz* commands.


Summary of Commands
Table 3-1 describes the nz* commands you can use to monitor and manage the Netezza
system. These commands reside in the /nz/kit/bin directory on the Netezza host. Many of
these commands are also installed with the Netezza client kits and can be run from a
remote client workstation.
Table 3-1: Command Line Summary

nzbackup
  Backs up an existing database. For command syntax, see “nzbackup” on page A-7. For more information, see “Using the nzbackup Command” on page 10-10.

nzcontents
  Displays the revision and build number of all the executables, plus the checksum of Netezza binaries. For command syntax, see “nzcontents” on page A-7. For more information, see “Software Revision Levels” on page 6-1.

nzconvert
  Converts character encodings for loading with the nzload command or external tables. For command syntax, see “nzconvert” on page A-8. For more information, refer to the IBM Netezza Database User’s Guide.

nzds
  Manages and displays information about the data slices on the system. For command syntax, see “nzds” on page A-8.

nzevent
  Displays and manages event rules. For command syntax, see “nzevent” on page A-12. For more information, see Chapter 7, “Managing Event Rules.”

nzhistcleanupdb
  Deletes old history information from a history database. This command resides in /nz/kit/bin/adm. For command syntax, see “nzhistcleanupdb” on page A-17. For more information, refer to Chapter 11, “Query History Collection and Reporting.”

nzhistcreatedb
  Creates a history database with all its tables, views, and objects for history collection and reporting. This command resides in /nz/kit/bin/adm. For command syntax, see “nzhistcreatedb” on page A-20. For more information, refer to Chapter 11, “Query History Collection and Reporting.”

nzhostbackup
  Backs up the host information, including users and groups. For command syntax, see “nzhistcreatedb” on page A-20.

nzhostrestore
  Restores the host information. For command syntax, see “nzhostrestore” on page A-24.

nzload
  Loads data into database files. For command syntax, see the IBM Netezza Data Loading Guide.

nzodbcsql
  A client command on Netezza UNIX clients that tests ODBC connectivity. See the IBM Netezza ODBC, JDBC, and OLE DB Installation and Configuration Guide.

nzpassword
  Stores a local copy of the user’s password. For command syntax, see “nzpassword” on page A-33. For more information, see “Creating Encrypted Passwords” on page 2-15.

nzreclaim
  Uses the SQL GROOM TABLE command to reclaim disk space from user tables, and also to reorganize the tables. For command syntax, see “nzreclaim” on page A-35. For more information, see “Grooming Tables” on page 9-18.

nzrestore
  Restores the contents of a database backup. For command syntax, see “nzrestore” on page A-37. For more information, see “Using the nzrestore Command” on page 10-22.

nzrev
  Displays the current software revision for any Netezza software release. For command syntax, see “nzrev” on page A-37. For more information, see “Software Revision Levels” on page 6-1.

nzsession
  Shows a list of current system sessions (load, client, and sql). Supports filtering by session type or user, allows you to abort sessions, and change the current job list for a queued session job. For command syntax, see “nzsession” on page A-39. For more information, see “Managing Sessions” on page 9-21.

nzspupart
  Shows a list of all the SPU partitions and the disks that support them; controls regenerations for degraded partitions. For usage information, see “nzspupart” on page A-43.

nzsql
  Invokes the SQL command interpreter. For usage information, see Chapter 9, “Managing User Content on the Netezza Appliance.” For command syntax, see the IBM Netezza Database User’s Guide.

nzstart
  Starts the system. For command syntax, see “nzstart” on page A-47. For more information, see “Managing the System State” on page 6-6.

nzstate
  Displays the current system state or waits for a specific system state to occur before returning. For command syntax, see “nzstate” on page A-48. For more information, see “Displaying the Current System State” on page 6-3.

nzstats
  Displays system level statistics. For command syntax, see “nzstats” on page A-50. For more information, see “Displaying Netezza Statistics” on page 13-1.

nzstop
  Stops the system. For command syntax, see “nzstop” on page A-53. For more information, see “Managing the System State” on page 6-6.

nzsystem
  Changes the system state or displays the current system information. For command syntax, see “nzsystem” on page A-55. For more information, see “Managing the System State” on page 6-6.

Command Locations
Table 3-2 lists the default location of the Netezza CLI commands and whether they are
available in the various UNIX or Windows client kits. Remember to add the appropriate bin
directory to your search path to simplify command invocation.
Table 3-2: CLI Command Locations

Default locations: /nz/kit/bin on the Netezza host; /usr/local/nz/bin on the Linux, Solaris, HP, and AIX clients; C:\Program Files\Netezza Tools\Bin on the Windows client.

Command          Netezza  Linux   Solaris  HP      AIX     Windows
                 Host     Client  Client   Client  Client  Client
nzbackup         X        —       —        —       —       —
nzhistcleanupdb  X        —       —        —       —       —
nzhistcreatedb   X        —       —        —       —       —
nzhostbackup     X        —       —        —       —       —
nzhostrestore    X        —       —        —       —       —
nzrestore        X        —       —        —       —       —
nzstart          X        —       —        —       —       —
nzstop           X        —       —        —       —       —
nzwebstart       X        —       —        —       —       —
nzwebstop        X        —       —        —       —       —
nzcontents       X        X       —        X       —       —
nzsql            X        X       X        X       X       —
nzreclaim        X        X       X        X       X       —
nzconvert        X        X       X        X       X       X
nzds             X        X       X        X       X       X
nzevent          X        X       X        X       X       X
nzhw             X        X       X        X       X       X
nzload           X        X       X        X       X       X
nzodbcsql        —        X       X        X       X       —
nzpassword       X        X       X        X       X       X
nzrev            X        X       X        X       X       X
nzsession        X        X       X        X       X       X
nzspupart        X        X       X        X       X       X
nzstate          X        X       X        X       X       X
nzstats          X        X       X        X       X       X
nzsystem         X        X       X        X       X       X

Netezza CLI Command Syntax
All Netezza CLI commands have the following top-level syntax options:


-h — Displays help. You can also enter -help.



-rev — Displays the program’s software revision level. You can also enter -V.



-hc — Displays help for the subcommand (if the command has subcommands).

Note: For many Netezza CLI commands you can specify a timeout, which is the amount of time the system waits before abandoning execution of the command. If you specify a timeout without a value, the system waits 300 seconds. The maximum timeout value is 100 million seconds.

Using the Netezza Commands
To run an nz* command, you must have access to the Netezza system (either directly on
the Netezza KVM or through a remote shell connection) or you must have installed the
Netezza client kit on your workstation. If you are accessing the Netezza system directly, you
must be able to log in using a Linux account (such as nz).
While some of the nz* commands can operate and display information without additional
access requirements, some commands and operations require that you specify a Netezza
database user account and password. The account may also require appropriate access and
administrative permissions to display information or process a command.
Several examples follow.


To display the state of a Netezza system using a Windows client command:
C:\Program Files\Netezza Tools\Bin>nzstate show -host mynps -u user -pw passwd
System state is 'Online'.



To display the valid Netezza system states using a Windows client command:

C:\Program Files\Netezza Tools\Bin>nzstate listStates
State Symbol   Description
------------   ------------------------------------------------------------
initialized    used by a system component when first starting
paused         already running queries will complete but new ones are queued
pausedNow      like paused, except running queries are aborted
offline        no queries are queued, only maintenance is allowed
offlineNow     like offline, except user jobs are stopped immediately
online         system is running normally
stopped        system software is not running
down           system was not able to initialize successfully

Note: In this example, note that you did not have to specify a host, user, or password. The
command simply displayed information that was already available on the local Windows
client.


To back up a Netezza database (you must run the command while logged in to the
Netezza system, as this is not supported from a client):
[nz@npshost ~]$ nzbackup -dir /home/user/backups -u user -pw password -db db1
Backup of database db1 to backupset 20090116125409 completed successfully.

Specifying Identifiers in Commands
When you use the Netezza commands and specify identifiers for users, passwords, database names, and so on, you can pass normal identifiers unquoted on the Linux command
line. The Netezza server performs the appropriate case-conversion for the identifier.
However, if you use delimited identifiers, the supported way to pass them on the Linux
command line is to use the following syntax:


'\'Identifier\''

The syntax is single-quote, backslash, single-quote, identifier, backslash, single-quote, single-quote. This syntax protects the quotes so that the identifier remains quoted in the
Netezza system.
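For example, to connect to a database that was created with the delimited, case-sensitive name "MyDB" (a hypothetical name used only for illustration):
$ nzsql -d '\'MyDB\'' -u admin -pw password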

SQL Command Overview
Netezza database users, if permitted, can perform some administrative tasks using SQL
commands while they are logged in via SQL sessions. For example, users can do the
following:


Manage Netezza users and groups, access permissions, and authentication



Manage database objects (create, alter, or drop objects, for example)



Display and manage session settings



Manage query history configurations

Throughout this document, SQL commands are shown in uppercase (for example, CREATE
USER) to stand out as SQL commands. The commands are case-insensitive and can be
entered using any letter casing. Users must have Netezza database accounts and applicable object or administrative permissions to perform tasks. For detailed information about
the SQL commands and how to perform various administrative tasks using them, see the
IBM Netezza Database User’s Guide.

nzsql Command
The nzsql command is a SQL command interpreter. You can use it on the Netezza host or
on UNIX client systems to create database objects, run queries, and manage the database.
Note: The nzsql command is not yet available on Windows client systems.
To invoke the nzsql command, enter:
nzsql [options] [security options] [dbname [user] [password]]

Table 3-3 describes the nzsql command options. For detailed information about the command options and how to use the command, see the IBM Netezza Database User’s Guide.
Table 3-3: nzsql Command Options

-a
  Echoes all input from a script.
-A
  Specifies unaligned table output mode (-P format=unaligned).
-c <query>
  Runs only a single query (or slash command) and exits.
-d <dbname>
  Specifies the database name to which to connect. If you do not specify -d, the nzsql command uses the environment variable NZ_DATABASE. If there is no environment variable and you do not specify -d, the nzsql command prompts you for a database name.
-e
  Echoes queries sent to the backend.
-E
  Displays queries that internal commands generate.
-f <file>
  Executes queries from a file, then exits.
-F <string>
  Sets the field separator (default: “|”) (-P fieldsep=<string>).
-h
  Displays this help.
-H
  Specifies the HTML table output mode (-P format=html).
-host <host>
  Specifies the database server host.
-l
  Lists available databases, then exits.
-n
  Disables readline. Required when nzsql is used with an input method such as Japanese, Chinese, or Korean.
-o <file>
  Sends query output to file name (or |pipe).
-P var[=arg]
  Sets printing option var to arg.
-port <port>
  Specifies the database server port (default: hardwired).
-pw <password>
  Specifies the database user password. If you do not specify -pw, the nzsql command uses the environment variable NZ_PASSWORD. If there is no environment variable and you do not specify -pw, the nzsql command prompts you for a password.
-q
  Runs quietly (no messages, only query output).
-r
  Suppresses the row count displayed at the end of the SQL output.
-R <string>
  Sets the record separator (default: newline) (-P recordsep=<string>).
-s
  Specifies single step mode (confirm each query).
-S
  Specifies single line mode (newline terminates query).
-t
  Prints rows only (-P tuples_only).
-time
  Prints the time taken by queries.
-T text
  Sets HTML table tag options (width, border) (-P tableattr=text).
-u <username>
  Specifies the database user name. If you do not specify -u, the nzsql command uses the environment variable NZ_USER. If there is no environment variable and you do not specify -u, the nzsql command prompts you for a user name.
-V
  Shows the version information and exits.
-v name=value
  Sets the nzsql variable name to the specified value. You can specify one or more -v arguments to set several options, for example:
  nzsql -v HISTSIZE=600 -v USER=user1 -v PASSWORD=password
-x
  Turns on expanded table output (-P expanded).
-X
  Does not read startup file (~/.nzsqlrc).
-securityLevel
  Specifies the security level that you want to use for the session. The argument has four values:
  • preferredUnsecured — This is the default value. Specify this option when you would prefer an unsecured connection, but you will accept a secured connection if the Netezza system requires one.
  • preferredSecured — Specify this option when you want a secured connection to the Netezza system, but you will accept an unsecured connection if the Netezza system is configured to use only unsecured connections.
  • onlyUnsecured — Specify this option when you want an unsecured connection to the Netezza system. If the Netezza system requires a secured connection, the connection will be rejected.
  • onlySecured — Specify this option when you want a secured connection to the Netezza system. If the Netezza system accepts only unsecured connections, or if you are attempting to connect to a Netezza system that is running a release prior to 4.5, the connection will be rejected.
-caCertFile
  Specifies the pathname of the root CA certificate file on the client system. This argument is used by Netezza clients who use peer authentication to verify the Netezza host system. The default value is NULL which skips the peer authentication process.

Within the nzsql command interpreter, you can enter the following commands for help or to
execute a command:


\h — Help for SQL commands.



\? — Internal slash commands. See Table 3-4.



\g or terminate with semicolon — Execute a query.



\q — Quit.

nzsql Session History
The Netezza system stores the history of your nzsql session in the file $HOME/.nzsql_history. In interactive sessions, you can also use the up-arrow key to see the commands you
have executed.
By default, an nzsql batch session continues even if the system encounters errors. You can
control this behavior with the ON_ERROR_STOP variable, for example:

20282-20

Rev.1

3-9

IBM Netezza System Administrator’s Guide

nzsql -v ON_ERROR_STOP=

You do not have to supply a value; simply defining it is sufficient.
You can also toggle batch processing with a SQL script. For example:
\set ON_ERROR_STOP
\unset ON_ERROR_STOP

You can use the $HOME/.nzsqlrc file to store values, such as ON_ERROR_STOP, and have them apply to all future nzsql sessions and all scripts.
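For example, a batch run that stops at the first failing statement might look like this (the script name is illustrative):

nzsql -d mydb -v ON_ERROR_STOP= -f nightly_load.sql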

Displaying Database Information
You can use the nzsql internal slash commands to display information about databases and
objects. Table 3-4 describes some of the internal slash commands that display information
about objects or privileges within the database. You can display all the options using the \?
command within the nzsql interpreter.
Table 3-4: nzsql Internal Slash Commands

Argument          Description

\d <object>       Describe the named object such as a table, view, sequence, and so on

\d{t|v|i|s|e|x}   List tables/views/indices/sequences/temp tables/external tables

\d{m|y}           List materialized views/synonyms

\dS{t|v|i|s}      List system tables/views/indexes/sequences

\dM{t|v|i|s}      List system management tables/views/indexes/sequences

\dp <user>        List user permissions

\dpu <user>       List permissions granted to a user

\dpg <group>      List permissions granted to a group

\d{u|U}           List users (u) or user groups (U)

\df[+]            List user-defined functions (+ for detailed information)

\da[+]            List user-defined aggregates (+ for detailed information)
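For example, the following commands describe a table and list the permissions granted to a user (the object and user names are illustrative):

mydb(myuser)=> \d nation
mydb(myuser)=> \dpu user1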

Suppressing Row Count Information
You can use the nzsql -r option or the session variable NO_ROWCOUNT to suppress the row
count information that appears at the end of the query output. A sample query that displays
the row count follows:
mydb(myuser)=> select count(*) from nation;
 COUNT
-------
    25
(1 row)


To suppress the row count information, you can use the nzsql -r command when you start
the SQL command line session. When you run a query, the output will not show a row
count:
mydb(myuser)=> select count(*) from nation;
 COUNT
-------
    25

You can use the NO_ROWCOUNT session variable to toggle the display of the row count
information within a session, as follows:
mydb(myuser)=> select count(*) from nation;
 COUNT
-------
    25
(1 row)
mydb(myuser)=> \set NO_ROWCOUNT
mydb(myuser)=> select count(*) from nation;
 COUNT
-------
    25
mydb(myuser)=> \unset NO_ROWCOUNT
mydb(myuser)=> select count(*) from nation;
 COUNT
-------
    25
(1 row)

NzAdmin Tool Overview
NzAdmin is a Windows-based application that runs on Windows client systems. It allows users to manage the system, obtain hardware information and status, and manage various aspects of the user databases, tables, and objects.
Users must install the Netezza Windows client application to access the NzAdmin tool, as described in “Installing the Netezza Tools on a Windows Client” on page 2-5. Users must have Netezza database accounts and applicable object or administrative permissions to perform tasks.

Client Compatibility
The NzAdmin client is intended to monitor Netezza systems that are at the same Netezza
software release level as the client. The client can monitor Netezza hosts with older
releases, but the client functionality may be incomplete. For example, when you monitor
older Netezza systems, some of the System tab features such as system statistics, event
management, and hardware component state changes are typically disabled. The Database
tab features are usually supported for the older systems.
The NzAdmin client is not compatible with Netezza hosts that are running releases at a
later revision. As a best practice, when you upgrade your Netezza system software you
should also upgrade your client software to match.


Starting the NzAdmin Tool
To start an NzAdmin session, click Start > Programs > IBM Netezza > IBM Netezza Administrator. You can also create a shortcut on the desktop, or run nzadmin.exe using the Run window or from a command window, as follows:

Figure 3-1: Sample Run Command Window
If you run nzadmin.exe in a command window, you can optionally enter the following login information on the command line to bypass the login dialog:

• -host or /host and the name of the Netezza host or its IP address

• -user or /user and a valid Netezza database user name

• -pw or /pw and a valid password for the Netezza user. The NzAdmin tool can also use cached passwords on your client system. To specify using a cached password, use the -pw option without a password string.

You can enter these arguments in any order, but you must separate them with spaces or
commas. You can mix the - and / command forms.


• If you enter all three arguments, NzAdmin bypasses the login dialog and connects you to the host you have specified. If there is an error, NzAdmin displays the login dialog with the host and user fields completed, and you must enter the password.

• If you specify only one or two arguments, NzAdmin displays the login dialog. You must complete the remaining fields.

• If you duplicate arguments, that is, specify -host red and -host blue, NzAdmin displays a warning message and uses the first one (host red).

Note: The NzAdmin tool and Web Admin accept delimited (quoted) user names in their
respective login dialogs. You can also delimit user names passed when invoking the NzAdmin tool in a command window.
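For example, the following command starts NzAdmin and bypasses the login dialog (the host name and credentials are illustrative):

nzadmin -host nzhost1 -user admin -pw password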

Logging In to NzAdmin
Unless otherwise specified on the command line, the NzAdmin login dialog box requires three arguments: host, user name, and password. When you enter the password, the NzAdmin tool allows you to save the encrypted password on the local system. When you log in again, you need to enter only the host and user name.


The drop-down list in the host field displays the host addresses or names that you have used in the past.

Figure 3-2: Login Dialog Box

Connecting to the Netezza System
When you log on, the NzAdmin tool compares the major and minor software versions of the Netezza system with the NzAdmin tool’s own version.
If they do not match, the NzAdmin tool displays a warning message and disables certain commands; event rules, statistics, and system hardware operations become unavailable.

Figure 3-3: Netezza Revision Warning Window
You can suppress subsequent warning messages for version incompatibility by selecting
Don’t warn me about this again and clicking OK.


Displaying System Components
After you log on, the NzAdmin tool main window appears. The NzAdmin tool has two main
environment views: System and Database. When you click either tab, the system displays
the tree view on the left side and the data view on the right side. You can switch between
these views at any time; however, the system defaults to the System view. When you start
the NzAdmin tool again, it defaults to your last environment selection.

Figure 3-4: NzAdmin Main System Window
In the main hardware view, NzAdmin displays an image of the Netezza system, which could
be one or more racks for Netezza z-series systems or one or more SPAs for IBM Netezza
100, 1000, C1000, or IBM PureData System for Analytics N1001 systems. As you move
the cursor over the image, NzAdmin displays information such as hardware IDs and other
details and the mouse cursor changes to the hyperlink hand. Clicking the image allows you
to drill down to more information about the component.
In the status bar at the bottom of the window, the NzAdmin tool displays your user name
and the duration of the NzAdmin session. If the host system is not in the online state, the
status bar displays the message “The host is not online.”
You can access commands through the menu bar, the toolbar, or by right-clicking objects.

Interpreting the Color Status Indicators
Each component has a general status indicator based on a color. The color provides a simple indication of the state of the component.


Table 3-5 describes the colors and their meaning.

Table 3-5: Color Indicators

Status   Description

Green    Normal. The component is operating normally.

Yellow   Warning. The meaning depends on the specific component(s).

Red      Failed. The component is down or failed. It can also indicate that a
         component is likely to fail, as is the case if two fans on the same
         SPA are down.

Empty    Missing. A component is missing and no state is available.

Main Menu Commands
Table 3-6 lists the commands you can execute from the main menu.

Table 3-6: Main Menu Commands

Command                              Description

File > New                           Allows you to create a database, table, view, materialized
                                     view, sequence, synonym, user, or group. Available only in
                                     the Database tab.

File > System State                  Allows you to change the system state.

File > Reconnect                     Reconnects to the Netezza system with a different
                                     hostname, address, or user name.

File > Exit                          Exits the application.

View > Toolbar                       Shows/hides the application toolbar.

View > Status Bar                    Shows/hides the status bar.

View > System Objects                Shows/hides system tables and views; applies to object
                                     privilege lists in the Object Privileges window.

View > SQL Statements                Displays the SQL Window that shows a subset of the SQL
                                     commands NzAdmin has used in this session.

View > Refresh F5                    Refreshes the entire view. What is refreshed depends on
                                     whether you are viewing the System or Database section of
                                     the NzAdmin tool.

Tools > Workload Management          Performance: displays Summary, History, and Graph
                                     Workload Management information.
                                     Settings: displays the System Defaults that you can use to
                                     set the limits on session timeout, row set, query timeout,
                                     and session priority, and the Resource Allocation that you
                                     can use to specify resource usage among groups.

Tools > Table Skew                   Displays any tables that meet or exceed a specified skew
                                     threshold.

Tools > Table Storage                Displays table and materialized view storage usage by
                                     database or by user.

Tools > Query History Configuration  Displays a window that you can use to create and alter
                                     query history configurations, as well as to set the current
                                     configuration.

Tools > Default Settings             Displays the materialized view refresh threshold.

Tools > Options                      Displays the Preferences dialog box that you can use to set
                                     the object naming preference and whether you want auto
                                     refresh.

Help > NzAdmin Help                  Displays the online help for the NzAdmin tool.

Help > About NzAdmin                 Displays the NzAdmin and Netezza revision numbers and
                                     copyright text.

Using the NzAdmin Tool Hyperlinks
Although you can view the system components by navigating the tree control, you can also navigate to the major hardware components through hyperlinks embedded within the component images in the right pane. If you hover your mouse pointer over the system images, NzAdmin displays information about the components, and the pointer changes to a “link” hand icon to show that the image has hyperlinks, as shown in Figure 3-5.


Figure 3-5: NzAdmin Hyperlink Support
As you move the cursor over the SPA image, the NzAdmin tool displays the slot number,
hardware ID, role, and state of each SPU, and the mouse cursor changes to the hyperlink
hand. Clicking the SPU displays the SPU status window and positions the tree control to
the corresponding entry.

Administration Commands
You can access system and database administration commands from both the tree view and the status pane of NzAdmin. In either case, a popup or context menu supports the commands related to the components displayed.

• To activate a pop-up context menu, right-click a component in a list.

• The Options hyperlink menu is located in the top bar of the window.

Setting Automatic Refresh
The NzAdmin tool can automatically refresh system and database status. You can also
manually refresh the current environment by clicking the refresh icon on the tool bar, or by
choosing refresh from individual option or context menus.
Complete the following steps to set auto refresh:
1. Click Tools > Options from the main menu.
2. In the Preferences dialog box, enable automatic refresh and specify a refresh interval.
The default is 60 seconds. You can specify any time between 60 and 9999 seconds.


Figure 3-6: Preferences Dialog Box
If you enable auto refresh, the NzAdmin tool displays a refresh icon in the right corner of
the status bar. The system stores the refresh state and time interval, and maintains this
information across NzAdmin sessions. Therefore, if you set automatic refresh, it remains in
effect until you change it.
To reduce communication with the server, the NzAdmin tool refreshes data based on the
item you select in the left pane. Table 3-7 lists the items and corresponding data retrieved
on refresh.
Table 3-7: Automatic Refresh

Selected Item                        Data Retrieved

Server (system view)                 All topology and hardware state information.
SPA Units
SPA ID n
SPU units

Event rules                          Event rules.

Individual statistic such as         Specific statistic information.
DBMS Group

Server (database view)               All databases and their associated objects, users,
                                     groups, and session information.

Databases                            All database information and associated objects.

Database <name>                      Specific database, table, and view information.

Tables                               Table information.

Views                                View information.

Sequences                            Sequences information.

Synonyms                             Synonyms information.

Functions                            User-defined functions information.

Aggregates                           User-defined aggregates information.

Procedures                           Stored procedure information.

Users                                User information.

Groups                               Group information.

Sessions                             Session information.

If the NzAdmin tool is already communicating with the backend server, such as processing
a user command or performing a manual refresh, it does not execute an auto refresh.

Controlling NzAdmin Session Termination
When you stop the Netezza Server or a network error occurs, the NzAdmin tool displays an
error message and allows you to reconnect or exit the session.

Figure 3-7: Connection Error Window


• If you click Reconnect, the NzAdmin tool attempts to establish a connection to the server.

• If you click Exit, NzAdmin terminates your session.

Web Admin Overview
The Netezza Web Admin software package lets you monitor and administer a Netezza system from supported web browsers on client systems. Web Admin supports the following
browser applications:


• Internet Explorer 7 and later versions

• Firefox 3 and later versions

No software is required on the client systems other than the web browser application. The
Web Admin package consists of web server software and the web page files, which comprise the Web Admin interface.


You can install the Web Admin server package on the Netezza host system, or on any Linux
system that can connect to the Netezza system. The Linux system should run an operating
system version that matches the Web Admin installation package.
Using the Web Admin interface you can do the following:

• Display the status of Netezza hardware, user and system sessions, data storage usage, databases, tables, views, sequences, synonyms, functions, aggregates, stored procedures, active queries and query history, and users and groups.
  Note: The query history information accessible from the Web Admin interface uses the _v_qryhist and _v_qrystat views for backward compatibility. These views will be deprecated in the future. For details on the new query history feature, see Chapter 11, “Query History Collection and Reporting.”

• Create databases, views, sequences, synonyms, users, and groups.

• Assign access privileges to users and groups, control group membership, manage default/user/group settings, and rename objects or change their ownership.

• Generate reports on table properties, workload management, statistics, and record distribution.

This chapter provides an overview of the Netezza Web Admin interface. For additional details, see the online help.
Note: The Web Admin accepts delimited (quoted) user names in the login dialog.

Using the Web Admin Application
You connect to the Web Admin server by pointing your web browser at the main HTML login page, admin.html, at the appropriate server name (for example: https://server_name/admin.html).
The IBM Netezza Web Admin Interface page is a login page where you enter the name or IP
address of a Netezza server along with a valid Netezza database user name and password.
By default, Netezza uses the Secure Sockets Layer (SSL) protocol to ensure that passwords are encrypted when they travel from client to server. To connect using SSL, the web address must begin with the “https://” prefix. If the Web Admin interface does not use SSL, you can use the “http://” prefix.
Netezza provides a security certificate on the server that the client browser downloads and
flags. The web browser detects the Netezza certificate, and users should permanently
install it in the browser’s local storage. After the certificate is installed, users can connect
to the secure address without further interaction. For more information on how to permanently install the Netezza site certificate, see “Installing the Web Admin Server and
Application Files” on page 2-8.

Understanding the Web Admin Page Layout
All the Web Admin pages except for the login page are divided into three main sections:

• A navigation pane along the left side.

• A status area at the top.

• A general information area that fills the remainder of the page.


Navigation Pane
The navigation pane is on the left side of the page and contains the main list of site links. This pane is fixed and, with a few exceptions, is present on all pages within the site. Most links are grouped within system and database commands.

Figure 3-8: Navigation Pane

Status Pane
The status pane is at the top of the page, and contains database status and system state,
time of last status update, host revision number, hostname or address, and user name and
authentication setting.

Figure 3-9: Status Pane
The status area also includes a search box, which you can use to search through system
tables. Depending on the search string you enter, the system finds the following items:


• If the search string is numeric, the system searches for hardware identifiers or IP addresses, such as a SPU or SPA.

• If the search string is alphanumeric, the system searches for databases, tables, views, sequences, synonyms, functions, aggregates, procedures, and user or group names. The alphanumeric search uses the SQL ‘like’ operator; therefore, you can augment the search string with SQL pattern characters. For example, the search string ‘cust%’ finds all occurrences of the customer table throughout all the databases in the system.


System Summary Page
The System Summary is the interface’s home page. It is the first page you see after logging in to a Netezza system. It provides a summary view of the system that consolidates session information, hardware, disk usage, and database status and key activity. The colored text on the page links to additional detail or status.

Figure 3-10: System Summary Page

Drilldown Links
The Web Admin interface lets you drill down for more detailed information on system, hardware, and database objects. Many pages contain drilldown links, in text or graphical form.
For example:


• In the Hardware View page, you can click on the rack image to drill down to a specific SPA.

• In the SPA Status page, you can click on a SPU within the SPA image to drill down to detailed information on a SPU.

• In the Table List page, you can click on a table name to drill down to table properties.

Action Buttons
At the top of many Web Admin pages, there are action links that provide additional navigation based on the current page’s content. For example, from the Table Properties page you
can select to view the table record distribution or statistics, or truncate or drop the table.

Online Help
The Web Admin interface provides you with two types of help:


• Task-oriented help — Available when you click Help Contents in the navigation pane.

• Context-sensitive help — Available when you click the question icon on each page.

Connecting to Systems Running Earlier Software Releases
If you connect to Netezza systems running an earlier software release, some commands may fail because of inadequate permissions or because they request status information that is not available in the earlier release.
The system displays the following error messages:

• “Information for this command is not available.”

• “You either do not have the proper access privileges or the host software is not compatible with this version of WebAdmin.”


CHAPTER 4
Managing Netezza HA Systems
What’s in this chapter
• Linux-HA and DRBD Overview
• Differences with the Previous Netezza HA Solution
• Linux-HA Administration
• DRBD Administration
• Administration Reference and Troubleshooting

The Netezza high availability (HA) solution uses Linux-HA and Distributed Replicated
Block Device (DRBD) as the foundation for cluster management and data mirroring. The
Linux-HA and DRBD applications are commonly used, established, open source projects for
creating HA clusters in various environments. They are supported by a large and active
community for improvements and fixes, and they also give Netezza the flexibility to add corrections or improvements more quickly, without waiting for updates from third-party vendors.
The IBM Netezza 1000, C1000, IBM PureData System for Analytics N1001, and NEC
InfoFrame DWH Appliances are HA systems, which means that they have two host servers
for managing Netezza operations. The host server (often referred to as host within the documentation) is a Linux server that runs the Netezza software and utilities. This chapter
describes some high-level concepts and basic administration tasks for the Netezza HA
environment.

Linux-HA and DRBD Overview
High-Availability Linux (also referred to as Linux-HA) provides the failover capabilities from
a primary or active Netezza host to a secondary or standby Netezza host. The main cluster
management daemon in the Linux-HA solution is called Heartbeat. Heartbeat watches the
hosts and manages the communication and status checks of services. Each service is a
resource. Netezza groups the Netezza-specific services into the nps resource group. When
Heartbeat detects problems that imply a host failure condition or loss of service to the
Netezza users, Heartbeat can initiate a failover to the standby host. For details about Linux-HA and its terms and operations, see the documentation at http://www.linux-ha.org.
Distributed Replicated Block Device (DRBD) is a block device driver that mirrors the content of block devices (hard disks, partitions, logical volumes, and so on) between the hosts.
Netezza uses the DRBD replication only on the /nz and /export/home partitions. As new


data is written to the /nz partition and the /export/home partition on the primary host, the
DRBD software automatically makes the same changes to the /nz and /export/home partition of the standby host.
The Netezza implementation uses DRBD in a synchronous mode, which is a tightly coupled
mirroring system. When a block is written, the active host does not record the write as complete until both the active and the standby hosts successfully write the block. The active
host must receive an acknowledgement from the standby host that it also has completed
the write. Synchronous mirroring (DRBD protocol C) is most often used in HA environments
that want the highest possible assurance of no lost transactions should the active node fail
over to the standby node. Heartbeat typically controls the DRBD services, but commands
are available to manually manage the services.
For details about DRBD and its terms and operations, see the documentation available at
http://www.drbd.org.

Differences with the Previous Netezza HA Solution
In prior releases, the Netezza HA solution leveraged Red Hat Cluster Manager as the foundation for managing HA host systems. The Linux-HA solution uses different commands to
manage the cluster. Table 4-1 outlines the common tasks and the commands used in each
HA environment.
Table 4-1: HA Tasks and Commands (Old Design and New Design)

Task                      Old Command (Cluster Manager)      New Command (Linux-HA)

Display cluster status    clustat -i 5                       crm_mon -i5

Relocate NPS service      cluadmin -- service relocate nps   /nzlocal/scripts/heartbeat_admin.sh --migrate

Enable the NPS service    cluadmin -- service enable nps     crm_resource -r nps -p target_role -v started

Disable the NPS service   cluadmin -- service disable nps    crm_resource -r nps -p target_role -v stopped

Start the cluster on      service cluster start              service heartbeat start
each node

Stop the cluster on       service cluster stop               service heartbeat stop
each node

Some additional points of difference between the solutions:

• All Linux-HA and DRBD logging information is written to /var/log/messages on each host. For more information about the log files, see “Logging and Messages” on page 4-13.

• In the new cluster environment, pingd has replaced netchecker (the Network Failure Daemon). pingd is a built-in part of the Linux-HA suite.

• The cluster manager HA solution also required a storage array (the MSA500) as a quorum disk to hold the shared data. A storage array is not used in the new Linux-HA/DRBD solution, as DRBD automatically mirrors the data in the /nz and /export/home partitions from the primary host to the secondary host.
  Note: The /nzdata and /shrres file systems on the MSA500 are deprecated.




• In some customer environments that used the previous cluster manager solution, it was possible to have only the active host running while the secondary was powered off. If problems occurred on the active host, the Netezza administrator onsite would power off the active host and power on the standby. In the new Linux-HA/DRBD solution, both HA hosts must be operational at all times. DRBD ensures that the data saved on both hosts is synchronized, and when Heartbeat detects problems on the active host, the software automatically fails over to the standby with no manual intervention.

Linux-HA Administration
When you start a Netezza HA system, Heartbeat automatically starts on both hosts. It can
take a few minutes for Heartbeat to start all the members of the nps resource group. You
can use the crm_mon command from either host to observe the status, as described in
“Monitoring the Cluster and Resource Group Status” on page 4-6.

Heartbeat Configuration
Heartbeat first loads its configuration from the /etc/ha.d/ha.cf file. The file contains low-level information about fencing mechanisms, timing parameters, and whether the configuration is v1 (old-style) or v2 (CIB). Netezza uses the v2 implementation.
Do not modify the file unless directed to do so by Netezza documentation or by Netezza Support.

CIB
The majority of the Heartbeat configuration is stored in the Cluster Information Base (CIB).
The CIB is located on disk at /var/lib/heartbeat/crm/cib.xml. Heartbeat synchronizes it automatically between the two Netezza hosts.
NEVER manually edit the CIB file! You must use cibadmin (or crm_resource) to modify the
Heartbeat configuration. Wrapper scripts like heartbeat_admin.sh will update the file in a
safe way.

Note: It is possible to get into a situation where Heartbeat will not start properly due to a
manual CIB modification—although the CIB cannot be safely modified without Heartbeat
being started (that is, cibadmin cannot run). In this situation, you can run /nzlocal/scripts/
heartbeat_config.sh to reset the CIB and /etc/ha.d/ha.cf to factory-default status. After
doing this, it is necessary to run /nzlocal/scripts/heartbeat_admin.sh --enable-nps to complete the CIB configuration.
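For example, to take a read-only snapshot of the live CIB for inspection, you can query it as root (an illustrative command; it does not modify the configuration):

cibadmin -Q > /tmp/cib-snapshot.xml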

Important Information about Host 1 and Host 2
In the Red Hat cluster manager implementation, the HA hosts were commonly referred to
as HA1 and HA2. The terms stemmed from the hardware and rack configurations as HA
systems were typically multi-rack systems, and HA1 was located in the “first” rack (usually
the leftmost rack from the front), while HA2 was in the “second” rack of the HA system.
Either HA1 or HA2 could serve as the active or standby host, although HA1 was most often


the “default” active host and so HA1 is often synonymous with the active host. The names
HA1 and HA2 are still used to refer to the host servers regardless of their active/standby
role.
In IBM Netezza HA system designs, host1/HA1 is configured by default to be the active
host. You can run cluster management commands from either the active or the standby
host. The nz* commands must be run on the active host, but the commands run the same
regardless of whether host 1 or host 2 is the active host. The Netezza software operation is
not affected by the host that it runs on; the operation is identical when either host 1 or host
2 is the active host.
However, when host 1 is the active host, certain system-level operations such as S-Blade
restarts and reboots often complete more quickly than when host 2/HA2 is the active host.
An S-Blade reboot can take one to two minutes longer to complete when host 2 is the
active host. Certain tasks such as manufacturing and system configuration scripts can
require host 1 to be the active host, and they will display an error if run on host 2 as the
active host. The documentation for these commands indicates whether they require host 1
to be the active host, or if special steps are required when host 2 is the active host.

Managing Failover Timers
There are several failover timers that monitor Heartbeat operations and timings. The default settings were chosen to cover the general range of Netezza system implementations. Although Netezza has not encountered frequent need to change these values, each customer environment is unique. Failover timers should not be changed without consulting Netezza Support.
The failover timers are configured in /etc/ha.d/ha.cf.

• Deadtime – specifies the failure detection time (default: 30 seconds). For a busy Netezza system in a heavily loaded environment, you might need to increase this value if you observe frequent “No local heartbeat” errors or “Cluster node returning after partition” errors in the /var/log/messages file.

• Warntime – specifies the warning for late heartbeat (default: 10 seconds).

• Keepalive – specifies the interval between liveness pings (default: 2 seconds).

You can change the settings by editing the values in ha.cf on both hosts and restarting Heartbeat, but use care when editing the file.
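For reference, the corresponding ha.cf directives with their default values look roughly like this (an illustrative fragment, not a complete file):

# /etc/ha.d/ha.cf (fragment)
keepalive 2    # interval between liveness pings, in seconds
warntime 10    # warn about a late heartbeat after 10 seconds
deadtime 30    # declare a host failed after 30 seconds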

Netezza Cluster Management Scripts
Netezza provides wrapper scripts for many of the common cluster management tasks.
These wrapper scripts help to simplify the operations and to guard against accidental configuration changes that could cause the Netezza HA operations to fail.
Note: Table 4-2 lists the common commands. These commands are listed here for reference, but they are described in detail in the IBM Netezza System Configuration Guide for your model type. Refer to that guide if you need to perform any of these procedures.
Table 4-2: Cluster Management Scripts

Type                        Scripts

Initial installation        heartbeat_config.sh: Sets up Heartbeat for the first time
scripts                     heartbeat_admin.sh --enable-nps: Adds Netezza services to
                            cluster control after initial installation

Hostname change             heartbeat_admin.sh --change-hostname

Fabric IP change            heartbeat_admin.sh --change-fabric-ip

Wall IP change              heartbeat_admin.sh --change-wall-ip

Manual migrate (relocate)   heartbeat_admin.sh --migrate

Linux-HA status and         crm_mon: Monitor cluster status
troubleshooting commands    crm_verify: Sanity check configuration, and print status

Note: The following is a list of other Linux-HA commands available. This list is also provided as a reference, but it is highly recommended that you do not use any of these
commands unless directed to by Netezza documentation or by Netezza Support.
Linux-HA configuration commands:


• cibadmin: Main interface to modify configuration

• crm_resource: Shortcut interface for modifying configuration

• crm_attribute: Shortcut interface for modifying configuration

• crm_diff: Diff and patch two different CIBs

Linux-HA administration commands:

• crmadmin: Low-level query and control

• crm_failcount: Query and reset failcount

• crm_standby: Mark a node as standby, usually for maintenance

Identifying the Active and Standby Nodes
There are two ways to determine which Netezza host is the active host and which is the standby:

• Use the crm_resource command.

• Review the output of the crm_mon command.


A sample crm_resource command and its output follow.
[root@nzhost1 ~]# crm_resource -r nps -W
crm_resource[5377]: 2009/01/31_10:13:12 info: Invoked: crm_resource -r
nps -W
resource nps is running on: nzhost1

The command output displays a message about how it was invoked, and then displays the
hostname where the nps resource group is running. This is the active host.
You can obtain more information about the state of the cluster and which host is active
using the crm_mon command. Refer to the sample output shown in the next section, “Monitoring the Cluster and Resource Group Status” on page 4-6.
Note: If the nps resource group is unable to start, or if it has been manually stopped (such
as by crm_resource -r nps -p target_role -v stopped), neither host is considered to be active.
If this is the case, crm_resource -r nps -W will not return a hostname.

Monitoring the Cluster and Resource Group Status
To check the state of the cluster and the nps resource group:
crm_mon -i5

Sample output follows. This command refreshes its display every five seconds, but you can
specify a different refresh rate (for example, -i10 is a ten-second refresh rate). Press Control-C to exit the command.
[root@nzhost1 ~]# crm_mon -i5
============
Last updated: Wed Sep 30 13:42:39 2009
Current DC: nzhost1 (key)
2 Nodes configured.
3 Resources configured.
============
Node: nzhost1 (key): online
Node: nzhost2 (key): online
Resource Group: nps
    drbd_exphome_device   (heartbeat:drbddisk):         Started nzhost1
    drbd_nz_device        (heartbeat:drbddisk):         Started nzhost1
    exphome_filesystem    (heartbeat::ocf:Filesystem):  Started nzhost1
    nz_filesystem         (heartbeat::ocf:Filesystem):  Started nzhost1
    fabric_ip             (heartbeat::ocf:IPaddr):      Started nzhost1
    wall_ip               (heartbeat::ocf:IPaddr):      Started nzhost1
    nz_dnsmasq            (lsb:nz_dnsmasq):             Started nzhost1
    mantravm              (lsb:mantravm):               Started nzhost1
    nzinit                (lsb:nzinit):                 Started nzhost1
fencing_route_to_ha1      (stonith:apcmaster):          Started nzhost2
fencing_route_to_ha2      (stonith:apcmaster):          Started nzhost1

The host running the nps resource group is considered the active host. Every member of the
nps resource group will start on the same host. The output above shows that they are all
running on nzhost1, which means that nzhost1 is the active host.
Note: If the nps resource group is unable to start, or if it has been manually stopped (such
as by crm_resource -r nps -p target_role -v stopped), neither host is considered to be active.
If this is the case, crm_mon will either show individual resources in the nps group as
stopped, or it will not show the nps resource group at all.


Although the crm_mon output shows that the MantraVM service is started, this is a general status for Heartbeat monitoring. For details on the MantraVM status, use the service mantravm status command, which is described in “Displaying the Status of the MantraVM Service” on page 14-4.
Note: The crm_mon output also shows the name of the Current DC. The Designated Coordinator (DC) host is not an indication of the active host. The DC is an automatically assigned
role that Linux-HA uses to identify a node that acts as a coordinator when the cluster is in
a healthy state. This is a Linux-HA implementation detail and does not impact Netezza.
Each host is capable of recognizing and recovering from failure, regardless of which one is
the DC. For more information about the DC and Linux-HA implementation details, see
http://www.linux-ha.org/DesignatedCoordinator.
The resources under the nps resource group are as follows:

• The DRBD devices:
  drbd_exphome_device   (heartbeat:drbddisk):         Started nzhost1
  drbd_nz_device        (heartbeat:drbddisk):         Started nzhost1

• Both filesystem mounts:
  exphome_filesystem    (heartbeat::ocf:Filesystem):  Started nzhost1
  nz_filesystem         (heartbeat::ocf:Filesystem):  Started nzhost1

• The 10.0.0.1 IP setup on the fabric interface:
  fabric_ip             (heartbeat::ocf:IPaddr):      Started nzhost1

• The floating wall IP (external IP for HA1 + 3):
  wall_ip               (heartbeat::ocf:IPaddr):      Started nzhost1

• The DNS daemon for Netezza:
  nz_dnsmasq            (lsb:nz_dnsmasq):             Started nzhost1

• The MantraVM service:
  mantravm              (lsb:mantravm):               Started nzhost1

• The Netezza daemon which performs necessary prerequisite work and then starts the Netezza software:
  nzinit                (lsb:nzinit):                 Started nzhost1

The fence routes for internal Heartbeat use are not part of the nps resource group. If these services are started, it means that failovers are possible:

fencing_route_to_ha1    (stonith:apcmaster):          Started nzhost2
fencing_route_to_ha2    (stonith:apcmaster):          Started nzhost1

nps Resource Group
The nps resource group contains the following services or resources:

• drbd_exphome_device

• drbd_nz_device

• exphome_filesystem

• nz_filesystem

• fabric_ip

• wall_ip

• nz_dnsmasq

• mantravm

• nzinit

The order of the members of the group matters; group members are started sequentially
from first to last. They are stopped sequentially in reverse order, from last to first. Heartbeat blocks on each member’s startup and will not attempt to start the next group member
until the previous member has started successfully. If any member of the resource group is
unable to start (returns an error or times out), Heartbeat performs a failover to the standby
node.
Note: The mantravm resource is not a blocking resource; that is, if the MantraVM service
does not start when the nps resource group is starting, the nps resource group does not wait
for the MantraVM to start.

Failover Criteria
During a failover or resource migration, the nps resource group is stopped on the active
host and started on the standby host. The standby host then becomes the active host.
It is important to differentiate between a resource failover and a resource migration (or relocation). A failover is an automated event which is performed by the cluster manager
without human intervention when it detects a failure case. A resource migration occurs
when an administrator intentionally moves the resources to the standby.
A failover can be triggered by any of the following events:

• BOTH maintenance network links to the active host are lost.

• ALL fabric network links to the active host are lost.

• A user manually stops Heartbeat on the active host.

• The active host is cleanly shut down, such as if someone issued the command shutdown -h on that host.

• The active host is uncleanly shut down, such as during a power failure to the system (both power supplies fail).

• If any member of the nps resource group cannot start properly when the resource group is initially started.

• If any one of the following members of the nps resource group fails after the resource group was successfully started:
  • drbd_exphome_device or drbd_nz_device: These correspond to low-level DRBD devices that serve the shared filesystems. If these devices fail, the shared data would not be accessible on that host.
  • exphome_filesystem or nz_filesystem: These are the actual mounts for the DRBD devices.
  • nz_dnsmasq: The DNS daemon for the Netezza system.

Note: If any of these resource group members experiences a failure, Heartbeat first tries to restart or repair the process locally. The failover is triggered only if that repair or restart process does not work. Other resources in the group not listed above are not monitored for failover detection.
The following common situations DO NOT trigger a failover:

• Any of the failover criteria occurring on the STANDBY host while the active host is healthy.
  Note: Heartbeat may decide to fence (forcibly power cycle) the standby host when it detects certain failures to try to restore the standby host to a state of good health.

• A single maintenance network link to the active host is lost.

• Losing some (but not all) of the fabric network links to the active host.

• Network connectivity from the Netezza host (either active or standby) to the customer's network is lost.

• One or both network connections serving the DRBD network fail.

• The MantraVM service fails. If the MantraVM service should fail for any reason, it will not cause a failover of the nps resource group to the standby host.

Relocate to the Standby Node
The following commands can be used to manually relocate the nps resource group from the
active Netezza node to the standby node. At the conclusion of this process, the standby
node becomes the active node and the previous active node becomes the standby.
Note: In the previous Netezza Cluster Manager solution, HA1 is the name of the primary
node, and HA2 the secondary node. In Linux-HA/DRBD, either host could be primary; thus,
these procedures refer to one host as the active host and one as the standby host.
To relocate the nps resource group from the active host to the standby host:
[root@nzhost1 ~]# /nzlocal/scripts/heartbeat_admin.sh --migrate
Testing DRBD communication channel...Done.
Checking DRBD state...Done.
Migrating the NPS resource group from NZHOST1 to
NZHOST2................................................Complete.
20100112_084039 INFO : Run crm_mon to check NPS' initialization
status.

The command blocks until the nps resource group stops completely. To monitor the status,
use the crm_mon -i5 command. You can run the command on either host, although on the
active host you need to run it from a different terminal window.

Safe Manual Control of the Hosts (And Heartbeat)
In general, you should never have to stop Heartbeat unless the Netezza HA system requires
hardware or software maintenance or troubleshooting. During these times, it is important
that you control Heartbeat to ensure that it does not interfere with your work by taking STONITH actions to regain control of the hosts. The recommended practice is to shut down
Heartbeat completely for service.
To shut down the nps resource group and Heartbeat:
1. Identify which node is the active node using the following command:


[root@nzhost1 ~]# crm_resource -r nps -W
resource nps is running on: nzhost1

2. Stop Heartbeat on the standby Netezza host:
[root@nzhost2 ~]# service heartbeat stop
Stopping High-Availability services:
[ OK ]

This command blocks until it completes successfully. It is important to wait and let the
command complete. You can check /var/log/messages for status messages, or you can
monitor progress on a separate terminal session using either of the following commands:
tail -f /var/log/messages
crm_mon -i5
3. Stop Heartbeat on the active Netezza host:
[root@nzhost1 ~]# service heartbeat stop
Stopping High-Availability services:
[ OK ]

In some rare cases, Heartbeat cannot be stopped using this process. In these cases you can force Heartbeat to stop as described in “Forcing Heartbeat to Shutdown” on page 4-17.

Transition to Maintenance (Non-Heartbeat) Mode
To enter into maintenance mode:
1. While logged in to either host as root, display the name of the active node:
[root@nzhost1 ~]# crm_resource -r nps -W
resource nps is running on: nzhost1

2. As root, stop Heartbeat on the standby node (nzhost2 in this example):
[root@nzhost2 ~]# service heartbeat stop

3. As root, stop Heartbeat on the active node:
[root@nzhost1 ~]# service heartbeat stop

4. As root, make sure that there are no open nz sessions or any open files in the /nz and/or /export/home shared directories. For details, see “Checking for User Sessions and Activity” on page 4-19.
[root@nzhost1 ~]# lsof /nz /export/home

5. Run the following script in /nzlocal/scripts to make the Netezza system ready for nonclustered operations. The command prompts you for a confirmation to continue, shown
as Enter in the output.
[root@nzhost1 ~]# /nzlocal/scripts/nz.non-heartbeat.sh
---------------------------------------------------------------
Thu Jan 7 15:13:27 EST 2010
File systems and eth2 on this host are okay. Going on.
File systems and eth2 on other host are okay. Going on.
This script will configure Host 1 or 2 to own the shared disks and
own the fabric.
When complete, this script will have:
    mounted /export/home and /nz
    aliased 10.0.0.1 on eth2
    run the rackenable script appropriate for this host
      based on the last octet of eth2
      being 2 for rack 1 or 3 for rack 2
To proceed, please hit enter. Otherwise, abort this. Enter
Okay, we are proceeding.
Thu Jan 7 15:13:29 EST 2010
Filesystem    1K-blocks       Used  Available Use% Mounted on
/dev/sda6      16253924     935980   14478952   7% /
/dev/sda10      8123168     435272    7268604   6% /tmp
/dev/sda9       8123168     998808    6705068  13% /usr
/dev/sda8       8123168     211916    7491960   3% /var
/dev/sda7       8123168     500392    7203484   7% /opt
/dev/sda3     312925264     535788  296237324   1% /nzscratch
/dev/sda1       1019208      40192     926408   5% /boot
none            8704000       2228    8701772   1% /dev/shm
/dev/sda12      4061540      73940    3777956   2% /usr/local
/dev/drbd0     16387068     175972   15378660   2% /export/home
/dev/drbd1    309510044    5447740  288340020   2% /nz
Done mounting file systems
eth2:0    Link encap:Ethernet  HWaddr 00:07:43:05:8E:26
          inet addr:10.0.0.1  Bcast:10.0.15.255  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          Interrupt:122 Memory:c1fff000-c1ffffff
Done enabling IP alias
Running nz_dnsmasq:                                       [  OK  ]
nz_dnsmasq started.
Ready to use NPS in non-cluster environment

6. As the nz user, start the Netezza software:
[nz@nzhost1 ~] nzstart

Transitioning from Maintenance to Clustering Mode
To reinstate the cluster from a maintenance mode:
1. Stop the Netezza software using the nzstop command.
2. Make sure Heartbeat is not running on either node. Use the service heartbeat stop
command to stop the Heartbeat on either host if it is running.
3. Make sure that there are no nz user login sessions, and make sure that no users are in
the /nz or /export/home directories; otherwise, the nz.heartbeat.sh command will not be
able to unmount the DRBD partitions. For details, see “Checking for User Sessions and
Activity” on page 4-19.
4. Run the following script in /nzlocal/scripts to make the Netezza system ready for clustered operations. The command prompts you for a confirmation to continue, shown as
Enter in the output.


[root@nzhost1 ~]# /nzlocal/scripts/nz.heartbeat.sh
---------------------------------------------------------------
Thu Jan 7 15:14:32 EST 2010
This script will configure Host 1 or 2 to run in a cluster
When complete, this script will have:
    unmounted /export/home and /nz
    Disabling IP alias 10.0.0.1 from eth2
To proceed, please hit enter. Otherwise, abort this. Enter
Okay, we are proceeding.
Thu Jan 7 15:14:33 EST 2010
Filesystem    1K-blocks       Used  Available Use% Mounted on
/dev/sda6      16253924     935980   14478952   7% /
/dev/sda10      8123168     435272    7268604   6% /tmp
/dev/sda9       8123168     998808    6705068  13% /usr
/dev/sda8       8123168     211928    7491948   3% /var
/dev/sda7       8123168     500544    7203332   7% /opt
/dev/sda3     312925264     535788  296237324   1% /nzscratch
/dev/sda1       1019208      40192     926408   5% /boot
none            8704000       2228    8701772   1% /dev/shm
/dev/sda12      4061540      73940    3777956   2% /usr/local
Done unmounting file systems
eth2:0    Link encap:Ethernet  HWaddr 00:07:43:05:8E:26
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          Interrupt:122 Memory:c1fff000-c1ffffff
Done disabling IP alias
Shutting down dnsmasq:                                    [  OK  ]
nz_dnsmasq stopped.
Ready to use NPS in a cluster environment

Note: If the command reports errors that it is unable to unmount /nz or /export/home, you must manually make sure that both partitions are mounted before running the command again; the script may have unmounted one of the partitions even if it failed. Otherwise, the script may not run.
5. As root, start the cluster on the first node, which will become the active node:
[root@nzhost1 ~] service heartbeat start
Starting High-Availability services:                      [  OK  ]

6. As root, start the cluster on the second node, which will become the standby node:
[root@nzhost2 ~] service heartbeat start
Starting High-Availability services:                      [  OK  ]

Cluster Manager Events
You can configure the Cluster Manager to send events when a failover is caused by any of
the following:

• Node shutdown

• Node reboot

• Node fencing actions (STONITH actions)

To configure the Cluster Manager:
1. Log into the active host as the root user.
2. Using a text editor, edit the /nzlocal/maillist file as follows. Add the TO and FROM address lines, as shown in the following example.
#
# Email notification list for the cluster manager problems
#
# Enter email addresses of mail recipients under the TO entry, one to a line
#
# Enter email address of from email address (if a non-default is desired)
# under the FROM entry
#
TO:
admin1@yourcompany.com
admin2@yourcompany.com
FROM:
NPS001ClusterManager@yourcompany.com

Note: For the “TO” email addresses, specify one or more email addresses for the users
who wish to receive email about cluster manager events. For the “FROM” email
address, specify the email address that you want to use as the sender of the event
email.
3. Save and close the maillist file.
4. Log in as root to the standby host and repeat steps 2 and 3 on the standby host.
Note: The /nzlocal/maillist files should be identical on both hosts in the cluster.
5. After you configure the maillist files, test the event mail by shutting down or rebooting
either host in the cluster. Your specified TO addresses should receive email about the
event.

Logging and Messages
All the logging information is stored in the /var/log/messages file on each host. The log file
on the active host typically contains more information, but messages can be written to the
log files on both hosts. Any event or change in status for Heartbeat is well-documented in
this log file. If something should go wrong, you can often find the explanation in this log
file. If you are working with Netezza Support to troubleshoot Linux-HA or DRBD issues, be
sure to send a copy of the log files from both Netezza hosts.

DRBD Administration
DRBD provides replicated storage of the data in managed partitions (that is, /nz and /export/home). When a write occurs to one of these locations, the write action is performed at both the local node and the peer standby node. Both perform the same write to keep the data in synchronization. The peer responds to the active node when finished, and if the local write operation is also successfully finished, the active node reports the write as complete.


Read operations are always performed by the local node.
The DRBD software can be started, stopped, and monitored using the following command
(as root):
/sbin/service drbd start/stop/status

While you can use the status command as needed, you should only stop and start the
DRBD processes during routine maintenance procedures or when directed by Netezza Support. As a best practice, do not stop the DRBD processes on a healthy, active Netezza HA
host to avoid the risk of split-brain. For more information, see “Split-Brain.”

Monitoring DRBD Status
You can monitor the DRBD status using one of two methods:


• service drbd status

• cat /proc/drbd

Sample output of the commands follows. These examples assume that you are running the
commands on the primary (active) Netezza host. If you run them from the standby host,
note that the output shows the secondary status first, then the primary.
[root@nzhost1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by root@nps22094, 2009-06-09 16:25:53
m:res  cs         st                 ds                 p  mounted       fstype
0:r1   Connected  Primary/Secondary  UpToDate/UpToDate  C  /export/home  ext3
1:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /nz           ext3

[root@nzhost1 ~]# cat /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by root@nps22094, 2009-06-09 16:25:53
 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
    ns:15068 nr:1032 dw:16100 dr:3529 al:22 bm:37 lo:0 pe:0 ua:0 ap:0 oos:0
 1: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
    ns:66084648 nr:130552 dw:66215200 dr:3052965 al:23975 bm:650 lo:0 pe:0 ua:0 ap:0 oos:0

In the sample output, note that the states of DRBD are one of the following:

• Primary/Secondary — the “healthy” state for DRBD. One device is Primary and one is Secondary.

• Secondary/Secondary — DRBD is in a holding pattern. This usually occurs at boot time or when the nps resource group is stopped.

• Primary/Unknown — One node is available and healthy, the other node is either down or the cable is not connected.

• Secondary/Unknown — This is a rare case where one node is in standby, the other is either down or the cable is not connected, and DRBD cannot declare a node as the primary/active node. If the other host also shows this status, the problem is most likely in the connection between the hosts. Contact Netezza Support for assistance in troubleshooting this case.

The common Connection State values include the following:

• Connected — the normal and operating state; the host is communicating with its peer.

• WFConnection — the host is waiting for its peer node connection; usually seen when the other node is rebooting.

• Standalone — the node is functioning alone due to a lack of network connection with its peer and will not try to reconnect. If the cluster is in this state, it means that data is not being replicated. Manual intervention is required to fix this problem.

The common State values include the following:

• Primary — the primary image; local on the active host.

• Secondary — the mirror image, which receives updates from the primary; local on the standby host.

• Unknown — always on the other host; the state of the image is unknown.

The common Disk State values include the following:

• UpToDate — the data on the image is current.

• DUnknown — an unknown data state; usually results from a broken connection.

Sample DRBD Status Output
The DRBD status prior to Heartbeat start:

m:res  cs         st                   ds                 p  mounted  fstype
0:r1   Connected  Secondary/Secondary  UpToDate/UpToDate  C
1:r0   Connected  Secondary/Secondary  UpToDate/UpToDate  C

The DRBD status when the current node is active and the standby node is down:

m:res  cs            st               ds                 p  mounted       fstype
0:r1   WFConnection  Primary/Unknown  UpToDate/DUnknown  C  /export/home  ext3
1:r0   WFConnection  Primary/Unknown  UpToDate/DUnknown  C  /nz           ext3

The DRBD status as displayed from the standby node:

m:res  cs         st                 ds                 p  mounted  fstype
0:r1   Connected  Secondary/Primary  UpToDate/UpToDate  C
1:r0   Connected  Secondary/Primary  UpToDate/UpToDate  C

Split-Brain
Split-brain is an error state that occurs when the images of data on each Netezza host are
different. It typically occurs when synchronization is disabled and users change data independently on each Netezza host. As a result, the two Netezza host images are different, and
it becomes difficult to resolve what the latest, correct image should be.
Split-brain does not occur if clustering is enabled. The fencing controls prevent users from
changing the replicated data on the standby node. It is highly recommended that you allow
DRBD management to be controlled by Heartbeat to avoid “split-brain” problems.

However, if a split-brain problem should occur, the following message appears in the /var/
log/messages file:

20282-20

Rev.1

4-15

IBM Netezza System Administrator’s Guide

Split-Brain detected, dropping connection!

While DRBD does have automatic correction processes to resolve split-brain situations, the
Netezza implementation disables the automatic correction. Manual intervention is
required, which is the best way to ensure that as many of the data changes are restored as
possible.
To detect and repair split-brain, work with Netezza Support to follow this procedure:
1. Look for “Split” in /var/log/messages, usually on the host that you are trying to make the primary/active host. Let DRBD detect this condition.
2. Because split-brain results from running both images as primary Netezza hosts without synchronization, check the Netezza logs on both hosts. For example, check the pg.log files on both hosts to see when/if updates have occurred. If there is an overlap in times, the two images contain different information.
3. Identify which host image, if either, is the correct image. In some cases, neither host image may be fully correct; choose the image that is the more correct. The host that has the image which you decide is correct is the “survivor”, and the other host is the “victim”.
4. Perform the following procedure:
Note: As a best practice, perform these steps for one resource at a time; that is, perform all the commands in steps a and b for r0, and then repeat them all for r1. There is an all option, but use it carefully; the individual resource commands usually work more effectively.
a. Log in to the victim host as root and run these commands, where resource can be r0, r1, or all:
drbdadm secondary resource
drbdadm disconnect resource
drbdadm -- --discard-my-data connect resource
b. Log in to the survivor host as root and run this command, where resource can be r0, r1, or all:
drbdadm connect resource
Note: The connect command may display an error that instructs you to run drbdadm disconnect first.
5. Check the status of the fix using drbdadm primary resource and the service drbd status command. Make sure that you run drbdadm secondary resource before you start Heartbeat.
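The following consolidated sketch illustrates the sequence for a single resource (r0 only); the victim# and survivor# prompts are hypothetical placeholders for the two hosts. Adapt the resource names to your system and perform these commands only with guidance from Netezza Support:

victim# drbdadm secondary r0                      # demote the image you decided to discard
victim# drbdadm disconnect r0                     # drop the replication link
victim# drbdadm -- --discard-my-data connect r0   # reconnect, discarding local changes
survivor# drbdadm connect r0                      # reconnect the surviving image
cat /proc/drbd                                    # on either host, verify that resynchronization starts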

Administration Reference and Troubleshooting
The following sections describe some common administration task references and troubleshooting steps.


IP Address Requirements
Table 4-3 is an example block of the eight IP addresses that are recommended for a customer to reserve for an HA system:

Table 4-3: HA IP Addresses

Entity               Sample IP Address
-------------------  -----------------
HA1                  172.16.103.209
HA1 Host Management  172.16.103.210
MantraVM Management  172.16.103.211
Floating IP          172.16.103.212
HA2                  172.16.103.213
HA2 Host Management  172.16.103.214
Reserved             172.16.103.215
Reserved             172.16.103.216

In the IP addressing scheme, note that there are two host IPs, two host management IPs, and the floating IP, which is HA1 + 3.

Forcing Heartbeat to Shutdown
There may be times when you try to stop Heartbeat using the normal process as described
in “Safe Manual Control of the Hosts (And Heartbeat)” on page 4-9, but Heartbeat does
not stop even after a few minutes’ wait. If you must stop Heartbeat, you can use the following command to force Heartbeat to stop itself:
crmadmin -K hostname

You need to run this command twice. Then, try to stop Heartbeat again using service heartbeat stop. Note that this process is not guaranteed to stop all of the resources that
Heartbeat manages, such as /nz mount, drbd devices, nzbootpd, and so on.
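For illustration, a sketch of the full sequence, assuming the host is named nzhost1 (a hypothetical name):

crmadmin -K nzhost1      # issue the forced-stop request
crmadmin -K nzhost1      # run the command a second time, as required
service heartbeat stop   # then retry the normal stop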

Shutting Down Heartbeat on Both Nodes without Causing Relocate
If you stop Heartbeat on the active node first, Linux-HA identifies this as a resource failure
and will initiate a failover to the standby node. To avoid this, always stop Heartbeat on the
standby first. After it has stopped completely, you can stop Heartbeat on the active node.
See “Safe Manual Control of the Hosts (And Heartbeat)” on page 4-9.
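A sketch of the safe shutdown order, assuming nzhost2 is the standby and nzhost1 is the active host (hypothetical names):

[root@nzhost2 ~]# service heartbeat stop   # stop the standby first; wait for it to stop completely
[root@nzhost1 ~]# service heartbeat stop   # then stop the active node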

Restarting Heartbeat after Maintenance Network Issues
If a host loses its maintenance network connection to the system devices, the Netezza HA
system will perform a fencing operation (STONITH) to stop the failed host. After the host
restarts, Heartbeat will fail to start on the reboot. After the maintenance network is
repaired, you must manually restart Heartbeat to resume normal cluster operations. To
restart Heartbeat on the recovered node, log in to that host as root and use the service
heartbeat start command.
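For example, assuming nzhost2 is the recovered host (a hypothetical name):

[root@nzhost2 ~]# service heartbeat start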


Resolving Configuration Problems
If you make a configuration change to the nps resource group or Heartbeat, and there are
problems following the change, you can often diagnose the problem from the status information of the crm_verify command:
crm_verify -LVVVV

You can specify one or more V characters. The more V’s that you specify, the more verbose
the output. Specify at least four or five V’s as a best practice, and increase the number as
needed. You can specify up to 12 V’s, but that large a number is not recommended.
Sample output follows:
[root@ nzhost1 ha.d]# crm_verify -LVVV
crm_verify[18488]: 2008/11/18_00:02:03 info: main: =#=#=#=#= Getting XML
=#=#=#=#=
crm_verify[18488]: 2008/11/18_00:02:03 info: main: Reading XML from: live
cluster
crm_verify[18488]: 2008/11/18_00:02:03 notice: main: Required feature set:
1.1
crm_verify[18488]: 2008/11/18_00:02:03 notice: cluster_option: Using
default value '60s' for cluster option 'cluster-delay'
crm_verify[18488]: 2008/11/18_00:02:03 notice: cluster_option: Using
default value '-1' for cluster option 'pe-error-series-max'
crm_verify[18488]: 2008/11/18_00:02:03 notice: cluster_option: Using
default value '-1' for cluster option 'pe-warn-series-max'
crm_verify[18488]: 2008/11/18_00:02:03 notice: cluster_option: Using
default value '-1' for cluster option 'pe-input-series-max'
crm_verify[18488]: 2008/11/18_00:02:03 notice: cluster_option: Using
default value 'true' for cluster option 'startup-fencing'
crm_verify[18488]: 2008/11/18_00:02:03 info: determine_online_status:
Node nzhost1 is online
crm_verify[18488]: 2008/11/18_00:02:03 info: determine_online_status:
Node nzhost2 is online

Fixed a Problem, but crm_mon Still Shows Failed Items
Heartbeat sometimes leaves error status on crm_mon output, even after an item is fixed. To
resolve this, use crm_resource in Cleanup Mode:
crm_resource -r name_of_resource -C -H hostname

For example, if the fencing route to ha1 is listed as failed on host1, use the following
command:
crm_resource -r fencing_route_to_ha1 -C -H host1

Output From crm_mon Does Not Show the nps Resource Group
If the log messages indicate that the nps resource group “cannot run anywhere”, the cause
is that Heartbeat tried to run the resource group on both HA1 and HA2, but it failed in both
cases. Search in /var/log/messages on each host to find this first failure. Search from the
bottom of the log for the message “cannot run anywhere” and then scan upward in the log
to find the service failures. You must fix the problem(s) that caused a service to fail to start
before you can successfully start the cluster.
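A hedged sketch of that log search, assuming the standard /var/log/messages location:

grep -n "cannot run anywhere" /var/log/messages | tail -1   # find the most recent occurrence
grep -B 40 "cannot run anywhere" /var/log/messages | less   # page through the preceding lines to spot the failed service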
After you fix the failure case, you must restart Heartbeat following the instructions in “Transitioning from Maintenance to Clustering Mode” on page 4-11.


Linux Users and Groups Required for HA
To operate properly, Heartbeat requires the following Linux user and groups, which are added automatically to each of the Netezza hosts during the Heartbeat RPM installation:

• User: hacluster:x:750:750::/home/hacluster:/bin/bash
• Groups:
  hacluster:x:750:
  haclient:x:65:

Do not modify or remove the user or groups; those changes will impact Heartbeat and disrupt HA operations on the Netezza system.
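To verify that the user and groups are present, you can use the standard Linux getent utility (a sketch; the output should match the entries listed above):

getent passwd hacluster
getent group hacluster haclient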

Checking for User Sessions and Activity
Open nz user sessions and nz user activity can cause the procedures that stop Heartbeat, or that return the system to clustering, to fail. Use the nzsession command to see whether there are active database sessions in progress. For example:
[nz@nzhost1 ~]$ nzsession -u admin -pw password
ID    Type User  Start Time              PID   Database State  Priority Name Client IP Client PID Command
----- ---- ----- ----------------------- ----- -------- ------ ------------- --------- ---------- ------------------------
16748 sql  ADMIN 14-Jan-10, 08:56:56 EST 4500  CUST     active normal        127.0.0.1 4499       create table test_2
16753 sql  ADMIN 14-Jan-10, 09:12:36 EST 7748  INV      active normal        127.0.0.1 7747       create table test_s
16948 sql  ADMIN 14-Jan-10, 10:14:32 EST 21098 SYSTEM   active normal        127.0.0.1 21097      SELECT session_id, clien

The sample output shows three sessions: the last entry is the session created to generate the results for the nzsession command. The first two entries are user activity; wait for those sessions to complete, or stop them, before you use the nz.heartbeat.sh or nz.non-heartbeat.sh commands.
To check for connections to the /export/home and /nz directory:
1. As the nz user on the active host, stop the Netezza software:
[nz@nzhost1 ~]$ /nz/kit/bin/nzstop

2. Log out of the nz account and return to the root account; then use the lsof command to
list any open files that reside in /nz or /export/home. Sample output follows:
[root@nzhost1 ~]# lsof /nz /export/home
COMMAND    PID   USER FD  TYPE DEVICE SIZE   NODE    NAME
bash       2913  nz   cwd DIR  8,5    4096   1497025 /export/home/nz
indexall.  4493  nz   cwd DIR  8,5    4096   1497025 /export/home/nz
less       7399  nz   cwd DIR  8,5    4096   1497025 /export/home/nz
lsof       13205 nz   cwd DIR  8,5    4096   1497025 /export/home/nz
grep       13206 nz   cwd DIR  8,5    4096   1497025 /export/home/nz
tail       22819 nz   3r  REG  8,5    146995 1497188 /export/home/nz/fpga_135.log


This example shows that there are several open files in /export/home. If necessary, you
could close the open files using a command such as kill and supplying the process ID (PID)
shown in the second column. Use caution with the kill command; if you are not familiar
with Linux system commands, contact Support or your Linux system administrator for
assistance.
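For example, a cautious sketch that ends the less process (PID 7399) from the sample output above:

kill 7399        # request a graceful termination first
kill -9 7399     # only if the process does not exit after the first kill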


CHAPTER 5
Managing the Netezza Hardware

What’s in this chapter
• Netezza Hardware Components
• Hardware Management Tasks
• Managing Data Slices
• Power Procedures

This chapter describes administration tasks for the hardware components of the Netezza appliance. Most of the administration tasks focus on obtaining status and information about the operation of the appliance and on becoming familiar with the hardware states. This chapter also describes tasks to perform should a hardware component fail.

Netezza Hardware Components
The Netezza appliance has a number of hardware components that support the operation of
the device. The Netezza appliance consists of one or more racks of hardware, with host
servers, switches, SPUs, disks, power controllers, cooling devices, I/O cards, management
modules, and cables. However, in the day-to-day administration of the device, only a subset
of these components require administrative attention of any kind. Many of these components are redundant and hot-swappable to ensure highly available operation of the
hardware.
The key hardware components to monitor include the following:
Table 5-1: Key Netezza Hardware Components to Monitor

• Host servers — Each Netezza HA system has one or two host servers to run the Netezza software and supporting applications. If a system has two host servers, the hosts operate in a highly available (HA) configuration; that is, one host is the active or primary host, and the other is a standby host ready to take over should the active host fail. (IBM Netezza 100 systems have one host server and thus are not HA configurations.) Tasks include monitoring the hardware status of the active/standby hosts and occasional monitoring of disk space consumption on the hosts. At times, the host may require Linux OS or health driver upgrades to improve its operational software.

• Snippet processing arrays (SPAs) — SPAs contain the SPUs and associated disk storage, which drive the query processing on the Netezza appliance. Tasks include monitoring the SPA environment, such as fans, power, temperature, and so on. SPUs and disks are monitored separately.

• Storage groups — In the IBM Netezza High Capacity Appliance C1000 model, disks reside within a storage group. The storage group consists of three disk enclosures: an intelligent storage enclosure with redundant hardware RAID controllers, and two expansion disk enclosures. There are four storage groups in each C1000 rack. Tasks include monitoring the status of the disks within the storage group.

• Disks — Disks are the storage media for the user databases and tables managed by the Netezza appliance. Tasks include monitoring the health and status of the disk hardware. Should a disk fail, tasks include regenerating the disk to a spare and replacing the disk.

• Data slices — Data slices are virtual partitions on the disks that contain user databases and tables. Each partition has a redundant copy to ensure that the data can survive one disk failure. Tasks include monitoring the status or health of the data slices and also the space consumption of the data slices.

• Fans and blowers — These components control the thermal cooling for the racks and components such as SPAs and disk enclosures. Tasks include monitoring the status of the fans and blowers and, should a component fail, replacing it to ensure proper cooling of the hardware.

• Power supplies — These components provide electrical power to the various hardware components of the system. Tasks include monitoring the status of the power supplies and, should a component fail, replacing it to ensure redundant power to the hardware.

The Netezza appliance uses SNMP events (described in Chapter 7, “Managing Event
Rules”) and status indicators to send notifications of any hardware failures. Most hardware
components are redundant; thus, a failure typically means that the remaining hardware
components will assume the work of the component that failed. The system may or may not
be operating in a degraded state, depending upon the component that failed.
Never run the system in a degraded state for a long period of time. It is imperative to
replace a failed component in a timely manner so that the system returns to an optimal
topology and best performance.
Netezza Support and Field Service will work with you to replace failed components to
ensure that the system returns to full service as quickly as possible. Most of the system
components require Field Service support to replace. Components such as disks can be
replaced by customer administrators.


Displaying Hardware Components
You use the nzhw show command to display information about the hardware components of your Netezza system. For details about the nzhw command syntax and options, see “nzhw” on page A-26.
Note: You can also use the NzAdmin tool or Web Admin interface to display hardware information and status.
• To display the hardware summary, enter:
nzhw show
Figure 5-1 shows some sample output and highlights several important fields that describe the status and aspects of the hardware: the Hardware Type (Description), Hardware ID, Hardware Role, and Hardware State columns.

Description   HW ID Location             Role   State
------------- ----- -------------------- ------ ------
Rack          1001  rack1                Active Ok
SPA           1002  spa1                 Active Ok
EthSw         1003  spa1.ethsw1          Active Ok
MM            1003  spa1.mm1             Active Ok
SPU           1003  spa1.spu7            Active Online
DiskEnclosure 1004  spa1.diskEncl4       Active Ok
Fan           1005  spa1.diskEncl4.fan1  Active Ok
Fan           1006  spa1.diskEncl4.fan2  Active Ok
PowerSupply   1009  spa1.diskEncl4.pwr1  Active Ok
PowerSupply   1010  spa1.diskEncl4.pwr2  Active Ok
Disk          1011  spa1.diskEncl4.disk1 Active Ok
Disk          1012  spa1.diskEncl4.disk2 Active Ok
Disk          1013  spa1.diskEncl4.disk3 Active Ok
...

Figure 5-1: Sample nzhw show Output
For an IBM Netezza High Capacity Appliance C1000 system, the nzhw output shows the storage group information, for example:

Description   HW ID Location                        Role   State
------------- ----- ------------------------------- ------ -----
Rack          1001  rack1                           Active Ok
SPA           1002  spa1                            Active Ok
StorageGroup  1003  spa1.storeGrp1                  Active Ok
StorageGroup  1004  spa1.storeGrp2                  Active Ok
StorageGroup  1005  spa1.storeGrp3                  Active Ok
StorageGroup  1006  spa1.storeGrp4                  Active Ok
DiskEnclosure 1007  spa1.storeGrp1.diskEncl1        Active Ok
Disk          1008  spa1.storeGrp1.diskEncl1.disk5  Active Ok
Disk          1009  spa1.storeGrp1.diskEncl1.disk1  Active Ok
Disk          1010  spa1.storeGrp1.diskEncl1.disk12 Failed Ok
Disk          1011  spa1.storeGrp1.diskEncl1.disk9  Active Ok
Disk          1012  spa1.storeGrp1.diskEncl1.disk10 Active Ok
...

Figure 5-2: Sample nzhw show Output (IBM Netezza C1000 Systems)

Hardware Types
Each hardware component of the Netezza system has a type that identifies the hardware component. Table 5-2 describes the hardware types. You see these types when you run the nzhw command or display hardware using the NzAdmin or Web Admin UIs.

Table 5-2: Hardware Description Types

• Rack — A hardware rack for the Netezza system.
• SPA — Snippet processing array (SPA).
• SPU — Snippet processing unit (SPU).
• Disk Enclosure — A disk enclosure chassis, which contains the disk devices.
• Disk — A storage disk, which contains the user databases and tables.
• Fan — A thermal cooling device for the system.
• Blower — A fan pack used within the S-Blade chassis for thermal cooling.
• Power supply — A power supply for an enclosure (SPU chassis or disk).
• MM — A management device for the associated unit (SPU chassis, disk enclosure). These devices include the AMM and ESM components, or a RAID controller for an intelligent storage enclosure in a Netezza C1000 system.
• Store Group — A group of three disk enclosures within an IBM Netezza C1000 system, managed by redundant hardware RAID controllers.
• Ethernet Switch — An Ethernet switch (for internal network traffic on the system).
• Host — A high availability (HA) host on the Netezza appliance.
• SASController — A SAS controller within the Netezza HA hosts.
• Host disk — A disk resident on the host that provides local storage to the host.
• Database accelerator card — A Netezza Database Accelerator Card (DAC), which is part of the S-Blade/SPU pair.

Hardware IDs
Each hardware component has a unique hardware identifier (ID) in the form of an integer, such as 1000, 1001, 1014, and so on. You can use the hardware ID to perform operations on a specific hardware component, or to uniquely identify a component in command output or other informational displays.
• To display information about the component with the hardware ID 1011:
[nz@nzhost ~]$ nzhw show -id 1011
Description HW ID Location             Role   State
----------- ----- -------------------- ------ -----
Disk        1011  spa1.diskEncl4.disk1 Active Ok

Hardware Location
Netezza uses two formats to describe the position of a hardware component within a rack.

• The logical location is a string in a dot format that describes the position of a hardware component within the Netezza rack. For example, the nzhw output shown in Figure 5-1 on page 5-3 shows the logical location for components; a Disk component description follows:

Disk    1011    spa1.diskEncl1.disk1    Active  Ok

In this example, the location of the disk is SPA 1, disk enclosure one, disk position one. Similarly, the location for a disk on an IBM Netezza C1000 system includes the storage group:

Disk    1029    spa1.storeGrp1.diskEncl2.disk5    Active  Ok

• The physical location is a text string that describes the location of a component. You can display the physical location of a component using the nzhw locate command. For example, to display the physical location of disk ID 1011:

[nz@nzhost ~]$ nzhw locate -id 1011
Turned locator LED 'ON' for Disk: Logical Name:'spa1.diskEncl4.disk1' Physical Location:'1st Rack, 4th DiskEnclosure, Disk in Row 1/Column 1'

As shown in the command output, the nzhw locate command also lights the locator LED for components such as SPUs, disks, and disk enclosures. For hardware components that do not have LEDs, the command displays the physical location string.


Figure 5-3 shows an IBM Netezza 1000-12 system or an IBM PureData System for Analytics N1001-010 system with a closer view of the storage arrays and SPU chassis components and locations.

[Figure 5-3: IBM Netezza Full-Rack System Components and Locations. The figure shows two disk arrays (each with four disk enclosures; each enclosure holds 12 disks, numbered 1 through 12), Hosts 1 and 2, a KVM, and two SPU chassis. SPU1 occupies slots 1 and 2; SPU3 occupies slots 3 and 4; and so on, up to SPU11, which occupies slots 11 and 12.]
Figure 5-4 shows an IBM Netezza C1000-4 system with a closer view of the storage groups
and SPU chassis components and locations.


[Figure 5-4: IBM Netezza C1000 System Components and Locations. The figure shows four storage groups (each with three disk enclosures; each enclosure holds 12 disks numbered in three rows: Row 1 holds disks 1 through 4, Row 2 holds disks 5 through 8, and Row 3 holds disks 9 through 12), the host, a KVM, and SPU Chassis 1. SPU1 occupies slots 1 and 2; SPU3 occupies slots 3 and 4; SPU9 occupies slots 9 and 10; SPU11 occupies slots 11 and 12.]
For detailed information about the locations of various components in the front and back of
the system racks, see the Site Preparation and Specifications: IBM Netezza C1000 Systems guide.

Hardware Roles
Each hardware component of the Netezza system has a hardware role, which represents how the hardware is being used. Table 5-3 describes the hardware roles. You see these roles when you run the nzhw command or display hardware status using the NzAdmin or Web Admin UIs.

Table 5-3: Hardware Roles

• None — The hardware is initialized but has yet to be discovered by the Netezza system. This usually occurs during system startup before any of the SPUs have sent their discovery information. All active SPUs must be discovered before the system can transition from the Discovery state to the Initializing state.

• Active — The hardware component is an active system participant; failing over this device could impact the Netezza system. This is the normal system state.

• Assigned — The hardware is transitioning from spare to active. In IBM Netezza 100, 1000, IBM PureData System for Analytics N1001, and NEC InfoFrame DWH appliances, this is the role when a disk is involved in a regeneration. It is not yet active, so it cannot participate in queries. This is a transitional state.

• Failed — The hardware has failed and cannot be used as a spare. After maintenance has been performed, you must activate the hardware using the nzhw command before it can become a spare and be used in the system. Monitor your supply of spare disks; do not operate without spare disks.

• Inactive — The hardware is not available for any system operations. You must activate the hardware using the nzhw command before it can become a spare and be used in the system. To use an inactive SPU as a spare, activate it; otherwise, remove it from the system. To delete it from the system catalog, use the nzhw delete command.

• Mismatched — This role is specific to disks. If the disk has a UUID that does not match the host UUID, it is considered mismatched. You must activate the hardware using the nzhw command before it can become a spare and be used in the system.

• Spare — The hardware is not used in the current running Netezza system, but it is available to become active in the event of a failover. This is a normal system state; after a new disk is added to the system, its role is set to Spare.

• Incompatible — The hardware is incompatible with the system and should be removed and replaced with compatible hardware. Some examples are disks that are smaller in capacity than the smallest disk in use, or blade cards that are not Netezza SPUs.

Hardware States
The state of a hardware component represents the power status of the hardware. Each hardware component has a state. Table 5-4 describes the hardware states for all components except SPUs.
Note: SPU states are the system states, which are described in Table 6-3 on page 6-4.
You see these states when you run the nzhw command or display hardware status using the NzAdmin or Web Admin UIs.

Table 5-4: Hardware States

• None — The hardware is initialized but has yet to be discovered by the Netezza system. This usually occurs during system startup before any of the SPUs have sent their discovery information. All active SPUs must be discovered before the system can transition from the Discovery state to the Initializing state. If any active SPUs are still in the Booting state, there could be an issue with the hardware startup.

• Ok — The Netezza system has received the discovery information for this device, and it is working properly. This is the normal state.

• Down — The device has been turned off.

• Invalid

• Online — The system is running normally and can service requests.

• Missing — The system manager has detected a new device in a slot that was previously occupied but not deleted. This typically occurs when a disk or SPU has been removed and replaced with a spare without deleting the old device. The old device is considered absent because the system manager cannot find it within the system.

• Unreachable — The system manager cannot communicate with a previously discovered device. The device may have failed or been physically removed from the system.

• Critical — The management module has detected a critical hardware problem, and the problem component’s amber service light may be illuminated. Contact Netezza Support to obtain help with identifying and troubleshooting the cause of the critical alarm.

Note: The system manager also monitors the management modules (MMs) in the system, which have a status view of all the blades in the system. As a result, you may see messages similar to the following in the sysmgr.log file:

2011-05-18 13:34:44.711813 EDT Info: Blade in SPA 5, slot 11 changed from state 'good' to 'discovering', reason is 'No critical or warning events'
2011-05-18 13:35:33.172005 EDT Info: Blade in SPA 5, slot 11 changed from state 'discovering' to 'good', reason is 'No critical or warning events'

A transition from “good” to “discovering” indicates that the IMM (a management processor on the blade) rebooted and is querying the blade hardware for status. The blade remains in the “discovering” state during the query. The IMM then determines whether the blade hardware state is good, warning, or critical, and posts the result to the AMM. The system manager reports the AMM status using these log messages. You can ignore these normal messages. However, if you see frequent occurrences of these messages for the same blade, there may be an issue with the IMM processor on that blade.


Data Slices, Data Partitions, and Disks
A disk is a physical drive on which data resides. In a Netezza system, host servers have several disks that hold the Netezza software, host operating system, database metadata, and
sometimes small user files. The Netezza system also has many more disks that hold the
user databases and tables. For IBM Netezza 1000 or IBM PureData System for Analytics
N1001 systems, 48 disks reside in one storage array for a total of 96 disks in a full rack
configuration. For IBM Netezza C1000 systems, 36 disks reside in each storage group, and
there are four storage groups in a rack for a total of 144 disks.
A data slice is a logical representation of the data saved on a disk. The data slice contains
“pieces” of each user database and table. When users create tables and load their data,
they distribute the data for the table across the data slices in the system using a distribution key. An optimal distribution is one where each data slice has approximately the same
amount of each user table as any other. The Netezza system distributes the user data to all
of the data slices in the system using a hashing algorithm.
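As an illustration of how a distribution key is declared, the following SQL sketch creates a hypothetical sales table; the table and column names are examples only:

CREATE TABLE sales
(
   order_id    INTEGER,
   customer_id INTEGER,
   amount      NUMERIC(10,2)
)
DISTRIBUTE ON (customer_id);

Rows are hashed on customer_id to assign each row to a data slice; DISTRIBUTE ON RANDOM is an alternative that spreads rows in round-robin fashion instead.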
A data partition is a logical representation of a data slice that is managed by a specific
SPU. That is, each SPU owns one or more data partitions, which contains the user data
that the SPU is responsible for processing during queries. For example, in IBM Netezza
1000 or IBM PureData System for Analytics N1001 systems, each SPU typically owns 8
data partitions although one SPU has only 6 partitions. For an IBM Netezza C1000 system,
each SPU owns 9 data partitions by default. SPUs could own more than their default number of partitions; if a SPU fails, its data partitions are reassigned to the other active SPUs
in the system.

IBM Netezza 100/1000 Storage Design
Figure 5-5 shows a conceptual overview of SPUs, disks, data slices, and data partitions in an IBM Netezza 1000, IBM PureData System for Analytics N1001, IBM Netezza 100, or NEC InfoFrame DWH appliance. In the figure, each SPU owns 8 data partitions, numbered from 0 to 7. For SPU ID 1003, its first data partition (0) points to data slice ID 9, which is stored on disk 1070. Each data partition points to a data slice. As an example, assume that disk 1014 fails and its contents are regenerated to spare disk ID 1024. In this situation, SPU 1003’s data partition 7, which previously pointed to data slice 16 on disk 1014, is updated to point to data slice 16 on the new disk 1024.

[Figure 5-5: SPUs, Disks, Data Slices, and Data Partitions. The figure shows two SPUs (IDs 1003 and 1164), each with data partitions 0 through 7 pointing to data slices on disks. SPU 1003’s partitions map to data slices 9 through 16 on disks 1070, 1032, 1051, 1013, 1071, 1033, 1052, and 1014; SPU 1164’s partitions map to data slices 55 through 62 on disks 1134, 1153, 1096, 1115, 1135, 1154, 1097, and 1116.]
If a SPU fails, the system moves all its data slices to the remaining active SPUs for management. The system moves them in pairs (the pair of disks that contain the primary and
mirror data slices of each other). In this situation, some SPUs will have 10 data partitions
(numbered 0 — 9).

IBM Netezza C1000 Storage Design
In a Netezza C1000 system, each storage group has an intelligent storage controller, which resides in disk enclosure 3. The intelligent storage controller contains two redundant RAID controllers that manage the disks and associated hardware within a storage group. The RAID controllers are caching devices, which improves the performance of read and write operations to the disks. The caches are mirrored between the two RAID controllers for redundancy; each controller has a flash backup device and a battery to protect the cache against power loss.
The RAID controllers operate independently of the Netezza software and hosts. For example, if you stop the Netezza software (such as for an upgrade or other maintenance tasks), the RAID controllers continue to run and manage the disks within their storage group. It is common to see the activity LEDs on the storage groups operating even when the Netezza system is stopped. If a disk fails, the RAID controller initiates the recovery and regeneration process; the regeneration continues to run even when the Netezza software is stopped. If you use the nzhw command to activate, fail, or otherwise manage disks manually, the RAID controllers ensure that the action is allowed at that time; in some cases, commands return an error when the requested operation, such as a disk failover, is not allowed.
The RAID controller caches are disabled when any of the following conditions occurs:

• Battery failure
• Cache backup device failure
• Peer RAID controller failure (that is, a loss of the mirrored cache)

When the cache is disabled, the storage group (and the Netezza system) experiences a performance degradation until the condition is resolved and the cache is enabled again.
Figure 5-6 shows an illustration of the SPU/storage mapping. Each SPU in a Netezza C1000 system owns 9 user data slices by default. Each data slice is supported by a three-disk RAID 5 storage array. The RAID 5 array can support a single disk failure within the three-disk array. (More than one disk failure within the three-disk array results in the loss of the data slice.) Seven disks within the storage group, in a RAID 5 array, are used to hold important system information such as the nzlocal, swap, and log partitions.

[Figure 5-6: Netezza C1000 SPU and Storage Representation. The figure shows a SPU with data slices 1 through 9 and the nzlocal, swap, and log partitions.]
If a SPU fails, the system manager distributes the user data partitions and the nzlocal and
log partitions to the other active SPUs in the same SPU chassis. A Netezza C1000 system
requires a minimum of three active SPUs; if only three SPUs are active and one fails, the
system transitions to the down state.

System Resource Balance Recovery
The system resource balance is an important part of overall system performance. When a component fails, or when an administrator performs a manual failover, the resulting configuration (that is, topology) could result in unequal workloads among the resources and possible performance impacts.
For example, the default disk topology for IBM Netezza 100/1000 or IBM PureData System for Analytics N1001 systems configures each S-Blade with eight disks that are evenly distributed across the disk enclosures of its SPA, as shown in Figure 5-7. If disks fail over and regenerate to spares, it is possible to have an unbalanced topology where the disks are not evenly distributed among the odd- and even-numbered enclosures. This causes one of the SAS (also called HBA) paths, shown as the dark lines connecting the blade chassis to the disk enclosures, to carry more traffic than the other.
[Figure 5-7: Balanced and Unbalanced Disk Topologies. The figure contrasts a balanced topology, with disks evenly distributed across Enclosures 1 through 4, against an unbalanced topology.]
The system manager can detect and respond to disk topology issues. For example, if an S-Blade has more disks in the odd-numbered enclosures of its array, the system manager reports the problem as an overloaded SAS bus. You can use the nzhw rebalance command to reconfigure the topology so that half of the disks are in the odd-numbered enclosures and half in the even-numbered enclosures. (The rebalance process requires the system to transition to the “pausing now” state to accomplish the topology update.)
When the Netezza system restarts, the restart process checks for topology issues such as overloaded SAS buses or SPAs that have S-Blades with uneven shares of data slices. If the system detects a spare S-Blade, for instance, it reconfigures the data slice topology to fairly distribute the workload among the S-Blades.

Hardware Management Tasks
This section describes some administration tasks for the hardware components that are typically monitored and managed by Netezza administrators. These components include the following:

• Hosts
• SPUs
• Disks

Other hardware components of the system do not have special administration tasks. In general, should one of the other components, such as a power supply or fan, fail, you and/or Netezza Support will be alerted. Netezza Support will work with you to schedule service so that the failed components can be replaced to restore full operations and hardware redundancy.


Callhome File
The callHome.txt file resides in the /nz/data/config directory and it defines important information about the Netezza system such as primary and secondary administrator contact
information, as well as system information such as location, model number, and serial
number. Typically, the Netezza installation team member edits this file for you when the
Netezza system is installed onsite, but you can review and/or edit the file as needed to
ensure that the contact information is current. For more information about configuring callhome, see “Adding an Event Rule” on page 7-8.
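For example, to review the current contents of the file (a simple sketch using the path given above):

cat /nz/data/config/callHome.txt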

Displaying Hardware Issues
You can display a list of the hardware components that have problems and require administrative attention using the nzhw show -issues command. This command displays problems such as components that have failed or components that are in an “abnormal” state: disks that are assigned, missing, incompatible, or unsupported, or SPUs that are incompatible.
For example, the following command shows two failed disks on the system:

[nz@nzhost ~]$ nzhw show -issues
Description HW ID Location             Role   State
----------- ----- -------------------- ------ -----
Disk        1034  spa1.diskEncl2.disk5 Failed Ok
Disk        1053  spa1.diskEncl3.disk5 Failed Ok

The disks should be replaced to ensure that the system has spares and an optimal topology.
You can also use the NzAdmin and Web Admin interfaces to obtain visibility to hardware
issues and failures.

Managing Hosts
In general, there are very few management tasks relating to the Netezza hosts. In most cases, the tasks are best practices for the optimal operation of the host. For example:

• Do not change or customize the kernel or operating system files unless directed to do so by Netezza Support or Netezza customer documentation. Changes to the kernel or operating system files could impact the performance of the host.

• Do not install third-party software on the Netezza host without consulting Netezza Support. While management agents or other applications may be of interest, it is important to work with Support to ensure that third-party applications do not interfere with host processing.

• During Netezza software upgrades, host and kernel software revisions are verified to ensure that the host software is operating with the latest required levels. The upgrade processes may display messages informing you to update the host software to obtain the latest performance and security features.

• On IBM Netezza 1000, C1000, IBM PureData System for Analytics N1001, and NEC InfoFrame DWH appliances, Netezza uses DRBD replication only on the /nz and /export/home partitions. As new data is written to the Netezza /nz partition and the /export/home partition on the primary Netezza system, the DRBD software automatically makes the same changes to the /nz and /export/home partitions of the standby Netezza system.

• Use caution when saving files to the host disks. In general, it is not recommended that you store Netezza database backups on the host disks, nor use the host disks to store large files that could grow and fill the host disks over time. Be sure to clean up and remove any temporary files that you create on the host disks to keep the disk space as available as possible for Netezza software and database use.

If the active host fails, the Netezza HA software typically fails over to the standby host to keep the Netezza operations running. Netezza Support will work with you to schedule field service to replace the failed host.

Managing SPUs
Snippet processing units (SPUs), or S-Blades, are hardware components that serve as the query processing engines of the Netezza appliance. Each SPU has CPUs and FPGAs, as well as memory and I/O, to process queries and query results. Each SPU has associated data partitions that it “owns”; these store the portions of the user databases and tables that the SPU processes during queries.
The basic SPU management tasks are as follows:

• Monitor status and overall health
• Activate a spare SPU
• Deactivate a spare SPU
• Failover a SPU
• Locate a SPU in the Netezza rack
• Reset (power cycle) a SPU
• Delete a failed, inactive, or incompatible SPU
• Replace a failed SPU

The following sections describe how to perform these tasks.
You can use the nzhw command to activate, deactivate, failover, locate, and reset a SPU, or to delete SPU information from the system catalog. For more information about the nzhw command syntax and options, see “nzhw” on page A-26.
To indicate which SPU you want to control, refer to the SPU by its hardware ID. You can use the nzhw command to display the IDs, or obtain the information from management UIs such as NzAdmin or Web Admin.

Monitor SPU Status
To obtain the status of one or more SPUs, you can use the nzhw command with the show options.
• To show the status of all the SPUs:

[nz@nzhost ~]$ nzhw show -type spu
Description HW ID Location   Role   State
----------- ----- ---------- ------ ------
SPU         1003  spa1.spu7  Active Online
SPU         1080  spa1.spu1  Active Online
SPU         1081  spa1.spu3  Active Online
SPU         1082  spa1.spu11 Active Online
SPU         1084  spa1.spu5  Active Online
SPU         1085  spa1.spu9  Active Online

• To show detailed information about SPU ID 1082:

[nz@nzhost ~]$ nzhw show -id 1082 -detail
Description HW ID Location   Role   State  Serial Number Hw Version Details
----------- ----- ---------- ------ ------ ------------- ---------- -------------------------------
SPU         1082  spa1.spu11 Active Online 99FB798       10.0       8 CPU Cores; 15.51GB Memory; Dac Serial Number 0921S58200090; 4 FPGAs; Fpga Version: 1.81; Ip Addr: 10.0.10.34;

Activate a SPU
You can use the nzhw command to activate a SPU that is inactive or failed.
• To activate a SPU:
nzhw activate -u admin -pw password -host nzhost -id 1004

Deactivate a SPU
You can use the nzhw command to make a spare SPU unavailable to the system. If the specified SPU is active, the command displays an error.
• To deactivate a spare SPU:
nzhw deactivate -u admin -pw password -host nzhost -id 1004

Failover a SPU
You can use the nzhw command to initiate a SPU failover.
• To failover a SPU, enter:
nzhw failover -u admin -pw password -host nzhost -id 1004

Locate a SPU
You can use the nzhw command to turn a SPU’s LED on or off and display the physical location of the SPU. The default is on.
• To locate a SPU, enter:
nzhw locate -u admin -pw password -host nzhost -id 1082
Turned locator LED 'ON' for SPU: Logical Name:'spa1.spu11' Physical Location:'1st Rack, 1st SPA, SPU in 11th slot'
• To turn off a SPU’s LED, enter:
nzhw locate -u admin -pw password -host nzhost -id 1082 -off
Turned locator LED 'OFF' for SPU: Logical Name:'spa1.spu11' Physical Location:'1st Rack, 1st SPA, SPU in 11th slot'


Reset a SPU
You can use the nzhw command to power cycle a SPU (a hard reset).
• To reset a SPU, enter:
nzhw reset -u admin -pw password -id 1006

Delete a SPU Entry from the System Catalog
You can use the nzhw command to remove a failed, inactive, or incompatible SPU from the system catalog.
• To delete a SPU entry, enter:
nzhw delete -u admin -pw password -host nzhost -id 1004

Replace a Failed SPU
If a SPU hardware component fails and must be replaced, Netezza Support will work with you to schedule service to replace the SPU.

Managing Disks
The disks on the system store the user databases and tables that are managed and queried by the Netezza appliance. The basic disk management tasks are as follows:

• Monitor status and overall health
• Activate an inactive, failed, or mismatched disk
• Deactivate a spare disk
• Failover a disk
• Locate a disk in the Netezza rack
• Delete a failed, inactive, mismatched, or incompatible disk
• Replace a failed disk

The following sections describe how to perform these tasks. You can use the nzhw command to activate, deactivate, failover, and locate a disk, or to delete disk information from the system catalog. For more information about the nzhw command syntax and options, see “nzhw” on page A-26.
As a best practice to protect against data loss, never remove a disk from an enclosure, or remove a RAID controller or ESM card from its enclosure, unless directed to do so by Netezza Support or when you are using the hardware replacement procedure documentation. If you remove an Active or Spare disk drive, you could cause the system to restart or transition to the down state. Data loss and system issues can occur if these components are removed when it is not safe to do so.
Note: Netezza C1000 systems have RAID controllers to manage the disks and hardware in the storage groups. You cannot deactivate a disk on a C1000 system. Also, the commands to activate, fail, or delete a disk may return an error if the storage group cannot support the action at that time.


To indicate which disk you want to control, refer to the disk by its hardware ID. You can use the nzhw command to display the IDs, or obtain the information from management UIs such as NzAdmin or Web Admin.

Monitor Disk Status
To obtain the status of one or more disks, you can use the nzhw command with the show options.
• To show the status of all the disks (note that the sample output is abbreviated for the documentation):

[nz@nzhost ~]$ nzhw show -type disk
Description HW ID Location              Role   State
----------- ----- --------------------- ------ -----
Disk        1011  spa1.diskEncl4.disk1  Active Ok
Disk        1012  spa1.diskEncl4.disk2  Active Ok
Disk        1013  spa1.diskEncl4.disk3  Active Ok
Disk        1014  spa1.diskEncl4.disk4  Active Ok
Disk        1015  spa1.diskEncl4.disk5  Active Ok
Disk        1016  spa1.diskEncl4.disk6  Active Ok
Disk        1017  spa1.diskEncl4.disk7  Active Ok
Disk        1018  spa1.diskEncl4.disk8  Active Ok
Disk        1019  spa1.diskEncl4.disk9  Active Ok
Disk        1020  spa1.diskEncl4.disk10 Active Ok
...

• To show detailed information about disk ID 1011:

[nz@nzhost ~]$ nzhw show -id 1011 -detail
Description HW ID Location             Role   State Serial Number        Hw Version Details
----------- ----- -------------------- ------ ----- -------------------- ---------- ------------------------------
Disk        1011  spa1.diskEncl4.disk1 Active Ok    9QJ3ARET00009909FJXQ BC1D       931.51 GiB; Model ST31000640SS

Activate a Disk
You can use the nzhw command to make an inactive, failed, or mismatched disk available to the system as a spare.
• To activate a disk:
nzhw activate -u admin -pw password -host nzhost -id 1004
In some cases, the system may display a message that it cannot activate the disk yet because the SPU has not finished an existing activation request. Disk activation usually occurs very quickly, unless several activations are taking place at the same time; in that case, later activations wait until they are processed in turn.
Note: For a Netezza C1000 system, you cannot activate a disk that is still being used by the RAID controller for a regeneration or other task. If the disk cannot be activated, an error message similar to the following appears:

Error: Can not update role of Disk 1004 to Spare - The disk is still part of a non healthy array. Please wait for the array to become healthy before activating.

Deactivate a Disk
You can use the nzhw command to make a spare disk unavailable to the system.
• To deactivate a disk:
nzhw deactivate -u admin -pw password -host nzhost -id 5004
Note: For a Netezza C1000 system, you cannot deactivate a disk; the command is not supported on the C1000 platform.

Failover a Disk
You can use the nzhw command to initiate a failover. You cannot fail over a disk until the system is at least in the initialized state.
• To failover a disk, enter:
nzhw failover -u admin -pw password -host nzhost -id 1004
On a Netezza C1000 system, when you fail a disk, the RAID controller automatically starts a regeneration to a spare disk. Note that the RAID controller may not allow you to fail a disk in a RAID 5 array that already has a failed disk.
Note: For a Netezza C1000 system, the RAID controller still considers a failed disk to be part of the array until the regeneration is complete. After the regeneration completes, the failed disk is logically removed from the array.

Locate a Disk
You can use the nzhw command to turn a disk’s LED on or off. The default is on. The command also displays the physical location of the disk.
• To turn on a disk’s LED, enter:
nzhw locate -u admin -pw password -host nzhost -id 1004
Turned locator LED 'ON' for Disk: Logical Name:'spa1.diskEncl4.disk1' Physical Location:'1st Rack, 4th DiskEnclosure, Disk in Row 1/Column 1'
• To turn off a disk’s LED, enter:
nzhw locate -u admin -pw password -host nzhost -id 1004 -off
Turned locator LED 'OFF' for Disk: Logical Name:'spa1.diskEncl4.disk1' Physical Location:'1st Rack, 4th DiskEnclosure, Disk in Row 1/Column 1'

Delete a Disk Entry from the System Catalog
You can use the nzhw command to remove a disk that is failed, inactive, mismatched, or incompatible from the system catalog. For Netezza C1000 systems, do not delete the hardware ID of a failed disk until after you have successfully replaced the disk using the instructions in the Replacement Procedures: IBM Netezza C1000 Systems guide.
• To delete a disk entry, enter:
nzhw delete -u admin -pw password -host nzhost -id 1004


Replace a Failed Disk
If a disk hardware component fails and must be replaced, Netezza Support will work with
you to schedule service to replace the disk. Details are available in the Replacement Procedures Guide for your appliance model family.

Managing Data Slices
A data slice is a logical representation of the data saved in the partitions of a disk. The data slice contains pieces of each user database and table. The Netezza system distributes the user data to all of the disks in the system using a hashing algorithm.
Each data slice has an ID and is logically owned by a SPU, which processes the queries on the data contained within that data slice.
The basic data slice management tasks are as follows:

• Monitor status, space consumption, and overall health
• Rebalance data slices across the available SPUs
• Regenerate (or regen) a data slice after a disk failure
• Display the current topology of the data slices

The following sections describe how to perform these tasks. You can use the nzhw, nzds, and nzspupart commands to manage data slices and perform these tasks.
To indicate which data slice you want to control, refer to the data slice by its data slice ID. You can use the nzds command to display the IDs, or obtain the information from management UIs such as NzAdmin or Web Admin.

Displaying Data Slice Issues
You can quickly display a list of any data slices that have issues and may require administrative attention using the nzds show -issues command. This command displays data slices that are in the Degraded state (a loss of data redundancy) or that are Repairing (that is, the data is being regenerated to a spare disk).

[nz@nzhost ~]$ nzds show -issues
Data Slice Status    SPU  Partition Size (GiB) % Used Supporting Disks
---------- --------- ---- --------- ---------- ------ ----------------
15         Repairing 1137 3         356        46.87  1080,1086
16         Repairing 1137 2         356        46.79  1080,1086
46         Repairing 1135 4         356        46.73  1055,1098

You can also use the NzAdmin and Web Admin interfaces to obtain visibility into hardware issues and failures.

Monitor Data Slice Status
To obtain the status of one or more data slices, you can use the nzds command with the show options.
• To show the status of all the data slices (note that the sample output is abbreviated for the documentation):

[nz@nzhost ~]$ nzds show
Data Slice Status    SPU  Partition Size (GiB) % Used Supporting Disks
---------- --------- ---- --------- ---------- ------ ----------------
1          Repairing 1017 2         356        58.54  1021,1029
2          Repairing 1017 3         356        58.54  1021,1029
3          Healthy   1017 5         356        58.53  1022,1030
4          Healthy   1017 4         356        58.53  1022,1030
5          Healthy   1017 0         356        58.53  1023,1031
6          Healthy   1017 1         356        58.53  1023,1031
7          Healthy   1017 7         356        58.53  1024,1032
8          Healthy   1017 6         356        58.53  1024,1032

Note: Data slice 2 in the sample output is regenerating due to a disk failure. For a Netezza C1000 system, three disks hold the user data for a data slice; the fourth disk is the regen target for the failed drive. The RAID controller still considers a failed disk to be part of the array until the regeneration is complete. After the regen completes, the failed disk is logically removed from the array.
• To show detailed information about the data slices that are being regenerated:

[nz@nzhost ~]$ nzds show -regenstatus -detail
Data Slice Status    SPU  Partition Size (GiB) % Used Supporting Disks    Start Time          % Done
---------- --------- ---- --------- ---------- ------ ------------------- ------------------- ------
2          Repairing 1255 1         3725       0.00   1012,1028,1031,1056 2011-07-01 10:41:44 23

The status of a data slice shows its health. Table 5-5 describes the status values for a data slice. You see these states when you run the nzds command or display data slices using the NzAdmin or Web Admin UIs.

Table 5-5: Data Slice Status

• Healthy — The data slice is operating normally and the data is protected in a redundant configuration; that is, the data is mirrored (for Netezza 100, Netezza 1000, or N1001 systems) or redundant (for Netezza C1000 systems).
• Repairing — The data slice is in the process of being regenerated to a spare disk due to a disk failure.
• Degraded — The data slice is not protected in a redundant configuration. Another disk failure could result in the loss of a data slice, and the degraded condition impacts system performance.

Regenerate a Data Slice
If a disk is encountering problems or has failed, you perform a data slice regeneration to create a copy of the primary and mirror data slices on an available spare disk. During regeneration, regular system processing continues for the bulk of the regeneration.
Note: In the IBM PureData System for Analytics N1001 or the IBM Netezza 1000 and later models, the system does not change states during a regeneration; that is, the system remains online while the regeneration is in progress. There is no synchronization state change nor interruption to active jobs during this process. If the regeneration process should fail or be stopped for any reason, the system transitions to the Discovering state to establish the topology of the system.
You can use the nzspupart regen command or the NzAdmin interface to regenerate a disk. If you do not specify any options, the system manager checks for any degraded partitions and, if found, starts a regeneration to the appropriate spare disk. An example follows:
[nz@nzhost ~]$ nzspupart regen
Are you sure you want to proceed (y|n)? [n] y
Info: Regen Configuration - Regen configured on SPA:1 Data slice 20 and 19.

You can then use the nzspupart show -regenstatus or the nzds show -regenstatus command to display the progress and details of the regeneration. Sample command output follows for the nzds command, which shows the status for the data slices:

[nz@nzhost ~]$ nzds show -regenstatus
Data Slice Status    SPU  Partition Size (GiB) % Used Supporting Disks Start Time % Done
---------- --------- ---- --------- ---------- ------ ---------------- ---------- ------
19         Repairing 1057 3         356        5.80   1040,1052                   0
20         Repairing 1057 2         356        5.81   1040,1052                   0

Sample output for the nzspupart command follows. In this example, note that the command shows more detail about the partitions (data, swap, NzLocal, and log) that are being regenerated:

[nz@nzhost ~]$ nzspupart show -regenstatus
SPU  Partition Id Partition Type Status    Size (GiB) % Used Supporting Disks              % Done Starttime
---- ------------ -------------- --------- ---------- ------ ----------------------------- ------ -------------------
1057 2            Data           Repairing 356        0.13   1032,1035                     0      2011-12-23 04:37:33
1039 101          Swap           Repairing 48         25.04  1030,1031,1032,1035,1036,1037 0      2011-12-23 04:37:33
1039 111          Log            Repairing 1          3.47   1032,1035                     91.336 2011-12-23 04:37:33

If you want to control the regen source and target destinations, you can specify the source SPU and partition IDs, and the target or destination disk ID. The spare disk must reside in the same SPA as the disk that you are regenerating. You can obtain the IDs for the source partition from the output of the nzspupart show -detail command.

 To regenerate a degraded partition and specify the information for the source and destination:
nzspupart regen -spu 1035 -part 7 -dest 1024

Note: Regeneration can take several hours to complete. If the system is idle and has no other activity except the regen, or if the user data partitions are not very full, the regeneration takes less time to complete. You can review the status of the regeneration using the nzspupart show -regenstatus command. During the regeneration, note that user query performance can be impacted while the system is busy processing the regeneration. Likewise, user query activity can increase the time required for the regeneration.
A regeneration setup failure could occur if the system manager cannot remove the failed disk from the RAID array, or if it cannot add the spare disk to the RAID array. If a regeneration failure occurs, or if a spare disk is not available for the regeneration, the system continues processing jobs. The data slices that lost their mirror continue to operate in an unmirrored or Degraded state; however, you should replace failed disks as soon as possible and ensure that all data slices are mirrored. If an unmirrored disk fails, the system is brought to a down state.
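While you are waiting for replacement disks, you may want to confirm which data slices are unprotected and whether any spares remain. The following is a minimal check sketch; it assumes that the -issues option of nzds behaves as on a typical 7.x release, so verify these options against your system before relying on them:
# List only data slices whose status is not Healthy (Degraded or Repairing).
nzds show -issues
# List the disks so you can confirm that spare disks are available in the SPA.
nzhw show -type disk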

Rebalance Data Slices
Each SPU owns or manages a number of data slices for query processing. The SPUs and
their data slices must reside in the same SPA. If a SPU fails, the system manager reassigns
its data slices to the other active SPUs in the same SPA. The system manager randomly
assigns a pair of data slices (the primary and mirrors) from the failed SPU to an available
SPU in the SPA. The system manager ensures that each SPU has no more than two data
slices more than one of its peers.
After the failed SPU is replaced or reactivated, you must rebalance the data slices to return
to optimal performance. The rebalance process checks each SPU in the SPA; if a SPU has
more than two data slices more than another SPU, the system manager redistributes the
data slices to equalize the workload and return the SPA to an optimal performance topology. (The system manager changes the system to the discovering state to perform the
rebalance.)
In addition, if an S-Blade does not have an equal distribution of disks in the odd-numbered
versus even-numbered enclosures of its array, the system manager reports the problem as
an overloaded SAS bus. The nzhw rebalance command will also reconfigure the topology so
that half of the disks are in the odd-numbered enclosures and half in the even-numbered.
For more information, see “System Resource Balance Recovery” on page 5-12.
You can use the nzhw command to rebalance the data slice topology. The system also performs the rebalance check each time the system is restarted, or after a SPU failover or a disk regeneration setup failure.

 To rebalance the data slices:
nzhw rebalance -u admin -pw password

If a rebalance is not required, the command displays a message that a rebalance is not
necessary and exits without performing the step.
You can also use the nzhw rebalance -check option to have the system check the topology and only report whether a rebalance is needed. If a rebalance is required, you can plan to run the operation during a less busy time, for example.

 To run a balance check:
nzhw rebalance -check -u admin -pw password

The command displays the message “Rebalance is needed” or “There is nothing to rebalance.” If a rebalance is needed, you can run the nzhw rebalance command to perform the
rebalance, or you could wait until the next time the Netezza software is stopped and
restarted to rebalance the system.
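For example, a minimal sketch of a scheduled check, run as the nz user, that performs the rebalance only when the check reports it is needed; the grep string matches the message text shown above and is an assumption about the exact output format on your release:
#!/bin/bash
# Run the topology check; rebalance only if the check says it is needed.
if nzhw rebalance -check -u admin -pw password | grep -q "Rebalance is needed"; then
    nzhw rebalance -u admin -pw password
fi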


Displaying the Active Path Topology
The active path topology defines the ports and switches that offer the best connection performance to carry the traffic between the S-Blades and their disks. For best system
performance, all links and components must remain balanced and equally loaded.
To display the current storage topology, use the nzds show -topology command:
[nz@nzhost ~]$ nzds show -topology
===============================================================================
Topology for SPA 1
spu0101 has 8 datapartitions: [ 0:1 1:2 2:11 3:12 4:10 5:9 6:18 7:17 ]
hba[0] 4 disks
port[2] 2 disks: [ 1:encl1Slot10 11:encl1Slot06 ] -> switch 0
port[3] 2 disks: [ 10:encl2Slot05 18:encl2Slot09 ] -> switch 1
hba[1] 4 disks
port[0] 2 disks: [ 2:encl2Slot01 12:encl2Slot06 ] -> switch 0
port[1] 2 disks: [ 9:encl1Slot05 17:encl1Slot12 ] -> switch 1
...............................................................................
spu0103 has 8 datapartitions: [ 0:22 1:21 2:16 3:15 4:13 5:14 6:5 7:6 ]
hba[0] 4 disks
port[2] 2 disks: [ 16:encl2Slot02 22:encl2Slot11 ] -> switch 0
port[3] 2 disks: [ 5:encl1Slot03 13:encl1Slot07 ] -> switch 1
hba[1] 4 disks
port[0] 2 disks: [ 15:encl1Slot08 21:encl1Slot11 ] -> switch 0
port[1] 2 disks: [ 6:encl2Slot03 14:encl2Slot07 ] -> switch 1
...............................................................................
spu0105 has 6 datapartitions: [ 0:19 1:20 2:7 3:8 4:4 5:3 ]
hba[0] 3 disks
port[2] 2 disks: [ 7:encl1Slot04 19:encl1Slot09 ] -> switch 0
port[3] 1 disks: [ 4:encl2Slot12 ] -> switch 1
hba[1] 3 disks
port[0] 2 disks: [ 8:encl2Slot04 20:encl2Slot10 ] -> switch 0
port[1] 1 disks: [ 3:encl1Slot01 ] -> switch 1
...............................................................................
Switch 0
port[1] 6 disks: [ 1:encl1Slot10 7:encl1Slot04 11:encl1Slot06 15:encl1Slot08
19:encl1Slot09 21:encl1Slot11 ] -> encl1
port[2] 6 disks: [ 2:encl2Slot01 8:encl2Slot04 12:encl2Slot06 16:encl2Slot02
20:encl2Slot10 22:encl2Slot11 ] -> encl2
Switch 1
port[1] 5 disks: [ 3:encl1Slot01 5:encl1Slot03 9:encl1Slot05 13:encl1Slot07
17:encl1Slot12 ] -> encl1
port[2] 5 disks: [ 4:encl2Slot12 6:encl2Slot03 10:encl2Slot05 14:encl2Slot07
18:encl2Slot09 ] -> encl2
===============================================================================

This sample output shows a normal topology for an IBM Netezza 1000-3 system. The command output is complex and is typically used by Netezza Support to troubleshoot problems.
If there are any issues to investigate in the topology, the command displays a WARNING
section at the bottom, for example:
WARNING: 2 issues detected
spu0101 hba [0] port [2] has 3 disks
SPA 1 SAS switch [sassw01a] port [3] has 7 disks


These warnings indicate problems in the path topology where storage components are overloaded. These problems can affect query performance and also system availability should
other path failures occur. Contact Support to troubleshoot these warnings.
To display detailed information about path failure problems, you can use the following
command:
[nz@nzhost ~]$ nzpush -a mpath -issues
spu0109: Encl: 4 Slot:  4 DM: dm-5 HWID: 1093 SN: number PathCnt: 1 PrefPath: yes
spu0107: Encl: 2 Slot:  8 DM: dm-1 HWID: 1055 SN: number PathCnt: 1 PrefPath: yes
spu0111: Encl: 1 Slot: 10 DM: dm-0 HWID: 1036 SN: number PathCnt: 1 PrefPath: no

If the command does not return any output, there are no path failures observed on the system. It is not uncommon for some path failures to occur and then clear quickly. However, if
the command displays some output, as in this example, there are path failures on the system and system performance could be degraded. The sample output shows that spu0111
is not using the higher performing preferred path (PrefPath: no) and there is only one path
to each disk (PathCnt: 1) instead of the normal 2 paths. Contact Netezza Support and
report the path failures to initiate troubleshooting and repair.
Note: It is possible to see errors reported in the nzpush command output even if the nzds -topology command does not report any warnings. In these cases, the errors are still problems in the topology, but they do not affect the performance and availability of the current topology. Be sure to report any path failures to ensure that problems are diagnosed and resolved by Support for optimal system performance.
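Because no output from the command means no observed path failures, the check is easy to wrap in a small monitoring script. The following is a minimal sketch, not part of the product; it simply flags any output for follow-up with Support:
#!/bin/bash
# Report storage path failures, if any; exit nonzero so a scheduler or
# monitoring tool can raise an alert.
ISSUES=$(nzpush -a mpath -issues)
if [ -n "$ISSUES" ]; then
    echo "Storage path failures detected; report these to Support:"
    echo "$ISSUES"
    exit 1
fi
echo "No storage path failures observed."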

Handling Transactions during Failover and Regeneration
When a disk failover occurs, the system continues processing any active jobs while it performs a disk regeneration. No active queries need to be stopped and restarted.
If a SPU fails, the system state changes to the pausing -now state (which stops active jobs),
and then transitions to the discovering state to identify the active SPUs in the SPA. The
system also rebalances the data slices to the active SPUs.
After the system returns to an online state:
 The system restarts transactions that had not returned data before the pause -now transition.
 Read-only queries begin again with their original transaction ID and priority.

Table 5-6 describes the system states and the way Netezza handles transactions during
failover.
Table 5-6: System States and Transactions

System State      Active Transactions                   New Transactions
----------------  ------------------------------------  -----------------------
Offline(ing) Now  Aborts all transactions.              Returns an error.
Offline(ing)      Waits for the transaction to finish.  Returns an error.
Pause(ing) Now    Aborts only those transactions that   Queues the transaction.
                  cannot be restarted.
Pause(ing)        Waits for the transaction to finish.  Queues the transaction.

The following examples provide specific instances of how the system handles failovers that happen before, during, or after data is returned.
 If the pause -now occurs immediately after a BEGIN command completes, before data is returned, the transaction is restarted when the system returns to an online state.
 If a statement such as the following completes and then the system transitions, the transaction can restart because data has not been modified and the reboot does not interrupt a transaction.
BEGIN;
SELECT * FROM emp;
 If a statement such as the following completes, but the system transitions before the commit to disk, the transaction is aborted.
BEGIN;
INSERT INTO emp2 SELECT * FROM emp;
 A statement such as the following can be restarted if it has not returned data, in this case a single number that represents the number of rows in a table. This sample includes an implicit BEGIN command.
SELECT count(*) FROM small_lineitem;
 If a statement such as the following begins to return rows before the system transitions, the statement will be aborted.
INSERT INTO emp2 SELECT * FROM externaltable;
Note that this transaction, and others that would normally be aborted, would be restarted if the nzload -allowReplay option applied to the associated table.
Note: There is a retry count for each transaction. If the system transitions to pause -now more than the number of retries allowed, the transaction is aborted.

Automatic Query and Load Continuation
When a SPU unexpectedly reboots or is failed-over, the system manager initiates a state
change from online to pause -now. During this transition, rather than aborting all transactions, the Netezza system aborts only those transactions that cannot be restarted.
The system restarts the following transactions:
 Read-only queries that have not returned data. The system restarts the request with a new plan and the same transaction ID.
 Loads. If you have enabled load continuation, the system rolls back the load to the beginning of the replay region and resends the data.
Once the system has restarted these transactions, the system state returns to online. For more information, see the IBM Netezza Data Loading Guide.
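For example, a load started with load continuation enabled might look like the following sketch. The database, table, and file names are illustrative, and the exact argument form of the -allowReplay option can vary by release; see the IBM Netezza Data Loading Guide for the authoritative syntax:
# Illustrative load with replay allowed so that the load can be
# restarted from the replay region after a pause -now transition.
nzload -db sales -t emp2 -df /tmp/emp2.dat -allowReplay 2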


Power Procedures
This section describes how to power on the Netezza and NEC InfoFrame DWH Appliance systems, as well as how to power off the system. Typically, you would only need to power off the system if you are moving the system physically within the data center, or for maintenance or emergency conditions within the data center.
The instructions to power on or off an IBM Netezza 100 system are available in the Site
Preparation and Specifications: IBM Netezza 100 Systems.
Note: To power cycle a Netezza system, you must have physical access to the system to
press power switches and to connect or disconnect cables. Netezza systems have keyboard/
video/mouse (KVM) units which allow you to enter administrative commands on the hosts.

PDU and Circuit Breakers Overview
On the IBM Netezza 1000-6 and larger models, and the IBM PureData System for Analytics N1001-005 and larger models, the main input power distribution units (PDUs) are
located at the bottom of the rack on the right and left sides, as shown in Figure 5-8.
Figure 5-8: Netezza 1000-6 and N1001-005 and Larger PDUs and Circuit Breakers (each PDU has OFF/ON circuit breakers arranged in 3 rows of 3 breaker pins)

 To close the circuit breakers (power up the PDUs), press in each of the 9 breaker pins until they engage. Be sure to close the 9 pins on both main PDUs in each rack of the system.
 To open the circuit breakers (power off the PDUs), pull out each of the 9 breaker pins on the left and the right PDU in the rack. If it becomes difficult to pull out the breaker pins using your fingers, you could use a tool such as a pair of needle-nose pliers to gently pull out the pins.

On the IBM Netezza 1000-3 or IBM PureData System for Analytics N1001-002 models,
the main input power distribution units (PDUs) are located on the right and left sides of the
rack, as shown in Figure 5-9.


Figure 5-9: IBM Netezza 1000-3 and IBM PureData System for Analytics N1001-002 PDUs and Circuit Breakers (two OFF/ON circuit breakers at the top of each PDU)
At the top of each PDU is a pair of breaker rocker switches. (Note that the labels on the
switches are upside down when you view the PDUs.)

 To close the circuit breakers (power up the PDUs), push in the On toggle of the rocker switch. Make sure that you push in all four rocker switches, two on each PDU.
 To open the circuit breakers (power off the PDUs), you must use a tool such as a small flathead screwdriver; insert the tool into the hole labelled OFF and gently press until the rocker toggle pops out. Make sure that you open all four of the rocker toggles, two on each PDU.

Powering On the IBM Netezza 1000 and IBM PureData System for Analytics N1001
Follow these steps to power on IBM Netezza 1000 or IBM PureData System for Analytics
N1001 models:
1. Make sure that the two main power cables are connected to the data center drops;
there are two power cables for each rack of the system.
2. Do one of the following steps depending upon which system model you have:
 For an IBM Netezza 1000-6 or larger model, or an IBM PureData System for Analytics N1001-005 or larger model, push in the 9 breaker pins on both the left and right lower PDUs as shown in Figure 5-8 on page 5-27. (Repeat these steps for each rack of the system.)
 For an IBM Netezza 1000-3 or IBM PureData System for Analytics N1001-002 model, close the two breaker switches on both the left and right PDUs as shown in Figure 5-9 on page 5-28.


3. The hosts will power on. Wait a minute for the power processes to complete, then log in as root to one of the hosts and confirm that the Netezza software has started as follows:
a. Run the crm_mon command to obtain the cluster status:
[root@nzhost1 ~]# crm_mon -i5
============
Last updated: Tue Jun 2 11:46:43 2009
Current DC: nzhost1 (key)
2 Nodes configured.
3 Resources configured.
============
Node: nzhost1 (key): online
Node: nzhost2 (key): online
Resource Group: nps
    drbd_exphome_device  (heartbeat:drbddisk):        Started nzhost1
    drbd_nz_device       (heartbeat:drbddisk):        Started nzhost1
    exphome_filesystem   (heartbeat::ocf:Filesystem): Started nzhost1
    nz_filesystem        (heartbeat::ocf:Filesystem): Started nzhost1
    fabric_ip            (heartbeat::ocf:IPaddr):     Started nzhost1
    wall_ip              (heartbeat::ocf:IPaddr):     Started nzhost1
    nz_dnsmasq           (lsb:nz_dnsmasq):            Started nzhost1
    nzinit               (lsb:nzinit):                Started nzhost1
fencing_route_to_ha1     (stonith:apcmaster):         Started nzhost2
fencing_route_to_ha2     (stonith:apcmaster):         Started nzhost1

b. Identify the active host in the cluster, which is the host where the nps resource
group is running:
[root@nzhost1 ~]# crm_resource -r nps -W
crm_resource[5377]: 2009/06/01_10:13:12 info: Invoked: crm_resource
-r nps -W
resource nps is running on: nzhost1

c. Log in as nz and verify that the Netezza server is online:
[nz@nzhost1 ~]$ nzstate
System state is 'Online'.

Powering Off the IBM Netezza 1000 or IBM PureData System for Analytics N1001
Follow these steps to power off an IBM Netezza 1000 or IBM PureData System for Analytics N1001 system:
1. Identify the active host in the cluster, which is the host where the nps resource group is
running:
[root@nzhost1 ~]# crm_resource -r nps -W
crm_resource[5377]: 2009/06/07_10:13:12 info: Invoked: crm_resource
-r nps -W
resource nps is running on: nzhost1

2. Log in as root to the standby host (nzhost2 in this example) and run the following command to stop heartbeat:
[root@nzhost2 ~]# service heartbeat stop


3. Log in as root to the active host (nzhost1 in this example) and run the following command to stop heartbeat:
[root@nzhost1 ~]# service heartbeat stop

4. As root on the standby host (nzhost2 in this example), run the following command to
shut down the host:
[root@nzhost2 ~]# shutdown -h now

5. As root on the active host, run the following command to shut down the host:
[root@nzhost1 ~]# shutdown -h now

6. Wait until you see the power lights on both hosts shut off.
7. Do one of the following steps depending upon which IBM Netezza 1000 model you
have:


For an IBM Netezza 1000-6 or larger, or an IBM PureData System for Analytics
N1001-005 or larger model, pull out the 9 breaker pins on both the left and right
lower PDUs as shown in Figure 5-8 on page 5-27. (Repeat these steps for each
rack of the system.)



For an IBM Netezza 1000-3 or IBM PureData System for Analytics N1001-002
model, use a small tool such as a pocket screwdriver to open the two breaker
switches on both the left and right PDUs as shown in Figure 5-9 on page 5-28.

8. Disconnect the main input power cables (two per rack) from the data center power
drops. (As a best practice, do not disconnect the power cords from the plug/connector
on the PDUs in the rack; instead, disconnect them from the power drops outside the
rack.)

Powering on an IBM Netezza C1000 System
Follow these steps to power on an IBM Netezza C1000 System:
1. Make sure that the main power cables for each rack are connected to the data center
drops. For a North American power configuration, there are four power cables for the
first two racks of a Netezza C1000 (or two cables for a European Union power configuration); there are two power cables for each additional rack if present for that model.
2. Switch the breakers to ON on both the left and right PDUs. (Repeat these steps for
each rack of the system.)
3. Press the power button on both host servers and wait for the servers to start. This process can take a few minutes.
4. Log in to the host server (ha1) as root.
5. Change to the nz user account and run the following command to stop the Netezza
server:
[nz@nzhost1 ~]$ nzstop

6. Wait for the Netezza system to stop.
7. Log out of the nz account to return to the root account, then type the following command to power on the storage groups:
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -on all -j all


8. Wait five minutes and then type the following command to power on all the S-blade
chassis:
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -on all

9. Run the crm_mon command to monitor the status of the HA services and cluster
operations:
[root@nzhost1 ~]# crm_mon -i5

The output of the command refreshes at the specified interval rate of 5 seconds (-i5).
10. Review the output and watch for the resource groups to all have a Started status. This
usually takes about 2 to 3 minutes, then proceed to the next step. Sample output
follows:
============
Last updated: Tue Jun 2 11:46:43 2009
Current DC: nzhost1 (key)
2 Nodes configured.
3 Resources configured.
============
Node: nzhost1 (key): online
Node: nzhost2 (key): online
Resource Group: nps
    drbd_exphome_device  (heartbeat:drbddisk):        Started nzhost1
    drbd_nz_device       (heartbeat:drbddisk):        Started nzhost1
    exphome_filesystem   (heartbeat::ocf:Filesystem): Started nzhost1
    nz_filesystem        (heartbeat::ocf:Filesystem): Started nzhost1
    fabric_ip            (heartbeat::ocf:IPaddr):     Started nzhost1
    wall_ip              (heartbeat::ocf:IPaddr):     Started nzhost1
    nz_dnsmasq           (lsb:nz_dnsmasq):            Started nzhost1
    nzinit               (lsb:nzinit):                Started nzhost1
fencing_route_to_ha1     (stonith:apcmaster):         Started nzhost2
fencing_route_to_ha2     (stonith:apcmaster):         Started nzhost1

11. Press Ctrl-C to exit the crm_mon command and return to the command prompt.
12. Log into the nz account.
[root@nzhost1 ~]# su - nz

13. Verify that the system is online using the following command:
[nz@nzhost1 ~]$ nzstate
System state is ‘Online’.

Powering off an IBM Netezza C1000 System
Follow these steps to power off an IBM Netezza C1000 System:
Unless the system shutdown is an emergency situation, do not power down a Netezza
C1000 system when there are any amber (Needs Attention) LEDs illuminated in the storage groups. It is highly recommended that you resolve the problems that are causing the
Needs Attention LEDs before you power off a system to ensure that the power-up procedures are not impacted by the unresolved conditions within the groups.
1. Log in to host 1 (ha1) as root.
Note: Do not use the su command to become root.


2. Identify the active host in the cluster, which is the host where the NPS resource group
is running:
[root@nzhost1 ~]# crm_resource -r nps -W
crm_resource[5377]: 2009/06/07_10:13:12 info: Invoked: crm_resource
-r nps -W
resource nps is running on: nzhost1

3. Log in to the active host (nzhost1 in this example) as nz and run the following command to stop the Netezza server:
[nz@nzhost1 ~]$ nzstop

4. Type the following commands to stop the clustering processes:
[root@nzhost1 ~]# ssh ha2 'service heartbeat stop'
[root@nzhost1 ~]# service heartbeat stop

5. On ha1, type the following commands to power off the S-blade chassis and storage
groups:
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -off all
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -off all -j all

6. Log into ha2 as root and shut down the Linux operating system using the following
command:
[root@nzhost2 ~]# shutdown -h now

The system displays a series of messages as it stops processes and other system activity. When it finishes, it displays the message “power down” which indicates that it is
now safe to turn off the power to the server.
7. Press the power button on Host 2 (located in the front of the cabinet) to power down
that NPS host.
8. On ha1, shut down the Linux operating system using the following command:
[root@nzhost1 ~]# shutdown -h now

The system displays a series of messages as it stops processes and other system activity. When it finishes, it displays the message “power down” which indicates that it is
now safe to turn off the power to the server.
9. Press the power button on Host 1 (located in the front of the cabinet) to power down
that NPS host.
10. Switch the breakers to OFF on both the left and right PDUs. (Repeat this step for each
rack of the system.)

NEC InfoFrame DWH PDU and Circuit Breakers Overview
The main input power distribution units (PDUs) are located on the right and left sides at the rear of the rack, as shown in Figure 5-10. The O-25 and O-50 systems both have one PDU on each side of the rack; the O-100 has two PDUs on each side.


Figure 5-10: NEC InfoFrame DWH ZA100 PDUs and Circuit Breakers (two OFF/ON circuit breakers at the bottom of each PDU)
At the bottom of each PDU is a pair of breaker rocker switches.

 To close the circuit breakers (power up the PDUs), push in the ON toggle of the rocker switch. Make sure that you push in all four rocker switches, two on each PDU.
 To open the circuit breakers (power off the PDUs), you must use a tool such as a small flathead screwdriver; insert the tool into the hole labelled OFF and gently press until the rocker toggle pops out. Make sure that you open all of the rocker toggles, two on each PDU.

Powering On the NEC InfoFrame DWH Appliance
Follow these steps to power on a NEC InfoFrame DWH appliance:
1. Make sure that the main power cables are connected to the data center drops; there are
two power cables for each rack of the ZA25 and ZA50 systems, and four power cables
for each rack of the ZA100 system.
2. Close the two breaker switches on both the left and right PDUs as shown in Figure 5-10 on page 5-33.
3. Press the power button on both host servers and wait for the servers to start. This process can take a few minutes.
4. Log in as root to one of the hosts and confirm that the Netezza software has started as
follows:
a. Run the crm_mon command to obtain the cluster status:


[root@nzhost1 ~]# crm_mon -i5
============
Last updated: Tue Jun 2 11:46:43 2009
Current DC: nzhost1 (key)
2 Nodes configured.
3 Resources configured.
============
Node: nzhost1 (key): online
Node: nzhost2 (key): online
Resource Group: nps
    drbd_exphome_device  (heartbeat:drbddisk):        Started nzhost1
    drbd_nz_device       (heartbeat:drbddisk):        Started nzhost1
    exphome_filesystem   (heartbeat::ocf:Filesystem): Started nzhost1
    nz_filesystem        (heartbeat::ocf:Filesystem): Started nzhost1
    fabric_ip            (heartbeat::ocf:IPaddr):     Started nzhost1
    wall_ip              (heartbeat::ocf:IPaddr):     Started nzhost1
    nz_dnsmasq           (lsb:nz_dnsmasq):            Started nzhost1
    nzinit               (lsb:nzinit):                Started nzhost1
fencing_route_to_ha1     (stonith:apcmastersnmp):     Started nzhost2
fencing_route_to_ha2     (stonith:apcmastersnmp):     Started nzhost1

b. Identify the active host in the cluster, which is the host where the nps resource
group is running:
[root@nzhost1 ~]# crm_resource -r nps -W
crm_resource[5377]: 2009/06/01_10:13:12 info: Invoked: crm_resource
-r nps -W
resource nps is running on: nzhost1

c. Log in as nz and verify that the Netezza server is online:
[nz@nzhost1 ~]$ nzstate
System state is 'Online'.

Powering Off an NEC InfoFrame DWH Appliance
Perform the following procedure to power off an NEC InfoFrame DWH appliance.
1. Log on to ha1 as the root user.
Note: Do not issue the su - command to become root.
2. The heartbeat must be stopped.
To check the cluster state, type:
crm_mon -i5
If both hosts are online and all services in the nps resource group are started, then the
cluster is up.
If the cluster is down, go directly to step 3.
If the cluster is up, shut down the standby node first:
a. Determine the active and standby nodes:
crm_resource -r nps -W
The active node will be listed, so the standby node is the one that is not listed.
b. To shut down the standby node, go to the KVM on the standby node and type:
/sbin/service heartbeat stop
Wait until the standby node is down before proceeding.
Note: If you wish to monitor the state of the nodes, you can open another window (ALT-F2) and run the command crm_mon -i5 in that window. This is optional.
c. When the standby node is down, go to the KVM on the active node and type:
/sbin/service heartbeat stop
Note: Wait until the active node is down before proceeding. Use a separate terminal instance with the crm_mon -i5 command to monitor the state of the active node.
3. Log in to ha2 as root, then shut down the Linux operating system using the following
command:
shutdown -h now
The system displays a series of messages as it stops processes and other system activity, and the system powers down.
4. Log in to ha1 as root, then shut down the Linux operating system using the following
command:
shutdown -h now
The system displays a series of messages as it stops processes and other system activity, and the system powers down.
5. Switch off the power to the PDU units (located in the rear of the cabinet) to completely
power down the rack. Make sure that you turn off power to all power switches.


CHAPTER 6
Managing the Netezza Server
What’s in this chapter
 Software Revision Levels
 System States
 Managing the System State
 System Errors
 System Logs
 System Configuration

This chapter describes how to manage the Netezza server and processes. The Netezza software that runs on the appliance can be stopped and started for maintenance tasks, so this
chapter describes the meaning and impact of system states. This chapter also describes log
files and where to find operational and error messages for troubleshooting activities.
Although the system is configured for typical use in most customer environments, you can
also tailor software operations to meet the special needs of your environment and users
using configuration settings.

Software Revision Levels
The software revision level is the release or version of the Netezza software that is running
on your Netezza appliance. The revision level typically includes a major release number, a
minor release (or service pack number) and possibly a patch number if you have updated
the release to a patch revision.

Displaying the Netezza Software Revision
You can use the nzrev command to display the current Netezza software revision. For more
information about the nzrev command syntax and options, see “nzrev” on page A-37. If you
enter the nzrev command with no arguments, Netezza returns the revision number string
that displays the major and minor number and the build number. Sample output follows:
nzrev
Release 7.0, Dev 1 [Build 24438]

When you enter the nzrev -rev command, Netezza returns the entire revision number string,
including all fields (such as variant and patch level, which in this example are both zero).
nzrev -rev
7.0.0-0.D-1.P-0.Bld-24438
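Because the -rev output is a single fixed-format string, it is convenient to capture in maintenance scripts. A minimal sketch, assuming a bash shell on the host and that nzrev is on the nz user's PATH (the variable name is arbitrary):
# Capture the full revision string for later use in a script.
REV=$(nzrev -rev)
echo "Netezza software revision: ${REV}"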


From a client system, you can use the nzsystem showRev -host host -u user -pw password
command to display the revision information.
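For example, run from a client system (the host name and credentials are placeholders, and the output line shown is illustrative):
nzsystem showRev -host nzhost -u admin -pw password
7.0.0-0.D-1.P-0.Bld-24438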

Displaying the Software Revision Levels
You can use the nzcontents command to display the revision and build number of all the
executables on the host. This command takes several seconds to run and results in multiple lines of output.
Note: Programs with no revisions are scripts or special binaries.
When you enter the nzcontents command, Netezza displays the program names, the revision stamps, the build stamps, and checksums. Note that the sample output below shows a small set of output, and the checksum values have been truncated to fit the output messages on the page.
nzcontents
Program        Revision Stamp            Build Stamp                   CheckSum
-------------- ------------------------- ----------------------------- --------
adm            Directory
nzbackup       7.0.0-0.D-1.P-0.Bld-24438 2012-07-28.24438.dev.cm.24438 1821...
nzcontents                                                             ab685...
nzconvert                                                              3a52...
nzds           7.0.0-0.D-1.P-0.Bld-24438 2012-07-28.24438.dev.cm.24438 d3f2...
...
Table 6-1 describes the components of the Revision Stamp fields.
Table 6-1: Netezza Software Revision Numbering

Major           Numeric. Incremented for major releases.
Minor           Numeric. Incremented for minor releases.
Subminor        Numeric. Incremented for service packs.
-Variant        Numeric. Usually 0; used very rarely.
.Stage          Alphanumeric. Indicates a stage of a release: D (development), A (alpha), and B (beta) are internal; F (final) is for released software. The sample output shows D-1.
.Patch (P-n)    Alphanumeric. Incremented for fix releases. Note that all patches are cumulative and apply to a specific, existing release.
.Build (Bld-n)  Alphanumeric. Incremented serially for every production build. Note that the prefix cm denotes a production build.
System States
The Netezza system state is the current operational state of the appliance. In most cases,
the system is online and operating normally. There may be times when you need to stop the
system to perform maintenance tasks or as part of a larger procedure.
You can manage the Netezza system state using the nzstate command, which can display the current state or wait for a specific state to occur. For more information about the nzstate command syntax and options, see “nzstate” on page A-48.


Displaying the Current System State
You can use the nzstate command to display the current system state.
[nz@nzhost ~]$ nzstate
System state is 'Online'.

Table 6-2 lists the common system states and how they are invoked and exited.
Table 6-2: Common System States

Online
  Description: Select this state to make the Netezza fully operational. This is the most common system state. In this state, the system is ready to process or is processing user queries.
  Invoked: The system enters this state when you use the nzsystem restart or resume command, or after you boot the system.
  Exited: The system exits the online state when you use the nzsystem stop, offline, pause, or restart commands.
  Note: You can also use the nzsystem restart command to quickly stop and start all server software. You can only use the nzsystem restart command on a running Netezza that is in a non-stopped state.

Offline
  Description: Select this state to interrupt the Netezza. In this state, the system completes any running queries, but displays errors for any queued and new queries.
  Invoked: The system enters this state when you use the nzsystem offline command.
  Exited: The system exits this state when you use the nzsystem resume or stop command.

Paused
  Description: Select this state when you expect a brief interruption of server availability. In this state, the system completes any running queries, but prevents queued or new queries from starting. Except for the delay while in the paused state, users should not notice any interruption in service.
  Invoked: The system enters the paused state when you use the nzsystem pause command.
  Exited: The system exits the paused state when you use the nzsystem resume or stop command, or if there is a hardware failure on an active SPU.

Down
  Description: The system enters the down state if there is insufficient hardware for the system to function even in failover mode. For more information about the cause of the Down state, use the nzstate -reason command.
  Invoked: Not user invokable.
  Exited: You must repair the system hardware and then use the nzsystem resume command.

Stopped
  Description: Select this state for planned tasks such as installation of new software. In this state, the system waits for currently running queries to complete, prevents queued or new queries from starting, and then shuts down all Netezza software.
  Invoked: The system enters the stopped state when you use the nzsystem stop or the nzstop command. Note that if you use the nzstop command, the system aborts all running queries.
  Exited: The system exits the stopped state when you use the nzstart command.
Note: When you specify the nzsystem pause, offline, restart, and stop commands, the system allows already running queries to finish unless you use the -now switch, which
immediately aborts all running queries. For more information about the nzsystem command, see “nzsystem” on page A-55.


System States Reference
When the Netezza software is running, the system and SPUs can transition through the following operational states. The states that end in the letters “ing” (such as Pausing, Pausing
Now, Discovering) are typically transitional states that are very short in duration. The other
states such as those described in Table 6-2 on page 6-3 are usually the longer duration
states; the system usually remains in those states until operator action forces a state
change. Table 6-3 describes all of the system states.
Table 6-3: System States Reference

State                Description
-------------------  --------------------------------------------------------------
Down                 The system has not been configured (there is no configuration
                     information for the data slices to SPU topology) or there is
                     not enough working hardware to operate the system even in
                     failover. The SPUs can never be in this state.
Discovered           The SPUs and other components are discovered, but the system
                     is waiting for all components to complete start-up before
                     transitioning to the initializing state.
Discovering          The system manager is in the process of discovering all the
                     system components that it manages.
Going Offline        The system is in an interim state going to offline.
Going Offline (Now)  The system is in an interim state going to offline now.
Going Pre-Online     The system is in an interim state, going to pre-online.
Going to Maintain
Initialized          The system uses this state during the initial startup sequence.
Initializing         The system is initializing. You cannot execute queries or
                     transactions in this state.
Maintain
Missing              The system manager has detected a new, unknown SPU in a slot
                     that was previously occupied but not deleted.
Offline (Now)        This state is similar to offline, except that the system stops
                     user jobs immediately during the transition to offline. For
                     more information, see Table 5-4 on page 5-9.
Online               The system is running normally. It can service requests.
Paused               The system is paused. You cannot run user jobs.
Paused (Now)         This state is similar to paused, except that the system stops
                     user jobs immediately during the transition to paused. For
                     more information, see Table 5-4 on page 5-9.
Pausing              The system is transitioning from online to paused. During this
                     state no new queries or transactions are queued, although the
                     system allows current transactions to complete, unless you
                     have specified the nzsystem pause -now command.
Pausing Now          The system is attempting to pause due to a hardware failure,
                     or the administrator entered the nzsystem pause -now command.
Pre-Online           The system has completed initialization. The system goes to
                     the resume state.
Resuming             The system is waiting for all its components (SPUs, SFIs, and
                     host processes) to reach the online state before changing the
                     system state to online.
Stopped              The system is not running. Note that commands assume this
                     state when they attempt to connect to a system and get no
                     response. The SPUs can never be in this state.
Stopped (Now)        This state is similar to stopped, except that the system stops
                     user jobs immediately during the transition to stopped.
Stopping             The system is transitioning from online to stopped.
Stopping Now         The system is attempting to stop, or the administrator entered
                     the nzsystem stop -now command.
Unreachable          The system manager cannot communicate with the SPU because it
                     has failed or it has been physically removed from the system.

Waiting for a System State
You can use the nzstate command to wait for a specific operational state to occur before
proceeding with other commands or actions. You can use the nzstate command to list the
system states that you can wait for, as follows:
[nz@nzhost ~]$ nzstate listStates
State Symbol  Description
------------  --------------------------------------------------------------
initialized   used by a system component when first starting
paused        already running queries will complete but new ones are queued
pausedNow     like paused, except running queries are aborted
offline       no queries are queued, only maintenance is allowed
offlineNow    like offline, except user jobs are stopped immediately
online        system is running normally
stopped       system software is not running
down          system was not able to initialize successfully

To wait for the online state or else timeout after 10 seconds, enter:
nzstate waitfor -u admin -pw password -host nzhost -type online
-timeout 10


 To test scripts or do maintenance, enter:
nzsystem pause -force
nzstate waitfor -u admin -pw password -host nzhost -type paused
-timeout 300

Do some maintenance.

nzsystem resume
nzstate waitfor -u admin -pw password -host nzhost -type online
-timeout 120

Run a query.
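The same pattern can be scripted end to end. The following is a minimal maintenance wrapper sketch; the credentials, host name, and timeouts are placeholders, and it assumes that nzstate returns a nonzero exit status when the waitfor times out:
#!/bin/bash
set -e   # stop on the first failed command, including a waitfor timeout

nzsystem pause -force
nzstate waitfor -u admin -pw password -host nzhost -type paused -timeout 300

# ... perform the maintenance tasks here ...

nzsystem resume
nzstate waitfor -u admin -pw password -host nzhost -type online -timeout 120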

Managing the System State
You can use the nzstart and nzstop commands respectively to start and stop the Netezza
system operations. The nzsystem command provides additional state change options, such
as allowing you to pause and resume the system, as well as restart the system.
Note: When you stop and start the Netezza system operations on a Netezza C1000 system,
the storage groups continue to run and perform tasks such as media checks and health
checks for the disks in the array, as well as disk regenerations for disks that fail. The RAID
controllers are not affected by the Netezza system state.
Note: All nzsystem subcommands, except the nzsystem showState and showRev commands, require the Manage System administrative privilege. For more information, see
“Administrator Privileges” on page 8-9.

Start the System
When you start the Netezza system, you bring the system and database processes fully
online so that the Netezza system is ready to perform user queries and other tasks.
You can use the nzstart command to start system operation if the system is in the stopped
state. The nzstart command is a script that initiates a system start by setting up the environment and invoking the startup server. The nzstart command does not complete until the
system is online. The nzstart command also verifies the host configuration to ensure that
the environment is configured correctly and completely; it displays messages to direct you
to files or settings that are missing or misconfigured.
For more information about the nzstart command syntax and options, see “nzstart” on page A-47.

 To start the Netezza system, enter:
nzstart
(startupsvr) Info: NZ-00022: --- program 'startupsvr' (23328)
starting on host 'nzhost' ... ---

Note: You must run nzstart on the host and be logged on as the user nz. You cannot run it
remotely from Netezza client systems.
For IBM Netezza 1000 or IBM PureData System for Analytics N1001 systems, a message
is written to the sysmgr.log file if there are any storage path issues detected when the system starts. The log displays a message similar to “mpath -issues detected: degraded disk
path(s) or SPU communication error” which helps to identify problems within storage
arrays. For more information about how to check and manage path failures, see “Hardware
Path Down” on page 7-22.


Stop the System
When you stop the Netezza system, you stop the database processes and services, and thus new user queries or tasks such as loads, backups, and others cannot run. Typically, you only stop the server when directed to do so as part of a very specific administration procedure or when you need to perform a major management task. You can use the nzstop command to stop a running system. (You can also use the nzsystem stop command, but nzstop is the recommended method.) Stopping a system stops all Netezza host processes. Unless you specify otherwise, stopping the system waits for all running jobs to complete. For more information about the nzstop command syntax and options, see “nzstop” on page A-53.
Note: You must run nzstop on the host and be logged on as the user nz. You cannot run it remotely.

 To stop the system, enter:
nzstop


 To stop the system or exit after attempting for five minutes (300 seconds), enter:
nzstop -timeout 300

Pause the System
Certain management tasks such as host backups require the system to be in the paused
state. When you pause the system, the system queues any new queries or work until the
system is “resumed.” By default, the system finishes the queries and transactions that
were already active at the time the pause command was issued.

 To transition the system to the paused state:
[nz@nzhost ~]$ nzsystem pause
Are you sure you want to pause the system (y|n)? [n] y

Enter y to continue. The transition completes quickly on an idle system, but it can take
much longer if the system is busy processing active queries and transactions. When the
transition completes, the system enters the paused state, which you can confirm with the
nzstate command as follows:
[nz@nzhost ~]$ nzstate
System state is 'Paused'.

You can use the -now option to force a transition to the paused state, which causes the system to abort any active queries and transactions. As a best practice, you should use the
nzsession show -activeTxn command to display a list of the current active transactions
before you force the system to terminate them.
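For example, the sequence for a forced pause might look like the following sketch; reviewing the active transactions first is the recommended practice described above, and the confirmation prompt shown is illustrative:
[nz@nzhost ~]$ nzsession show -activeTxn
[nz@nzhost ~]$ nzsystem pause -now
Are you sure you want to pause the system (y|n)? [n] y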

Resume the System
When a system is paused or offline, you can resume the normal operations by resuming the
system. When you resume the system from a paused state, it will start to process all the
transactions that were submitted and queued while it was paused. In some cases, the system will also restart certain transactions that support the restart operations.

 To resume the system and return it to the online state:
[nz@nzhost ~]$ nzsystem resume

The command usually completes very quickly; you can confirm that the system has
returned to the online state using the following command:


[nz@nzhost ~]$ nzstate
System state is 'Online'.

Take the System Offline
When you take the system offline, the system will not queue any new work or transactions.
The state only allows maintenance tasks to run. By default, the system finishes the queries
and transactions that were already active at the time the offline command was issued.

 To transition the system to the offline state:
[nz@nzhost ~]$ nzsystem offline
Are you sure you want to take the system offline (y|n)? [n] y

Enter y to continue. The transition completes quickly on an idle system, but it can take
much longer if the system is busy processing active queries and transactions. When the
transition completes, the system enters the offline state, which you can confirm with the
nzstate command as follows:
[nz@nzhost ~]$ nzstate
System state is 'Offline'.

You can use the -now option to force a transition to the offline state, which causes the system to abort any active queries and transactions. As a best practice, you should use the
nzsession show -activeTxn command to display a list of the current active transactions
before you force the system to terminate them.

Restart the System
When a system is in the online state but a system problem has occurred, you can restart
the system which stops and starts all server software. You can only use the nzsystem restart
command on a running system that is in a non-stopped state.

 To restart the system:
[nz@docspubox ~]$ nzsystem restart
Are you sure you want to restart the system (y|n)? [n] y

Overview of the Netezza System Processing
When you start the Netezza system, you automatically launch a number of system processes. Table 6-4 describes the Netezza processes.
Table 6-4: Netezza Processes

bnrmgr
  • Handles incoming connections from the nzbackup and nzrestore commands.
  • Launches an instance of the backupsvr or restoresvr to handle each client instance.
bootsvr
  • Informs TFTP clients (the SPUs and SFIs) of the location of their initial program or download images on the host.
  • Informs the SPUs where to upload their core file in the event that a SPU is instructed to dump a core image for debugging purposes.
clientmgr
  • Handles incoming connections from nz applications.
  • This is not unlike the postmaster that handles incoming connections from nzsql, ODBC, and so on.
dbosDispatch
  • Accepts execution plans from the postgres, backup, and restore process(es).
  • Dynamically generates C code to process the query, and cross-compiles the query so that it can be run on the host.
  • Broadcasts the compiled code to the SPUs for execution.
dbosEvent
  • Receives responses and results from the SPUs. As appropriate, it may have the SPUs perform additional steps as part of the query.
  • Rolls up the individual result sets (aggregated, sorted, consolidated, and so on) and sends the final results back to the client’s postgres, backup, or restore process.
eventmgr
  • Processes events and event rules. When an event occurs, such as the system changes state, a hardware component fails or is restarted, the eventmgr checks to see if any action needs to be taken based on the event and if so, performs the action. The action could be sending an e-mail message or executing an external program.
  • For more information about event rules, see Chapter 7, “Managing Event Rules.”
loadmgr
  • Handles incoming connections from the nzload command.
  • Launches an instance of the loadsvr to handle each instance of the nzload command.
nzvacuumcat
  • At boot time, the system starts the nzvacuumcat command, which in turn invokes the internal VACUUM command on system catalogs to remove unneeded rows from system tables and compact disk space to enable faster system table scanning.
  • During system operation, the nzvacuumcat program monitors the amount of host disk space used by system tables in each database. It performs this check every 60 seconds. If the system catalog disk space for a particular database grows over a threshold amount (128 KB), the nzvacuumcat program initiates a system table vacuum (VACUUM) on that database.
  • The VACUUM command works on system tables only after obtaining an exclusive lock on all system catalog tables. If it is unable to lock the system catalog tables, it quits and retries. Only when the VACUUM command succeeds does the nzvacuumcat program change the size of the database.
  • While the VACUUM command is working, the system prevents any new SQL or system table activity from starting. This window of time is usually about 1 to 2 seconds, but can be longer if significant amounts of system catalog updates/deletes have occurred since the last VACUUM operation.
postgres
  • Validates the access rights (username, password, ACL).
  • Parses the SQL, and generates the optimized execution plan.
  • Returns the results set to the client application when the query finishes executing.
  Note that two default postgres jobs are associated with the sysmgr and the sessionmgr processes.
postmaster
  • Accepts connection requests from clients (nzsql, ODBC, and so on).
  • Launches one postgres process per connection to service the client.
sessionmgr
  • Keeps the session table current with the state of the different sessions that are running on the system.
  • For more information, see “Session Manager” on page 6-16.
startupsvr
  • Launches and then monitors all of the other processes. If any system process should die, the startupsvr follows a set of predefined rules, and either restarts the failed process or restarts the entire system.
  • Controlled by /nz/kit/sys/startup.cfg.
statsmgr
  • Handles requests for statistics from the nzstats command.
  • For more information, see “Statistics Server” on page 6-17.
statsSvr
  • Communicates with the nzstats command to obtain host-side operational statistics.
  • Note that the nzstats command communicates with the sysmgr to obtain SPU statistics.
sysmgr
  • Monitors and manages the overall state of the system.
  • Periodically polls the SPUs and SFIs to ensure that they are operational.
  • Initiates state changes upon requests from the user or as a result of a change in hardware status (for example, a SPU failure).

System States during Netezza Start-Up
When you boot the system, the Netezza software automatically starts. The system goes
through the following states:
1. Stopped
2. Discovering
3. Initializing
4. Preonlining
5. Resuming
6. Online


When you power up (or reset) the hardware, each SPU loads an image from its flash memory and executes it. This image is then responsible for running diagnostics on the SPU,
registering the SPU with the host, and downloading runtime images for the SPU’s CPU and
the FPGA disk controller. The system downloads these images from the host through TFTP.

System Errors
During system operation different types of errors can occur. Table 6-5 describes some of
those errors.
Table 6-5: Error Categories

Category                 Description                               Example
-----------------------  ----------------------------------------  --------------------------------
User error               An error on the part of the user,         Invalid user name, invalid SQL
                         usually due to incorrect or invalid       syntax.
                         input.
Component failure        A hardware or software system             SPU/SFI failure; host process
                         component failure.                        crashes.
Environment failure      A request of an environment facility      A file is locked; a buffer is
                         fails. This is often due to resource      full.
                         or access problems.
Recoverable internal     A detected internal programming error     Unknown case value or msg type;
error                    that is not severe enough to abort the    file close fails.
                         program.
Nonrecoverable internal  A detected internal programming error     Core, memory corruption, assert
error                    or corrupt internal state that requires   fails.
                         the program to abort.

The Netezza system can take the following actions when an error occurs:
 Display an error message — Presents an error message string to the users that describes the error. Generally the system performs this action whenever a user request is not fulfilled.
 Try again — During intermittent or temporary failures, keep trying until the error condition disappears. The retries are often needed when resources are limited, congested, or locked.
 Fail over — Switches to an alternate or spare component, because an active component has failed. Failover is a system-level recovery mechanism and can be triggered by a system monitor or an error detected by software trying to use the component.
 Log the error — Adds an entry to a component log. A log entry contains a date and time, a severity level, and an error/event description.
 Send an event notification — Sends notification through e-mail or by running a command. The decision whether to send an event notification is based on a set of user-configurable event rules.
 Abort the program — Terminates the program, because it cannot continue due to an irreparably damaged internal state or because continuing would corrupt user data. Software asserts that detect internal programming mistakes often fall into this category, because it is difficult to determine that it is safe to continue.
 Clean up resources — Frees or releases resources that are no longer needed. Software components are responsible for their own resource cleanup. In many cases, resources are freed locally as part of each specific error handler. In severe cases, a program cleanup handler runs just before the program exits and frees/releases any resources that are still held.

System Logs
All major software components that run on the host have an associated log. Log files have
the following characteristics:


Each log consists of a set of files stored in a component-specific directory. For managers, there is one log per manager. For servers, there is one log per session, and their log
files have pid and/or date (.) identifiers.



Each file contains one day of entries, for a default maximum of seven days.



Each file contains entries that have a timestamp (date and time), an entry severity
type, and a message.

The system rotates log files, that is, for all the major components there are the current log
and the archived log files.


For all Netezza components (except postgres) — The system creates a new log file at
midnight if there is constant activity for that component. If, however you load data on
Monday and then do not load again until Friday, the system creates a new log file dated
the previous day from the new activity, in this case, Thursday. Although the size of the
log files is unlimited, every 30 days the system removes all log files that have not been
accessed.



For postgres logs — By default, the system checks the size of the log file daily and
rotates it to an archive file if it is greater than 1 GB in size. The system keeps 28 days
(four weeks) of archived log files. (Netezza Support can help you to customize these
settings if needed.)

To view the logs, log onto the host as user nz. To enable SQL logging, see “Logging Netezza
SQL Information” on page 8-30. For more information about these processes, see “Overview of the Netezza System Processing” on page 6-8.
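
For example, to inspect the current system manager log from the host, you can use standard Linux commands (an illustrative sketch; the sysmgr component and the 50-line count are arbitrary choices):

su - nz
ls -l /nz/kit/log/sysmgr/                   # each component has its own directory under /nz/kit/log
tail -n 50 /nz/kit/log/sysmgr/sysmgr.log    # most recent entries in the current log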

Backup and Restore Server
The backup and restore servers handle requests for the nzbackup/nzrestore commands. The
log files record the start and stop times of the nzbackup/nzrestore processes and starting
and stopping times of the backupsvr and restoresvr processes, respectively.

Log file
/nz/kit/log/backupsvr/backupsvr.log — Current backup log
/nz/kit/log/restoresvr/restoresvr.log — Current restore log


/nz/kit/log/backupsvr/backupsvr.<pid>.YYYY-MM-DD.log — Archive backup log
/nz/kit/log/restoresvr/restoresvr.<pid>.YYYY-MM-DD.log — Archive restore log

Sample Log Messages
2004-05-13 08:03:12.791696 EDT Info: NZ-00022: --- program 'bnrmgr' (5006) starting
on host romeo-8400 ... ---
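
To confirm when the backup server last started, you can search the current log for these start markers (an illustrative command, not a documented procedure):

grep "starting" /nz/kit/log/backupsvr/backupsvr.log | tail -n 1   # most recent backupsvr start entry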

Bootserver Manager
The bootsvr log file records the initiation of all SPUs on the system (usually when the system is restarted by the nzstart command) and also all stopping and restarting of the bootsvr
process.

Log file
/nz/kit/log/bootsvr/bootsvr.log — Current log
/nz/kit/log/bootsvr/bootsvr.YYYY-MM-DD.log — Archived log

Sample Log Messages
2004-05-13 08:07:31.548940 EDT Info: Number of boots currently in progress= 12

Client Manager
The clientmgr log file records all connection requests to the database server and also all
stopping and starting of the clientmgr process.

Log file
/nz/kit/log/clientmgr/clientmgr.log — Current log
/nz/kit/log/clientmgr/clientmgr.YYYY-MM-DD.log — Archived log

Sample Log Messages
2004-05-13 14:09:31.486544 EDT Info: admin: login successful

Database Operating System
The dbos.log file records information about the SQL plans submitted to the database server
and also the restarting of the dbos process.

Log file
/nz/kit/log/dbos/dbos.log — Current log
/nz/kit/log/dbos/dbos.YYYY-MM-DD.log — Archived log

Sample Log Messages
2011-07-08 00:04:03.245043 EDT Info: NZ-00022: --- program 'dbos'
(16977) starting on host 'nzhost' ... ---
2011-07-06 14:33:04.773920 EDT Debug: startTx implicit RO tx
0x2f30410 cli 205 uid 1205 sid 16226 pid [20050]


2011-07-06 14:33:05.141215 EDT Info: plan queued: planid 1 tx
0x2f3040e cli 206 uid 1206 sid 16225 pid [20049]
2011-07-06 14:33:05.142439 EDT Info: plan in GRA: planid 1 tx
0x2f3040e cli 206 uid 1206 sid 16225 pid [20049]



Plan ID — The plan number queued or started. This number relates to the corresponding execution plan in the /nz/data/plans directory. The system increments it for each
new portion of SQL processed and resets it to 1 when you restart the system.



Q ID — The queue to which this plan has been assigned.



Tx ID — The unique transaction identifier.



cli — The ID of the client process.



UID — The unique ID of the dbos client. Every time a client connects it receives a
unique number.



SID — The session ID, which corresponds to the ID returned by the nzsession command.



PID — The process ID of the calling process running on the Netezza host.
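
As an example of how these fields fit together, the following sketch traces plan ID 1 from the sample messages above back to its plan file and session; the specific plan ID is illustrative only:

grep "planid 1 " /nz/kit/log/dbos/dbos.log    # find the plan entries and note the sid value
ls /nz/data/plans                             # execution plan files correspond to plan IDs
nzsession show -u admin -pw password          # match the sid against the session list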

Event Manager
The eventmgr log file records system events and the stopping and starting of the eventmgr
process.

Log file
/nz/kit/log/eventmgr/eventmgr.log — Current log
/nz/kit/log/eventmgr/eventmgr.YYYY-MM-DD.log — Archived log

Sample Log Messages
2011-07-08 00:11:31.359916 EDT Info: NZ-00022: --- program
'eventmgr' (15113) starting on host 'D400-9E-D' ... ---
2011-07-08 00:11:57.798341 EDT Info: received & processing event
type = hwNeedsAttention, event args = 'hwType=spa, hwId=1006,
location=2nd rack, 1st spa, spaId=2, slotId=1, devSerial=,
errString=One or more drives are either invalid or contain wrong
firmware revision. Run 'sys_rev_check storagemedia' for more
details., eventSource=system' event source = 'System initiated
event'
2011-07-08 00:16:32.454625 EDT Info: received & processing event
type = sysStateChanged, event args = 'previousState=discovering,
currentState=initializing, eventSource=user' event source ='User
initiated event'




event type — The event that triggered the notification.



event args — The arguments passed with the event.



errString — The event message, which can include hardware identifications and other
details.



eventSource — The source of the event; system is the typical value.
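
For instance, to review the events that the event manager most recently processed (an illustrative command):

grep "received & processing event" /nz/kit/log/eventmgr/eventmgr.log | tail -n 5   # last five processed events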


Flow Communications Retransmit
The flow communications retransmit log file records retransmission processes.

Log file
/nz/kit/log/fcommrtx/fcommrtx.log — Current log
/nz/kit/log/fcommrtx/fcommrtx.YYYY-MM-DD.log — Archived log

Sample Log Messages
2011-07-08 00:04:03.243429 EDT Info: NZ-00022: --- program
'fcommrtx' (2331) starting on host 'nzhost' ... ---

Host Statistics Generator
The hostStatsGen log file records the starting and stopping of the hostStatsGen process.

Log file
/nz/kit/log/hostStatsGen/hostStatsGen.log — Current log
/nz/kit/log/hostStatsGen/hostStatsGen.YYYY-MM-DD.log — Archived log

Sample Log Messages
2011-07-08 00:04:04.245426 EDT Info: NZ-00022: --- program
'hostStatsGen' (2383) starting on host 'D400-9E-D' ... ---
2011-07-08 00:11:08.447854 EDT Info: NZ-00023: --- program
'hostStatsGen' (2383) exiting on host 'D400-9E-D' ... ---

Load Manager
The loadmgr log file records details of load requests, and the stopping and starting of the
loadmgr.

Log file
/nz/kit/log/loadmgr/loadmgr.log — Current log
/nz/kit/log/loadmgr/loadmgr.YYYY-MM-DD.log — Archived log

Sample Log Messages
2011-07-08 00:02:02.898247 EDT Info: system is online - enabling
load sessions

Postgres
The postgres log file is the main database log file. It contains information about database
activities.

Log file
/nz/kit/log/postgres/pg.log — Current log
/nz/kit/log/postgres/pg.log.n — Archived log


Sample Log Messages
2011-07-08 09:40:05.336743 EDT [12615] DEBUG: CheckPointTime =
300
2011-07-08 09:40:05.338733 EDT [12616] NOTICE: database system
was shut down at 2011-07-08 09:40:05 EDT
2011-07-08 09:40:07.354693 EDT [12625] DEBUG: connection:
host=127.0.0.1 user=ADMIN database=SYSTEM
2011-07-08 09:40:07.358223 EDT [12625] DEBUG: QUERY: SET
timezone = 'America/New_York'
2011-07-08 09:40:07.358507 EDT [12625] DEBUG: QUERY: select
current_catalog, current_user
2011-07-08 09:40:07.359773 EDT [12625] DEBUG: QUERY: begin local
transaction;
2011-07-08 09:40:07.359950 EDT [12625] DEBUG: QUERY: select
null;
2011-07-08 09:40:07.360159 EDT [12625] DEBUG: QUERY: commit

Session Manager
The sessionmgr log file records details about the starting and stopping of the sessionmgr
process, and any errors associated with this process.

Log file
/nz/kit/log/sessionmgr/sessionmgr.log — Current log
/nz/kit/log/sessionmgr/sessionmgr.YYYY-MM-DD.log — Archived log

Sample Log Messages
2011-07-08 02:16:52.745743 EDT Info: NZ-00022: --- program
'sessionmgr' (3735) starting on host 'nzhost' ... ---
2011-07-08 02:24:01.119537 EDT Info: NZ-00023: --- program
'sessionmgr' (3735) exiting on host 'nzhost' ... ---

SPU Cores Manager
The /nz/kit/log/spucores directory contains core files and other information that is saved
when a SPU aborts on the host. If several SPUs abort, the system creates a core file for two
of the SPUs.
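
To check whether any SPU core files have been saved, you can list the directory directly (a simple illustrative check):

ls -lt /nz/kit/log/spucores | head    # most recently saved core files appear first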

Startup Server
The startupsvr log file records the startup of the Netezza processes and any errors encountered during startup.

Log file
/nz/kit/log/startupsvr/startupsvr.log — Current log
/nz/kit/log/startupsvr/startupsvr.YYYY-MM-DD.log — Archived log


Sample Log Messages
2011-07-08 00:03:54.649179 EDT Info: NZ-00022: --- program
'startupsvr' (932) starting on host 'D400-9E-D' ... ---
2011-07-08 00:03:54.650672 EDT Info: NZ-00307: starting the
system, restart = no
2011-07-08 00:03:54.650735 EDT Info: NZ-00313: running onStart:
'prepareForStart'
2011-07-08 00:03:54 EDT: Rebooting SPUs via RICMP ...
2011-07-08 00:03:57 EDT: Sending 'reboot' to all SPUs ... done
2011-07-08 00:03:57 EDT: Checking database directory sizes...

Statistics Server
The statssvr log file records the details of starting and stopping the statsSvr and any associated errors.

Log file
/nz/kit/log/statsSvr/statsSvr.log — Current log
/nz/kit/log/statsSvr/statsSvr.YYYY-MM-DD.log — Archived log

Sample Log Messages
2011-07-08 00:03:41.528687 EDT Info: NZ-00023: --- program
'statsSvr' (22227) exiting on host 'nzhost' ... ---
2011-07-08 00:04:04.249586 EDT Info: NZ-00022: --- program
'statsSvr' (2385) starting on host 'nzhost' ... ---

System Manager
The sysmgr log file records details of stopping and starting the sysmgr process, and details
of system initialization and system state status.

Log file
/nz/kit/log/sysmgr/sysmgr.log

Sample Log Messages
2011-07-08 00:04:04.248923 EDT Info: NZ-00022: --- program
'sysmgr' (2384) starting on host 'nzhost' ... ---

The nzDbosSpill File
The host data handling software in DbosEvent has a disk work area that the system uses for
large sorts on the host.
The Netezza system has two sorting mechanisms:


The Host Merge, which takes sorted SPU return sets and produces a single sorted set.
It uses temporary disk space to handle SPU double-duty situations.

A traditional sorter, which begins with a random table on the host and sorts it into the
desired order. It can use a simple external sort method to handle very large datasets.

The file on the Linux host for this disk work area is $NZ_TMP_DIR/nzDbosSpill. Within
DBOS there is a database that tracks segments of the file presently in use.
To avoid having a runaway query use up all the host computer's disk space, there is a limit
on the DbosEvent database, and hence the size of the Linux file. This limit is in the
Netezza Registry file. The tag for the value is startup.hostSwapSpaceLimit.
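
For example, to check the current value of this limit, you can filter the registry output (an illustrative use of the nzsystem command described in the next section):

nzsystem showRegistry -u admin -pw password -host nzhost | grep hostSwapSpaceLimit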

System Configuration
The system configuration file, system.cfg, contains configuration settings that the Netezza
system uses for system startup, system management, host processes, and SPUs. The system configuration file is also known as the system registry. Entries in the system.cfg file
allow you to control and tune the system.
As a best practice, you should not change or customize the system registry unless directed
to do so by Netezza Support or by a documented Netezza procedure. The registry contains
numerous entries, some of which are documented for use or for reference. Most settings are
internal and used only under direction from Netezza Support. Incorrect changes to the registry can cause performance impacts to the Netezza system. Many of the settings are
documented in Appendix D, “System Configuration File Settings.”
You can display the system configuration file settings using the nzsystem showRegistry
command. For more information, see “nzsystem” on page A-55.
Note: A default of zero in many cases indicates a compiled default, not the actual value
zero. Text (yes/no) and numbers indicate actual values.

Display Configuration Information
You can use the nzsystem command to show system registry information and software revision level.


To display the system registry information:
nzsystem showRegistry -u bob -pw pass -host nzhost
#
# Netezza NPS configuration registry
# Date: 30-Apr-09 12:48:44 EDT
# Revision: 5.0.D1
#
# Configuration options used during system start
# These options cannot be changed on a running system
#
startup.numSpus = 6
startup.numSpares = 0
startup.simMode = no
startup.autoCreateDb = 0
startup.spuSimMemoryMB = 0
startup.noPad = no
startup.mismatchOverRide = yes
startup.overrideSpuRev = 0
startup.dbosStartupTimeout = 300
...

The output from the command is very long; only a small portion is shown in the example.
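
Because the output is long, it is often convenient to filter it for the setting you need (an illustrative pipeline; startup.numSpus is one of the tags shown above):

nzsystem showRegistry -u bob -pw pass -host nzhost | grep startup.numSpus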

Changing the System Registry
To add or change the value of a system registry setting, you use the nzsystem set -arg command. For details about the command and its arguments, see “nzsystem” on page A-55.
You must pause the system before changing a setting, and then resume the system after
completing the nzsystem set command.
Do not change your system settings unless directed to do so by Netezza Support.



For example, to specify a system setting:
nzsystem set -arg setting=value
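
Because the system must be paused around the change, a complete change typically looks like the following sketch. The sysmgr.pausingStateTimeout tag is used purely as an illustration; change settings only as directed by Netezza Support:

nzsystem pause -u admin -pw password -host nzhost    # pause the system before the change
nzsystem set -arg sysmgr.pausingStateTimeout=600 -u admin -pw password -host nzhost
nzsystem resume -u admin -pw password -host nzhost   # resume after the change is complete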


CHAPTER 7
Managing Event Rules
What’s in this chapter
 Template Event Rules
 Managing Event Rules
 Template Event Reference

The Netezza event manager monitors the health, status, and activity of the Netezza system
operation and can take action when a specific event occurs. Event monitoring is a proactive
way to manage the system without continuous human observation. You can configure the
event manager to continually watch for specific conditions such as machine state changes,
hardware restarts, faults, or failures. In addition, the event manager can watch for conditions such as reaching a certain percentage of full disk space, queries that have been
running for longer than expected, and other Netezza system behaviors.
This chapter describes how to administer the Netezza system using event rules that you
create and manage.

Template Event Rules
Event management consists of creating rules that define conditions to monitor and the
actions to take when that condition is detected. The event manager uses these rules to
define its monitoring scope, and thus its behavior when a rule is triggered. Creating event
rules can be a complex process because you have to define the condition very specifically
so that the event manager can detect it, and you must define the actions to take when the
match occurs.
To help ease the process of creating event rules, Netezza supplies template event rules that
you can copy and tailor for your system. The template events define a set of common conditions to monitor with actions that are based on the type or impact of the condition. The
template event rules are not enabled by default, and you cannot change or delete the template events. You can copy them as “starter rules” for more customized rules in your
environment.
As a best practice, you should begin by copying and using the template rules. If you are
very familiar with event management and the operational characteristics of your Netezza
appliance, you can also create your own rules to monitor conditions which are important to
you. You can display the template event rules using the nzevent show -template command.
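
For example, using the same credential conventions as the other examples in this guide:

nzevent show -u admin -pw password -template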
Note: Release 5.0.x introduced new template events for IBM Netezza 100, 1000, C1000,
and N1001, and later systems. Previous event template rules specific to the z-series platform do not apply to IBM Netezza 1000 or IBM PureData System for Analytics N1001
systems and have been replaced by similar, new events.
Table 7-1 lists the predefined template event rules.
Table 7-1: Template Event Rules

Disk80PercentFull — Notifies you when a disk's space is more than 80 percent full. See "Specifying Disk Space Threshold Notification" on page 7-24.

Disk90PercentFull — Notifies you when a disk's space is more than 90 percent full. See "Specifying Disk Space Threshold Notification" on page 7-24.

EccError — Notifies you when the system detects an error correcting code (ECC) error. For more information, see "Monitoring for ECC Errors" on page 7-29.

HardwareNeedsAttention — Notifies you when the system detects a condition that could impact the hardware. For more information, see "Hardware Needs Attention" on page 7-21.

HardwareRestarted — Notifies you when a hardware component successfully reboots. For more information, see "Hardware Restarted" on page 7-24.

HardwareServiceRequested — Notifies you of the failure of a hardware component, which most likely requires a service call and/or replacement. For more information, see "Hardware Service Requested" on page 7-20.

HistCaptureEvent — Notifies you if there is a problem that prevents the current query history collection from writing files to the staging area.

HistLoadEvent — Notifies you if there is a problem that prevents the loading of the query history files in the staging area to the target query history database.

HwPathDown — Notifies you when the status of a disk path changes from the Up to the Down state (a path has failed). For more information, see "Hardware Path Down" on page 7-22.

NPSNoLongerOnline — Notifies you when the system goes from the online state to another state. For more information, see "Specifying System State Changes" on page 7-19.

RegenFault — Notifies you when the system cannot set up a data slice regeneration.

RunAwayQuery — Notifies you when a query exceeds a timeout limit. For more information, see "Specifying Runaway Query Notification" on page 7-26.

SCSIDiskError — Notifies you when the system manager detects that an active disk has failed, or when an FPGA error occurs.

SCSIPredictiveFailure — Notifies you when a disk's SCSI SMART threshold is exceeded.

SpuCore — Notifies you when the system detects that a SPU process has restarted and resulted in a core file. For more information, see "Monitoring SPU Cores" on page 7-37.

SystemHeatThresholdExceeded — When any three boards in an SPA reach the red temperature threshold, the event runs a command to shut down the SPAs, SFIs, and RPCs. For more information, see "Monitoring System Temperature" on page 7-33. Enabled by default for z-series systems only.

SystemOnline — Notifies you when the system is online. For more information, see "Specifying System State Changes" on page 7-19.

SystemStuckInState — Notifies you when the system is stuck in the Pausing Now state for more than the timeout specified by the sysmgr.pausingStateTimeout setting (420 seconds). For more information, see "Specifying System State Changes" on page 7-19.

ThermalFault — Notifies you when the temperature of a hardware component has exceeded its operating thresholds. For more information, see "Monitoring Hardware Temperature" on page 7-32.

TransactionLimitEvent — Sends an email notification when the number of outstanding transaction objects exceeds 90% of the available objects. For more information, see "Monitoring Transaction Limits" on page 7-38.

VoltageFault — Notifies you when the voltage of a hardware component has exceeded its operating thresholds. For more information, see "Monitoring Voltage Faults" on page 7-37.

Note: Netezza may add new event types to monitor conditions on the system. These event
types may not be available as templates, which means you must manually add a rule to
enable them. For a description of additional event types that could assist you with monitoring and managing the system, see “Event Types Reference” on page 7-40.


The action to take for an event often depends on the type of event (its impact on the system
operations or performance). Table 7-2 lists some of the predefined template events and
their corresponding impacts and actions.

Table 7-2: Netezza Template Event Rules
Disk80PercentFull, Disk90PercentFull
Type: hwDiskFull (Notice). Notify: Admins, DBAs. Severity: Moderate to Serious.
Impact: Full disk prevents some operations.
Action: Reclaim space or remove unwanted databases or older data. For more information, see "Specifying Disk Space Threshold Notification" on page 7-24.

EccError
Type: eccError (Notice). Notify: Admins, NPS. Severity: Moderate.
Impact: No impact. Records correctable memory errors.
Action: Ignore if occasional; replace when it occurs often. For more information, see "Monitoring for ECC Errors" on page 7-29.

HardwareNeedsAttention
Type: hwNeedsAttention. Notify: Admins, NPS. Severity: Moderate.
Impact: Possible change or issue that could start to impact performance.
Action: Investigate hardware problems and identify whether steps may be required to return the component to normal operations. For more information, see "Hardware Needs Attention" on page 7-21.

HardwareRestarted
Type: hwRestarted (Notice). Notify: Admins, NPS. Severity: Moderate.
Impact: Any query or data load in progress is lost.
Action: Investigate whether the cause is hardware or software. Check for SPU cores. For more information, see "Hardware Restarted" on page 7-24.

HardwareServiceRequested
Type: hwServiceRequested (Warning). Notify: Admins, NPS. Severity: Moderate to Serious.
Impact: Any query or work in progress is lost. Disk failures initiate a regeneration.
Action: Contact Netezza. For more information, see "Hardware Service Requested" on page 7-20.

HistCaptureEvent
Type: histCaptureEvent. Notify: Admins, NPS. Severity: Moderate to Serious.
Impact: Query history is unable to save captured history data in the staging area; query history will stop collecting new data.
Action: The size of the staging area has reached the configured size threshold, or there is no available disk space in /nz/data. Either increase the size threshold or free up disk space by deleting old files.

HistLoadEvent
Type: histLoadEvent. Notify: Admins, NPS. Severity: Moderate to Serious.
Impact: Query history is unable to load history data into the database; new history data will not be available in reports until it can be loaded.
Action: The history configuration may have changed, the history database may have been deleted, or there may be some kind of session connection error.

HwPathDown
Type: hwPathDown. Notify: Admins. Severity: Serious to Critical.
Impact: Query performance and possible system downtime.
Action: Contact Netezza Support. For more information, see "Hardware Path Down" on page 7-22.

NPSNoLongerOnline, SystemOnline
Type: sysStateChanged (Information). Notify: Admins, NPS, DBAs. Severity: Varies.
Impact: Availability status.
Action: Depends on the current state. For more information, see "Specifying System State Changes" on page 7-19.

RegenFault
Type: regenFault. Notify: Admins, NPS. Severity: Critical.
Impact: May prevent user data from being regenerated.
Action: Contact Netezza Support. For more information, see "Monitoring Regeneration Errors" on page 7-29.

RunAwayQuery
Type: runawayQuery (Notice). Notify: Admins, DBAs. Severity: Moderate.
Impact: Can consume resources needed for operations.
Action: Determine whether to allow the query to run; manage workload. For more information, see "Specifying Runaway Query Notification" on page 7-26.

SCSIDiskError
Type: scsiDiskError. Notify: Admins, NPS. Severity: Serious.
Impact: Impacts system performance.
Action: Schedule disk replacement as soon as possible. See "Monitoring Disk Errors" on page 7-30.

SCSIPredictiveFailure
Type: scsiPredictiveFailure. Notify: Admins, NPS. Severity: Critical.
Impact: Adversely affects performance.
Action: Schedule disk replacement as soon as possible. See "Monitoring for Disk Predictive Failure Errors" on page 7-28.

SpuCore
Type: spuCore. Notify: Admins, NPS. Severity: Moderate.
Impact: A SPU core file has occurred.
Action: The system created a SPU core file. See "Monitoring SPU Cores" on page 7-37.

SystemHeatThresholdExceeded
Type: sysHeatThreshold. Notify: Admins, NPS. Severity: Critical.
Impact: System shutdown.
Action: Before powering on the machine, check the SPA that caused this event to occur. For more information, see "Monitoring System Temperature" on page 7-33.

SystemStuckInState
Type: systemStuckInState (Information). Notify: Admins, NPS. Severity: Moderate.
Impact: A system is stuck in the "pausing now" state.
Action: Contact Support. See "Monitoring the System State" on page 7-27.

ThermalFault
Type: hwThermalFault. Notify: Admins, NPS. Severity: Serious.
Impact: Can drastically reduce disk life expectancy if ignored.
Action: Contact Netezza Support. For more information, see "Monitoring Hardware Temperature" on page 7-32.

TransactionLimitEvent
Type: transactionLimitEvent. Notify: Admins, NPS. Severity: Serious.
Impact: New transactions are blocked if the limit is reached.
Action: Abort some existing sessions which may be old and require cleanup, or stop/start the Netezza server to close all existing transactions.

VoltageFault
Type: hwVoltageFault. Notify: Admins, NPS. Severity: Serious.
Impact: May indicate power supply issues.
Action: For more information, see "Monitoring Voltage Faults" on page 7-37.
Managing Event Rules
To start using events, you must create and enable some event rules. You can use any of the
following methods to create and activate event rules:


Copy and enable a template event rule



Add an event rule

You can copy, modify, and add events using the nzevent command or the NzAdmin interface. You can also generate events to test the conditions and event notifications that you
are configuring. The following sections describe how to manage events using the nzevent

command. The NzAdmin tool has a very intuitive interface for managing events,
including a wizard for creating new events. For information on accessing the NzAdmin
interface, see "NzAdmin Tool Overview" on page 3-11.

Copying a Template Event to Create an Event Rule
You can use the nzevent copy command to copy a predefined template for activation. The
following example copies a template event named NPSNoLongerOnline to create a new
user-defined rule of the same name, adds a sample email address for contact, and activates the rule:
nzevent copy -u admin -pw password -useTemplate -name
NPSNoLongerOnline -newName NPSNoLongerOnline -on yes -dst
jdoe@company.com

When you copy a template event rule, which is disabled by default, your new rule is likewise disabled by default. You must enable it using the -on yes argument. In addition, if the
template rule sends email notifications, you must specify a destination email address.

Copying and Modifying a User-Defined Event Rule
You can copy, modify, and rename an existing user-defined rule using the nzevent copy
command. The following example copies, renames, and modifies an existing event rule:
nzevent copy -u admin -pw password -name NPSNoLongerOnline -newName
MyModNPSNoLongerOnline -on yes -dst jdoe@company.com -ccDst
tsmith@company.com -callhome yes

When you copy an existing user-defined event rule, note that your new rule will be enabled
automatically if the existing rule is enabled. If the existing rule is disabled, your new rule is
disabled by default. You must enable it using the -on yes argument. You must specify a
unique name for your new rule; it cannot match the name of the existing user-defined rule.

Generating an Event
You can use the nzevent generate command to trigger an event for the event manager. If
the event matches a current event rule, the system takes the action defined by the event
rule.
You might generate events for the following cases:


To simulate a system event to test an event rule.



To add new events, when the system is not generating events for conditions for
which you would like notification.

If the event that you want to generate has a restriction, specify the arguments that would
trigger the restriction using the -eventArgs option. For example, if a runaway query event
has a restriction that the duration of the query must be greater than 30 seconds, use a
command similar to the following to ensure that a generated event is triggered:
nzevent generate -eventtype runawayquery -eventArgs 'duration=50'

In this example, the duration meets the event criteria (greater than 30) and the event is
triggered. If you do not specify a value for a restriction argument in the -eventArgs string,
the command uses default values for the arguments. In this example, duration has a
default of 0, so the event would not be triggered since it did not meet the event criteria.




To generate an event for a system state change:
nzevent generate -eventType sysStateChanged
-eventArgs 'previousState=online, currentState=paused'

Deleting an Event Rule
You can delete event rules that you have created. You cannot delete the template events.


To delete an event rule, enter:
nzevent delete -u admin -pw password -name <rule name>

Disabling an Event Rule
To disable an event rule, enter:
nzevent modify -u admin -pw password -name <rule name> -on no

Adding an Event Rule
You can use the nzevent add command to add an event rule. You can also use the NzAdmin
tool to add event rules using a wizard for creating events. Adding an event rule consists of
two tasks: specifying the event match criteria and specifying the notification method.
(These tasks are described in more detail following the examples.)
Note: Although the z-series events do not appear as templates on IBM Netezza 1000 or
N1001 systems, you could add them using nzevent if you have the syntax documented in
the previous releases. However, these events are not supported on IBM Netezza 1000 or
later systems.


To add an event rule that sends an e-mail message when the system transitions from
the online state to any other state, enter:
nzevent add -name TheSystemGoingOnline -u admin -pw password
-on yes -eventType sysStateChanged -eventArgsExpr '$previousState
== online && $currentState != online' -notifyType email -dst
jdoe@company.com -msg 'NPS system $HOST went from $previousState to
$currentState at $eventTimestamp.' -bodyText
'$notifyMsg\n\nEvent:\n$eventDetail\nEvent
Rule:\n$eventRuleDetail'

Note: If you are creating event rules on a Windows client system, use double quotes instead
of single quotes to specify strings.

Specifying the Event Match Criteria
The Netezza event manager uses the match criterion portion of the event rule to determine
which events generate a notification and which ones the system merely logs. A match
occurs if the event type is the same and the optional event args expression evaluates to
true. If you do not specify an expression, the event manager uses only the event type to
determine a match.


The event manager generates notifications for all rules that match the criteria, not just for
the first event rule that matches. Table 7-3 lists the event types you can specify and the
arguments and values passed with the event. You can list the defined event types using
the nzevent listEventTypes command.
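For example (the credentials follow the conventions of the other nzevent examples in this chapter):

nzevent listEventTypes -u admin -pw password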

Table 7-3: Event Types
Event Type — Tag Names

sysStateChanged — previousState, currentState, eventSource.

hwFailed — Used only on z-series systems such as the 10000-series, 8000z-series, and 5200-series systems.

hwRestarted — hwType, hwId, spaId, spaSlot, devSerial, devHwRev, devFwRev. The hwType value can be spu, sfi, fan, or pwr, each with its associated identifiers.

hwDiskFull — hwType, hwId, spaId. The hwType value is spu. For more information, see "Specifying Disk Space Threshold Notification" on page 7-24.

runawayQuery — sessionId, planId, duration. For more information, see "Specifying Runaway Query Notification" on page 7-26.

custom1 or custom2 — User-specified rule. Use with the nzevent generate command. For more information, see "Creating a Custom Event Rule" on page 7-18.

smartThreshold — Used only on z-series systems such as the 10000-series, 8000z-series, and 5200-series systems.

eccError — hwType, hwId, spaId, spaSlot, errType, errCode, devSerial, devHwRev, devFwRev. The hwType value is spu.

regenError — Used only on z-series systems such as the 10000-series, 8000z-series, and 5200-series systems.

diskError — Used only on z-series systems such as the 10000-series, 8000z-series, and 5200-series systems.

hwHeatThreshold — Used only on z-series systems such as the 10000-series, 8000z-series, and 5200-series systems.

sysHeatThreshold — errType, errCode, errString. For more information, see "Specifying System State Changes" on page 7-19.

histCaptureEvent — configName, histType, storageLimit, loadMinThreshold, loadMaxThreshold, diskFullThreshold, loadInterval, nps, database, capturedSize, stagedSize, storageSize, dirName, errCode, errString.

histLoadEvent — configName, histType, storageLimit, loadMinThreshold, loadMaxThreshold, diskFullThreshold, loadInterval, nps, database, batchSize, stagedSize, dirName, errCode, errString.

hwVoltageFault — hwType, hwId, label, location, curVolt, errString, eventSource. The hwType value can be SPU.
• SPU, ,