CentricStor V3.1D
User Guide

Edition July 2007

Comments… Suggestions… Corrections…
The User Documentation Department would like to know your
opinion on this manual. Your feedback helps us to optimize our
documentation to suit your individual needs.
Feel free to send us your comments by e-mail to:
manuals@fujitsu-siemens.com

Certified documentation
according to DIN EN ISO 9001:2000
To ensure a consistently high quality standard and
user-friendliness, this documentation was created to
meet the regulations of a quality management system which
complies with the requirements of the standard
DIN EN ISO 9001:2000.
cognitas. Gesellschaft für Technik-Dokumentation mbH
www.cognitas.de

Copyright and Trademarks
Copyright © Fujitsu Siemens Computers GmbH 2007.
All rights reserved.
Delivery subject to availability; right of technical modifications reserved.
All hardware and software names used are trademarks of their respective manufacturers.
This manual was produced by
cognitas. Gesellschaft für Technik-Dokumentation mbH
www.cognitas.de

This manual is printed on paper treated with chlorine-free bleach.

Contents
1

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

1.1

Objective and target group for the manual . . . . . . . . . . . . . . . . . . . . . . 20

1.2

Concept of the manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

1.3

Notational conventions

1.4

Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2

CentricStor - Virtual Tape Library . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.1

The CentricStor principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.2
2.2.1
2.2.1.1
2.2.1.2
2.2.1.3
2.2.1.4
2.2.2
2.2.3
2.2.4
2.2.5

Hardware architecture . . . . . . . . . . . . . . .
ISP (Integrated Service Processor) . . . . . . . . .
VLP (Virtual Library Processor) . . . . . . . . .
ICP (Integrated Channel Processor) . . . . . . .
IDP (Integrated Device Processor) . . . . . . .
ICP_IDP or IUP (Integrated Universal Processor)
RAID systems for the Tape Volume Cache . . . . .
FibreChannel (FC) . . . . . . . . . . . . . . . . . .
FC switch (fibre channel switch) . . . . . . . . . . .
Host connection . . . . . . . . . . . . . . . . . . .

2.3

Software architecture

2.4

Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

2.5
2.5.1
2.5.2
2.5.3
2.5.4

Administering the tape cartridges . . . . . . . . . . . . . . . . .
Writing the tape cartridges according to the stacked volume principle
Repeated writing of a logical volume onto tape . . . . . . . . . . . .
Creating a directory . . . . . . . . . . . . . . . . . . . . . . . . . .
Reorganization of the tape cartridges . . . . . . . . . . . . . . . . .

U41117-J-Z125-7-76

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

26
27
27
28
29
29
30
31
31
32

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

35
35
36
36
37

Contents

2.6
2.6.1
2.6.2
2.6.3

Procedures . . . . . . . . . . . . . . . .
Creating the CentricStor data maintenance
Issuing a mount job from the host . . . . .
Scratch mount . . . . . . . . . . . . . . .

2.7

New system functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

2.8
2.8.1
2.8.2
2.8.3
2.8.4
2.8.4.1
2.8.4.2

Standard system functions . . . . . .
Partitioning by volume groups . . . . . .
“Call Home” in the event of an error . . .
SNMP support . . . . . . . . . . . . . .
Exporting and importing tape cartridges .
Vault attribute and vault status . . . .
Transfer PVG . . . . . . . . . . . . .

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

44
44
44
45
45
46
46

2.9
2.9.1
2.9.2
2.9.3
2.9.4
2.9.5
2.9.6
2.9.7
2.9.8
2.9.8.1
2.9.8.2
2.9.8.3
2.9.8.4
2.9.8.5
2.9.9

Optional system functions . . . . . . . . . . . . . . . . . .
Compression . . . . . . . . . . . . . . . . . . . . . . . . . .
Multiple library support . . . . . . . . . . . . . . . . . . . . .
Dual Save . . . . . . . . . . . . . . . . . . . . . . . . . . .
Extending virtual drives . . . . . . . . . . . . . . . . . . . .
System administrator’s edition . . . . . . . . . . . . . . . . .
Fibre channel connection for load balancing and redundancy .
Automatic VLP failover . . . . . . . . . . . . . . . . . . . . .
Cache Mirroring Feature . . . . . . . . . . . . . . . . . . . .
General . . . . . . . . . . . . . . . . . . . . . . . . . .
Hardware requirements . . . . . . . . . . . . . . . . . .
Software requirements . . . . . . . . . . . . . . . . . . .
Mirrored RAID systems . . . . . . . . . . . . . . . . . .
Presentation of the mirror function in GXCC . . . . . . .
Accounting . . . . . . . . . . . . . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

47
48
49
50
52
52
52
52
55
55
55
56
57
58
59

3

Switching CentricStor on/off

3.1

Switching CentricStor on . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

3.2

Switching CentricStor off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

4

Selected system administrator activities . . . . . . . . . . . . . . . . . . . . . . . 63

4.1
4.1.1
4.1.2

Partitioning on the basis of volume groups . . . . . . . . . . . . . . . . . . . . . 63
General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

38
38
39
42

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

U41117-J-Z125-7-76

Contents

4.1.3
4.1.3.1
4.1.3.2
4.1.3.3
4.1.3.4
4.1.3.5
4.1.3.6
4.1.3.7
4.1.3.8
4.1.3.9
4.1.3.10
4.1.3.11

System administrator activities . . . . . . . . . . . . . . . .
Adding a logical volume group . . . . . . . . . . . . . . .
Adding a physical volume group . . . . . . . . . . . . . .
Adding logical volumes to a logical volume group . . . . .
Adding physical volumes to a physical volume group . . .
Assigning an LVG to a PVG . . . . . . . . . . . . . . . .
Removing an assignment between an LVG and a PVG . .
Changing logical volumes to another group . . . . . . . .
Removing logical volumes . . . . . . . . . . . . . . . . .
Removing logical volume groups . . . . . . . . . . . . .
Removing physical volumes from a physical volume group
Removing physical volume groups . . . . . . . . . . . .

4.2

Cache management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

4.3
4.3.1
4.3.2
4.3.2.1
4.3.2.2

Dual Save . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
System administrator activities . . . . . . . . . . . . . . . . . . .
Assigning a logical volume group to two physical volume groups
Removing a Dual Save assignment . . . . . . . . . . . . . . .

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

71
71
72
72
72

4.4
4.4.1
4.4.2
4.4.3
4.4.4
4.4.5
4.4.6
4.4.7

Reorganization . . . . . . . . . . . . . . . . . . . .
Why do we need reorganization? . . . . . . . . . . .
How is a physical volume reorganized? . . . . . . . .
When is a reorganization performed? . . . . . . . . .
Which physical volume is selected for reorganization?
Own physical volumes for reorganization backup . . .
Starting the reorganization of a physical volume . . .
Configuration parameters . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

73
73
74
75
76
78
78
79

4.5

Cleaning physical drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

4.6

Synchronization of the system time using NTP . . . . . . . . . . . . . . . . . . . 82

5

Operating and monitoring CentricStor . . . . . . . . . . . . . . . . . . . . . . . . 83

5.1
5.1.1
5.1.2
5.1.3
5.1.4

Technical design . . . . . . . . . . . . . .
General . . . . . . . . . . . . . . . . . . .
Principles of operation of GXCC . . . . . . .
Monitoring structure within a CentricStor ISP
Operating modes . . . . . . . . . . . . . .

5.2
5.2.1
5.2.2

Operator configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Basic configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

U41117-J-Z125-7-76

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.

65
66
66
66
67
67
67
68
68
68
69
69

83
83
84
87
90

Contents

5.2.3
5.2.4
5.2.5
5.2.5.1
5.2.5.2

GXCC in other systems . . . . . . . . . . . . .
Screen display requirements . . . . . . . . . .
Managing CentricStor via SNMP . . . . . . . .
Connection to SNMP management systems
SNMP and GXCC . . . . . . . . . . . . . .

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

5.3
5.3.1
5.3.2
5.3.2.1
5.3.3
5.3.4
5.3.4.1
5.3.4.2
5.3.4.3
5.3.4.4
5.3.5
5.3.6
5.3.6.1
5.3.6.2
5.3.6.3
5.3.6.4
5.3.6.5
5.3.7
5.3.8
5.3.9
5.3.10
5.3.11

Starting GXCC . . . . . . . . . . . . . . . . . . .
Differences to earlier CentricStor versions . . . . .
Command line . . . . . . . . . . . . . . . . . . . .
Explanation of the start parameter -aspect . . .
Environment variable XTCC_CLASS . . . . . . . .
Passwords . . . . . . . . . . . . . . . . . . . . . .
Optional access control for Observe mode . . .
Authentication . . . . . . . . . . . . . . . . . .
Suppressing the password query . . . . . . . .
Additional password query . . . . . . . . . . . .
Starting the CentricStor console . . . . . . . . . . .
Starting from an X11 server . . . . . . . . . . . . .
General notes on the X11 server architecture . .
Using the direct XDMCP interface . . . . . . . .
Starting from a UNIX system . . . . . . . . . .
Starting from a Windows system via Exceed . .
Starting from a Windows/NT system via XVision
GXCC welcome screen . . . . . . . . . . . . . . .
Selecting the CentricStor system . . . . . . . . . .
Establishing a connection after clicking on OK . . .
Authentication . . . . . . . . . . . . . . . . . . . .
Software updates . . . . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

. 95
. 95
. 95
. 97
. 98
. 98
. 99
. 99
100
101
102
102
102
104
104
105
108
114
116
116
117
118

6

GXCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

6.1
6.1.1
6.1.2
6.1.3
6.1.3.1
6.1.3.2
6.1.3.3
6.1.3.4
6.1.3.5
6.1.3.6
6.1.4
6.1.5

Main window . . . . . . . . . . . . . . . . . . .
Standard . . . . . . . . . . . . . . . . . . . . . .
Loss of a connection . . . . . . . . . . . . . . . .
Elements of the GXCC main window . . . . . . .
Title bar . . . . . . . . . . . . . . . . . . . .
Footer . . . . . . . . . . . . . . . . . . . . .
Function buttons and displays in the button bar
System information . . . . . . . . . . . . . .
Console messages . . . . . . . . . . . . . .
Function bar . . . . . . . . . . . . . . . . . .
Message window . . . . . . . . . . . . . . . . .
Asynchronous errors . . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

92
92
92
92
93

119
119
120
121
121
121
123
123
124
124
125
125

U41117-J-Z125-7-76

Contents

6.1.6
6.1.6.1
6.1.6.2
6.1.7
6.1.8
6.1.9
6.1.9.1
6.1.10
6.1.11
6.1.12
6.1.12.1
6.1.13
6.1.14
6.1.14.1
6.1.14.2
6.1.15
6.1.16
6.1.17
6.1.18
6.1.19

Block diagram . . . . . . . . . . . . . . . . . . . .
Status information . . . . . . . . . . . . . . . .
Object information and object-related functions .
ICP object information . . . . . . . . . . . . . . . .
IDP object information . . . . . . . . . . . . . . . .
Functions of an ISP . . . . . . . . . . . . . . . . .
Show Details (XTCC) . . . . . . . . . . . . . .
Functions for all ISPs of a particular class . . . . . .
Information about the RAID systems . . . . . . . .
RAID system functions . . . . . . . . . . . . . . . .
Show complete RAID status . . . . . . . . . . .
Information on Fibre Channel fabric . . . . . . . . .
Functions of the Fibre Channel fabric . . . . . . . .
Controller Color Scheme . . . . . . . . . . . .
Show data fcswitch  [(trap)]
Information about the FC connections . . . . . . . .
Information on the archive systems . . . . . . . . .
ISP system messages . . . . . . . . . . . . . . . .
SNMP messages . . . . . . . . . . . . . . . . . .
Configuration Changed . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

126
133
133
134
135
135
135
135
136
137
137
138
138
139
139
140
140
141
141
142

6.2
6.2.1
6.2.2
6.2.2.1
6.2.2.2
6.2.2.3
6.2.2.4
6.2.2.5
6.2.3
6.2.3.1
6.2.4
6.2.4.1
6.2.4.2
6.2.5
6.2.5.1
6.2.5.2
6.2.6
6.2.6.1
6.2.6.2
6.2.6.3
6.2.6.4
6.2.6.5

Function bar . . . . . . . . . . .
Overview of GXCC functions . . .
File . . . . . . . . . . . . . . . . .
Save . . . . . . . . . . . . . .
Open . . . . . . . . . . . . . .
Show . . . . . . . . . . . . . .
Print . . . . . . . . . . . . . .
Exit . . . . . . . . . . . . . . .
Unit . . . . . . . . . . . . . . . . .
Select . . . . . . . . . . . . .
Options . . . . . . . . . . . . . . .
Settings . . . . . . . . . . . .
Show Current Aspect . . . . .
Autoscan . . . . . . . . . . . . . .
Start Autoscan/Stop Autoscan .
Settings . . . . . . . . . . . .
Tools . . . . . . . . . . . . . . . .
Global Status . . . . . . . . .
Get Remote/Expand Local File
Show Remote File . . . . . . .
Show System Messages . . .
GXCC Update/Revert Tool . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

143
143
145
145
146
146
146
147
147
147
150
150
151
152
152
153
154
154
154
156
158
159

U41117-J-Z125-7-76

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Contents

6.2.7
6.2.7.1
6.2.7.2
6.2.7.3
6.2.7.4
6.2.8
6.2.8.1
6.2.9
6.2.9.1
6.2.9.2
6.2.9.3
6.2.9.4
6.2.9.5
6.2.9.6
6.2.9.7
6.2.9.8
6.2.9.9
6.2.9.10
6.2.9.11
6.2.9.12
6.2.9.13
6.2.9.14
6.2.9.15
6.2.9.16
6.2.9.17
6.2.10
6.2.10.1
6.2.10.2
6.2.10.3
6.2.10.4
6.2.10.5
6.2.10.6
6.2.10.7

Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
RAID Filesystems . . . . . . . . . . . . . . . . . . . . . . . . . .
Logical Volume Groups . . . . . . . . . . . . . . . . . . . . . . .
Physical Volume Groups . . . . . . . . . . . . . . . . . . . . . . .
Distribute and Activate . . . . . . . . . . . . . . . . . . . . . . . .
Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Add/Select Profile . . . . . . . . . . . . . . . . . . . . . . . . . .
Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Show WWN’s . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Show Optional Functions . . . . . . . . . . . . . . . . . . . . . .
Show CS Configuration . . . . . . . . . . . . . . . . . . . . . . .
Diagnostic Snapshots . . . . . . . . . . . . . . . . . . . . . . . .
Logical Volume Operations . . . . . . . . . . . . . . . . . . . . .
Logical Volume Operations » Show Logical Volumes . . . . . . . .
Logical Volume Operations » Show Logical Volumes (physical view)
Logical Volume Operations » Change Volume Group . . . . . . . .
Logical Volume Operations » Add Logical Volumes . . . . . . . . .
Logical Volume Operations » Erase Logical Volumes . . . . . . . .
Physical Volume Operations . . . . . . . . . . . . . . . . . . . . .
Physical Volume Operations » Show Physical Volumes . . . . . . .
Physical Volume Operations » Link/Unlink Volume Groups . . . . .
Physical Volume Operations » Add Physical Volumes . . . . . . . .
Physical Volume Operations » Erase Physical Volumes . . . . . . .
Physical Volume Operations » Reorganize Physical Volumes . . . .
Setup for accounting mails . . . . . . . . . . . . . . . . . . . . . .
Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Readme / LIESMICH . . . . . . . . . . . . . . . . . . . . . . . .
Direct Help / Direkthilfe . . . . . . . . . . . . . . . . . . . . . . .
System Messages . . . . . . . . . . . . . . . . . . . . . . . . . .
About GXCC... . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Revision Summary . . . . . . . . . . . . . . . . . . . . . . . . . .
Hardware Summary . . . . . . . . . . . . . . . . . . . . . . . . .
Online Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

166
171
173
181
188
191
191
193
195
196
197
197
202
203
207
209
211
213
215
215
221
223
226
228
229
232
232
232
232
232
233
234
235

7

Global Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

7.1

General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

7.2

Operation of the Global Status Monitor . . . . . . . . . . . . . . . . . . . . . . . 239

7.3
7.3.1
7.3.1.1

Function bar of the Global Status Monitor . . . . . . . . . . . . . . . . . . . . . 239
File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Print . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239

U41117-J-Z125-7-76

Contents

7.3.1.2
7.3.2
7.3.3
7.3.3.1
7.3.3.2
7.3.4
7.3.5

Exit . . . . . . . . . . . . . . . .
Config . . . . . . . . . . . . . . . .
Tools . . . . . . . . . . . . . . . . .
Global eXtended Control Center .
Show Balloon Help Summary . .
Statistics . . . . . . . . . . . . . . .
Help . . . . . . . . . . . . . . . . .

7.4

Global Status button bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246

7.5
7.5.1
7.5.2
7.5.3

Display of the Global Status Monitor
Performance . . . . . . . . . . . . . .
Virtual Components . . . . . . . . . .
Physical Components . . . . . . . . .

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

247
249
251
254

7.6
7.6.1
7.6.1.1
7.6.1.2
7.6.1.3
7.6.1.4
7.6.2
7.6.2.1
7.6.2.2
7.6.2.3
7.6.2.4
7.6.2.5
7.6.2.6
7.6.2.7
7.6.2.8
7.6.2.9
7.6.2.10
7.6.2.11
7.6.2.12
7.6.2.13
7.6.2.14
7.6.2.15
7.6.2.16
7.6.3

History data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Recording analog operating data . . . . . . . . . . . . . . . . .
Overview of the displays . . . . . . . . . . . . . . . . . . . . . .
Selecting the time period . . . . . . . . . . . . . . . . . . . . .
Selecting the presentation mode . . . . . . . . . . . . . . . . .
Data which can be called via the function bar . . . . . . . . . . . . .
Statistics » History of . . . . . . . . . . . . . . . . . . . . . . .
Statistics » History of » Cache Usage . . . . . . . . . . . . . . .
Statistics » History of » Channel/Device Performance . . . . . .
Statistics » Logical Components . . . . . . . . . . . . . . . . . .
Statistics » Logical Components » Logical Drives . . . . . . . . .
Statistics » Logical Components »Logical Volumes (physical view)
Statistics » Logical Components » Logical Volumes (logical view)
Statistics » Logical Components » Logical Volume Groups . . . .
Statistics » Logical Components » Jobs of Logical Volume Groups
Statistics » Physical Components . . . . . . . . . . . . . . . . .
Statistics » Physical Components » Physical Drives . . . . . . .
Statistics » Physical Components » Physical Volumes . . . . . .
Statistics » Physical Components » Physical Volume Groups . . .
Statistics » Physical Components » Jobs of Physical Vol. Groups
Statistics » Physical Components » Reorganization Status . . . .
Statistics » Usage (Accounting) . . . . . . . . . . . . . . . . . .
Data which can be called via objects of the Global Status . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

257
258
258
258
262
263
264
264
265
266
267
268
271
272
273
275
276
277
279
283
289
291
293
297

7.7
7.7.1
7.7.1.1
7.7.1.2
7.7.1.3
7.7.1.4

History diagrams
Function/menu bar
File . . . . . .
Date . . . . .
Time . . . . .
Range . . . .

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

298
298
298
300
301
301

U41117-J-Z125-7-76

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

240
241
242
242
242
245
245

Contents

7.7.1.5
7.7.1.6
7.7.1.7
7.7.1.8
7.7.2
7.7.3
7.7.4
7.7.5
7.7.5.1
7.7.5.2
7.7.6
7.7.6.1
7.7.6.2
7.7.7
7.7.8

Run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Toolbar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Status bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Diagrams for the throughput (left-hand part of the screen) . . . . . .
Diagrams for virtual components (central part of the screen) . . . . .
ICP emulations . . . . . . . . . . . . . . . . . . . . . . . . . .
Cache Usage . . . . . . . . . . . . . . . . . . . . . . . . . . .
Diagrams of the physical components (right-hand part of the screen)
IDP statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Tape pool values . . . . . . . . . . . . . . . . . . . . . . . . . .
Exporting history data . . . . . . . . . . . . . . . . . . . . . . . . .
Command line tool for generating the history data . . . . . . . . . .

8

XTCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325

8.1

General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325

8.2
8.2.1
8.2.2

Margins of the main XTCC window . . . . . . . . . . . . . . . . . . . . . . . . . 328
Title bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Status bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328

8.3
8.3.1
8.3.1.1
8.3.1.2
8.3.1.3
8.3.1.4
8.3.1.5
8.3.2
8.3.2.1
8.3.3
8.3.3.1
8.3.3.2
8.3.3.3
8.3.3.4
8.3.3.5
8.3.4
8.3.4.1
8.3.4.2
8.3.4.3

Function bar . . . . . .
File . . . . . . . . . . . .
Select . . . . . . . .
Save . . . . . . . . .
Show . . . . . . . . .
Print . . . . . . . . .
Exit . . . . . . . . . .
Unit . . . . . . . . . . . .
Select . . . . . . . .
Options . . . . . . . . . .
Settings . . . . . . .
Toggle Size . . . . .
Toggle Aspect . . . .
Show Current Aspect
Apply Current Aspect
Autoscan . . . . . . . . .
Start . . . . . . . . .
Stop . . . . . . . . .
Settings . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

301
301
302
302
302
303
303
305
305
309
310
310
313
314
316

330
331
331
331
331
332
334
335
335
336
336
337
337
337
337
338
338
338
339

U41117-J-Z125-7-76

Contents

8.3.4.4
8.3.4.5
8.3.5
8.3.5.1
8.3.5.2
8.3.5.3
8.3.5.4
8.3.5.5
8.3.6
8.3.6.1
8.3.7
8.3.7.1
8.3.7.2
8.3.7.3
8.3.7.4
8.3.7.5
8.3.7.6

Scan Now . . . . . . . . . . . . . .
Interaction Timeout . . . . . . . . .
Tools . . . . . . . . . . . . . . . . . . .
XTCC Communications . . . . . . .
Get Remote/Expand Local File . . .
Show Remote File . . . . . . . . . .
Compare Local Files . . . . . . . . .
XTCC Update/Revert . . . . . . . .
Profile . . . . . . . . . . . . . . . . . .
Select . . . . . . . . . . . . . . . .
Help . . . . . . . . . . . . . . . . . . .
README / LIESMICH . . . . . . . .
Direct Help / Direkthilfe . . . . . . .
Mouse Functions / Maus-Funktionen
About XTCC... . . . . . . . . . . . .
CentricStor User Guide . . . . . . .
CentricStor Service Manual . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

340
340
341
341
342
342
343
343
344
346
348
348
348
348
349
350
351

8.4
8.4.1
8.4.2
8.4.3
8.4.4
8.4.5

Elements of the XTCC window
Display . . . . . . . . . . . . . .
Unexpected errors . . . . . . . .
Message window . . . . . . . .
Object-related functions . . . . .
Group display . . . . . . . . . .

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

352
352
354
354
355
356

8.5
8.5.1
8.5.2
8.5.3
8.5.3.1
8.5.3.2
8.5.3.3
8.5.3.4
8.5.3.5
8.5.4
8.5.4.1
8.5.4.2
8.5.5
8.5.5.1
8.5.5.2
8.5.6
8.5.7
8.5.8
8.5.8.1
8.5.8.2

File viewer . . . . . . . . . . . . . . .
Opening the file viewer . . . . . . . .
Function bar . . . . . . . . . . . . . .
File . . . . . . . . . . . . . . . . . . .
Open (Text)/Open (Hex) . . . . . .
Save As . . . . . . . . . . . . . .
Re-read . . . . . . . . . . . . . .
Print . . . . . . . . . . . . . . . .
Exit . . . . . . . . . . . . . . . . .
AutoUpdate . . . . . . . . . . . . . .
Start . . . . . . . . . . . . . . . .
Stop . . . . . . . . . . . . . . . .
AutoPopup . . . . . . . . . . . . . . .
Enable . . . . . . . . . . . . . . .
Disable . . . . . . . . . . . . . . .
Highlight . . . . . . . . . . . . . . . .
Search down/up . . . . . . . . . . . .
Mode . . . . . . . . . . . . . . . . . .
1st Line -> Ruler/Selection -> Ruler
Text/Hex . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

360
360
360
361
361
361
361
361
361
362
362
362
362
362
362
363
364
365
365
365

U41117-J-Z125-7-76

.
.
.
.
.
.

.
.
.
.
.
.

Contents

8.5.8.3
8.5.8.4
8.5.8.5
8.5.9

Abort . . . . . . . . . . . .
Enlarge Font / Reduce Font
Tab Stop Interval . . . . . .
Help . . . . . . . . . . . . . .

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

365
365
366
366

8.6
8.6.1
8.6.2
8.6.2.1
8.6.2.2
8.6.2.3
8.6.2.4
8.6.2.5
8.6.2.6
8.6.2.7
8.6.2.8

ISP . . . . . . . . . . . . . . . . . .
Object information on the ISP . . . .
ISP functions . . . . . . . . . . . . .
Show Revision History . . . . . .
Version Consistency Check . . .
Show Diff. Curr./Prev. Version . .
Show Node Element Descriptors
Show Configuration Data . . . .
Show System Log . . . . . . . .
Show SNMP Data . . . . . . . .
Clean File System . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

367
367
368
369
370
371
372
373
375
375
375

8.7
8.7.1
8.7.1.1
8.7.1.2
8.7.1.3
8.7.1.4
8.7.1.5
8.7.2
8.7.2.1
8.7.2.2
8.7.2.3

Internal objects of the ISP . . . . . . . . . . . . . .
Representation of internal objects . . . . . . . . . . .
Hard disk drives . . . . . . . . . . . . . . . . . .
CD-ROM . . . . . . . . . . . . . . . . . . . . . .
Streamer . . . . . . . . . . . . . . . . . . . . . .
SCSI controller . . . . . . . . . . . . . . . . . . .
RAID controller . . . . . . . . . . . . . . . . . .
Functions of the ISP-internal objects . . . . . . . . .
Hard disk, CD-ROM, streamer, all internal objects
SCSI controller . . . . . . . . . . . . . . . . . . .
RAID controller . . . . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

376
376
376
377
377
378
378
378
378
378
378

8.8
8.8.1
8.8.2
8.8.2.1
8.8.2.2
8.8.2.3

ESCON/FICON host adapter . . . . . . . . . . . . . .
Object information for the ESCON/FICON host adapter .
ESCON/FICON host adapter functions . . . . . . . . .
Show Node ID Details . . . . . . . . . . . . . . . .
Show Node Element Descriptors . . . . . . . . . .
Show Dump (prkdump) . . . . . . . . . . . . . . .

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

379
379
381
381
382
383

8.9
8.9.1
8.9.2
8.9.2.1
8.9.2.2
8.9.2.3
8.9.2.4
8.9.2.5
8.9.3

Emulations of drives connected to OS/390 host adapters
Information on emulations . . . . . . . . . . . . . . . . . . .
Functions for individual 3490 emulations . . . . . . . . . . .
Show Error/Transfer Statistics . . . . . . . . . . . . . . .
Show Short Trace . . . . . . . . . . . . . . . . . . . . .
Show Path Trace . . . . . . . . . . . . . . . . . . . . . .
Show Error Log . . . . . . . . . . . . . . . . . . . . . .
Show Memory Log . . . . . . . . . . . . . . . . . . . . .
Functions for all 3490 emulations . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

384
384
385
386
387
388
389
390
390

U41117-J-Z125-7-76

Contents

8.10
8.10.1
8.10.1.1
8.10.1.2
8.10.1.3
8.10.2
8.10.2.1
8.10.2.2
8.10.2.3
8.10.2.4

Virtual 3490 drives . . . . . . . . . . . . . . . . . . . . . .
Object information and error messages for virtual 3490 drives
Error conditions indicated on the display . . . . . . . . .
Object information . . . . . . . . . . . . . . . . . . . . .
SIM/MIM error messages on virtual devices . . . . . . . .
Virtual drive functions . . . . . . . . . . . . . . . . . . . . .
Show SCSI Sense . . . . . . . . . . . . . . . . . . . . .
Show Medium Info (MIM) . . . . . . . . . . . . . . . . .
Show Service Info (SIM) . . . . . . . . . . . . . . . . . .
Unload and Unmount . . . . . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

391
391
392
393
393
394
395
396
397
397

8.11
8.11.1
8.11.2
8.11.2.1

FC-SCSI host adapter . . . . . . . . . . . .
Object information on FC-SCSI host adapters .
FC-SCSI host adapter functions . . . . . . . .
Perform Link Down/Up Sequence . . . . .

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

398
398
399
399

8.12
8.12.1
8.12.2
8.12.2.1
8.12.3

Emulations of SCSI drives (VTD) . . . . . . . .
Object information on emulations of SCSI devices
Functions for individual VTD emulations . . . . . .
Show Trace . . . . . . . . . . . . . . . . . .
Functions for all VTD emulations . . . . . . . . .

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

399
399
401
401
401

8.13
8.13.1
8.13.2
8.13.2.1
8.13.2.2
8.13.2.3
8.13.2.4

Virtual SCSI drives . . . . . . . . . . .
Object information on virtual tape drives
Virtual generic drive functions . . . . . .
Show SCSI Sense . . . . . . . . . .
Show Medium Info (MIM) . . . . . .
Show Service Info (SIM) . . . . . . .
Unload and Unmount . . . . . . . .

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

402
402
404
404
404
404
404

8.14
8.14.1
8.14.2
8.14.2.1
8.14.3

VLS (Virtual Library Service) . . . .
Object information on VLSs . . . . . .
Functions for individual VLSs . . . . .
Show Trace . . . . . . . . . . . .
Global functions for all VLSs of an ISP

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

405
405
406
406
406

8.15
8.15.1
8.15.2

VMD (Virtual Mount Daemon) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Object information on the Virtual Mount Daemon (VMD) . . . . . . . . . . . . . . . 407
VMD functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407

8.16
8.16.1
8.16.2
8.16.2.1
8.16.2.2

VLM (Virtual Library Manager) . .
Object information for the VLM . .
VLM functions . . . . . . . . . . .
Show Cache Status . . . . . .
Set HALT Mode/Set RUN Mode

U41117-J-Z125-7-76

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

408
408
408
408
410

Contents

8.17
8.17.1
8.17.2
8.17.2.1
8.17.2.2
8.17.2.3
8.17.2.4
8.17.2.5

RAID systems . . . . . . . . . . . . . . . . . . . .
Object information on RAID systems . . . . . . . .
Functions of RAID systems . . . . . . . . . . . . .
Show Complete RAID Status (all types) . . . . .
Show Mode Pages (CX500/CX3-20 and FCS80)
Show Mode Page Details . . . . . . . . . . . .
Show Log Pages . . . . . . . . . . . . . . . . .
Show Log Page Details . . . . . . . . . . . . .

8.18
8.18.1
8.18.2

PLM (Physical Library Manager) . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Object information on the PLM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
PLM functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416

8.19
8.19.1
8.19.2
8.19.3

PLS (Physical Library Service)
Object information on the PLS . .
Functions for individual PLSs . .
Functions for all PLSs . . . . . .

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

417
417
417
417

8.20
8.20.1
8.20.2
8.20.2.1
8.20.2.2
8.20.2.3
8.20.2.4

SCSI archive systems . . . . . . . .
Object information on archive systems
SCSI Archive system functions . . . .
Show Mode Pages . . . . . . . . .
Show Mode Page Details . . . . .
Show Log Pages . . . . . . . . . .
Show Log Page Details . . . . . .

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

418
418
419
419
419
419
419

8.21
8.21.1
8.21.2

PDS (Physical Device Service) . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Object information on PDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
PDS functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420

8.22
8.22.1
8.22.2
8.22.2.1
8.22.2.2

SCSI controllers . . . . . . . . . . .
Object information on SCSI controllers
SCSI controller functions . . . . . . .
Rescan own Bus . . . . . . . . . .
Rescan all Busses . . . . . . . . .

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

421
421
422
422
422

8.23
8.23.1
8.23.2
8.23.2.1
8.23.2.2
8.23.2.3
8.23.2.4
8.23.2.5
8.23.2.6
8.23.2.7
8.23.2.8

Cartridge drives (real) . . . . .
Object information on tape drives
Tape drive functions . . . . . . .
Show SCSI Sense . . . . . .
Show Log Pages . . . . . . .
Show Log Page Details . . .
Show Mode Pages . . . . . .
Show Mode Page Details . .
Show Vital Product Data . . .
Show Medium Info (MIM) . .
Show Service Info (SIM) . . .

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

423
423
425
426
426
427
427
428
429
431
432

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

411
411
414
414
415
415
415
415

U41117-J-Z125-7-76

Contents

8.23.3
8.23.3.1

Global functions for tape drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Remove Symbols of all Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . 433

8.24
8.24.1
8.24.2
8.24.2.1
8.24.2.2

MSGMGR (Message Manager) . . . . . . . . . . . . . .
Object information on the Message Manager (MSGMGR)
MSGMGR functions . . . . . . . . . . . . . . . . . . . .
Show Trace . . . . . . . . . . . . . . . . . . . . . .
Show Trap Trace . . . . . . . . . . . . . . . . . . . .

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

433
433
433
434
434

8.25
8.25.1
8.25.2
8.25.2.1

PERFLOG . . . . . . . . . . . .
Object information of PERFLOG .
PERFLOG functions . . . . . . .
Show Trace & Logging . . . .

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

435
435
436
436

8.26
8.26.1
8.26.2

ACCOUNTD (Account Daemon) . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Object information of ACCOUNTD . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Functions of the ACCOUNTD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437

8.27
8.27.1
8.27.2

MIRRORD (mirror daemon) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Object information of MIRRORD . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Functions of MIRRORD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438

8.28
8.28.1
8.28.2

S80D (S80 daemon) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Object information of S80D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Functions of S80D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439

8.29
8.29.1
8.29.2

VLPWATCH (VLPwatch daemon) . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Object information of VLPWATCH . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Functions of VLPWATCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440

9

Explanation of console messages . . . . . . . . . . . . . . . . . . . . . . . . . 441

9.1

General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441

9.2
9.2.1
9.2.2
9.2.3
9.2.4
9.2.5
9.2.6
9.2.7
9.2.8
9.2.9
9.2.10
9.2.11

Message lines . . . . . . . . . . . . . .
SXCF... (CMF: Cache Mirroring Feature)
SXCH... (Channel: pcib/pcea) . . . . . .
SXCM... (CHIM) . . . . . . . . . . . . .
SXDN... (DNA: Distribute and Activate) .
SXDT... (DTV File System) . . . . . . .
SXFC... (FibreChannel Driver) . . . . . .
SXFP... (FibreChannel Driver) . . . . . .
SXFW... (Firmware) . . . . . . . . . . .
SXIB... (Info Broker) . . . . . . . . . . .
SXLA... (LANWATCH) . . . . . . . . . .
SXLV... (Log Volume) . . . . . . . . . .

U41117-J-Z125-7-76

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

445
445
447
449
450
450
452
454
455
456
458
458

Contents

9.2.12
9.2.13
9.2.14
9.2.15
9.2.15.1
9.2.15.2
9.2.15.3
9.2.15.4
9.2.16
9.2.17
9.2.18
9.2.19
9.2.20
9.2.21
9.2.22
9.2.23
9.2.24
9.2.25
9.2.26
9.2.27
9.2.28
9.2.29

SXMM... (Message Manager) . . . . . . . . . . . . . . . . .
SXPL... (PLM: Physical Library Manager) . . . . . . . . . . .
SXPS... (PLS: Physical Library Server) . . . . . . . . . . . .
SXRD... (FibreCAT: RAID) . . . . . . . . . . . . . . . . . . .
Messages of the monitoring daemon for the internal RAID
FibreCAT S80 messages . . . . . . . . . . . . . . . . .
FibreCAT CX500 and CX3-20 messages . . . . . . . . .
FibreCAT CX500 and CX3-20 messages . . . . . . . . .
SXRP... (RPLM: Recovery Physical Library Manager) . . . . .
SXSB... (Sadm Driver: SCSI bus error) . . . . . . . . . . . .
SXSC... (Savecore: organize coredump) . . . . . . . . . . .
SXSD... (SCSI Disks: driver shd) . . . . . . . . . . . . . . .
SXSE... (EXABYTE Tapes) . . . . . . . . . . . . . . . . . .
SXSM... (Server Management) . . . . . . . . . . . . . . . .
SXSW... (Software Mirror) . . . . . . . . . . . . . . . . . . .
SXTF... (Tape File System) . . . . . . . . . . . . . . . . . .
SXVD... (Distributed Tape Volume Driver) . . . . . . . . . . .
SXVL... (VLM: Virtual Library Manager) . . . . . . . . . . . .
SXVLS... (VT_LS: Virtual Tape and Library System) . . . . .
SXVS... (VLS: Virtual Library Server) . . . . . . . . . . . . .
SXVW... (VLPWATCH) . . . . . . . . . . . . . . . . . . . . .
SXVX... (Veritas File System) . . . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

459
465
482
485
485
488
489
490
491
494
495
495
496
497
510
510
516
517
521
522
523
537

9.3
9.3.1
9.3.2
9.3.3
9.3.4
9.3.5

Message complexes . . . . .
Timeout on the RAID disk array
Timeout on the MTC drives . .
Failure of RAID systems . . . .
Failover at the RAID system . .
Bus Reset for SCSI Controller .

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

538
538
539
540
541
542

10 Waste disposal and recycling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543

11 Contacting the Help Desk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545

12 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547

12.1 Integration of CentricStor V3.1 in SNMP . . . . . . . . . . . . . . . . . . . . . 547
12.1.1 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
12.1.2 Activating SNMP on CentricStor . . . . . . . . . . . . . . . . . . . . . . . . 548
12.1.2.1 Configuring SNMP under CentricStor . . . . . . . . . . . . . . . . . . . . 548
12.1.2.2 Activating the configuration . . . . . . . . . . . . . . . . . . . . . . . . . 548
12.1.2.3 Changes in central files . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
12.1.3 Monitoring CentricStor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
12.1.3.1 GXCC as a monitoring tool without SNMP . . . . . . . . . . . . . . . . . 549
12.1.3.2 Monitoring using any SNMP Management Station . . . . . . . . . . . . . 550
12.1.3.3 CentricStor Global System State . . . . . . . . . . . . . . . . . . . . . . 551
12.1.3.4 GXCC on the SNMP Management Station . . . . . . . . . . . . . . . . . 551
12.1.3.5 Sending a trap to the Management Station . . . . . . . . . . . . . . . . . 551
12.1.3.6 Monitoring of CentricStor V2/V3.0 and V3.1 . . . . . . . . . . . . . . . . 552
12.1.4 Installation on the Management Station CA Unicenter . . . . . . . . . . . . 552
12.1.4.1 Reading in the GUI CD . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
12.1.4.2 Installation of the CA Unicenter extensions for CentricStor . . . . . . . . 553
12.1.4.3 Identification and editing of the CentricStor traps . . . . . . . . . . . . . 553
12.1.5 Working with CA Unicenter and CentricStor . . . . . . . . . . . . . . . . . 554
12.1.5.1 CentricStor icon under CA Unicenter . . . . . . . . . . . . . . . . . . . . 554
12.1.5.2 Identifying a CentricStor and assigning the icon . . . . . . . . . . . . . . 555
12.1.5.3 Receipt and preparation of a CentricStor trap . . . . . . . . . . . . . . . 556
12.1.5.4 Monitoring CentricStor using ping and MIB-II . . . . . . . . . . . . . . . . 557
12.1.5.5 Calling the GXCC from the pop-up menu of CA Unicenter . . . . . . . . . 557
12.1.6 Monitoring of CentricStor V2/V3.0 and V3.1 with CA Unicenter . . . . . . . 557
12.1.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557

12.2 E-mail support in CentricStor . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
12.2.1 Sendmail configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
12.2.2 Setting up the DNS domain service . . . . . . . . . . . . . . . . . . . . . . 558
12.2.3 Configuring the e-mail template . . . . . . . . . . . . . . . . . . . . . . . . 560
12.2.4 Description of the e-mail formats . . . . . . . . . . . . . . . . . . . . . . . 561

12.3 Transferring volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
12.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
12.3.2 Export procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
12.3.3 Import procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
12.3.4 Special features of the PVG TR-PVG . . . . . . . . . . . . . . . . . . . . . 565
12.3.5 Additional command line interface (CLI) . . . . . . . . . . . . . . . . . . . 566
12.3.5.1 Transfer-out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
12.3.5.2 Removing PVs and LVs . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
12.3.5.3 Adding a PV to the transfer-in . . . . . . . . . . . . . . . . . . . . . . . . 568
12.3.5.4 Removing an LV from a transfer list . . . . . . . . . . . . . . . . . . . . . 570
12.3.5.5 Skipping an LV / removing a PV . . . . . . . . . . . . . . . . . . . . . . . 570
12.3.6 Special situations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
12.3.7 Library commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
12.3.7.1 ADIC library with DAS server . . . . . . . . . . . . . . . . . . . . . . . . 571
12.3.7.2 StorageTek Library with ACSLS server . . . . . . . . . . . . . . . . . . . 571
12.3.7.3 Fujitsu Library with LMF server (PLP) . . . . . . . . . . . . . . . . . . . . 571

12.4 Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
12.4.1 Xpdf, gzip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
12.4.1.1 Preamble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
12.4.1.2 GNU GENERAL PUBLIC LICENSE . . . . . . . . . . . . . . . . . . . . 573
12.4.1.3 Appendix: How to Apply These Terms to Your New Programs . . . . . . 577
12.4.2 Firebird . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
12.4.3 Sendmail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
12.4.4 XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
12.4.4.1 Licence for libxslt except libexslt . . . . . . . . . . . . . . . . . . . . . . 590
12.4.4.2 Licence for libexslt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
12.4.5 NTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
12.4.6 tcpd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
12.4.7 PRNGD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
12.4.8 openssh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
12.4.9 openssl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
12.4.10 tcl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
12.4.11 tk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629


1 Introduction

With CentricStor, a virtual tape robot system is placed in front of the real tape robot system (with the real drives and cartridges), fully decoupling the host from the real archive. The virtual tape robot system provides what are referred to as virtual (logical) drives and virtual (logical) volumes. Its core element is a disk system used as a data cache. This guarantees extremely high-speed access to the data and, thanks to the large number of virtual drives (up to 512) and logical volumes (up to 500 000) which can be generated, clears the bottlenecks which occur in a real robot system.


The host is connected using the following connection technologies:
●  ESCON channels
●  FibreChannel
●  FICON

Communication between the individual control units takes place via the LAN in CentricStor; the user data is transported to and from the RAID system via FibreChannel.
The physical drives can be connected to the backend via both FibreChannel and SCSI technology.

1.1 Objective and target group for the manual
This manual provides all the information you need to operate CentricStor. It is thus aimed
at operators and system administrators.

1.2 Concept of the manual
This manual describes how to use CentricStor in conjunction with a BS2000/MVS system
and Open Systems.
It supplies all the information you need to commission and administer CentricStor:
CentricStor - Virtual Tape Library
This chapter describes the CentricStor hardware and software architecture. It details the
operating procedures, so that you can gain an understanding of the way the system works.
It also contains information on the technical implementation, and a description of new and
optional components.
Switching CentricStor on/off
This chapter describes how to power up and shut down CentricStor.
Selected system administrator activities
This chapter contains information on selected system administrator activities in GXCC and
XTCC, the graphical user interface of CentricStor.
Operating and monitoring CentricStor
This chapter describes the technical concept for operating and monitoring CentricStor, and
explains how GXCC and XTCC are started.
GXCC
This chapter describes the GXCC program used to operate and monitor CentricStor.

Global Status
The Global Status Monitor provides a graphical display of all important operating data in a
window.
XTCC
The program XTCC is used mainly to monitor the individual CentricStor computers (ISPs)
including the peripheral devices connected to the computers.
Explanation of console messages
This chapter describes the most important console messages and, as far as possible, suggests ways of solving the problems.
Appendix
The Appendix contains additional information concerning CentricStor.
Glossary
This chapter describes the most important CentricStor specific terms.

1.3 Notational conventions
This manual uses the following symbols and notational conventions to draw your attention
to certain passages of text:
Ê        This symbol indicates actions that must be performed by the user (e.g. keyboard input).

!        This symbol indicates important information (e.g. warnings).

i        This symbol indicates information which is particularly important for the functionality of the product.

[ ... ]  Square brackets are used to enclose cross-references to related publications, and to indicate optional parameters in command descriptions.

Names, commands, and messages appear throughout the manual in typewriter font
(e.g. the SET-LOGON-PARAMETERS command).

1.4 Note
CentricStor is subject to constant development. The information contained in this manual is
subject to change without notice.


2 CentricStor - Virtual Tape Library
2.1 The CentricStor principle

Figure 1: Conventional host robot system

In a conventional real host robot system, the host system requests certain data cartridges
to be mounted in a defined real tape drive. As soon as the storage peripherals (robots,
drives) report that this has been completed successfully, data transfer can begin. In this
case, the host has direct, exclusive access to the drive in the archive system. It is crucial
that a completely static association be defined between the application and the physical
drive.


Figure 2: Host robot system with CentricStor

With CentricStor, a virtual archive system is installed upstream of the real archive system
with the physical drives and data cartridges. This enables the host to be completely isolated
from the real archive. The virtual archive system contains a series of logical drives and
volumes. At its heart is a data buffer, known as the disk cache, in which the logical volumes
are made available. This guarantees extremely fast access to the data, in most cases
allowing both read and write operations to be performed much more efficiently than in
conventional operation.

i

Instead of the term logical drives (or volumes), the term virtual drives (or volumes)
is sometimes also used. These terms should be regarded as synonyms. In this
manual the term logical is used consistently when drives and volumes in
CentricStor are meant, and physical when the real peripherals are meant.

The virtual archive system is particularly attractive, as it provides a large number of logical
drives compared to the number of physical drives. As a result, bottlenecks which exist in a
real archive can be eliminated or avoided.
From the host’s viewpoint, the logical drives and volumes act like real storage peripherals.
When a mount job is issued by a mainframe application or an open systems server, for
example, the requested logical volume is loaded into the disk cache. If the application then
writes data to the logical drive, the incoming data stream is written to the logical volume
created in the disk cache.
The Library Manager of the virtual archive system then issues a mount job to the real
archive system asynchronously and completely transparently to the host. The data is read
out directly from the disk cache and written to a physical tape cartridge. The physical
volume is thus updated with optimum resource utilization.
Logical volumes in the disk cache are not erased immediately. Instead, data is displaced in
accordance with the LRU principle (Least Recently Used). Sufficient space for this must be
allocated in the disk cache.


As soon as a mount job is issued, the Library Manager checks whether the requested
volume is already in the disk cache. If so, the volume is immediately released for processing
by the application. If not, CentricStor requests the corresponding cartridge to be mounted
onto a physical drive, and reads the logical volume into the disk cache.
CentricStor thus operates as a very large, extremely powerful, highly intelligent data buffer
between the host level and the real archive system.
It offers the following advantages:
●  removal of device bottlenecks through virtualization
●  transparency to the host thanks to unchanged interfaces
●  support for future technologies by isolating the host from the archive system

CentricStor thus provides a long-term, cost-effective basis for modern storage
management.


2.2 Hardware architecture
Figure 3: Example of a CentricStor configuration

In this example, CentricStor comprises the following hardware components:
●  a VLP (Virtual Library Processor), which monitors and controls the CentricStor hardware and software components
●  two ICPs (Integrated Channel Processors), which communicate with the hosts via ESCON (via ESCON Director), FICON (via FICON switch) or FC (via FC switch)
●  two IDPs (Integrated Device Processors), which communicate with the tape drives in the robot system via SCSI or FC
●  one or more RAID systems for the TVC (Tape Volume Cache) for buffering logical volumes
●  an FC switch, which is used by the ICP, IDP, and VLP to transfer data
●  a CentricStor console for performing configuration and administration tasks
●  a LAN connection between CentricStor and the robot system
●  a LAN connection, which is used by the ICP, IDP, and VLP for communication

The PLM (Physical Library Manager) and VLM (Virtual Library Manager) are software components which are particularly important for system operation (see page 34).

2.2.1 ISP (Integrated Service Processor)
CentricStor is a group of several processors, each running special software (UNIX
derivative) as the operating system. These processors are referred to collectively as the ISP
(Integrated Service Processor). Depending on the peripheral connection, the hardware
configuration, the software configuration, and the task in the CentricStor system, a
distinction is made between the following processor types:
–  VLPs (optional: SVLP = standby VLP)
–  ICPs
–  IDPs
–  ICP_IDPs

To permit communication between the processors, they are interconnected by an internal
LAN. The distinguishing characteristics of these processors are described in the following
sections.
2.2.1.1 VLP (Virtual Library Processor)
The processor of the type VLP can be included twice to provide failsafe performance. Only
one of the two plays an active role at any given time: the VLP Master. The other, the Standby
VLP (SVLP), is ready to take over the role of the VLP Master should the VLP Master fail
(see section “Automatic VLP failover” on page 52). The two VLPs are connected to each
other and to the ICPs, IDPs and TVC via FC.

Figure 4: Internal VLP connections

The main task of the VLP Master is the supervision and control of the hardware and software components, including the data maintenance of the VLM and the PLM. Communication takes place via the LAN connection.

i  The software which controls CentricStor (in particular, the VLM and PLM) is installed on all the processors (VLP, ICP, and IDP) but is only activated on one processor (the VLP Master).

2.2.1.2 ICP (Integrated Channel Processor)
The ICP is the interface to the host systems connected in the overall system.

Figure 5: External and internal ICP connections

Depending on the type of host system used, it is possible to equip an ICP on the host side with a maximum of 4 ESCON boards (connection with BS2000/OSD, z/OS or OS/390), with one or two FICON ports (connection with z/OS or OS/390), or with one or two FC boards (BS2000/OSD or open systems). A mixed configuration is also possible. The ICP also has an internal FC board (or two in the case of redundancy) for connecting to the RAID disk system.
The main task of the ICP is to emulate physical drives to the connected host systems.
The host application issues a logical mount job for a logical drive in an ICP connected to a host system (see section “Issuing a mount job from the host” on page 39). The data transferred for the associated logical volume is then stored by the ICP directly in the RAID disk system.

i  The virtual CentricStor drives support a maximum block size of 256 KB.

Communication with the other processors takes place over a LAN connection.

2.2.1.3 IDP (Integrated Device Processor)
The IDP is the interface to the connected tape drives.

Figure 6: Internal and external IDP connections

The IDP is responsible for communication with real tape drives. To optimize performance, only two real tape drives should be configured per IDP.
Because of the relatively short length of a SCSI cable (approx. 25 m), the CentricStor IDPs are typically installed in the direct vicinity of the robot archive if a SCSI connection is to be used to connect the drives.
The IDP can update tape cartridges onto which data has already been written by appending a further logical volume after the last one. A cartridge filled in this way with a number of logical volumes is also referred to as a stacked volume (see section “Administering the tape cartridges” on page 35).
Communication with the other processors takes place over a LAN connection.
2.2.1.4 ICP_IDP or IUP (Integrated Universal Processor)

An ICP_IDP provides the features of a VLP, an ICP and an IDP. This processor has interfaces to the hosts and to the tape drives.
However, the performance is considerably lower than if these functions are distributed across dedicated processors of the types VLP, ICP and IDP.
IUP (Integrated Universal Processor) is a synonym for ICP_IDP.


2.2.2 RAID systems for the Tape Volume Cache
A TVC (Tape Volume Cache) is the heart of the entire virtual archive system. It represents
all of the Tape File Systems in which the logical volumes can be stored temporarily. One or
more RAID systems (up to 8) are used for this.
Each RAID system contains at least the basic configuration, which consists of FC disks and
2 RAID controllers. It can also be equipped with up to 7 extensions, which in turn constitute
a fully equipped shelf with FC or ATA disks. A RAID system consists of shelves which in
CentricStor are always fully equipped with disks. The TVC illustrated in the figure below
contains 2 RAID systems with a total of 12 equipped shelves:

Figure 7: 2 RAID systems form the TVC

In the case of the FibreCat CX3-20, for example, the 300-GB FC disks used offer a net capacity of 900 GB per RAID group. Here the basic configuration and each extension contain 3 RAID groups, resulting in a net capacity of 3 * 0.9 TB = 2.7 TB for each shelf. The net capacity of the maximum configuration of a RAID system is therefore 8 * 2.7 TB = 21.6 TB.
One RAID group is used for one cache file system, which means that the basic configuration and each extension contain 3 cache file systems, and a RAID system with the maximum configuration contains 24 cache file systems.
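The capacity arithmetic above can be restated as a short calculation. The following sketch is illustrative only; the constants are the FibreCat CX3-20 example values quoted in the text, not properties of every configuration:

    # Illustrative capacity calculation for the FibreCat CX3-20 example.
    # Constants are the example values from the text above.
    RAID_GROUP_NET_TB = 0.9       # net capacity per RAID group (300-GB FC disks)
    RAID_GROUPS_PER_SHELF = 3     # basic configuration and each extension
    SHELVES_PER_RAID_SYSTEM = 8   # basic configuration plus 7 extensions

    shelf_net_tb = RAID_GROUPS_PER_SHELF * RAID_GROUP_NET_TB              # 2.7 TB
    raid_system_net_tb = SHELVES_PER_RAID_SYSTEM * shelf_net_tb           # 21.6 TB
    cache_file_systems = SHELVES_PER_RAID_SYSTEM * RAID_GROUPS_PER_SHELF  # 24

    print(shelf_net_tb, raid_system_net_tb, cache_file_systems)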


The metadata of the logical volumes to be written or read is stored on the 1st RAID system, as a result of which the usable capacity of this RAID system is reduced by 16 GB.
A CentricStor system can contain up to 8 RAID systems.
The number of cache file systems determines the number of logical volumes available (up to 500,000). At least one cache file system is required for each 100,000 logical volumes. The Cache Mirroring Feature (CMF) requires an additional cache file system for possible recovery measures. Under these conditions the following minimum requirements consequently apply for logical volumes with the standard size of 900 MB:
Logical volumes    Cache file systems required
100,000            At least 2
200,000            At least 3
300,000            At least 4
400,000            At least 5
500,000            At least 6

When larger logical volumes are used (2 - 200 GB, see the section “New system functions” on page 43), correspondingly more cache file systems may be required. When the Cache Mirroring Feature (see page 55) is used, all cache file systems are mirrored to RAID system pairs and therefore require double the disk resources. The capacity is therefore reduced by 50%.
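As a minimal sketch of the sizing rule above (one cache file system per 100,000 logical volumes, plus one more when the Cache Mirroring Feature is used), the following hypothetical helper reproduces the table for standard 900-MB volumes; it is not a CentricStor interface:

    import math

    def min_cache_file_systems(logical_volumes, with_cmf=True):
        # At least one cache file system per 100,000 logical volumes;
        # the Cache Mirroring Feature (CMF) requires one additional
        # cache file system for possible recovery measures.
        base = math.ceil(logical_volumes / 100_000)
        return base + 1 if with_cmf else base

    # Reproduces the table above: 100,000 -> 2 ... 500,000 -> 6
    for n in range(100_000, 500_001, 100_000):
        print(n, min_cache_file_systems(n))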

2.2.3 FibreChannel (FC)
The entire flow of data between all CentricStor components (ISPs and external RAID systems) is handled via an internal SAN which can be provided with redundancy. It is implemented by one high-performance FC switch or, if redundancy is provided, by two high-performance FC switches.
Two FC technologies are available: Multi Mode and Single Mode. In Multi Mode the devices which are connected via Fibre Channel can be located up to 300 m from each other; in Single Mode the distance can be as much as 10 km. The FC controllers used in CentricStor support bandwidths between 1 Gb/s (Gigabit per second) and 4 Gb/s.

2.2.4 FC switch (fibre channel switch)
In the CentricStor models VTA 1500-5000, the entire flow of data between all CentricStor
components is handled by means of an FC switch.


This SAN-based design means that each CentricStor component is in a position to access
the TVC.

2.2.5 Host connection
The host connection on the ICP is implemented using the following connection
technologies:
Host system     Operating system      Connection
Mainframe       BS2000/OSD            ESCON or FibreChannel
                z/OS and OS/390       ESCON or FICON
                Bull                  ESCON
                Unisys                ESCON
Open Systems    Reliant UNIX          FibreChannel
                Solaris               FibreChannel
                Microsoft Windows     FibreChannel
                AIX                   FibreChannel
                HP-UX                 FibreChannel

FibreChannel with ESCON or FICON connections can be operated in mixed mode on an ICP.

!  CAUTION!
   Simultaneous operation of ESCON and FICON connections is not permitted on the same ICP.

2.3 Software architecture
The functions VLP, ICP and IDP which are described in the following sections are not
necessarily separate hardware components.
In large CentricStor configurations (VTA 1500-5000) all functions are normally implemented in separate hardware components. In smaller hardware configurations (VTA 500/1000, VTC, SBU), several of these functions are implemented on one hardware component. In the VTC all functions, including the RAID system, are combined in one hardware component.
If, for example, an ICP is designated an Integrated Channel Processor, this is to be understood as a function and not as a hardware component.

Figure 8: Central role of the VLP in a CentricStor configuration 1)

VLP (Virtual Library Processor)
The VLP is responsible for the coordination of the entire CentricStor system. Although the software can be activated on any of the ICP or IDP systems, it is recommended for performance reasons that you either provide a separate VLP or activate the components of the VLP on one of the IDPs, since the CPU utilization is lowest there.
A second VLP (SVLP) can optionally be used.

1) VJUK runs on an ICP.


VLM (Virtual Library Manager)
Each robot job from the requesting host system is registered in the VLM. To support the
libraries, corresponding emulations (VLMF, VAMU, VACS, VDAS, VJUK) are used in
CentricStor.
The TVC is administered exclusively by the VLM.
The VLM data maintenance contains the names of the logical volumes with which the TVC
is to work.
PLM (Physical Library Manager)
The PLM coordinates all jobs issued to the connected peripherals (robot drives). The PLM’s
data maintenance facility stores information about where and on which physical volume
each logical volume is stored.
VLS (Virtual Library Service)
There may be various different instances of the VLS, depending on the type and number of
connected host systems:
Host connection                                   Instance    Library
BS2000/OSD, z/OS and OS/390                       VAMU        ADIC
Open Systems Server (UNIX, Windows)               VDAS        ADIC
CSC Clients of BS2000/OSD                         VACS        StorageTek
Open Systems Server (UNIX, Windows) with ACSLS    VACS        StorageTek
LIB/SP Clients from Fujitsu                       VLMF        Fujitsu
Open Systems Clients, UNIX and Windows            VJUK        SCSI

PLS (Physical Library Service)
The PLS is the link between CentricStor and the robot archive. Jobs to the robots, e.g.
moving a tape cartridge in the robot archive, are issued at the behest of the PLM.


2.4 Operation
CentricStor is operated via the graphical user interfaces GXCC (Global Extended Control
Center) and XTCC (Extended Tape Control Center). These are used to perform all
administration and configuration tasks.
Using this control center, it is possible to display the current operating statuses of all
CentricStor components, together with a large amount of performance and utilization data.

i  For a description, refer to chapter “Operating and monitoring CentricStor” on page 83, chapter “GXCC” on page 119 and chapter “XTCC” on page 325.

2.5 Administering the tape cartridges
Tape cartridge administration is performed separately by the PLM for each physical volume
group (PVG) (see also section “Partitioning on the basis of volume groups” on page 63).
Each PVG has its own scratch pool. All reorganization parameters can be set separately for
each PVG.

2.5.1 Writing the tape cartridges according to the stacked volume principle
The figure below shows the location of logical volumes on the magnetic tape:
Figure 9: Position of the logical volumes on the magnetic tape

Each tape cartridge of the robot archive is administered by CentricStor as a stacked
volume, where a series of logical volumes is stored consecutively on the tape. In this way,
tapes are filled almost to capacity. There will be a small section of unused tape, since a
logical volume will always be written in full onto a physical tape cartridge (no continuation
tape processing).


2.5.2 Repeated writing of a logical volume onto tape

If a logical volume which has already been saved onto tape is written to tape a second time following an update, the first backup will be declared invalid. The current volume is appended after the last volume of this tape or another tape with sufficient storage space.

Figure 10: Repeated writing of a logical volume onto tape

In the example above, the logical volume LV0013 on physical volume PV0000 is declared
invalid and is written anew to physical volume PV0001.

2.5.3 Creating a directory
After each write operation a directory is created at the end of the tape. This permits high-speed data access during a later read/write operation.

Figure 11: Creating a directory on tape


2.5.4 Reorganization of the tape cartridges

When a logical volume is released by the host’s volume management facility (e.g. MAREN in BS2000/OSD), it is flagged accordingly in the CentricStor data maintenance facility which contains the metadata for each volume. This process, combined with updates (see section “Creating a directory” on page 36), causes the areas containing invalid data on the real tape cartridges to grow over time (stacked volumes with gaps). If the number of scratch tapes for a CentricStor system falls below a configurable lower limit, the PLM automatically performs a reorganization: it uses the VLM to load any logical volumes that are still valid into the RAID system and then moves them onto scratch tapes.

Figure 12: Example of a reorganization

Read tape:     Tape cartridge that still contains valid data but has no free space for write operations
Scratch tape:  Tape cartridge that only contains invalid data and has been released for rewriting
Write tape:    Tape cartridge that still contains space for write operations
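The reorganization sequence described above can be pictured with a small model. This is an illustrative sketch under assumed data structures (a tape as a dictionary with a "full" flag and a list of still-valid LVs); it is not CentricStor code:

    # Illustrative model of the PLM reorganization trigger described above.
    def reorganize(tapes, scratch_limit):
        # A scratch tape is a full tape that no longer holds valid LVs.
        def scratch_count():
            return sum(1 for t in tapes if t["full"] and not t["valid_lvs"])

        read_tapes = [t for t in tapes if t["full"] and t["valid_lvs"]]
        write_tapes = [t for t in tapes if not t["full"]]
        for tape in read_tapes:
            if scratch_count() >= scratch_limit or not write_tapes:
                break
            # Valid LVs are loaded into the RAID system by the VLM and
            # then appended to a write tape; the source cartridge then
            # only holds invalid data and becomes a scratch tape.
            write_tapes[0]["valid_lvs"].extend(tape["valid_lvs"])
            tape["valid_lvs"] = []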


2.6 Procedures
2.6.1 Creating the CentricStor data maintenance
Initial situation:  CentricStor is installed and configured. As yet, there is no data on the RAID system. The tape cartridges of the robots are blank.

To start CentricStor, the PLM and VLM data maintenance facility must be created:

Figure 13: CentricStor after the VLM and PLM data maintenance have been created

1. The names of the logical volumes which are to be loaded into the RAID disk array later
are entered in the VLM data maintenance (see the section “Logical Volume Operations
» Add Logical Volumes” on page 211).
In the example, these are the logical volumes LV0000 to LV2000. These volumes still
do not contain any data.
2. The names (VSNs) of the physical volumes present in the robots which are to be used
in CentricStor are entered in the PLM data maintenance (see the section “Physical
Volume Operations » Add Physical Volumes” on page 223). In the example, these are
the volumes PV0000 to PV0100.
3. The logical volumes are made known in BS2000/OSD (example of a storage location:
“VTLSLOC”).
CentricStor is then ready for operation.
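Steps 1 and 2 amount to registering ranges of volume names (VSNs) in the VLM and PLM data maintenance. Purely to illustrate the naming used in the example, and not as a CentricStor interface, such ranges could be generated as follows:

    # Hypothetical helper generating the VSN ranges from the example
    # (LV0000-LV2000 and PV0000-PV0100); the names are actually entered
    # via the GXCC dialogs referenced in steps 1 and 2.
    def vsn_range(prefix, first, last):
        return [f"{prefix}{i:04d}" for i in range(first, last + 1)]

    logical_volumes = vsn_range("LV", 0, 2000)   # VLM data maintenance
    physical_volumes = vsn_range("PV", 0, 100)   # PLM data maintenance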

2.6.2 Issuing a mount job from the host
Initial situation:  The logical volume LV0005 is already located on the physical volume PV0002.

Figure 14: Procedure for a mount job

A mount job is executed as follows:
1. The host issues a mount job for logical volume LV0005, which is then accepted by the VLM.
   The VLM does not know at this point what task is involved:
   –  read the volume or a part thereof
   –  append a file to the end of the volume
   –  overwrite the entire volume
2. The VLM checks its data maintenance to establish whether the logical volume LV0005 specified by the host is available and whether there is corresponding free storage space on the RAID system.
   If the RAID system does not have enough free capacity at this point, the LRU (Least Recently Used) procedure is employed to delete the oldest data from the RAID system. If a sufficient number of old files cannot be deleted, the mount job is suspended (“Mount queued”).


   Depending on whether the logical volume is still in the RAID system or is only on a physical volume, the following two situations arise:

   Case 1:  The volume is migrated to tape and is no longer located in the RAID system.
      a) The VLM issues a request to the PLM to read the logical volume LV0005 into the RAID system.
      b) The PLM checks its data maintenance to determine the physical volume on which the requested logical volume LV0005 is located: PV0002.
      c) The PLM requests the robot to mount the real tape cartridge PV0002 onto a free tape drive.
      d) The data of the logical volume LV0005 is loaded from the tape drive into the RAID system.
      e) A flag is set in the VLM data maintenance to indicate that the logical volume LV0005 is in the RAID system.
      f) Only at this point does the VLM grant the host access to the volume (mount acknowledged).

   Case 2:  The volume is present in the RAID system.
      The VLM immediately grants the host access to the volume.

3. The host performs read and write accesses on the logical volume.
4. The host issues an unmount job.

   i  In contrast to a real archive system, the job will be confirmed immediately.

5. The VLM checks whether the logical volume in the RAID system has been modified.

   Case 1:  The logical volume has not been modified.
      No further action is taken, since the copy of the logical volume on the physical volume is still valid.

   Case 2:  The logical volume has been modified.
      a) The VLM informs the PLM that the logical volume is to be copied onto tape.
      b) The PLM selects a suitable tape cartridge: a completely new tape, a scratch tape, or a tape onto which writing has not yet resulted in an overflow. If this cartridge is not yet mounted, the PLM checks whether a real drive is available in the robot archive at this point.
      c) The PLM requests the selected real tape cartridge to be mounted, if required, and begins data transfer from the RAID system to the tape.

   i  The data of the logical volume is retained on the RAID system until deleted by the VLM in accordance with the LRU procedure.
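The decision logic of steps 1 to 5 can be condensed into a short sketch. It is a simplified illustration under assumed data structures ('tvc' for the cache, 'plm_location' for the PLM data maintenance), not the actual VLM/PLM implementation:

    def read_from_tape(physical_volume, logical_volume):
        return f"data of {logical_volume} from {physical_volume}"  # stands in for the restore

    def write_to_tape(logical_volume, data):
        pass  # stands in for the save initiated via the PLM

    def mount(volume, tvc, plm_location):
        # Case 1: volume migrated -> the PLM restores it from its physical
        # volume into the TVC first. Case 2: volume still in the TVC ->
        # access is granted immediately.
        if volume not in tvc:
            tvc[volume] = read_from_tape(plm_location[volume], volume)
        return "mount acknowledged"

    def unmount(volume, tvc, modified):
        # Only a modified volume is copied back onto tape; the TVC copy
        # is retained until the LRU procedure displaces it.
        if modified:
            write_to_tape(volume, tvc[volume])

    tvc, plm_location = {}, {"LV0005": "PV0002"}
    mount("LV0005", tvc, plm_location)   # Case 1 on the first mount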


2.6.3 Scratch mount
To prevent reading in from the physical medium in cases where a logical volume is to be rewritten anyway, under certain circumstances CentricStor performs a “scratch mount”.
The special features of the scratch mount in CentricStor are as follows:
–  If the logical volume is migrated, i.e. it is no longer in the TVC, only a “stub” is made available for the application. This stub contains only the tape headers.
–  As this stub is always kept in the TVC, a scratch mount can always be performed very quickly as no restore is required from the physical tape.
–  For the application this means that only access to the tape headers is possible.

   i  If a scratch mount is performed incorrectly this can result in read errors when an attempt is made to access the other data. In this case the data is not lost: when a subsequent “normal” mount is performed it is available again.

CentricStor performs a scratch mount under the following conditions, depending on the frontend (interface of the virtual library):

VAMU   The mount command supports a flag which can be used to indicate that the mount is to be performed as a scratch mount.
VDAS   There is a special DAS_MOUNT_SCRATCH command (used only by FSC Networker). In this case CentricStor performs a scratch mount.
VACS   A scratch mount is performed in the following two cases:
       –  “Mount_scratch” with the “pool-ID” parameter without specification of a particular volume
       –  Mount on a specific volume if this is contained in a pool whose pool ID is not 0
VLMF   A scratch mount is performed in the following two cases:
       –  Mount with the “scratch” command with specification of a pool or specific volume
       –  Mount of a volume that is marked as “scratch”
VJUK   No scratch mount is used
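The stub behaviour just described can be sketched in the same simplified model (hypothetical data structures, not CentricStor internals): a scratch mount never triggers a restore from the physical tape.

    def scratch_mount(volume, tvc):
        # If the volume is migrated, the application only receives the
        # stub with the tape headers; the remaining data stays on the
        # physical volume and is available again after a "normal" mount.
        if volume not in tvc:
            tvc[volume] = {"tape_headers": "...", "data": None}  # stub only
        return tvc[volume]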


2.7 New system functions
CentricStor Version 3.1C for the first time provides the option of creating logical volumes (LVs) more than 2 GB in size as a standard feature. The LV size can be selected in discrete steps for each logical volume group (LVG):
●  STANDARD: 900 MB
●  EXTENDED: 2 GB, 5 GB, 10 GB, 20 GB, 50 GB, 100 GB, 200 GB

i  The DTV file system must be migrated for CentricStor systems configured with Version 3.0 or earlier. This is done by the service staff.

For the user, using large logical volumes is basically no different from the way logical volumes have been used to date.
The following special aspects must be taken into consideration:
–  The LV size of an existing LVG can be increased if the PVs (physical volumes) of the PVG (physical volume group) which is linked to the LVG have the necessary capacity (see the section “Logical Volume Groups” on page 173).
–  The LV size of an existing LVG cannot be decreased (see the section “Logical Volume Groups” on page 173).
–  The size of the LVG "TR-LVG" cannot be modified (see the section “Logical Volume Groups” on page 173).
–  An LVG with LVs > 2 GB can be assigned to a PVG only if the capacity of the PVs already assigned is twice as large as the LV size (see the section “Physical Volume Operations » Link/Unlink Volume Groups” on page 221; a sketch of these capacity rules follows this list).
–  PVs can be assigned to a PVG only if their capacity is greater than or equal to the LV size of the LVG which is linked to the PVG (see the section “Physical Volume Operations » Add Physical Volumes” on page 223).
–  The TVC must be large enough to permit the use of large LVs. If the TVC is too small, frequent displacement of LVs must be reckoned with. This can have a significant effect on the LV mount times depending on the volume size and the drive type (e.g. with 200 GB approx. 90-120 min.).
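One plausible reading of the two capacity rules above (capacities in GB; function names illustrative, not a CentricStor API) is the following check:

    def can_link_lvg_to_pvg(lv_size_gb, assigned_pv_capacities_gb):
        # An LVG with LVs > 2 GB may be linked to a PVG only if the PVs
        # already assigned have twice the capacity of the LV size.
        if lv_size_gb <= 2:
            return True
        return all(c >= 2 * lv_size_gb for c in assigned_pv_capacities_gb)

    def can_add_pv_to_pvg(pv_capacity_gb, linked_lv_size_gb):
        # A PV may be added to a PVG only if its capacity is greater
        # than or equal to the LV size of the linked LVG.
        return pv_capacity_gb >= linked_lv_size_gb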


2.8 Standard system functions
The following functions are standard in every CentricStor system:
●  Partitioning by volume groups
●  “Call Home” in the event of an error
●  SNMP support
●  Exporting and importing tape cartridges

2.8.1 Partitioning by volume groups
CentricStor supports a volume group concept. This provides the following benefits:
–  It can be ensured that the copies of a logical volume created by an application are stored on two different physical volumes (data security in case a magnetic tape cartridge becomes unreadable).
–  The storing of logical volumes of different host systems or applications on one and the same magnetic tape cartridge can be prevented.

The volume group concept is a prerequisite for “Dual Save” (see the section “Dual Save” on page 50).

2.8.2 “Call Home” in the event of an error
In the event of serious errors in CentricStor operation, the following measures are initiated automatically:
–  The error is reported to a hotline using “Call Home”. In the event of connection via ROBAR, information is also sent to the BS2000 host via “Hot Messages”.
–  The error report can be transferred to a Service Access System (SAS) so that specific responses can be triggered there. In addition, it is possible to send an SMS when certain messages are issued.
–  The responses to the individual error events are preset for various service provider profiles. One of these can be selected. In addition, the selected default can be adjusted on a customer-specific basis.

2.8.3 SNMP support
It is possible to integrate CentricStor into remote monitoring by an SNMP Management
Station such as “CA Unicenter” or “Tivoli”.
In the event of system errors (error weighting EMERGENCY, ALERT, ERROR, CRITICAL),
CentricStor sends a trap to the SNMP Management Station, which causes the CentricStor
icon to change color (insofar as this is supported by the SNMP Management Station).
Furthermore, a status trap with the weightings green, yellow and red is sent periodically to
the Management Station.
Application launching enables the CentricStor administration software “GXCC” to be
started simply on the SNMP Management Station by means of a mouse click.

2.8.4 Exporting and importing tape cartridges
The options for exporting and importing tape cartridges (physical volumes) which are offered by CentricStor can be used for various purposes:
●  Storing the backup data at a disaster-proof location, e.g. in a fire-resistant room or at a large distance from the CentricStor system
●  Manual archiving of data which is accessed extremely rarely, e.g. because it is only required when a disaster occurs
●  Exchanging data between independent systems at separate locations in order to guard against local disasters by means of redundant data storage
●  Transfer of bulk data when extremely large distances are involved, in order to save on line costs or if there is a lack of infrastructure

Two standard functions are available for exporting/importing tape cartridges:
●  Setting the vault attribute for a physical volume group (PVG) and setting the vault status for a physical volume (PV)
●  Use of the transfer PVG (TR-PVG)

These functions are totally separate from the tape management tool of the host applications and are controlled solely by the CentricStor system administrator.

2.8.4.1 Vault attribute and vault status
The vault attribute is assigned to a physical volume group (PVG) by means of the GXCC function Configuration ➟ Physical Volume Groups in the Type entry field (see page 187). The associated tape cartridges (PVs) can be placed in vault status using the following command:

    plmcmd conf -E -V <VSN> -G <PVG>

They are then locked for all read and write operations until vault status is cancelled again using the following command:

    plmcmd conf -I -V <VSN> -G <PVG>

While vault status is set, the tape cartridges can be removed from the tape library and stored at a safe location (hence the status name vault). However, like all the logical volumes contained on them, they are still administered by CentricStor.
An attempt to read from a tape cartridge which is in vault status is rejected with the system message SXPL049 (see page 88). When a logical volume (LV) of such a tape cartridge is saved again by a host application, a different tape cartridge is used and the old LV on the vault tape cartridge is flagged as invalid. Tape cartridges in vault status are also excluded from reorganization (see section “Reorganization” on page 73).
2.8.4.2 Transfer PVG
A so-called transfer PVG and a transfer LVG which is linked to it are permanently installed in CentricStor for this export/import function. The logical or physical volumes which are to be exported or imported are temporarily added to these volume groups.
The LVs to be exported are also copied to tape cartridges of the transfer PVG. The original LVs continue to belong to their former LVG. Their backup to tape cartridges of the PVG assigned to this LVG and access by the host applications are not affected by the export.
The system administrator alone is responsible for controlling the copy operation for the LVs concerned and for synchronizing this operation with their use by the host applications. CentricStor keeps no management data for these copy operations and does not know whether or not an LV was exported via a transfer PVG.
When the required LVs have been copied, the tape cartridges can be removed from the transfer PVG and transported to another CentricStor system. There the tape cartridges are added to the transfer PVG and the LVs contained on them are read in. To do this, all these LVs must already exist there and be assigned to a normal LVG.
Further information on the export/import function via transfer PVG is provided in section “Transferring volumes” on page 562.


2.9 Optional system functions
CentricStor is available in a variety of configuration levels, in each of which further customer-specific extensions (e.g. a larger disk cache) are possible.
In addition to the basic configuration, optional functions are available which allow you to customize the CentricStor functionality to suit your needs:
●  Compression
●  Multiple library support
●  Dual Save
●  Extending virtual drives
●  System administrator’s edition
●  Fibre channel connection for load balancing and redundancy
●  Automatic VLP failover
●  Cache Mirroring Feature
●  Accounting

These optional system functions are released by means of key disks.


2.9.1 Compression
The figure below illustrates the principle of software compression of logical volumes:

Figure 15: Principle of compressing logical volumes

Just as a physical drive can perform data compression, so also can the tape drive emulations (EMTAPE 1) or VTD 2)) once they have been released 3) on the ICP. In this way, the logical volumes can be stored in compressed form in the TVC. This results in a whole range of advantages:
●  Disk cache utilization is significantly improved depending on the compression level, i.e. without changing the cache size, it is possible to keep considerably more logical volumes “online” in the cache than without compression, frequently resulting in very fast response times vis-à-vis the host system.
●  The performance of the overall system is improved due to the fact that the load on the FC network is reduced by the compression factor.
●  In the case of data quantities greater than 900 MB, the number of logical volumes is reduced (see the sketch below).
   Example (Standard)
   To save a 4 GB file on standard volumes (900 MB) without compression, you will need five logical volumes. If we assume a compression factor of 3, then only two logical volumes will be necessary.
●  Within the CentricStor migration concept (i.e. the relocation of volumes from the real robot archive to the CentricStor archive while retaining the volume number), it is currently necessary to identify all volumes whose size exceeds 800 MB after hardware compression. If software compression is switched on for the logical drives, however, then automatic 1:1 conversion will also be possible for these volumes.

Compression can be set separately for each drive (this is done using Service).
The “Compression” attribute can be set to “ON”, “OFF” or “HOST” for each drive.

1) Mainframes
2) Open systems
3) Compression only works with a block size of at least 1 Kbyte.
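The volume arithmetic in the example above can be stated generally. A minimal sketch, assuming sizes in MB and an idealized uniform compression factor:

    import math

    def volumes_needed(data_mb, volume_mb=900, compression_factor=1):
        # With compression, each standard volume effectively holds
        # volume_mb * compression_factor of uncompressed data.
        return math.ceil(data_mb / (volume_mb * compression_factor))

    print(volumes_needed(4000))                         # 5 volumes uncompressed
    print(volumes_needed(4000, compression_factor=3))   # 2 volumes at factor 3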


In BS2000/OSD (“HOST” attribute), compression is controlled on the basis of the tape type:
–  TAPE-C3: compression off
–  TAPE-C4: compression on

In UNIX, the compression setting can be selected by means of the device nodes.
The compression setting is passed in an ESCON or SCSI command to the tape emulation, and the compressed data is stored block-by-block on the logical volume (the VLM and PLM do not have any information about this).

i  If the data is already compressed on the host, e.g. if backup data is supplied in compressed format by a NetWorker client, then compression should be switched off for this logical volume on the ICP, so that the load on the CPU of the ICP can be kept to a minimum.

2.9.2 Multiple library support
One of the important characteristics of CentricStor is the parallel connection of multiple real
robot archives of different types.

Figure 16: Example of multiple library support

The number of robot archives that can be operated in parallel is theoretically unlimited.
However, since at least one physical volume group is required per library, it is only possible
to support as many libraries as there are corresponding volume groups.


All supported robot archive types are permitted:
–  ADIC AML systems (with DAS)
–  ADIC Scalar systems (with DAS or SCSI)
–  StorageTek systems (with ACSLS or SCSI)
–  IBM Cashion
–  Fujitsu robot (with LMF)

Please refer to the current product information for the library and drive type configurations
currently available. It is possible to have different drive types within the same archive.
However, a separate physical volume group must be configured for each drive type (see
section “Partitioning on the basis of volume groups” on page 63).

2.9.3 Dual Save
Based on the volume group functionality (see page 63), CentricStor offers the Dual Save
function. This involves making a copy of a logical volume on a second physical volume,
which may be located either in the same robot archive (Dual Local Save) or in a remote
robot archive (Dual Remote Save). This ensures the highest possible level of data security.
If a physical volume which usually contains a large number of logical volumes is in some
way corrupted (e.g. due to a tape error), CentricStor can access a copy of this logical
volume created on a different physical volume. If the copy is located in a second robot
archive, then even the complete destruction of the first robot archive would not cause any
irrevocable loss of data.
In many computer centers, for example, it is currently common practice to move the
volumes written during a backup operation (or copies generated by the application) to a
secure location directly on completion of the backup. The Dual Remote Save functionality
provides an elegant means of automating this procedure. Not only does it relieve the host
application of any copy or move operations, it also eliminates the need to transport the
cartridges to a second archive (and back again). The associated risk of data manipulation
is thus excluded.


Figure 17: Example of Dual Save functionality

In accordance with the assignment rules for the volume group functionality (see page 64), the logical volumes from LVG 1 (LV0001-LV3000) are mirrored on the physical volumes of PVG 1 (PV0001-PV0300) and PVG 2 (PV0301-PV0600) in the robot Archive1. The logical volumes of LVG 2 (LV3001-LV6000) are duplicated in Archive1 on PVG 3 (PV0701-PV0800) and in Archive2 on PVG 4 (PV0801-PV0900), where the two robots are located some distance apart.


2.9.4 Extending virtual drives
This option allows you to increase the number of logical drives from the standard 32 per ICP
to up to 64 per ICP. This makes it possible to operate up to 256 logical drives in a single
CentricStor system.

2.9.5 System administrator’s edition
The “System Administrator Edition” (SAE) option provides a graphical user interface for
administering the CentricStor system from a remote PC workstation.
The operator PC is included as part of the scope of delivery. This machine can be used to
monitor a number of CentricStor systems.

2.9.6 Fibre channel connection for load balancing and redundancy
This option provides the CentricStor system with a second internal FC network for data
transfer. This enables operation to be continued without interruption even when a switch
fails (in normal operation the data stream is distributed to both switches).

2.9.7 Automatic VLP failover
Typically almost all CentricStor control functions run on the VLP. This processor is largely
protected against disk errors by RAID system disks. If this processor were to fail nevertheless, the CentricStor system would have no controller and thus no longer be operable.
Ongoing save jobs would be completed, but new ones would no longer be accepted.
To prevent this situation occurring, the “automatic VLP failover” function is provided
(AutoVLP failover).

i  A release via key is required for the “automatic VLP failover” function, and the SVLP must be configured to use it. This is done by the maintenance staff.

Further prerequisites:
–  The VLP and the standby processor SVLP must be equipped with an external and an internal LAN interface.
–  The standby processor SVLP must be equipped and configured like the VLP.


i  If the “automatic VLP failover” function has been activated, the following actions are no longer permitted in the system:
   –  changing the LAN configuration
   –  rebooting or shutting down the VLP (init 0 or init 6: these commands cause a failover!)
   –  disconnecting a LAN or FC cable

If the VLP fails, the scenario is as follows:
1. The VLP fails in the CentricStor system:

Figure 18: Failure of the VLP

   The SVLP is active in the system and monitoring the VLP. If the VLP fails, the SVLP takes over control of CentricStor.
2. The SVLP is activated automatically:

Figure 19: Activation of the SVLP using the AutoVLP failover function

During the switchover operation, which can last up to 5 minutes, this procedure is interpreted on the host side as a mount delay and a new connection setup to the robot
control. All backup jobs continue to run normally.
The switchover involves reconfiguring the two ISPs (VLP/SVLP): they swap their external IP addresses and tasks.


3. After the defective processor has been repaired, it is integrated once again into the overall system and takes over the role of the SVLP:

Figure 20: Activation of the defective processor for the SVLP

The status, i.e. AutoVLP failover active or inactive, is clearly visible on the GUI:

Figure 21: Display of the AutoVLP failover status on the GUI

i  The left-hand triangle is only displayed if an SVLP is configured.
   If the left-hand triangle below the VLP is green, this means that AutoVLP failover is activated. If it is red, AutoVLP failover is not activated. In addition, the text “AutoVLP-Failover OFF” is displayed in red in the text window on the right.

!  CAUTION!
   The function must have the same status on the VLP and SVLP: enabled or not enabled (ON or OFF).

When the AutoVLP failover function is configured and activated, VLP monitoring on this ISP
is activated automatically with every reboot.


2.9.8 Cache Mirroring Feature
2.9.8.1 General
CentricStor V3.1 provides users with enhanced data security and greater protection against data loss through disasters for all nearline data. Data stored on the internal hard disk system is mirrored synchronously to a second cluster location. This is done via 2-Gbit FibreChannel connections, also over long distances. Even if one location is totally destroyed, all the data backed up on a CentricStor configuration of this type remains available. As the status of the data is at all times identical on both systems, a restart is significantly quicker and simpler. No modifications to applications or data backup processes are required.

2.9.8.2 Hardware requirements
A functioning mirror always requires two RAID systems. In CentricStor a maximum of 8 RAID systems are supported, i.e. a maximum of 4 RAID system pairs can be set up for mirroring.
By definition a RAID system pair can only be set up when the following conditions apply:
●  The RAID IDs begin with an odd ID.
●  The RAID IDs of these systems are in unbroken ascending order.

As a result, a maximum of four RAID ID pairs are possible: 1+2, 3+4, 5+6 and 7+8.
A CentricStor system can contain two possible types of RAID mirror pairs:
–  Potential mirror pairs
   These pairs do satisfy the above-mentioned hardware requirements, but secondary caches (mirror caches) must also be provided by a corresponding LUN assignment (see the section “Mirrored RAID systems” on page 57). This is done by customer support.
   Potential mirror pairs can be recognized in GXCC by a thicker, black separating line (see the section “Presentation of the mirror function in GXCC” on page 58).
–  Genuine mirror pairs
   These pairs satisfy all hardware requirements. They contain primary and secondary caches (section “Mirrored RAID systems” on page 57) and are identified in GXCC by a white dot (see the section “Presentation of the mirror function in GXCC” on page 58).

2.9.8.3 Software requirements
The “vtlsmirr” key must have been read in and enabled for the mirror function. This is done by customer support.
Assuming that the hardware requirements are satisfied (see the section above) and the RAID systems have been defined by the corresponding LUN assignment (see the section “Mirrored RAID systems” on page 57), the overall system is configured as a mirror system solely through the existence of the key. No operator intervention is required for this purpose.
Example

After the mirror key has been read into a CentricStor system with 6 RAID systems, the following configuration is established:

Figure 22: “Genuine” and “potential” RAID mirror pairs in a CentricStor system

The first and second RAIDs and also the third and fourth RAIDs form genuine mirror pairs
as the IDs here begin with an odd number and are in unbroken ascending order.
The RAID systems with IDs 6 and 7 do not satisfy the hardware requirements and therefore
form a potential pair. They can be turned into a genuine mirror pair by changing ID 7 to ID 5.
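The ID rule can be illustrated with a few lines of Python (a sketch for clarity only; it is not part of the CentricStor software, and the LUN assignment performed by customer support remains a separate prerequisite for a genuine pair):

def id_rule_ok(first_id, second_id):
    # ID rule for a RAID mirror pair: the IDs must begin with an odd ID
    # and be in unbroken ascending order.
    return first_id % 2 == 1 and second_id == first_id + 1

# The six RAID systems from figure 22, taken pairwise:
for first, second in [(1, 2), (3, 4), (6, 7)]:
    verdict = "can form a mirror pair" if id_rule_ok(first, second) else "fails the ID rule"
    print(f"IDs {first}+{second}: {verdict}")
# IDs 1+2: can form a mirror pair
# IDs 3+4: can form a mirror pair
# IDs 6+7: fails the ID rule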

2.9.8.4 Mirrored RAID systems
A mirrored CentricStor system has between 1 and 4 RAID mirror pairs.

Figure 23: Example of a CentricStor mirror system with 3 RAID mirror pairs

In a RAID mirror pair, one RAID system contains only primary caches, the other only secondary caches (mirror caches):

Figure 24: Primary and secondary caches in a RAID mirror pair (in the example LUN assignment, the 1st RAID presents LUN0 through LUN7 as primary caches (P), the 2nd RAID presents LUN8 through LUN15 as secondary caches (S))

Such a mirror pair is defined by the corresponding assignment of the LUNs, as shown in the example (where x is in the range 0 through 7) below:

Assignment of the LUNs for DTV caches (/cache/...):

1st RAID     2nd RAID
(P) x + 0    (S) x + 8
(P) x + 1    (S) x + 9
(P) x + 2    (S) x + 10
(P) x + 3    (S) x + 11
(P) x + 4    (S) x + 12
(P) x + 5    (S) x + 13
(P) x + 6    (S) x + 14
(P) x + 7    (S) x + 15
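The same pairing can be expressed as a simple mapping (an illustrative Python one-liner, not CentricStor code): each primary LUN p is mirrored by secondary LUN p + 8.

mirror_map = {p: p + 8 for p in range(8)}   # primary LUN -> secondary LUN
print(mirror_map)
# {0: 8, 1: 9, 2: 10, 3: 11, 4: 12, 5: 13, 6: 14, 7: 15}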

2.9.8.5 Presentation of the mirror function in GXCC
In GXCC the mirror functions of a double RAID system are indicated by two arrows.
Example

Figure 25: Presentation of the mirror function in GXCC

Genuine RAID pairs are indicated by a white dot, potential pairs by a thicker black line between the boxes on the right-hand side.

i

The display can contain an odd number of RAID systems if, for example, a defective RAID system has been separated from the CentricStor system. Further information on this is provided in the section “RAID symbol for mirror mode” on page 131.

2.9.9 Accounting

On the one hand this function permits the accounting data of logical volume groups to be displayed in GXCC (see the section “Statistics » Usage (Accounting)” on page 293). On the other hand it enables the current accounting data to be sent by e-mail at defined times (see the section “Setup for accounting mails” on page 229).


3 Switching CentricStor on/off

!

IMPORTANT!
The vendor recommends that CentricStor should not be switched off. This should only be done in exceptional circumstances.
3.1 Switching CentricStor on

i

Before switching CentricStor on, you must ensure that the units with which CentricStor is to communicate, i.e. host computers, ROBAR-SV systems (in the case of host connection via ROBAR), the robot control processor, and the tape robots, are already up and running.

The following sequence must be followed when switching on the individual CentricStor
components:
1. Switch on the LAN hubs and switches (see corresponding operating instructions).
2. Switch on the fibre channel switches (see corresponding operating instructions).

i

When connecting open systems:
The external FC switches must now also be switched on, as otherwise the
ICPs will not establish a point-to-point connection.

3. Switch on the RAID systems (see corresponding operating instructions).
After the RAID systems have started up and reached the “System Ready” status, wait one minute.
4. Switch on the ICPs/IDPs/VLP by pressing the POWER ON/OFF button:

Figure 26: POWER ON/OFF button on the ISP (example TX300)

Using GXCC or XTCC, check that all the necessary CentricStor processes are running (all processor boxes must be green).
5. BS2000/OSD:
Case 1: Host connection via ROBAR
Ê Start ROBAR-SV (with the menu program robar or robar_start; see ROBAR manual [3]).
Case 2: Host connection via CSC
Ê Start CSC (see CSC manual [4]).

3.2 Switching CentricStor off

i

CentricStor can be switched off only in Service mode! As this mode is explained in the CentricStor Service Manual, only a brief description is provided below.

The following sequence must be followed when switching off the individual CentricStor components:
1. BS2000/OSD, z/OS and OS/390:
DETACH or VARY OFFLINE all logical drives on the host.
2. CentricStor is switched off via the GXCC user interface:
Ê Activate the “Shutdown” function (see the Service Manual).
All CentricStor processors (VLP, IDPs, ICPs) and - if the “power off” option is activated - the connected RAID systems are then shut down gracefully and switched off.
Ê Wait for 5 minutes.

3. Switch off the hubs/switches (see corresponding operating instructions):
– LAN hubs
– fibre channel switches

4 Selected system administrator activities
4.1 Partitioning on the basis of volume groups
4.1.1 General
By partitioning on the basis of volume groups, it is possible to combine certain logical
volumes to form a logical volume group (LVG) and certain physical volumes to form a
physical volume group (PVG).
Using rules which create associations between logical and physical volume groups, it is
possible to have CentricStor copy the logical volumes belonging to a particular LVG exclusively onto the physical volumes of the assigned PVG.
Partitioning on the basis of volume groups offers the following advantages:
● It allows you to store the logical volumes of various host systems or applications on different physical volumes.
● In the case of Dual Save¹, it allows you to store copies of a logical volume on two different physical volumes. This offers an extra degree of data security for situations where a tape becomes unreadable, for example (see section “Dual Save” on page 71).

Normally CentricStor has four volume groups:
– the logical volume group “BASE”
– the physical volume group “BASE”
– the logical volume group “TR-LVG”
– the physical volume group “TR-PVG”

The TR-LVG and TR-PVG volume groups are used to transfer logical and physical volumes (see the section “Transferring volumes” on page 562).

i

Each physical volume group has its own local free pool from which new volumes can be taken as the need arises and to which freed volumes can be returned (e.g. following reorganization).

¹ This assumes that the Dual Save functionality has been released (see page 71).

You have two different systems (a BS2000 host and a UNIX system) using CentricStor in conjunction with an archive system. By grouping volumes, the aim is to ensure that BS2000 data and UNIX data are stored on different physical volumes.
The logical volumes of the BS2000 host are assigned to the logical volume group LVG1, while those of the UNIX system are assigned to the logical volume group LVG2. These logical volumes can (but need not necessarily) be assigned to various physical volume groups.
As a result of these assignments, BS2000 data will now be stored on the physical volumes PV0001 through PV0300, while UNIX files will be stored on the physical volumes PV0501 through PV0600.

Figure 27: Example of partitioning on the basis of volume groups

4.1.2 Rules

Logical volume groups:
– It is possible to configure up to 512 logical volume groups.

i

By default, CentricStor always has at least two logical volume groups (“BASE” and “TR-LVG”). These are available in addition to the freely configurable volume groups.

– Each logical volume in CentricStor belongs to precisely one logical volume group.

Physical volume groups:
– It is possible to configure up to 100 physical volume groups¹.

i

By default, CentricStor always has at least two physical volume groups (“BASE” and “TR-PVG”). These exist in addition to the freely configurable volume groups.

– All physical volumes of a physical volume group belong to the same physical library.
– A physical volume group does not possess any tape drives; it is instead linked to a tape library. This tape library can be part of a real tape library, and may only contain tape drives of a single type.
– A physical library can contain several physical volume groups.

4.1.3 System administrator activities

This section contains brief information on the main system administrator activities:
– “Adding a logical volume group” on page 66
– “Adding a physical volume group” on page 66
– “Adding logical volumes to a logical volume group” on page 66
– “Adding physical volumes to a physical volume group” on page 67
– “Assigning an LVG to a PVG” on page 67
– “Removing an assignment between an LVG and a PVG” on page 67
– “Changing logical volumes to another group” on page 68
– “Removing logical volumes” on page 68
– “Removing logical volume groups” on page 68
– “Removing physical volumes from a physical volume group” on page 69
– “Removing physical volume groups” on page 69

¹ Cleaning and transfer groups are not included here.

4.1.3.1 Adding a logical volume group

● The form and detailed information are provided in the section “Logical Volume Groups” on page 173.
1. Click on the “NEW” button.
2. The following must be entered:
Name      Name of the new logical volume group
Type      Extended (2 GB, ..., 200 GB) or standard (900 MB)
Location  Cache area (floating or defined explicitly)
Comment   Comment
3. Click on the “OK” button.
The entries become effective with the next “Distribute and Activate” (see page 188).
4.1.3.2 Adding a physical volume group

● The form and detailed information are provided in the section “Physical Volume Groups” on page 181.
1. Click on the “NEW” button.
2. A large number of entries need to be made. The description of the individual fields is provided on page 183. You will find further information in the section “Creating a new physical volume group” on page 187.
3. Click on the “OK” button.
The entries become effective with the next “Distribute and Activate” (see page 188).

4.1.3.3 Adding logical volumes to a logical volume group

● The form and detailed information are provided in the section “Logical Volume Operations » Add Logical Volumes” on page 211.
The following information must be specified:
– the VSN of the first logical volume
– the logical volume group
– the number of logical volumes
The logical volumes are then incorporated in the CentricStor pool.

4.1.3.4 Adding physical volumes to a physical volume group

i

Only physical volumes contained in the physical library may be specified.

● The form and detailed information are provided in the section “Physical Volume Operations » Add Physical Volumes” on page 223.
The following information must be specified:
– the VSN of the first physical volume
– an entry specifying whether the header of the added volume should be unconditionally overwritten with a CentricStor header
– the physical volume group (see section “Adding a physical volume group” on page 66)
– the number of physical volumes
– the type of physical volumes
The physical volumes are then incorporated in the CentricStor pool.
4.1.3.5 Assigning an LVG to a PVG

● The form and detailed information are provided in the section “Physical Volume Operations » Link/Unlink Volume Groups” on page 221.
The following elements must be selected:
– the logical volume group
– the physical volume group (original)
– a second physical volume group (copy; only applies for “Dual Save”)
The logical volume group is then assigned to the selected physical volume group(s).
4.1.3.6 Removing an assignment between an LVG and a PVG

i

Before executing this function, all logical volumes must be removed from the logical volume group.

● The form and detailed information are provided in the section “Physical Volume Operations » Link/Unlink Volume Groups” on page 221.
The following elements must be selected:
– the logical volume group
– the physical volume group
The original physical volume group must be set to ’-unlinked-’. If a Dual Save LVG exists, the Dual Save PVG must also be set to ’-unlinked-’.
The assignment between the logical and physical volume groups is then removed.

4.1.3.7 Changing logical volumes to another group

● The form and detailed information are provided in the section “Logical Volume Operations » Change Volume Group” on page 209.

The following information must be specified:
– Specification of whether all volumes (“all”) or just a certain number (“range”) of volumes of the original logical volume group are to be moved to the new group. If only part of the original group is to be transferred, the VSN of the first logical volume and the number of affected volumes must also be specified.
– Original logical volume group (“Source Logical Volume Group”)
– New logical volume group (“Target LVG”)
The logical volumes are then assigned to the new logical volume group.
4.1.3.8 Removing logical volumes

i

Logical volumes should only be removed after being released by the host.

● The form and detailed information are provided in the section “Logical Volume Operations » Erase Logical Volumes” on page 213.
The following information must be specified:
– the VSN of the first logical volume
– the number of logical volumes

i

The logical volume group need not be specified, since all VSNs within CentricStor are unique.

The logical volumes are then removed from the CentricStor pool.
4.1.3.9 Removing logical volume groups

Logical volume groups which have been made known to the system with the “Distribute and Activate” function can be removed from the “Logical Volume Groups” form (see page 173). However, this is possible only if the following prerequisites are satisfied:
– The logical volume group concerned may no longer be linked to a physical volume group.
– The logical volume group may not contain any logical volumes.

The two logical volume groups BASE and TR-LVG cannot be removed.
1. Select the logical volume group to be removed in the list.
2. Click on the “To Be Deleted” button (see page 175) and select “YES”.
3. Click on the “OK” button.

4.1.3.10 Removing physical volumes from a physical volume group

i

Only scratch tapes which do not contain any valid logical volumes can be removed, unless the physical volumes have been reorganized prior to doing this (flag is set).

● The form and detailed information are provided in the section “Physical Volume Operations » Erase Physical Volumes” on page 226.
The following information must be specified:
– the VSN of the first physical volume
– the physical volume group
– the number of physical volumes
– flag for switching on/off a preceding reorganization
The physical volumes are then removed from the CentricStor pool. They are no longer used and can be removed from the library.

4.1.3.11 Removing physical volume groups

Physical volume groups which have been made known to the system with the “Distribute and Activate” function can be removed from the “Physical Volume Groups” form (see page 181). However, this is possible only if the following prerequisites are satisfied:
– The physical volume group concerned may no longer be linked to a logical volume group.
– The physical volume group may not contain any physical volumes.

The two physical volume groups BASE and TR-PVG cannot be removed.
1. Select the physical volume group to be removed in the list.
2. Click on the “To Be Deleted” button (see page 183) and select “YES”.
3. Click on the “OK” button.


4.2 Cache management
This functionality enables individual cache file systems to be reserved for exclusive use by
particular LV groups.
LV groups which are not assigned to a cache file system are distributed to the remaining
caches (“FLOATING” setting).
In this example the LV group LVG1 is assigned the cache file system /cache/101. The LV groups LVG2, LVG3 and LVG4 are distributed to the remaining caches (FLOATING).

Figure 28: Example of the exclusive use of the cache file system by LV groups

In concrete terms this means:
– An assignment of cache file system to LV group is defined by a configuration.
– An LV can be assigned to precisely one cache file system.
– Multiple LV groups can be assigned to a cache file system.

Possible applications:

● “Location” of the logical volumes
The cache management function can be used to ensure that volumes are at a particular location or on a particular RAID system.

● Cache residence of the volumes
The volumes are always in the cache file system.
Benefit: Access to volumes of an LV group which is assigned to a particular cache file system is extremely quick, because the volumes are always in the cache file system. The volumes are displaced only if the volume of data on these volumes exceeds the capacity of the file system.
However, it must be ensured that the volume of data on the volumes does not exceed the capacity of the cache file system.


The specification of whether a logical volume group is defined as “FLOATING” or with cache
residence in a particular cache is made in the “Location” field when the logical volume group
is defined (see section “Logical Volume Groups” on page 173).
The settings for the cache file system can be altered later at any time.

4.3 Dual Save
4.3.1 General
Dual Save (see page 50) is an optional system function which must be purchased
separately from the CentricStor basic configuration. It is released by the service engineer
by means of a key disk.
In order to use the Dual Save function, you must have at least two physical volume groups
(see section “Partitioning on the basis of volume groups” on page 63).
If this prerequisite is fulfilled, the Dual Save function will cause each logical volume to be
duplicated in two different physical volume groups. If you have two robots installed at
different locations, you can enhance data security even further.

i

If a Dual-Save library should fail completely, logical volumes with the status “dirty”
cannot be saved to tape. They remain in the cache without being saved.
Only when the library is once more in the normal status (e.g. after a repair) are the
dirty volumes saved to tape.
If the failure of the library lasts for a long time, more and more volumes are placed
in the “dirty” status until CentricStor ultimately becomes inoperable.


4.3.2 System administrator activities
4.3.2.1 Assigning a logical volume group to two physical volume groups

● The form and detailed information are provided in the section “Physical Volume Operations » Link/Unlink Volume Groups” on page 221.
The following information must be selected:
– the name of the logical volume group
– the names of the two physical volume groups: PVG (Original) and PVG (Copy)
The logical volumes are then saved to two different physical volume groups.
4.3.2.2 Removing a Dual Save assignment

i

Before using this function, all logical volumes must be removed from the group.

● The form and detailed information are provided in the section “Physical Volume Operations » Link/Unlink Volume Groups” on page 221.
– After the logical volume group has been selected, the two PVGs (Original and Copy) must be set to ’-unlinked-’.
The Dual Save assignment between the logical volume group and the two specified physical volume groups is then removed. The logical volume group is then an LVG without a connection to a physical volume group.


4.4 Reorganization

i

A brief overview of the reorganization of tape cartridges can be found on page 37.

4.4.1 Why do we need reorganization?
Reorganizations are performed for the following four reasons:
1. Effective use of the physical volumes’ capacity
There are two situations in which logical volumes may be rendered invalid on a physical
volume:
– When removing logical volumes (see section “Logical Volume Operations » Erase Logical Volumes” on page 242), the VLM sends an internal delete command to the PLM. This causes the PLM to remove the logical volumes from its pool, and flag the affected areas of the physical volumes in its data maintenance facility (PV file) as invalid.
– If the host modifies a logical volume, the VLM sends a save request to the PLM. This causes the PLM to save the new version of the logical volume by appending it to the same physical volume or a different physical volume. The old version of the logical volume then becomes invalid.
Over time, the second situation in particular causes a build-up of invalid logical
volumes on a physical volume. If a physical volume contains nothing but invalid
logical volumes, it becomes a scratch tape and can be overwritten.
The purpose of reorganization is to free up any physical volumes with a very low
occupancy level, i.e. to relocate any logical volumes still valid to another physical
volume (write tape).

2. Refreshing the physical volumes
Physical volumes are subject to physical and chemical aging, which means that even
without read and write accesses they can become unusable after a long time. Regular
reorganization of physical volumes which have not been accessed for a long time refreshes the magnetization of the tapes and prevents age-related loss of the magnetization.
3. Occurrence of a read or write error (faulty status)
Physical volumes on which a read or write error has occurred and which are thus in faulty status are reorganized so that they can be taken out of service and the logical volumes affected can be backed up again.


4. Physical volume inaccessible status
The PLM can no longer access the physical volume. This can be due to the following
reasons:
– The robot cannot access the physical volume.
– The tape header cannot be read.

The logical volumes affected may need to be read in again from a backup copy (dual
save) and backed up again.

4.4.2 How is a physical volume reorganized?
To prevent the reorganization process from overloading the system, the PLM always
reorganizes only one physical volume at a time. Once this physical volume has been
completely cleared (all logical volumes on the tape are invalid) to become a scratch tape,
the reorganization of the next physical volume can begin.
Since logical volumes cannot be copied directly from one tape to another, they are stored
temporarily in the TVC as follows:
1. The PLM selects a logical volume on the physical volume which is to be reorganized
and sends a “Move” request for each logical volume to the VLM.
2. The VLM checks whether this logical volume is located in the TVC. If it is, it sends a
“Restore” request to the PLM.
3. As soon as the TVC has a copy of the logical volume (again), the VLM sends a “Save”
request to the PLM. This causes the logical volume to be copied to another write tape.
From the point of view of the PLM, the logical volume has now been moved.
The PLM issues “Move” requests to the VLM for all valid logical volumes on a physical
volume in the ascending order of the block numbers on the tape. Once again, to prevent a
system overload, only a certain number of “Move” requests are initially sent. A further
“Move” request is not released until the preceding one has been completed successfully.
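The flow described above can be sketched in a few lines of Python (an illustration only, with made-up helper names; the real PLM/VLM interfaces are internal to CentricStor):

def reorganize_pv(valid_lvs, tvc):
    """Move all valid logical volumes of one physical volume via the TVC.

    valid_lvs: LV names in ascending block-number order on the tape.
    tvc: set of LV names currently resident in the tape volume cache.
    """
    for lv in valid_lvs:        # PLM: one "Move" request per logical volume
        if lv not in tvc:       # VLM: LV not in the TVC?
            tvc.add(lv)         # ...then a "Restore" request brings it back in
        print(f"Save: {lv} appended to a write tape")  # "Save" request to the PLM

reorganize_pv(["LV0001", "LV0007", "LV0042"], tvc={"LV0007"})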


4.4.3 When is a reorganization performed?

Depending on the type of event or status which triggers reorganization, the PLM performs reorganization either immediately after the event occurs or within a configurable time of day interval.
The following three events cause reorganization to be triggered immediately regardless of the time of day:

● Explicitly by means of a user command
It is possible for the user to explicitly request the reorganization of a physical volume via the GXCC user interface (see section “Starting the reorganization of a physical volume” on page 78). This event has priority over all other reasons for reorganization which may occur simultaneously. Any reorganization which may be running for the physical volume group concerned is aborted.

● Hard minimum event
This event has occurred whenever both of the following conditions are fulfilled:
– The number of scratch tapes falls below the hard minimum specified in the GXCC menu “Physical Volume Groups” (see page 187).
– There are read tapes present with any occupancy level.

If the number of scratch tapes falls below the hard minimum, the following system message is issued (see page 75):
SXPL008 ... PLM(#8): WARNING: hard minimum of free PVs () of PVgroup  reached

Once the number of scratch tapes exceeds the hard minimum again, the “all clear” is given (see page 75):
SXPL009 ... PLM(#9): NOTICE: number of free PVs of PV-group  over hard minimum () again
● Absolute minimum event
If the number of scratch tapes falls below the absolute minimum, the PLM will reject all normal “Save” requests and will only process those issued in the context of the reorganization. This is because the PLM itself requires a number of scratch tapes for reorganization purposes; without these, it could find itself in a deadlock situation.
If the number of scratch tapes falls below the absolute minimum, the following message will be written to the file klog.msg (see page 76):
SXPL010 ... PLM(#10): WARNING: absolute minimum of free PVs () of PV-group  reached

Once the number of scratch tapes exceeds the absolute minimum again, the “all clear” is given (see page 76):
SXPL011 ... PLM(#11): NOTICE: number of free PVs of PV-group  over absolute minimum () again

For the following statuses, reorganization is only initiated within the configured time of day interval. When several of these statuses exist simultaneously, the PLM prioritizes the reorganization of the physical volumes affected in the order specified below.

● Physical volumes which have reached refreshing age
Once the data on physical volumes exceeds a certain age, the physical volumes are reorganized in accordance with the settings in the physical volume group (see section “Physical Volume Groups” on page 187). In the process, the logical volumes are written anew to another physical volume.

● Physical volumes in the faulty or inaccessible status

● Soft minimum status
This status exists when the number of scratch tapes has fallen below the configured soft minimum and at the same time read tapes exist whose occupancy level is below the configured percentage value (Fill Grade parameter).

When the number of scratch tapes falls below the hard minimum and at the same time there are physical volumes in faulty or inaccessible status or physical volumes which have reached the refreshing age, these physical volumes are not taken into account for reorganization. When this situation occurs, highest priority is assigned to the most effective method of obtaining new scratch tapes: physical volumes in faulty or inaccessible status cannot be reused anyway, and those which have reached the refreshing age normally have a high occupancy level and can easily cope with a delay of a few hours, which is slight in comparison to their age.

4.4.4 Which physical volume is selected for reorganization?

Within the following groups, a physical volume is selected for reorganization at random, regardless of its occupancy level:
● Physical volumes selected by means of an explicit command
● Physical volumes which have reached the refreshing age
● Physical volumes in faulty or inaccessible status

Further physical volumes are queued for reorganization only if the number of scratch tapes falls below the first limit value (soft minimum). In this case, the next physical volume selected is the one with the lowest estimated costs for copying its logical volumes.


Only physical volumes in read status on which the relative proportion of valid data is less
than the percentage value configured in the Fill Grade parameter are taken into account. If
a physical volume is in write status and the percentage value for its valid data drops below
the Fill Grade value, it is placed in read status and is therefore a candidate for reorganization.
The costs are estimated according to the following formula:

( N * estimate1 ) + ( M / estimate2 )

where
N          Number of valid logical volumes on the physical volume
estimate1  Estimated overhead, in seconds, for each logical volume which is to be written (configuration parameter Write Overhead)
M          Sum, in MiB, of the data contained on the valid logical volumes
estimate2  Estimated write performance in MiB/s (configuration parameter Write Throughput)

When the two estimated values are configured, it must be borne in mind that they do not depend solely on the hardware characteristics of the tape drives, but also to a large degree on the relative size of the valid logical volumes. For example, large logical volumes have almost certainly been displaced from the TVC and would have to be read in first, which practically doubles the time required and halves the write performance.
Example

pos  PV      TL      PVG     state  next-bl  LVs   val  cap/GB   valid/GB  valid %
:
15   CSJ016  JAGUAR  JAG001  _r__   2518971  1156  583  279.397   0.000    0
16   CSJ017  JAGUAR  JAG001  _r__   1109297    16    1  279.397  15.795    5
:

The default values (Write Overhead = 3, Write Throughput = 5) result in the following costs:

PV      Number of valid LVs  Valid data volume (MiB)  Estimated costs
CSJ016  583                  0                        1749
CSJ017  1                    16155                    3234

CSJ016 is therefore selected.

Example with Write Overhead = 3 and Write Throughput = 20:

PV      Number of valid LVs  Valid data volume (MiB)  Estimated costs
CSJ016  583                  0                        1749
CSJ017  1                    16155                    810

CSJ017 is therefore selected.
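The cost formula is easy to verify, for example with a few lines of Python (an illustrative sketch; the parameter names mirror the configuration parameters described in the section “Configuration parameters”):

def reorg_cost(valid_lvs, valid_mib, write_overhead=3, write_throughput=5):
    # Estimated cost: (N * estimate1) + (M / estimate2)
    return valid_lvs * write_overhead + valid_mib / write_throughput

print(reorg_cost(583, 0))                          # CSJ016 -> 1749.0
print(reorg_cost(1, 16155))                        # CSJ017 -> 3234.0
print(reorg_cost(1, 16155, write_throughput=20))   # CSJ017 -> 810.75 (about 810)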


4.4.5 Separate physical volumes for reorganization backup
The PLM distinguishes between backup requests from the host and backup requests which
are caused by a reorganization. As long as the number of scratch tapes is above the hard
minimum, the PLM attempts to use a physical volume exclusively for the request type involved. The reason for this is as follows: the logical volumes affected by the same request type
are more similar to each other in terms of the retention period of their data than to those
affected by the other request type. Consequently, in the event of separate backup according
to the request type, either a very high or very low occupancy level of the physical volumes
is more probable than a medium occupancy level, and the tape backup is therefore more
efficient.
However, as a result the number of mount requests during reorganization increases. If the
separation of physical volumes for host backup requests and for reorganization consequently proves to be disadvantageous, the service staff can suppress this by means of a
configuration switch.

4.4.6 Starting the reorganization of a physical volume
The form and detailed information are provided in the section “Physical Volume Operations
» Reorganize Physical Volumes” on page 257.
The following information must be specified:
– the VSN of the physical volume
– the name of the physical volume group

If another physical volume is currently being reorganized either explicitly or automatically,
this process is aborted and reorganization of the physical volume currently specified in
GXCC is initiated.


4.4.7 Configuration parameters

All configuration parameters can be set specifically for each physical volume group.

i

The settings must take the number of available drives into account: if too many reorganizations take place in parallel, they will be delayed unnecessarily on account of the lack of drives. Each reorganization requires two drives: one for reading in and one for writing.

The form and detailed information are provided in the section “Physical Volume Operations » Reorganize Physical Volumes” on page 257.

Time Frame
This parameter defines the time of day interval within which the reorganizations triggered by the soft minimum limit value being fallen below, by refreshing, and by the restoring of backups for physical volumes in faulty or inaccessible status should take place. The interval should be in an off-peak period.
Default: 10:00 - 14:00
Soft Minimum
The minimum number of free physical volumes (scratch tapes) which, if fallen below, automatically triggers a reorganization process.
Default: 30
Recommendation: Empty physical volumes required per week + Absolute Minimum

Hard Minimum
If the number of free physical volumes (scratch tapes) falls below the value specified here, a reorganization run is started immediately, i.e. regardless of the Time Frame parameter.
Default: 8
Recommendation: Empty physical volumes required per week + Absolute Minimum

Absolute Minimum
Absolute minimum number of free physical volumes (scratch tapes). When this minimum is reached, all resources are used with priority for reorganization. The following hierarchy must be observed:
Soft Minimum > Hard Minimum > Absolute Minimum
Default: 4
Recommendation: Number of Physical Device Services
Fill Grade
This parameter defines a particular percentage value for the proportion of valid data in relation to the total amount of written data on a physical volume.
All physical volumes in read status on which the percentage of valid data is below this limit are candidates for reorganization.


When the percentage of valid data on a physical volume which is in write status and is not currently mounted in a Physical Device Service is below this limit value, and at the same time a reorganization is in progress because a scratch tape limit value has been fallen below, this physical volume is placed in read status, and it is therefore a candidate for reorganization.
Default: 70
Parallel Request Number
When a PV is reorganized, a movement request for each logical volume of this physical volume is sent to the VLM. The parameter defines the number of such movement requests which can be processed in parallel.
The value specified should not be too high, for the following reasons:
– Space must be created in the TVC for each logical volume which is to be read in, i.e. under certain circumstances other logical volumes are displaced unnecessarily.
– The VLM limits the number of logical volumes for reorganization per cache. When this value is reached, subsequent “Move” requests must wait.
Default: 5
Move Cancel Time
The PLM monitors the progress of the reorganization of a physical volume. This value, specified in seconds, is used for this purpose.
If the status of the reorganization of a physical volume remains unchanged for this period, the reorganization of this physical volume is aborted and, if applicable, the next volume is reorganized.
The timer is reset for each of the individual steps listed in the section “When is a reorganization performed?” on page 75.
Default: 1800
Write Throughput
This parameter specifies the estimated write performance, in MiB/s, for reorganization of a physical volume. It plays a part in determining the physical volume for which the shortest reorganization time is to be expected (see section “Which physical volume is selected for reorganization?” on page 76).
Default: 5

Write Overhead
This parameter specifies the estimated overhead, in seconds, for each logical volume which is to be written. It plays a part in determining the physical volume for which the shortest reorganization time is to be expected (see section “Which physical volume is selected for reorganization?” on page 76).
Default: 3


PLM Refresh Interval
Number of days after which the physical volumes in this group are to be recopied. The count starts with the day on which the physical volume switched from scratch status to write status. This value must be defined in accordance with the recommendations of the tape manufacturer.
Default: 365
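As a plausibility check, the hierarchy rule for the three scratch-tape limits can be expressed in a few lines of Python (an illustrative sketch using the default values above; it is not part of the product):

def check_scratch_limits(soft_min=30, hard_min=8, abs_min=4):
    # The documented hierarchy must be observed:
    # Soft Minimum > Hard Minimum > Absolute Minimum
    if not soft_min > hard_min > abs_min:
        raise ValueError(f"hierarchy violated: {soft_min} > {hard_min} > {abs_min} must hold")

check_scratch_limits()                          # defaults are consistent
check_scratch_limits(soft_min=10, hard_min=10)  # raises ValueError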

4.5 Cleaning physical drives

i

The cleaning of physical drives can be carried out by the robots, or by CentricStor (see section “Physical Volume Operations » Add Physical Volumes” on page 223).

Generally speaking, physical drives are cleaned automatically by the robots, which means that it is only necessary to check the cleaning tapes regularly.
However, the following robots are the exception to this rule:

● SCALAR 1000 with a direct SCSI connection (not via DAS/ACI or SDLC) with MAGSTAR drives
Since the SCALAR 1000 has no special interface to the MAGSTAR drives that would allow it to see a clean request from the MAGSTAR drives, the system administrator must regularly check the operating panel of the MAGSTAR drives.
MAGSTAR drives indicate a clean request by issuing a *CLEAN message to their operating panel. The system administrator must then trigger the cleaning process by hand from the SCALAR 1000 operating panel.

● SCALAR 100
The SCALAR 100 also does not have an automatic cleaning feature. The drives indicate a clean request via a special clean symbol (stylized broom) on the drive field of the SCALAR 100 operating panel.
In this case, the system administrator must also trigger the cleaning process by hand from the SCALAR 100 operating panel.

If the robots you are using do not offer an automatic cleaning function, CentricStor can also take on the cleaning of physical drives.

i

Cleaning by CentricStor is carried out if the cleaning PVG that is automatically created for each tape library provides cleaning tapes (see section “Physical Volume Operations » Add Physical Volumes” on page 223 and section “Physical Components” on page 254).

4.6 Synchronization of the system time using NTP

In CentricStor the configuration with regard to NTP is carried out automatically, which means that the file /etc/ntp.conf is created with the appropriate entries for each computer. It is no longer necessary for the system administrator to modify the files.

Exceptions
– If the first NTP server (VTLS Message Manager) is to be configured as the NTP client of an NTP server in an external LAN, the appropriate entry must be made by hand in the /etc/ntp.conf file on this computer.
– If the files /etc/ntp.conf are not to be updated automatically (because the computer has been specially configured with regard to NTP), the entry #static must be made in the /etc/ntp.conf file for all computers. If this is the case, these files will not be modified.
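For illustration, such a manual entry might look as follows (a sketch only; ntp1.example.com is a placeholder for the external NTP server, and the exact entries created by CentricStor may differ):

# /etc/ntp.conf on the first NTP server (VLP)
#static                     # prevent automatic updates of this file
server ntp1.example.com     # external NTP server in the customer LAN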

!

CAUTION!
CMF is based on a correct time setting. An incorrect NTP configuration can result in data loss.

5 Operating and monitoring CentricStor
5.1 Technical design
5.1.1 General
CentricStor monitoring and operation is carried out on two levels by GXCC and XTCC.

Figure 29: GXCC/XTCC on the CentricStor ISPs (example VTA 2000-5000)


GXCC (Global Extended Control Center) is a program with an X user interface that provides
a complete graphical representation of a CentricStor system, and covers all connected
devices and ISPs (Integrated Service Processors) such as ICPs (Integrated Channel
Processors), IDPs (Integrated Device Processors), and VLPs (Virtual Library Processors).
GXCC processes all ISPs and other components of a CentricStor cluster as if they were a
single unit.
Displays and operations within an ISP are implemented in the downstream XTCC application (Extended Tape Control Center). An XTCC application is started by choosing the
“Show Details” command from the function menu of an ISP.
GXCC and XTCC are standard components of the CentricStor software package, and are installed on all the CentricStor ISPs. They can also be operated on a computer (workstation) that is running independently of CentricStor. To permit this, a GUI CD is supplied with each CentricStor which can be used to install the GUI software for monitoring a CentricStor system under the operating systems MS-Windows 95/98/NT/2000/XP, LINUX, SOLARIS and SINIX-Z.

5.1.2 Principles of operation of GXCC

As shown in the figure below, the CentricStor user interface is represented by the interaction of three components:
– InfoBrokers exchange information with the individual CentricStor processes. An InfoBroker is an object-oriented data maintenance system containing all information relevant to the system. This includes measured values supplied by the monitoring programs of the CentricStor components.
– GXCC and XTCC receive information from the various InfoBrokers and present it in graphical format.
– An X11 server provides any on-screen display required and processes commands entered via your keyboard or mouse.

These three components communicate with each other on the basis of the TCP/IP protocol.
The InfoBroker, GXCC, and the X11 server can thus reside on the same system, or be
distributed between two or three systems connected via TCP/IP. Please note that the flow
of data between the InfoBroker and GXCC is considerably less than that between GXCC
and the X server.

i

Please refer to the product data sheet for information on the supported standard
and optional configurations of the user interface.

CentricStor utilizes numerous components, all of which are monitored and managed by
GXCC. There are several options for accessing these components.


The figure below shows the components and the connections used for control and
monitoring (the Fibre Channel networking and the paths to the hosts are not shown):
Figure 30: GXCC components with X11 server as remote computer

In this example GXCC runs on a CentricStor computer. The data is made available by the
VLP InfoBroker. All GXCC output data is sent to the remote computer (X11 server) and
there displayed on the screen.
In the case of a low-speed data connection between CentricStor and the remote computer
the large data quantities to be transferred result in performance problems.
Consequently a configuration without X11 server provides a better solution:
Figure 31: Components of GXCC with a remote computer (not an X11 server)


In this configuration GXCC runs on the remote computer (e.g. a Windows PC) and uses the interfaces of its user interface directly. At short intervals GXCC asks the CentricStor VLP whether new data is available; only about 20 bytes are transferred for this query. If new data is available, the VLP sends the GXCC user data to the remote computer, which edits the data and forwards it to the output screen.
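The request pattern described here is a classic low-bandwidth polling loop. A generic sketch in Python (the three callables are made-up stand-ins; this is not the actual GXCC protocol):

import time

def poll_loop(has_new_data, fetch_payload, render, interval_s=2.0):
    """Ask cheaply and often; transfer the full payload only on change."""
    while True:
        if has_new_data():           # tiny request/response (about 20 bytes)
            render(fetch_payload())  # full GXCC user data, edited locally
        time.sleep(interval_s)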
ISP
Each ISP has its own InfoBroker which gathers information on the local software components via optimized interfaces. This information is then passed on to GXCC over the local
CentricStor network.
Components managed via SNMP (FC switches)
These components can only be controlled and monitored using SNMP mechanisms. The
control component, referred to as the SNMP manager, monitors these stations and receives
traps. During configuration, you define the ISP in the CentricStor network on which the
underlying SNMP manager for GXCC is to be started.
In GXCC, all of the FC switches are represented as FC fabric.
SCSI-controlled components (tape drives, certain libraries)
All tape drives and some archives are controlled and managed by means of mechanisms
contained in the SCSI protocol. The associated InfoBroker instance is located in the ISP of
the CentricStor system to which the SCSI or FC interface leads.


5.1.3 Monitoring structure within a CentricStor ISP
The figure on page 89 contains a more detailed representation of how GXCC monitors the
individual CentricStor control components. This figure should also be regarded as one
example of the many configurations possible.
The figure shows the logical or physical connections used by GXCC for monitoring and
control purposes. The internal Fibre Channel system is depicted only insofar as it is used
in the management of the RAID system. The thick continuous lines represent TCP/IP
connections which alternate between processors. The broken lines represent connections
that may also exist within an ISP. All other interfaces are represented by thin lines.
The central monitoring point in each ISP is the InfoBroker and the associated RequestBroker. All InfoBrokers in the CentricStor network have exactly the same configuration and
are considered peers. They provide special interfaces for communicating with all
CentricStor control components. These components are present in latent form in all ISPs.
During the configuration process, you define which components are actually activated in
which ISPs. Inactive control components are shown in blue in the figure below. While the
InfoBroker only ’knows’ the components of the local ISP, the affiliated RequestBrokers
exchange configuration information with the RequestBrokers of the other ISPs, and thus
’see’ CentricStor as an overall unit.
XTCC always monitors a single ISP. As a result, XTCC connects directly to the InfoBroker
of ’its’ ISP.
The following is one example of the many possible CentricStor configurations. In principle the individual processes can be distributed over the ISPs almost without restriction. Only those processes which require supervisor access must be started on one ISP.


The table below lists the control components:

Name     Function                                                   Comment

LD       Logical Device: emulation of a drive.                      Must run on the ISP in which the associated host interface (ESCON/FICON/FC) is installed (ICP).

MSGMGR   Message Manager: filters and stores system messages.       Only one instance throughout CentricStor.
         Triggers actions in response to certain situations
         (e.g. SNMP traps).

PDS      Physical Device Service: drives one physical tape drive.   Must run on the ISP in which the associated SCSI interface is installed (IDP).

PERFLOG  Performance Logging: captures and stores                   Only one instance throughout CentricStor.
         performance-related system data.

PLM      Physical Library Manager: manages the physical             Only one instance throughout CentricStor.
         CentricStor components.

PLS      Physical Library Service: drives a real robot archive.     In the case of SCSI-controlled robots, must be installed on the same ISP as the associated SCSI interface.

VLM      Virtual Library Manager: manages the CentricStor           One instance throughout CentricStor, installed in the same ISP as the PLM (VLP).
         virtual libraries.

VLS      Virtual Library Service                                    VDAS, VACS and VLMF are each provided once in CentricStor, VAMU 10 times, and VJUK 20 times.

VMD      Virtual Mount Daemon                                       In each ICP.

GXCC/XTCC can also run on SINIX-Z/Solaris/LINUX/Windows systems which are
independent of CentricStor. In this case, GXCC connects via the LAN to the RequestBroker
of the ISP referenced in the unit selection, exchanges information with it and, on the basis
of this information, builds the graphical display.
GXCC/XTCC also covers the CentricStor components that can only be monitored via
SNMP, such as the Fibre Channel switches. During configuration, you define the ISPs in
which the management station is to be started. In addition, an SNMP agent can be installed
in CentricStor that permits CentricStor to be monitored by an SNMP management station.


Figure 32: Monitoring structure in CentricStor (example VTA 2000-5000)


5.1.4 Operating modes

GXCC recognizes the following three user privilege levels:

Service mode
Access to all CentricStor functions available via GXCC. Users must use the “diag” password to identify themselves to the CentricStor ISP with which they are connected.

User mode
Access to the functions required for normal operation. Examples of this are the addition of new logical volumes and the inclusion of or changes to logical and physical volume groups. Users identify themselves with the ISP’s “xtccuser” password.

Observe mode
Monitoring function. Access to the global status and history. By default no password is required. On CentricStor, access control can optionally be configured for this mode. Users then identify themselves with the ISP’s “xtccobsv” password.

The operating mode is set as a start parameter when GXCC is called. The password is queried once the connection has been established.
If the wrong password is entered, an error message is output and the query is repeated. After a third wrong entry for Service or User mode, GXCC is started in Observe mode, provided no access control exists for this mode. If access control is specified for Observe mode, three wrong password entries are likewise possible here, after which the program aborts.

i

This manual describes User mode and Observe mode. Service mode is reserved for service personnel.

5.2 Operator configuration
5.2.1 Basic configuration
Without requiring additional hardware or further software licences, CentricStor offers the
following configuration for operation and monitoring:

Figure 33: CentricStor basic configuration

Within a CentricStor cluster, the InfoBroker will accept two connections to GXCC if this has
been started on an ISP of CentricStor. The X11 server can run internally in CentricStor,
using the local consoles, but also externally. The InfoBroker can also accept an additional
connection to a GXCC outside CentricStor if this is made using a modem (SLIP)
connection. This connection is designed to be used for remote maintenance purposes.

5.2.2 Expansion
The operating options can be expanded using the additional license 3595-RMT (CS Remote Monitoring and Administration). If the RMT key is installed in a CentricStor system,
the InfoBroker accepts any number of connections to a GXCC outside its CentricStor. This
CentricStor can consequently be monitored on any number of independent computers
(workstations) with GXCC/XTCC.
For performance reasons the number of connections with GXCC within CentricStor remains
limited to two.


5.2.3 GXCC in other systems
GXCC can also be installed and is executable in Windows 98/NT/2000/XP, LINUX and
SOLARIS systems. An installation CD is supplied with each CentricStor. This contains the
tools and information files required for installation on the relevant systems. You will find
more information on this in the installation manual.
GXCC V6.0, GXCC V3.0 and GXTCC V2.x can be installed in the same system at the same
time.
Ongoing updating of GXCC takes place semiautomatically from the connected CentricStor
systems.

5.2.4 Screen display requirements
– The operator consoles of the ISPs meet the requirements.
– An external X11 server will require a graphics-capable color monitor. The ideal resolution is 1280 x 1024 pixels. The minimum requirement which must be set is 1024 x 768 pixels.
– In GXCC important information is displayed using colors. As a result, 16-bit True Color (or better) is ideal. 8-bit color palettes may lead to incorrect color displays if GXCC is sharing the screen with other applications.

5.2.5 Managing CentricStor via SNMP
5.2.5.1 Connection to SNMP management systems

CentricStor is prepared for connection to an SNMP management station. The GUI CD of CentricStor contains the software and information with the required settings. Special functions are available for CA Unicenter.
SNMP is used, above all, to forward special situations reported in console outputs, for example, to the management station as traps. The user interface or command-line interface should then be used for detailed diagnostics.

92

U41117-J-Z125-7-76

Operating and monitoring CentricStor

5.2.5.2 SNMP and GXCC
Monitoring and operation of CentricStor by GXCC runs independently of SNMP.
In addition, however, CentricStor also offers the basic functions required for management via an SNMP station. Thanks to the great configuration flexibility of GXCC, when GXCC is used together with SNMP the monitoring and operation of CentricStor can be adapted to the IT infrastructure and the requirements of the user.
The VLP of CentricStor provides the connection to the outside world. It supports “ping” and
elementary MIB-II. Thus, the operation of the carrier system can be monitored, but not the
functioning of CentricStor.
In addition to standard traps such as coldStart, linkUp, linkDown etc., CentricStor sends corresponding traps to the management station when system messages of priority 5, 6, 7 or 8 (ERROR, CRITICAL, ALERT, EMERGENCY) occur.
In addition, every 300 seconds a "Global State" with one of the following values is sent to the SNMP management station by means of a trap:
1  CentricStor is ready to operate (green).
4  Subcomponents of CentricStor are faulty, operation is still possible (yellow).
7  Operation of CentricStor has been disrupted (red).

Additional functions are made available for installation in management stations of the type
CA Unicenter.
Since GXCC will run on most standard systems, the startup of GXCC for detailed
diagnostics when there is a trap can be largely automated in practically all management
systems.
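By way of illustration only: on a Unix management station whose trap daemon is the net-snmp snmptrapd (not part of CentricStor; CA Unicenter and other products use their own launch mechanisms), such an automated start could be sketched as follows. The file locations and the handler script name are assumptions.

Entry in /etc/snmp/snmptrapd.conf:
traphandle default /usr/local/bin/start-gxcc.sh

Handler script /usr/local/bin/start-gxcc.sh:
#!/bin/sh
# snmptrapd passes the sending host on the first line of stdin and its
# transport address on the second; the remaining lines carry the
# variable bindings of the trap.
read host
read address
# Start GXCC in Observe mode against the unit that sent the trap. An
# external connection to the InfoBroker requires an RMT license in the
# relevant CentricStor (see section 5.2.2).
/usr/apc/bin/gxcc -observe -unit "$host" &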
The current status regarding SNMP support is indicated in a text file. After GUI installation on a CA Unicenter-type management station you can find this file at "...Setup > SNMP Integration README".


The figure below shows some of the possible configurations for an SNMP manager for connecting GXCC to the triggering CentricStor on the basis of a trap:

Figure 34: Configuration options at an SNMP management station
The figure shows an SNMP management station (SNMP manager with application launching, X11 server and GXCC), a separate workstation with GXCC, and two CentricStor systems, each with an SNMP agent, an InfoBroker and GXCC (one GXCC connection is optional if there is no RMT license). The connection types shown are:
– X11 connection to GXCC: a higher bit rate is required.
– Traps from the SNMP agent in CentricStor to the management station.
– Connection between GXCC and the InfoBroker: only a low bit rate is required; only possible with an RMT license in CentricStor.
– Application launching.

Note:
– In the case of configurations in which there is an external connection between GXCC and an InfoBroker (shown here in blue), an RMT license is required in the relevant CentricStor.
– The InfoBroker accepts a maximum of two local connections. It is irrelevant here whether the X11 server runs within CentricStor using the local console or outside CentricStor on a workstation.
– The GUI software must be installed explicitly on the workstation for operation of GXCC outside CentricStor. A CD with GXCC (GUI CD) is provided free with each CentricStor. GXCC can be installed an unlimited number of times to run CentricStor. It will run on Windows 98/NT/2000/XP, LINUX, SOLARIS and SINIX-Z systems.

5.3 Starting GXCC
5.3.1 Differences to earlier CentricStor versions

Note: In CentricStor V3.0 the name of the interface was already changed from "GXTCC" to "GXCC". Furthermore, Service mode is now started by the start parameter "-service" (previously "-modify"). The access point (mostly the VLP) is selected via "-unit" (previously "-host").

For compatibility reasons the GXCC call and the previous start parameters will continue to function. However, you are strongly recommended to adapt all settings to the new names as soon as possible.

5.3.2 Command line
GXCC is called from the remote operator console or the CentricStor console via the Root
menu. On auxiliary operator consoles a command line is entered. A number of runtime
parameters can or must be entered with this command line.
If GXCC is to be started from a graphical interface, this command line must be entered
when configuring the interface function (see section “Starting from a Windows system via
Exceed” on page 105, for example, or section “Starting from a Windows/NT system via
XVision” on page 108).
The command line has the following format:
/usr/apc/bin/GXCC <start parameters> [&]

Example of a GXCC call:
/usr/apc/bin/GXCC -user -display 123.45.67.89:0.0 &

The start parameter settings are also transferred to the Global Status monitor.


The table below lists the possible start parameters:

-aspect <aspect> 1)
  Size and position of the main window on the screen. <aspect> has the format [=][WxH]+|-X+|-Y (WxH: width x height in pixels; X,Y: coordinates in pixels; parts in [ ] are optional; +|- means + or -).

-autoscan 1)
  Cycle duration for updating the main window. Can be used to reduce the data volume when operating via Teleservice.

-display <host>
  Host name/IP address of the X terminal at which the window is to be displayed. Default: local X11 server.

-globstat
  Activates the Global Status Monitor.

-lang 1)
  Language for the help texts: De | En. In the event of any other setting, En is used.

-multiport
  Connection via Info and/or RequestBroker port. If not specified: Single Port connection (see page 148).

-nointro
  Suppresses the splash screen. Can be used to reduce the data volume when operating via Teleservice.

-observe
  Start in Observe mode. If not specified: User mode.

-profile <file>
  Name of the profile file (see the section "Profile" on page 191). If this is not specified, GXCC is started with the default profile.

-service
  Start in Service mode. If not specified: User mode.

-simu <file>
  Simulation mode. <file> is the file generated in GXCC/XTCC with File ➟ Save.

-singleport
  Connection only via the RequestBroker port. If not specified: Single Port connection (see page 148).

-size n 1)
  Size of the main window. Values: 80%, 100%, 120%.

-unit <host>
  Host name/IP address of the CentricStor node to which GXCC is connected after start-up. If GXCC is running on a VLP, a connection to the local InfoBroker is established if nothing else is specified. In all other cases the Unit Select menu is opened after the program is started.

-user
  Starts the application in User mode. If not specified: User mode.

1) The command line arguments -aspect, -autoscan, -lang and -size have priority over values already stored in a profile file.

To start in User mode, use:
gxcc <parameters> &
or
GXCC -user <parameters> &

To start in Observe mode, use:
GXCC -observe <parameters> &
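For example, to monitor the CentricStor node "vlp1" (a placeholder name) in Observe mode, with the window displayed on the X11 server at 123.45.67.89:

GXCC -observe -unit vlp1 -display 123.45.67.89:0.0 &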

5.3.2.1 Explanation of the start parameter -aspect
The argument of this parameter has the format {[=][WxH]+|-X+|-Y}
Where:
WxH  The window is displayed on the screen with a width of W pixels and a height of H pixels.
+X   Distance of the left-hand window margin from the left edge of the screen in pixels
-X   Distance of the right-hand window margin from the right edge of the screen in pixels
+Y   Distance of the upper window margin from the upper edge of the screen in pixels
-Y   Distance of the lower window margin from the lower edge of the screen in pixels

Examples: -aspect 500x400-100-100; -aspect 500x400; -aspect +100+100

Note: It is possible that the specifications W and/or H will be ignored by the application.

CAUTION!
Knowledge of the screen settings is required to use X and Y, since if the values specified are too high, the window will be displayed partly or completely outside the visible area.


5.3.3 Environment variable XTCC_CLASS
GXCC supports an environment variable with this name, as follows: if this environment variable is not defined when GXCC is started, it is set to the value "Xtcc". Otherwise the specified value is taken.
The relevant value is inherited by all applications called by the current GXCC instance. This (class) name can, for example, be used by virtual window managers to place all the applications belonging to a particular GXCC instance in the same virtual window.
On Unix systems this variable can, for example, be set as follows when GXCC is called:
XTCC_CLASS=Xtcc1 gxcc -unit A [arguments] &
XTCC_CLASS=Xtcc2 gxcc -unit B [arguments] &

5.3.4 Passwords
The following passwords are needed to start GXCC:

● The password for logging into the CentricStor system running GXCC. GXCC starts under this user ID. Normally this is the user ID "tele"; "root" is also possible.
● In User mode, GXCC requests a password which it uses for authorization when establishing a connection with the InfoBroker. Here you normally require the password of the "xtccuser" ID.
● For Service mode you normally require the password of the "diag" ID.
● In Observe mode generally no password is required. However, if the optional access control has been activated on a CentricStor, you normally require the password of the "xtccobsv" ID.

5.3.4.1 Optional access control for Observe mode
When a CentricStor V3.1 system is installed, the "xtccobsv" ID is set up by default and the line "+ xtccobsv" is entered in the home/xtccobsv/.rhosts file. As a result, this optional access control is initially inactive and no password is required for Observe mode. This procedure is the same as in earlier CentricStor versions. To activate access control, the administrator must modify the specified file and, as required, the password of the "xtccobsv" ID on the CentricStor V3.1 system (in the SINIX system of the VLP and, if required, on other access servers).
Example
If the home/xtccobsv/.rhosts file contains only the entries
gui_computer_1 xtccobsv
gui_computer_2 xtccobsv
only these two computers have access without a password dialog. All others must know the password, which may have been modified.
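A possible activation sequence on the VLP is sketched below; it assumes a root login in the SINIX system of the VLP and the file locations named above, and shows the principle only:

# Remove the wildcard entry that keeps access control inactive
grep -v '^+ xtccobsv' home/xtccobsv/.rhosts > /tmp/rhosts.new
mv /tmp/rhosts.new home/xtccobsv/.rhosts
# Assign or modify the password of the "xtccobsv" ID as required
passwd xtccobsv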

5.3.4.2 Authentication
After connection setup, client authentication takes place (in the SINIX system of the VLP
and, if required, on other access servers). Authentication with a password is performed
each time the program is started.
The passwords are defined as follows:
Service mode:  Password of the "diag" ID
User mode:     Password of the "xtccuser" ID
Observe mode:  Default: no password.
               Optional as of CentricStor V3.1: password of the "xtccobsv" ID

The authorization (Service, User or Observe) is forwarded to the applications that are downstream (such as XTCC for monitoring/operating the ISPs). If the wrong password is entered, an error message is issued and the query is repeated up to 3 times.

5.3.4.3 Suppressing the password query
Releasing individual users
The password query can be suppressed if an entry in the .rhosts file permits access to
CentricStor. To do this, the monitoring system is entered in the following .rhosts file on the
monitored system:
Service mode:  /usr/apc/diag/.rhosts
User mode:     home/xtccuser/.rhosts
Observe mode:  home/xtccobsv/.rhosts

The following options are available for an entry in the .rhosts file:
● + <user-id>
  In this case access can take place from any monitoring host.
● <computer-name> <user-id>
  In this case, access is permitted only from the host with the name <computer-name>. The Name Server entry, the Yellow Pages entry or the IP address of the source computer must be used for <computer-name>. This depends on the current operating configuration and network topology. The first two entries generally differ only in that the domain name is part of the name (Name Server) or is missing (Yellow Pages). It is most convenient to take all options into account in the .rhosts file.
  The <computer-name> currently being used can also be seen in the status line of GXCC/XTCC.

Example
If password-free access to CentricStor is to be permitted from the PC "PCjoesmith", the following entries must be made on CentricStor in the .rhosts file appropriate to the access mode (here: Observe mode):
PCjoesmith            xtccobsv
PCjoesmith.mch.xyz.de xtccobsv

Releasing individual computers
The /etc/hosts.equiv file enables you to grant a complete computer password-free access to CentricStor. Password-free access in all modes is permitted by entering the computer name or its IP address.
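Illustrative entries in /etc/hosts.equiv (the computer name and IP address are examples only):

# Each entry grants one computer password-free access in all modes
PCjoesmith
123.45.67.89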

5.3.4.4 Additional password query
If GXCC or XTCC requires a password to transfer an update (see the section “Software updates” on page 118) or manual (see the section “Online Manual” on page 235), the following window appears:

Note: This password query is made only when the user can access the system without a primary password query. Information on this is provided in the sections "Optional access control for Observe mode" on page 99 and "Suppressing the password query" on page 100.

The window consists of an input field for the password and the following buttons:
OK
  The password entered is used if it is not empty. Its validity is not verified: the transfer fails if the password is invalid.
Unknown
  Clicking this button exits the window. The update or manual is not accepted.
Help
  Displays a help text.
NOTES
● Depending on the context, the term "XTCC" can be replaced by "GXCC" in the window displayed above. The user ID can be different.
● In Observe mode the password of the user ID "xtccuser" is requested.
● If an update from an earlier connection which has not yet been activated is already available locally, this is indicated by the message "Update <version> available".


5.3.5 Starting the CentricStor console
● Position the mouse pointer on a neutral screen area.
● Press the right mouse button.
● The Root Menu appears, including the function "Global eXtended Control Center".
If you select this item, you are offered the three modes of GXCC:
– Observe Mode
– User Mode
– Service Mode
When you have selected the required start mode, GXCC is started.

5.3.6 Starting from an X11 server

Note: Some X11 servers have difficulty with cascaded menus. The problem occurs when an attempt is made to move the mouse pointer from a menu entry onto an associated, cascaded submenu. The submenu is then closed immediately, with the result that the desired function cannot be selected. In this case an alternative X11 server should be installed.

5.3.6.1 General notes on the X11 server architecture
CentricStor is monitored and controlled by the interoperation of three components:

– the X11 server, which presents the graphics formatted by GXCC and controls the actual man/machine interface,
– the Global Extended Control Center GXCC, which on the one hand exchanges information with the InfoBroker and prepares this information in graphical form as required, and on the other hand responds to events at the man/machine interface (e.g. movements of the mouse pointer, keyboard input, etc.),
– and the RequestBroker of the connected ISP, which gathers information from all components and forwards commands from the man/machine interface to the respective recipients (XTCC, which only ever handles a single ISP, connects itself directly to the InfoBroker of this ISP).
Figure 35: X11 architecture for CentricStor operation
The figure shows three hosts connected via TCP/IP:
– X11 server (e.g. VLP, workstation or PC, Mac, Linux etc.): includes the mouse, keyboard and screen. Can be a SINIX or Windows system or another system with appropriate software. The "-display" start parameter points to this host.
– GXCC host (e.g. VLP or remote computer): runs the "Global Extended Control Center". This host is used for the login for calling the X client.
– ReqBroker/InfoBroker in the connected ISP of CentricStor (VLP): the "-unit" start parameter points to this host, and the passwords of the "xtccuser" or "diag" IDs are needed on this host. The ReqBroker also transmits information between GXCC and the other system components; the TCP/IP connections to the other components of CentricStor are defined in the configuration.

The three components communicate with each other using the TCP/IP protocol. They can
therefore be located on one or on up to three hosts.
If you operate GXCC at the CentricStor console, for example, the X11 server, GXCC host and RequestBroker/InfoBroker all run in the VLP of CentricStor.
If CentricStor is operated via a computer (workstation) outside CentricStor, GXCC runs on
a host independent of CentricStor. The display console and consequently the X11 server
can reside on the same computer as GXCC or on a subprocessor.

Note: A significantly higher transmission bandwidth is needed for the connection between GXCC and the X11 server than for the connection between GXCC and the InfoBroker.

5.3.6.2 Using the direct XDMCP interface
The full range of XDMCP (X Display Manager Control Protocol) functions cannot be used in CentricStor 2.1. You are strongly recommended to use the X11 servers in Passive mode.

5.3.6.3 Starting from a UNIX system
1. Make sure that the ISP to be addressed is listed in the XHOST list of the calling system. If in doubt, enter its name or IP address with the "xhost" command (/usr/bin/X11/xhost +<name>).
2. Log in remotely, e.g. using telnet, to the desired ISP of CentricStor (in general this should be the VLP) under the user ID "tele". You will need the appropriate password.
3. Call GXCC using the command line described in the section "Command line" on page 95. Specify the IP address or host name of the calling system for the -display parameter and, separated by ":", its screen number. It is advisable to terminate the line with "&". The telnet screen then remains open and displays any errors that occur before the X11 connection is established.

Common error messages arise in the following cases:
– The X11 server is not running, or the X11 host is not in the XHOST list of the GXCC host.
– The screen number was forgotten in -display.

The complete sequence is sketched below.
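For a workstation with the IP address 123.45.67.89 and a VLP named "vlp1" (placeholder names), the three steps might look as follows:

# Step 1, on the workstation: admit the VLP to the XHOST list
/usr/bin/X11/xhost +vlp1
# Step 2: remote login to the VLP under the user ID "tele"
telnet vlp1
# Step 3, on the VLP: start GXCC and direct its display to the workstation
/usr/apc/bin/GXCC -user -display 123.45.67.89:0.0 &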

5.3.6.4 Starting from a Windows system via Exceed
With appropriate preparation, GXCC can be started by clicking an icon on the desktop.
More details can be found in the Exceed help information. This section only describes the
GXCC-specific settings of Exceed.
Exceed preparations
● Open the following window by choosing "Programs" ➟ "Exceed" ➟ "Xconfig":
● Click "Communication" in the Xconfig window.
● Set the startup mode "Passive" shown in the diagram below.
  This prevents xdm from starting on the client. The specification in the display field designates the (partial) screen you desire.
● "Multiple Screens" must be set in the "Screen Definition" window.


Starting GXCC via Exceed
● Start Exceed (preferably via the Windows autostart function).
● Choose "Xstart".
● Activate the Start menu as shown in the following example:

The user ID is "tele", and the password entered must be the appropriate tele password. The GXCC password is requested later in accordance with the desired operating mode.
"Host" contains the IP address of the ISP to be addressed (generally the VLP).
The "Command" input field contains a sequence of commands separated by ";". The first command indicates the host IP address and screen number of the display to be selected. Following "export DISPLAY", GXCC is called with all the start parameters you desire (apart from -display). You must specify the mode (here -user) if Observe mode is not desired. Other parameters are possible in accordance with the description in the section "Command line" on page 95.
The absolute path (/usr/apc/bin/gxcc) must be used when calling GXCC.
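A possible value for the "Command" input field, assuming the PC runs its X11 server at the address 123.45.67.89 with screen number 0:

DISPLAY=123.45.67.89:0.0;export DISPLAY;/usr/apc/bin/gxcc -user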
You can enter any comment in the “Description” field.
● These settings can be saved by choosing "File" ➟ "Save" or "Save as".

● Choose the Windows Start menu "Programs" ➟ "Hummingbird Connectivity" ➟ "Exceed" ➟ "Xsession" to display a selection of start files:

When you have selected the start file and chosen “Run!”, GXCC will be started on the
selected unit.

Note: You can also drag the start file to the desktop as a shortcut or save it there. The program is then started as usual by double-clicking the icon.

5.3.6.5 Starting from a Windows/NT system via XVision

Note: When installing XVision, the UNIX environment must also have been installed.

XVision setting
An example is shown in the following diagram:

It is important that the display mode “Multiple windows” is set.


The diagram below shows an example of the fonts setting:

● The "Allow font substitution" option must be disabled.
● When specifying the font path items, the UNIX fonts must be listed before other fonts.

If different settings are defined, the layout of GXCC windows and dialog boxes may be
corrupted.


All options must be disabled in the "Security" tab. In particular, the "XDMCP" box must not be checked, to ensure that Passive mode is secure:

Settings for the client
The X11 server must be included in the host list of the computer on which GXCC is running. This can be done using the XVision services "Host Finder" or "Host Explorer".
You must use the user ID "tele". To automate the login process, the ID and password should be stored in the host characteristics.


Starting GXCC with the “Remote Program Starter”
Ideally, GXCC should be activated via the “Remote Program Starter”. This allows the
process to be automated such that GXCC can be started by clicking an icon. When GXCC
is started in Observe mode, no further dialog is then required.
Preparing the Program Starter:
An appropriate file (*.rps file) must be created. The characteristics must be set in accordance with the following example:
Host
  Specify the host on which GXCC is to run. If this host is not part of CentricStor, an additional host parameter must be specified in the command line to start GXCC.
Command line
  The command line, which is only partially visible in the editing area of the sample screen, first starts a terminal emulation. Emulations of 97801-480 or VT420 terminals are recommended (as in the example). The path specification must be absolute. The parameters visible in the screenshot relate to the alphanumeric terminal and serve as an example only; an illustrative command line is shown after this list.
  -sb
    Scrollbar
  -ls
    Login shell
  -j
    Jump scroll
  -sl <n>
    Size of the scroll area
  -name <name>
    Label of the upper window margin
  -display <host>:<screen>
    Address of the host running XVision and the screen showing the display.
  -e <command line>
    This command line, which is executed following login, is used to start GXCC. The line should not contain a "-display" specification. The closing "&" can be omitted. All entries after "-e" belong to this command line; options for terminal emulation must therefore be entered before "-e".
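An illustrative command line, using xterm as a stand-in for the recommended terminal emulation; the scroll size, display address and unit name are placeholders:

xterm -sb -ls -j -sl 500 -name GXCC -display 123.45.67.89:0.0 -e /usr/apc/bin/gxcc -unit vlp1 -user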

U41117-J-Z125-7-76

111

Starting GXCC

Operating and monitoring CentricStor

Example

The settings under “Options” and “View Response” should correspond to those shown in
the example.
The full contents of the command line are shown in the “Display” tab:

The input field "Title" has no effect and should not be used. If you need the "-title" function (label on the window title bar only, not on the icon), specify "-title <text>" as part of the command line.
