
Command Control Interface
01-46-03/02
Installation and Configuration Guide
This document describes and provides instructions for installing the Command Control Interface (CCI)
software for the Hitachi RAID storage systems, including upgrading and removing CCI.
MK-90RD7008-22
March 2018
© 2010, 2018 Hitachi, Ltd. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and
recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or
Hitachi Vantara Corporation (collectively “Hitachi”). Licensee may make copies of the Materials provided that any such copy is: (i) created as an
essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not
make any other copies of the Materials. “Materials” mean text, data, photographs, graphics, audio, video and documents.
Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials
contain the most current information available at the time of publication.
Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for
information about feature and product availability, or contact Hitachi Vantara Corporation at
https://support.hitachivantara.com/en_us/contact-us.html.
Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of
Hitachi products is governed by the terms of your agreements with Hitachi Vantara Corporation.
By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other
individuals; and
2. Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.
Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including
the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader
agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or
import the Document and any Compliant Products.
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries.
AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, IBM, Lotus, MVS, OS/390, PowerPC, RS/6000,
S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z/VM, and z/VSE are registered trademarks or trademarks of International Business
Machines Corporation.
Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS,
Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo,
Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks
of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
All other trademarks, service marks, and company names in this document or website are properties of their respective owners.
Contents
Preface..................................................................................................... 7
Intended audience............................................................................................... 7
Product version....................................................................................................7
Release notes......................................................................................................7
Changes in this revision.......................................................................................8
Referenced documents........................................................................................8
Document conventions........................................................................................ 8
Conventions for storage capacity values........................................................... 10
Accessing product documentation..................................................................... 11
Getting help........................................................................................................12
Comments..........................................................................................................12
Chapter 1: Installation requirements for Command Control
Interface................................................................................................. 13
System requirements for CCI.............................................................................13
CCI operating environment................................................................................17
Platforms that use CCI................................................................................. 17
Applicable platforms for CCI on VM ............................................................ 20
Supported platforms for IPv6........................................................................22
Requirements and restrictions for CCI on z/Linux............................................. 22
Requirements and restrictions for CCI on VM................................................... 25
Restrictions for VMware ESX Server............................................................25
Restrictions for Windows Hyper-V (Windows 2012/2008)............................26
Restrictions for Oracle VM............................................................................28
About platforms supporting IPv6........................................................................29
Library and system call for IPv6................................................................... 29
Environment variables for IPv6.....................................................................29
HORCM start-up log for IPv6........................................................................30
Startup procedures using detached process on DCL for OpenVMS................. 30
Command examples in DCL for OpenVMS..................................................33
Start-up procedures in bash for OpenVMS........................................................37
Using CCI with Hitachi and other storage systems............................................39
Chapter 2: Installing and configuring CCI.......................................... 41
Installing the CCI hardware............................................................................... 41
Installing the CCI software.................................................................................42
UNIX installation...........................................................................................42
Installing the CCI software into the root directory................................... 42
Installing the CCI software into a non-root directory............................... 43
Changing the CCI user (UNIX systems)................................................. 43
Windows installation.....................................................................................45
Changing the CCI user (Windows systems)........................................... 46
Installing CCI on the same PC as the storage management software ........48
OpenVMS installation...................................................................................49
In-band and out-of-band operations............................................................. 50
Setting up UDP ports.............................................................................. 53
Setting the command device........................................................................ 53
Specifying the command device and virtual command device in the
configuration definition file...................................................................... 55
About alternate command devices..........................................................56
Creating and editing the configuration definition file.....................................57
Notes on editing configuration definition file........................................... 59
Chapter 3: Upgrading CCI.................................................................... 60
Upgrading CCI in a UNIX environment..............................................................60
Upgrading CCI in a Windows environment........................................................61
Upgrading CCI installed on the same PC as the storage management
software............................................................................................................. 62
Upgrading CCI in an OpenVMS environment....................................................63
Chapter 4: Removing CCI.....................................................................65
Removing CCI in a UNIX environment.............................................................. 65
Removing the CCI software on UNIX using RMuninst............................... 65
Removing the CCI software manually on UNIX........................................... 66
Removing CCI on a Windows system................................................................67
Removing CCI installed on the same PC as the storage management
software ............................................................................................................ 68
Removing CCI on an OpenVMS system........................................................... 69
Chapter 5: Troubleshooting for CCI installation................................ 71
Contacting support.............................................................................................71
Appendix A: Fibre-to-SCSI address conversion................................72
Fibre/FCoE-to-SCSI address conversion...........................................................72
LUN configurations on the RAID storage systems............................................ 74
Fibre address conversion tables........................................................................75
Appendix B: Sample configuration definition files............................79
Sample configuration definition files.................................................................. 79
Configuration file parameters....................................................................... 80
HORCM_MON........................................................................................ 81
HORCM_CMD (in-band method)............................................................81
HORCM_CMD (out-of-band method)......................................................86
HORCM_VCMD......................................................................................88
HORCM_DEV......................................................................................... 89
HORCM_INST........................................................................................ 92
HORCM_INSTP......................................................................................95
HORCM_LDEV....................................................................................... 96
HORCM_LDEVG.................................................................................... 96
HORCM_ALLOW_INST..........................................................................97
Examples of CCI configurations........................................................................ 97
Example of CCI commands for TrueCopy remote configuration.................. 97
Example of CCI commands for TrueCopy local configuration....................102
Example of CCI commands for TrueCopy configuration with two
instances.................................................................................................... 106
Example of CCI commands for ShadowImage configuration..................... 110
Example of CCI commands for ShadowImage cascade configuration.......118
Example of CCI commands for TC/SI cascade configuration.................... 122
Correspondence of the configuration definition file for cascading volume
and mirror descriptors......................................................................................127
Configuration definition files for cascade configurations..................................129
Configuration definition files for ShadowImage cascade configuration...... 129
Configuration definition files for TrueCopy/ShadowImage cascade
configuration ..............................................................................................131
Index................................................................................................. 135
Preface
This document describes and provides instructions for installing the Command Control
Interface (CCI) software for the Hitachi RAID storage systems, including upgrading and
removing CCI.
Please read this document carefully to understand how to use this product, and maintain
a copy for your reference.
Intended audience
This document is intended for system administrators, Hitachi Vantara representatives,
and authorized service providers who install, configure, and use the Command Control
Interface software for the Hitachi RAID storage systems.
Readers of this document should be familiar with the following:
Data processing and RAID storage systems and their basic functions.
The Hitachi RAID storage systems and the manual for the storage system (for
example, Hardware Guide of your storage system).
The management software for the storage system (for example, Hitachi Command
Suite, Hitachi Device Manager - Storage Navigator, Storage Navigator) and the applicable
user manuals (for example, Hitachi Command Suite User Guide, System Administrator
Guide for VSP, HUS VM, USP V/VM).
The host systems attached to the Hitachi RAID storage systems.
Product version
This document revision applies to the Command Control Interface software version
01-46-03/02 or later.
Release notes
Read the release notes before installing and using this product. They may contain
requirements or restrictions that are not fully described in this document or updates or
corrections to this document. Release notes are available on Hitachi Vantara Support
Connect: https://knowledge.hitachivantara.com/Documents.
Changes in this revision
Added support information for Windows 8.1 and Windows 10 (Platforms that use CCI
(on page 17) , Requirements and restrictions for CCI on Windows 8.1 and Windows
10).
Added instructions for disabling the command device settings after removing CCI.
Removed restrictions for number of instances per command device.
Referenced documents
Command Control Interface documents:
Command Control Interface Command Reference, MK-90RD7009
Command Control Interface User and Reference Guide, MK-90RD7010
Storage system documents:
Hardware Guide or User and Reference Guide for the storage system
Open-Systems Host Attachment Guide, MK-90RD7037
Hitachi Command Suite User Guide, MK-90HC172
System Administrator Guide or Storage Navigator User Guide for the storage system
Hitachi Device Manager - Storage Navigator Messages for the storage system
Provisioning Guide for the storage system (VSP Gx00 models, VSP Fx00 models, VSP
G1x00, VSP F1500, VSP, HUS VM)
LUN Manager User Guide and Virtual LVI/LUN User Guide for the storage system (USP
V/VM)
Document conventions
This document uses the following storage system terminology conventions:
Convention Description
VSP G series Refers to the following storage systems:
Hitachi Virtual Storage Platform G1x00
Hitachi Virtual Storage Platform G200
Hitachi Virtual Storage Platform G400
Hitachi Virtual Storage Platform G600
Hitachi Virtual Storage Platform G800
VSP F series Refers to the following storage systems:
Hitachi Virtual Storage Platform F1500
Hitachi Virtual Storage Platform F400
Hitachi Virtual Storage Platform F600
Hitachi Virtual Storage Platform F800
VSP Gx00 models Refers to all of the following models, unless otherwise noted.
Hitachi Virtual Storage Platform G200
Hitachi Virtual Storage Platform G400
Hitachi Virtual Storage Platform G600
Hitachi Virtual Storage Platform G800
VSP Fx00 models Refers to all of the following models, unless otherwise noted.
Hitachi Virtual Storage Platform F400
Hitachi Virtual Storage Platform F600
Hitachi Virtual Storage Platform F800
This document uses the following typographic conventions:
Convention Description
Bold Indicates text in a window, including window titles, menus,
menu options, buttons, fields, and labels. Example:
Click OK.
Indicates emphasized words in list items.
Italic Indicates a document title or emphasized words in text.
Indicates a variable, which is a placeholder for actual text
provided by the user or for output by the system. Example:
pairdisplay -g group
(For exceptions to this convention for variables, see the entry for
angle brackets.)
Monospace Indicates text that is displayed on screen or entered by the user.
Example: pairdisplay -g oradb
< > angle
brackets
Indicates variables in the following scenarios:
Variables are not clearly separated from the surrounding text or
from other variables. Example:
Status-<report-name><file-version>.csv
Variables in headings.
[ ] square
brackets
Indicates optional values. Example: [ a | b ] indicates that you can
choose a, b, or nothing.
{ } braces Indicates required or expected values. Example: { a | b } indicates
that you must choose either a or b.
| vertical bar Indicates that you have a choice between two or more options or
arguments. Examples:
[ a | b ] indicates that you can choose a, b, or nothing.
{ a | b } indicates that you must choose either a or b.
This document uses the following icons to draw attention to information:
Icon Label Description
Note Calls attention to important or additional information.
Tip Provides helpful information, guidelines, or suggestions for
performing tasks more effectively.
Caution Warns the user of adverse conditions and/or consequences
(for example, disruptive operations, data loss, or a system
crash).
WARNING Warns the user of a hazardous situation which, if not
avoided, could result in death or serious injury.
Conventions for storage capacity values
Physical storage capacity values (for example, disk drive capacity) are calculated based
on the following values:
Physical capacity unit Value
1 kilobyte (KB) 1,000 (10³) bytes
1 megabyte (MB) 1,000 KB or 1,000² bytes
1 gigabyte (GB) 1,000 MB or 1,000³ bytes
1 terabyte (TB) 1,000 GB or 1,000⁴ bytes
1 petabyte (PB) 1,000 TB or 1,000⁵ bytes
1 exabyte (EB) 1,000 PB or 1,000⁶ bytes
Logical capacity values (for example, logical device capacity, cache memory capacity) are
calculated based on the following values:
Logical capacity unit Value
1 block 512 bytes
1 cylinder Mainframe: 870 KB
Open-systems:
OPEN-V: 960 KB
Others: 720 KB
1 KB 1,024 (2¹⁰) bytes
1 MB 1,024 KB or 1,024² bytes
1 GB 1,024 MB or 1,024³ bytes
1 TB 1,024 GB or 1,024⁴ bytes
1 PB 1,024 TB or 1,024⁵ bytes
1 EB 1,024 PB or 1,024⁶ bytes
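As an illustration of the logical capacity arithmetic (the LDEV size here is hypothetical and chosen only for the calculation), a logical device of 2,097,152 blocks corresponds to:
2,097,152 blocks × 512 bytes = 1,073,741,824 bytes
1,073,741,824 bytes ÷ 1,024³ bytes = 1 GB (logical capacity)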
Accessing product documentation
Product user documentation is available on Hitachi Vantara Support Connect:
https://knowledge.hitachivantara.com/Documents. Check this site for the most current
documentation, including important updates that may have been made after the release
of the product.
Getting help
Hitachi Vantara Support Connect is the destination for technical support of products and
solutions sold by Hitachi Vantara. To contact technical support, log on to Hitachi Vantara
Support Connect for contact information: https://support.hitachivantara.com/en_us/contact-us.html.
Hitachi Vantara Community is a global online community for Hitachi Vantara customers,
partners, independent software vendors, employees, and prospects. It is the destination
to get answers, discover insights, and make connections. Join the conversation today!
Go to community.hitachivantara.com, register, and complete your profile.
Comments
Please send us your comments on this document to
doc.comments@hitachivantara.com. Include the document title and number, including
the revision level (for example, -07), and refer to specific sections and paragraphs
whenever possible. All comments become the property of Hitachi Vantara Corporation.
Thank you!
Chapter 1: Installation requirements for
Command Control Interface
The installation requirements for the Command Control Interface (CCI) software include
host requirements, storage system requirements, and requirements and restrictions for
specic operational environments.
System requirements for CCI
The following table lists and describes the system requirements for Command Control
Interface.
Item Requirement
Command
Control
Interface
software
product
The CCI software is supplied on the media for the product (for
example, DVD-ROM). The CCI software files require 2.5 MB of space,
and the log files require 3 MB of space.
Hitachi RAID
storage systems
The requirements for the RAID storage systems are:
Microcode. The availability of features and functions depends on
the level of microcode installed on the storage system.
Command device. The CCI command device must be defined
and accessed as a raw device (no file system, no mount
operation).
License keys. The software products to be used (for example,
Universal Replicator, Dynamic Tiering) must be enabled on the
storage system.
System option modes. Before you begin operations, the system
option modes (SOMs) must be set on the storage system by your
Hitachi Vantara representative. For details about the SOMs,
contact customer support.
Note: Check the appropriate manuals (for example, Hitachi
TrueCopy® for Mainframe User Guide) for SOMs that are required
or recommended for your operational environment.
Hitachi software products. Make sure that your system meets
the requirements for operation of the Hitachi software products.
For example:
TrueCopy, Universal Replicator, global-active device: Bi-
directional swap must be enabled between the primary and
secondary volumes. The port attributes (for example, initiator,
target, RCU target) and the MCU-RCU paths must be defined.
Copy-on-Write Snapshot: ShadowImage is a prerequisite for
Copy-on-Write Snapshot.
Thin Image: Dynamic Provisioning is a prerequisite for Thin
Image.
Note: Check the appropriate manuals (for example, Hitachi
Universal Replicator User Guide) for the system requirements for
your operational environment.
Host platforms CCI operations are supported on the following host platforms:
AIX®
HP-UX
Red Hat Enterprise Linux (RHEL)
Oracle Linux (OEL)
Solaris
SUSE Linux Enterprise Server (SLES)
Tru64 UNIX
Windows
z/Linux
When a vendor discontinues support of a host OS version, CCI that
is released at or after that time will not support that version of the
host software.
For detailed host support information (for example, OS versions),
refer to the interoperability matrix at https://support.hitachivantara.com.
I/O interface For details about I/O interface support (Fibre, SCSI, iSCSI), refer to
the interoperability matrix at https://support.hitachivantara.com.
Host access Root/administrator access to the host is required to perform host-
based CCI operations.
Host memory CCI requires static memory and dynamic memory for executing the
load module.
Static memory capacity: minimum 600 KB, maximum 1200 KB
Dynamic memory capacity: determined by the description of the
configuration file. The minimum is:
(number_of_unit_IDs × 200 KB) + (number_of_LDEVs ×
360 B) + (number_of_entries × 180 B)
where:
number_of_unit_IDs: number of storage chassis
number_of_LDEVs: number of LDEVs (each instance)
number_of_entries: number of paired entries (pairs)
Example: For a 1:3 pair configuration, use the following values for
number_of_LDEVs and number_of_entries for each instance (a worked
calculation appears after this table):
number_of_LDEVs in the primary instance = 1
number_of_entries (pairs) in the primary instance = 3
number_of_LDEVs in the secondary instance = 3
number_of_entries (pairs) in the secondary instance = 3
Host disk Capacity required for running CCI: 20 MB (varies depending on
the platform: average = 20 MB, maximum = 30 MB)
Capacity of the log file that is created after CCI starts: 3000 KB
(when there are no failures, including command execution
errors)
IPv6, IPv4 The minimum OS platform versions for CCI/IPv6 support are:
HP-UX: HP-UX 11.23 (PA/IA) or later
Solaris: Solaris 9/Sparc or later, Solaris 10/x86/64 or later
AIX®: AIX® 5.3 or later
Windows: Windows 2008(LH)
Linux: Linux Kernel 2.4 (RH8.0) or later
Tru64: Tru64 v5.1A or later. Note that v5.1A does not support the
getaddrinfo() function, so this must be specified by IP address
directly.
OpenVMS: OpenVMS 8.3 or later
UDP ports: Contact your network administrator for appropriate
UDP port numbers to use in your network. The network
administrator must enable these ports to allow traffic between CCI
servers.
Supported guest
OS for VMware
CCI must run on a guest OS that is supported by both CCI and
VMware (for example, Windows Server 2008, Red Hat Linux, SUSE
Linux). For details about guest OS support for VMware, refer to the
interoperability matrix at https://support.hitachivantara.com.
Failover CCI supports many industry-standard failover products. For details
about supported failover products, refer to the interoperability
matrix at https://support.hitachivantara.com.
Volume
manager
CCI supports many industry-standard volume manager products.
For details about supported volume manager products, refer to the
interoperability matrix at https://support.hitachivantara.com.
High availability
(HA)
congurations
The system that runs and operates TrueCopy in an HA conguration
must be a duplex system having a hot standby or mutual hot
standby (mutual takeover) conguration. The remote copy system
must be designed for remote backup among servers and congured
so that servers cannot share the primary and secondary volumes at
the same time. The HA conguration does not include fault-tolerant
system congurations such as Oracle Parallel Server (OPS) in which
nodes execute parallel accesses. However, two or more nodes can
share the primary volumes of the shared OPS database, and must
use the secondary volumes as exclusive backup volumes.
Host servers that are combined when paired logical volumes are
dened should run on operating systems of the same architecture.
If not, one host might not be able to recognize a paired volume of
another host, even though CCI runs properly.
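The following worked calculation applies the dynamic memory formula from the Host memory requirement above to the 1:3 pair configuration example, assuming one unit ID (one storage chassis) per instance; the figures are illustrative only, and actual values depend on your configuration definition file.
Primary instance: (1 × 200 KB) + (1 × 360 B) + (3 × 180 B) = approximately 201 KB
Secondary instance: (1 × 200 KB) + (3 × 360 B) + (3 × 180 B) = approximately 202 KB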
CCI operating environment
This section describes the supported operating systems, failover software, and I/O
interfaces for CCI. For the latest information about CCI host software version support,
refer to the interoperability matrix at https://support.hitachivantara.com.
Platforms that use CCI
The following tables list the host platforms that support CCI.
CCI can run on the OS version listed in the table or later.
For the latest information about host software version and storage system connectivity
support, contact customer support.
Note: When a vendor discontinues support of a host software version, CCI
that is released at or after that time will not support that version of the host
software.
Supported platforms for VSP G1x00, VSP F1500, VSP Gx00 models, and VSP Fx00
models
Vendor Operating system* Failover software Volume manager I/O interface
Oracle Solaris 9 First Watch VxVM Fibre
Solaris 10, 11 Fibre
Solaris 10 on x86 VxVM Fibre
Solaris 11 on x64 Fibre/iSCSI
OEL 6.x (6.2 or later) Fibre/iSCSI
HP HP-UX 11.1x MC/Service
Guard
LVM,
SLVM
Fibre
HP-UX 11.2x/11.3x on IA64
IA64: using IA-32EL on IA64
(except CCI for Linux/IA64)
MC/Service
Guard
LVM,
SLVM
Fibre
Tru64 UNIX 5.0 TruCluster LSM Fibre
IBM® AIX® 5.3, 6.1, 7.1 HACMP LVM Fibre
z/Linux (SUSE 8)
For details, see Requirements
and restrictions for CCI on z/
Linux (on page 22) .
Fibre (FCP)
Microsoft
Windows Server
2008/2008(R2)/2012/2012(R2)
LDM Fibre
Windows Server 2008(R2) on
IA64
LDM Fibre
Windows Server 2008/2012 on
x64
LDM Fibre
Windows Server 2008(R2)/
2012(R2) on x64
LDM Fibre/iSCSI
Windows Server 2016 on x64 LDM Fibre/iSCSI
Red Hat RHEL AS/ES 3.0, 4.0, 5.0, 6, 7
If you use RHEL 4.0 with kernel
2.6.9.xx, see "Deprecated SCSI
ioctl" in the troubleshooting
chapter of the Command
Control Interface User and
Reference Guide.
– Fibre
RHEL AS/ES 3.0 Update2, 4.0,
5.0 on x64 / IA64
IA64: using IA-32EL on IA64
(except CCI for Linux/IA64)
– Fibre
RHEL 6 on x64 Fibre/iSCSI
RHEL 7 on x64 Fibre
Novell
(SUSE)
SLES 10, 11 Fibre
SLES 10 on x64 Fibre
SLES 11 on x64 Fibre/iSCSI
SLES 12 on x64 Fibre
* Service packs (SP), update programs, or patch programs are not considered as
requirements if they are not listed.
Supported platforms for VSP and HUS VM
Vendor Operating system* Failover software Volume manager I/O interface
Oracle Solaris 9 First Watch VxVM Fibre
Solaris 10 on x86 VxVM Fibre
OEL 6.x Fibre
HP HP-UX 11.1x MC/Service
Guard
LVM,
SLVM
Fibre
HP-UX 11.2x/11.3x on IA64
IA64: using IA-32EL on IA64
(except CCI for Linux/IA64)
MC/Service
Guard
LVM,
SLVM
Fibre
Tru64 UNIX 5.0 TruCluster LSM Fibre
IBM® AIX® 5.3 HACMP LVM Fibre
z/Linux (SUSE 8)
For details see Requirements
and restrictions for CCI on z/
Linux (on page 22) .
Fibre (FCP)
Microsoft
Windows 2008 MSCS LDM Fibre
Windows 2008(R2) on IA64
IA64: using IA-32EL on IA64
(except CCI for Linux/IA64)
MSCS LDM Fibre
Windows Server
2008/2012/2012(R2) on EM64T
MSCS LDM Fibre
Windows Server 2016 on x64 LDM Fibre
Red Hat RHEL AS/ES 3.0, 4.0, 5.0
If you use RHEL 4.0 with kernel
2.6.9.xx, see "Deprecated SCSI
ioctl" in the troubleshooting
chapter of the Command
Control Interface User and
Reference Guide.
– Fibre
RHEL AS/ES 3.0 Update2, 4.0,
5.0 on EM64T / IA64
IA64: using IA-32EL on IA64
(except CCI for Linux/IA64)
– Fibre
Novell
(SUSE)
SLES 10 Fibre
* Service packs (SP), update programs, or patch programs are not considered as
requirements if they are not listed.
Applicable platforms for CCI on VM
The following table lists the applicable platforms for CCI on VM.
CCI can run on the guest OS of the version listed in the table or later. For the latest
information on the OS versions and connectivity with storage systems, contact customer
support.
VM vendor¹ Layer Guest OS²,³ Volume mapping I/O interface
VMware ESX
Server 2.5.1 or
later (Linux Kernel
2.4.9)
For details, see
Restrictions for
VMware ESX
Server (on
page 25) .
Guest Windows Server 2008 RDM⁴ Fibre
RHEL5.x/6.x
SLES10 SP2
RDM⁴ Fibre
Solaris 10 u3 (x86) RDM⁴ Fibre
VMware ESXi 5.5 Guest Windows Server
2008(R2)
RDM⁴ Fibre/iSCSI
Windows Server
2008/2012 Hyper-
V
For details, see
Restrictions for
Windows Hyper-V
(Windows
2012/2008) (on
page 26) .
Child Windows Server 2008 Path-thru Fibre
SLES10 SP2 Path-thru Fibre
Hitachi Virtage
(58-12)
Windows Server
2008(R2)
Use LPAR Fibre
RHEL5.4
Oracle VM 3.1 or
later (Oracle VM
Server for SPARC)
Guest Solaris 11.1 See Restrictions
for Oracle VM
(on page 28)
See Restrictions for Oracle VM (on page 28)
HPVM 6.3 or later Guest HP-UX 11.3 Mapping by
NPIV
Fibre
IBM® VIOS 2.2.0.0 VIOC AIX® 7.1 TL01 Mapping by
NPIV
Fibre
Notes:
1. VM must be versions listed in this table or later.
2. Service packs (SP), update programs, or patch programs are not considered as
requirements if they are not listed.
3. Operations on the guest OS that is not supported by VM are not supported.
4. RDM: Raw Device Mapping using Physical Compatibility Mode is used.
Supported platforms for IPv6
The IPv6 functionality for CCI can be used on the OS versions listed in the following table
or later. For details about the latest OS versions, refer to the interoperability matrix at
https://support.hitachivantara.com.
Vendor OS¹ IPv6² IPv4 mapped to IPv6
Oracle Solaris 9/10/11 Supported Supported
Solaris10/11 on x86 Supported Supported
OEL 6.x Supported Supported
HP HP-UX 11.23(PA/IA) Supported Supported
Tru64 UNIX 5.1A³ Supported Supported
IBM® AIX® 5.3 Supported Supported
z/Linux (SUSE 8, SUSE 9) on
Z990
Supported Supported
Microsoft Windows 2008(R2) on x86/
EM64T/IA64
Supported Not supported
Red Hat RHEL AS/ES3.0, RHEL 5.x/6.x Supported Supported
Notes:
1. Service packs (SP), update programs, or patch programs are not considered as
requirements if they are not listed.
2. For details about IPv6 support, see About platforms supporting IPv6 (on
page 29) .
3. Performed by typing the IP address directly.
Requirements and restrictions for CCI on z/Linux
In the following example, z/Linux defines the open volumes that are connected to FCP
as /dev/sd*. Also, the mainframe volumes (3390-xx) that are connected to FICON® are
defined as /dev/dasd*.
The following figure is an example of a CCI configuration on z/Linux.
The restrictions for using CCI with z/Linux are:
SSB information. SSB information might not be displayed correctly.
Command device. CCI uses a SCSI Path-through driver to access the command
device. As such, the command device must be connected through FCP adaptors.
Open Volumes via FCP. Same operation as the other operating systems.
Mainframe (3390-9A) Volumes via FICON®. You cannot control the volumes
(3390-9A) that are directly connected to FICON® for ShadowImage pair operations.
Also, mainframe volumes must be mapped to a CHF(FCP) port to access target
volumes using a command device, as shown in the above figure. The mainframe
volume does not have to be connected to an FCP adaptor.
Note: ShadowImage supports only 3390-9A multiplatform volumes.
TrueCopy and Universal Replicator do not support multiplatform volumes
(including 3390-9A) via FICON®.
Volume discovery via FICON®. When you discover volume information, the inqraid
command uses SCSI inquiry. Mainframe volumes connected by FICON® do not
support the SCSI interface. Because of this, information equivalent to SCSI inquiry is
obtained through the mainframe interface (Read_device_characteristics or
Read_conguration_data), and the available information is displayed similarly as the
open volume. As a result, information displayed by executing the inqraid command
cannot be obtained, as shown below. Only the last ve digits of the FICON® volume's
serial number, which is displayed by the inqraid command, are displayed.
sles8z:/HORCM/usr/bin# ls /dev/dasd* | ./inqraid
/dev/dasda -> [ST] Unknown Ser = 1920 LDEV = 4 [HTC ]
[0704_3390_0A]
/dev/dasdaa -> [ST] Unknown Ser = 62724 LDEV =4120 [HTC ]
[C018_3390_0A]
/dev/dasdab -> [ST] Unknown Ser = 62724 LDEV =4121 [HTC ]
[C019_3390_0A]
sles8z:/HORCM/usr/bin# ls /dev/dasd* | ./inqraid -CLI
DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
dasda - 1920 4 - - 00C0 -
0704_3390_0A
dasdaa - 62724 4120 - - 9810 -
C018_3390_0A
dasdab - 62724 4121 - - 9810 - C019_3390_0A
The inqraid command displays only the five-digit number at the end of the serial
number of the FICON® volume.
In the previous example, the Product_ID, C019_3390_0A, has the following associations:
C019: Serial number
3390: System type
0A: System model
The following commands cannot be used because there is no PORT information:
raidscan -pd <raw_device>
raidar -pd <raw_device>
raidvchkscan -pd <raw_device>
raidscan -find
raidscan -find conf
mkconf
Requirements and restrictions for CCI on VM
Restrictions for VMware ESX Server
Whether CCI can run properly depends on VMware support of the guest OS. In
addition, the guest OS depends on VMware support of the virtual hardware (HBA). Therefore,
you must use a guest OS that is supported by both VMware and CCI (such as Windows Server 2003,
Red Hat Linux, or SUSE Linux), and you must follow the restrictions below
when using CCI on VMware.
The following figure shows the CCI configuration on guest OS/VMware.
The restrictions for using CCI with VMware are:
Guest OS. CCI must run on a guest OS that is supported by both CCI and VMware
(for example, Windows, Red Hat Linux). For specific support
information, refer to the Hitachi Vantara interoperability matrix at
https://support.hitachivantara.com.
Command device. CCI uses the SCSI path-through driver to access the command device.
Therefore, the command device must be mapped as Raw Device Mapping using
Physical Compatibility Mode. At least one command device must be assigned for each
guest OS.
CCI instance numbers must be different among guest OSs, even if a
command device is assigned to each guest OS, because the command device cannot
distinguish among guest OSs that use the same WWN as VMHBA.
About invisible LUN. Assigned LUN for the guest OS must be visible from SCSI
Inquiry when VMware (host OS) is started. For example, the S-VOL on VSS is used as
Read Only and Hidden, and this S-VOL is hidden from SCSI Inquiry. If VMware (host
OS) is started on this volume state, the host OS will hang.
LUN sharing between Guest and Host OS. It is not supported to share a command
device or a normal LUN between guest OS and host OS.
About running on SVC. The ESX Server 3.0 SVC (service console) is a limited
distribution of Linux based on Red Hat Enterprise Linux 3, Update 6 (RHEL 3 U6). The
service console provides an execution environment to monitor and administer the
entire ESX Server host. The CCI user can run CCI by installing "CCI for Linux" on SVC.
The volume mapping (/dev/sd) on SVC is a physical connection without converting
SCSI Inquiry, so CCI will perform like running on Linux regardless of guest OS.
However, VMware protects the service console with a firewall. According to current
documentation, the firewall allows only PORT# 902, 80, 443, 22 (SSH), ICMP (ping),
DHCP, and DNS as defaults, so the CCI user must enable a PORT for CCI (HORCM) using
the iptables command.
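For example, if the HORCM instance on the service console uses UDP port 31001 (a sample port number only; use the port defined for your instance), a rule such as the following could be added to allow HORCM traffic:
iptables -A INPUT -p udp --dport 31001 -j ACCEPT
Refer to the VMware and iptables documentation for how to make such a rule persistent across reboots.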
Restrictions for Windows Hyper-V (Windows 2012/2008)
Whether CCI can run properly depends on Hyper-V support of the guest OS, and the
guest OS in turn depends on how Hyper-V supports front-end SCSI
interfaces.
The following figure shows the CCI configuration on Hyper-V.
The restrictions for using CCI on Hyper-V are:
Guest OS. CCI must run on a guest OS that is supported by both CCI and Hyper-V
(for example, Windows Server 2012, SUSE Linux). For specific
support information, refer to the interoperability matrix at
https://support.hitachivantara.com.
Command device. CCI uses the SCSI path-through driver to access the command
device. Therefore, the command device must be mapped as a RAW device of the path-
through disk. At least one command device must be assigned for each guest OS (Child
Partition).
CCI instance numbers must be different among guest OSs, even if a command
device is assigned to each guest OS. This is because
the command device cannot distinguish among the guest OSs, because
the same WWN via Fscsi is used.
LUN sharing between guest OS and console OS. It is not possible to share a
command device or a normal LUN between a guest OS and a console OS.
Running CCI on console OS. The console OS (management OS) is a limited Windows,
like Windows 2008/2012 Server Core, and the Windows standard driver is used. Also
the console OS provides an execution environment to monitor and administer the
entire Hyper-V host.
Therefore, you can run CCI by installing "CCI for Windows NT" on the console OS. In
that case, the CCI instance numbers on the console OS and the guest OS must
be different, even if a command device is assigned to each console OS
and guest OS.
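As a sketch of this rule, the console OS and a guest OS could start HORCM with different instance numbers as shown below (instance numbers 10 and 11 and the default installation path are examples only):
On the console OS:  C:\HORCM\etc> horcmstart 10
On the guest OS:    C:\HORCM\etc> horcmstart 11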
Restrictions for Oracle VM
Whether Command Control Interface can run properly depends on the guest OS
supported by Oracle VM.
The restrictions for using CCI with Oracle VM are:
Guest OS. CCI must use the guest OS supported by CCI and the guest OS supported
by Oracle VM.
Command device. You cannot connect a Fibre Channel command device directly
to the guest OS. If you have to execute commands by the in-band method, you must
configure the system as shown in the following figure (a sample HORCM_CMD entry
appears after this list).
In this configuration, CCI on the guest domain (CCI#1 to CCI#n) transfers the
command to another CCI on the control domain (CCI#0) by the out-of-band method.
CCI#0 executes the command by the in-band method, and then transfers the result to
CCI#1 to CCI#n. CCI#0 fulfills the same role as a virtual command device in the
SVP/GUM/CCI server.
Volume mapping. Volumes on the guest OS must be mapped physically to the LDEVs
on the disk machine.
System disk. If you specify the OS system disk as an object of copying, the OS might
not start on the system disk of the copy destination.
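The following HORCM_CMD entry is a minimal sketch of such an out-of-band connection from a guest-domain instance to the control-domain instance (CCI#0), assuming a hypothetical control-domain IP address of 192.168.0.100 and UDP port 31001; substitute the address and port used in your environment.
HORCM_CMD
#dev_name dev_name dev_name
\\.\IPCMD-192.168.0.100-31001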
About platforms supporting IPv6
Library and system call for IPv6
CCI uses the following functions of the IPv6 library to obtain and convert a hostname to an
IPv6 address.
IPv6 library to resolve hostname and IPv6 address:
getaddrinfo()
inet_pton()
inet_ntop()
Socket System call to communicate using UDP/IPv6:
socket(AF_INET6)
bind(), sendmsg(), sendto(), rcvmsg(), recvfrom()
If CCI linked the above functions statically in the object (exe), a core dump might occur on an old
platform (for example, Windows NT, HP-UX 10.20, Solaris 5) that does not support them. Therefore, CCI links
the above functions dynamically, resolving the symbols after determining whether the
shared library and functions for IPv6 exist. Whether CCI can support IPv6 depends on
whether the platform supports it. If the platform does not support the IPv6 library, then CCI
uses its own internal functions corresponding to inet_pton() and inet_ntop(); in this
case, a hostname cannot be used to specify an IPv6 address.
The following figure shows the library and system call for IPv6.
Environment variables for IPv6
CCI loads and links the library for IPv6 by specifying a PATH as follows:
For Windows systems: Ws2_32.dll
For HP-UX (PA/IA) systems: /usr/lib/libc.sl
However, CCI might need to specify a different PATH to use the library for IPv6. For this
reason, CCI also supports the following environment variables for specifying a
PATH:
$IPV6_DLLPATH (valid for only HP-UX, Windows): This variable is used to change the
default PATH for loading the Library for IPv6. For example:
export IPV6_DLLPATH=/usr/lib/hpux32/lib.so
horcmstart.sh 10
$IPV6_GET_ADDR: This variable is used to change "AI_PASSIVE" value as default for
specifying to the getaddrinfo() function for IPv6. For example:
export IPV6_GET_ADDR=9
horcmstart.sh 10
HORCM start-up log for IPv6
The support level of the IPv6 feature depends on the platform and OS version. In certain OS
environments, CCI cannot perform IPv6 communication completely, so CCI logs
whether the OS environment supports the IPv6 feature.
/HORCM/log/curlog/horcm_HOST NAME.log
*****************************************************************
- HORCM STARTUP LOG - Fri Aug 31 19:09:24 2007
******************************************************************
19:09:24-cc2ec-02187- horcmgr started on Fri Aug 31 19:09:24 2007
:
:
19:09:25-3f3f7-02188- ***** starts Loading library for IPv6 ****
[ AF_INET6 = 26, AI_PASSIVE = 1 ]
19:09:25-47ca1-02188- dlsym() : Symbl = 'getaddrinfo' : dlsym: symbol
"getaddrinfo" not found in "/etc/horcmgr"
getaddrinfo() : Unlinked on itself
inet_pton() : Linked on itself
inet_ntop() : Linked on itself
19:09:25-5ab3e-02188- ****** finished Loading library *******
:
HORCM set to IPv6 ( INET6 value = 26)
:
Startup procedures using detached process on DCL for
OpenVMS
Procedure
1. Create the shareable Logical name for RAID if undefined initially.
CCI needs to dene the physical device ($1$DGA145…) as either DG* or DK* or GK*
by using the show device and DEFINE/SYSTEM commands, but then does not
need to be mounted in CCI version 01-12-03/03 or earlier.
$ show device
Device Device Error Volume Free Trans Mnt
Name Status Count Label Blocks Count Cnt
$1$DGA145: (VMS4) Online 0
$1$DGA146: (VMS4) Online 0
:
:
$1$DGA153: (VMS4) Online 0
$
$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:
2. Dene the CCI environment in LOGIN.COM.
You need to dene the Path for the CCI commands to DCL$PATH as the foreign
command. See the section about Automatic Foreign Commands in the OpenVMS
user documentation.
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:
[horcm.etc]
If CCI and HORCM are executing in different jobs (different terminals), then you must
redefine LNM$TEMPORARY_MAILBOX in the LNM$PROCESS_DIRECTORY table as
follows:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
3. Discover and describe the command device on SYS$POSIX_ROOT:
[etc]horcm0.conf.
$ inqraid DKA145-151 -CLI
DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
DKA145 CL1-H 30009 145 - - - - OPEN-9-CM
DKA146 CL1-H 30009 146 - s/S/ss 0004 5:01-11 OPEN-9
DKA147 CL1-H 30009 147 - s/P/ss 0004 5:01-11 OPEN-9
DKA148 CL1-H 30009 148 - s/S/ss 0004 5:01-11 OPEN-9
DKA149 CL1-H 30009 149 - s/P/ss 0004 5:01-11 OPEN-9
DKA150 CL1-H 30009 150 - s/S/ss 0004 5:01-11 OPEN-9
DKA151 CL1-H 30009 151 - s/P/ss 0004 5:01-11 OPEN-9
SYS$POSIX_ROOT:[etc]horcm0.conf
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
127.0.0.1 30001 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
DKA145
You will have to start HORCM without a description for HORCM_DEV and
HORCM_INST because the target ID and LUN are unknown. You can easily determine
the mapping of a physical device to a logical name by using the raidscan -find
command.
4. Execute an 'horcmstart 0'.
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm0.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.out -
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.err
%RUN-S-PROC_ID, identification of created process is 00004160
5. Verify a physical mapping of the logical device.
$ HORCMINST := 0
$ raidscan -pi DKA145-151 -find
DEVICE_FILE UID S/F PORT TARG LUN SERIAL LDEV PRODUCT_ID
DKA145 0 F CL1-H 0 1 30009 145 OPEN-9-CM
DKA146 0 F CL1-H 0 2 30009 146 OPEN-9
DKA147 0 F CL1-H 0 3 30009 147 OPEN-9
DKA148 0 F CL1-H 0 4 30009 148 OPEN-9
DKA149 0 F CL1-H 0 5 30009 149 OPEN-9
DKA150 0 F CL1-H 0 6 30009 150 OPEN-9
DKA151 0 F CL1-H 0 7 30009 151 OPEN-9
$ horcmshutdown 0
inst 0:
HORCM Shutdown inst 0 !!!
6. Describe the known HORCM_DEV on SYS$POSIX_ROOT:[etc]horcm*.conf.
For horcm0.conf
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
VG01 oradb1 CL1-H 0 2 0
VG01 oradb2 CL1-H 0 4 0
VG01 oradb3 CL1-H 0 6 0
HORCM_INST
#dev_group ip_address service
VG01 HOSTB horcm1
For horcm1.conf
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
VG01 oradb1 CL1-H 0 3 0
VG01 oradb2 CL1-H 0 5 0
VG01 oradb3 CL1-H 0 7 0
HORCM_INST
#dev_group ip_address service
VG01 HOSTA horcm0
Denes the UDP port name for HORCM communication in the SYS$SYSROOT:
[000000.TCPIP$ETC]SERVICES.DAT le, as in the example below.
horcm0 30001/udp
horcm1 30002/udp
7. Start horcm0 and horcm1 as the Detached process.
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm0.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.out -
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.err
%RUN-S-PROC_ID, identification of created process is 00004160
$
$$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm1 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm1.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.out -
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.err
%RUN-S-PROC_ID, identification of created process is 00004166
You can verify that the HORCM daemon is running as a detached process by using
the show process command.
$ show process horcm0
25-MAR-2003 23:27:27.72 User: SYSTEM Process ID: 0004160
Node: VMS4 Process name:"HORCM0"
Terminal:
User Identifier: [SYSTEM]
Base priority: 4
Default file spec: Not available
Number of Kthreads: 1
Soft CPU Affinity: off
Command examples in DCL for OpenVMS
(1) Setting the environment variable by using Symbol
$ HORCMINST := 0
$ HORCC_MRCF := 1
$ raidqry -l
No Group Hostname HORCM_ver Uid Serial# Micro_ver Cache(MB)
1 --- VMS4 01-29-03/05 0 30009 50-04-00/00 8192
$
$ pairdisplay -g VG01 -fdc
Group PairVol(L/R) Device_File M,Seq#,LDEV#.P/S,Status, % ,P-LDEV# M
VG01 oradb1(L) DKA146 0 30009 146..S-VOL PAIR, 100 147 -
VG01 oradb1(R) DKA147 0 30009 147..P-VOL PAIR, 100 146 -
VG01 oradb2(L) DKA148 0 30009 148..S-VOL PAIR, 100 149 -
VG01 oradb2(R) DKA149 0 30009 149..P-VOL PAIR, 100 148 -
VG01 oradb3(L) DKA150 0 30009 150..S-VOL PAIR, 100 151 -
VG01 oradb3(R) DKA151 0 30009 151..P-VOL PAIR, 100 150 -
$
(2) Removing the environment variable
$ DELETE/SYMBOL HORCC_MRCF
$ pairdisplay -g VG01 -fdc
Group PairVol(L/R) Device_File ,Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 oradb1(L) DKA146 30009 146..SMPL ---- ------,----- ---- -
VG01 oradb1(R) DKA147 30009 147..SMPL ---- ------,----- ---- -
VG01 oradb2(L) DKA148 30009 148..SMPL ---- ------,----- ---- -
VG01 oradb2(R) DKA149 30009 149..SMPL ---- ------,----- ---- -
VG01 oradb3(L) DKA150 30009 150..SMPL ---- ------,----- ---- -
VG01 oradb3(R) DKA151 30009 151..SMPL ---- ------,----- ---- -
$
(3) Changing the default log directory
$ HORCC_LOG := /horcm/horcm/TEST
$ pairdisplay
PAIRDISPLAY: requires '-x xxx' as argument
PAIRDISPLAY: [EX_REQARG] Required Arg list
Refer to the command log (SYS$POSIX_ROOT:[HORCM.HORCM.TEST]HORCC_VMS4.LOG
(/HORCM/HORCM/TEST/horcc_VMS4.log)) for details.
(4) Turning back to the default log directory
$ DELETE/SYMBOL HORCC_LOG
(5) Specifying the device described in scandev.LIS
$ define dev_file SYS$POSIX_ROOT:[etc]SCANDEV
$ type dev_file
DKA145-150
$
$ pipe type dev_file | inqraid -CLI
DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
DKA145 CL1-H 30009 145 - - - - OPEN-9-CM
DKA146 CL1-H 30009 146 - s/S/ss 0004 5:01-11 OPEN-9
DKA147 CL1-H 30009 147 - s/P/ss 0004 5:01-11 OPEN-9
DKA148 CL1-H 30009 148 - s/S/ss 0004 5:01-11 OPEN-9
DKA149 CL1-H 30009 149 - s/P/ss 0004 5:01-11 OPEN-9
DKA150 CL1-H 30009 150 - s/S/ss 0004 5:01-11 OPEN-9
(6) Making the configuration file automatically
You can omit steps (3) to (6) of the start-up procedures by using the mkconf
command.
$ type dev_file
DKA145-150
$
$ pipe type dev_file | mkconf -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
starting HORCM inst 9
DEVICE_FILE Group PairVol PORT TARG LUN M SERIAL LDEV
DKA145 - - - - - - 30009 145
DKA146 URA URA_000 CL1-H 0 2 0 30009 146
DKA147 URA URA_001 CL1-H 0 3 0 30009 147
DKA148 URA URA_002 CL1-H 0 4 0 30009 148
DKA149 URA URA_003 CL1-H 0 5 0 30009 149
DKA150 URA URA_004 CL1-H 0 6 0 30009 150
HORCM Shutdown inst 9 !!!
Please check 'SYS$SYSROOT:[SYSMGR]HORCM9.CONF','SYS$SYSROOT:
[SYSMGR.LOG9.CURLOG]
HORCM_*.LOG', and modify 'ip_address & service'.
HORCM inst 9 finished successfully.
$
SYS$SYSROOT:[SYSMGR]horcm9.conf (/sys$sysroot/sysmgr/horcm9.conf)
# Created by mkconf on Thu Mar 13 20:08:41
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
127.0.0.1 52323 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 30009)
DKA145
# ERROR [CMDDEV] DKA145 SER = 30009 LDEV = 145 [ OPEN-9-CM ]
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# DKA146 SER = 30009 LDEV = 146 [ FIBRE FCTBL = 3 ]
URA URA_000 CL1-H 0 2 0
# DKA147 SER = 30009 LDEV = 147 [ FIBRE FCTBL = 3 ]
URA URA_001 CL1-H 0 3 0
# DKA148 SER = 30009 LDEV = 148 [ FIBRE FCTBL = 3 ]
URA URA_002 CL1-H 0 4 0
# DKA149 SER = 30009 LDEV = 149 [ FIBRE FCTBL = 3 ]
URA URA_003 CL1-H 0 5 0
# DKA150 SER = 30009 LDEV = 150 [ FIBRE FCTBL = 3 ]
URA URA_004 CL1-H 0 6 0
HORCM_INST
#dev_group ip_address service
URA 127.0.0.1 52323
(7) Using $1$* naming as the native device name
You can use the native device without the DEFINE/SYSTEM command by specifying the
$1$* naming directly.
$ inqraid $1$DGA145-155 -CLI
DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
$1$DGA145 CL2-H 30009 145 - - - - OPEN-9-CM
$1$DGA146 CL2-H 30009 146 - s/P/ss 0004 5:01-11 OPEN-9
$1$DGA147 CL2-H 30009 147 - s/S/ss 0004 5:01-11 OPEN-9
$1$DGA148 CL2-H 30009 148 0 P/s/ss 0004 5:01-11 OPEN-9
$ pipe show device | INQRAID -CLI
DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
$1$DGA145 CL2-H 30009 145 - - - - OPEN-9-CM
$1$DGA146 CL2-H 30009 146 - s/P/ss 0004 5:01-11 OPEN-9
$1$DGA147 CL2-H 30009 147 - s/S/ss 0004 5:01-11 OPEN-9
$1$DGA148 CL2-H 30009 148 0 P/s/ss 0004 5:01-11 OPEN-9
$ pipe show device | MKCONF -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
starting HORCM inst 9
DEVICE_FILE Group PairVol PORT TARG LUN M SERIAL LDEV
$1$DGA145 - - - - - - 30009 145
$1$DGA146 URA URA_000 CL2-H 0 2 0 30009 146
$1$DGA147 URA URA_001 CL2-H 0 3 0 30009 147
$1$DGA148 URA URA_002 CL2-H 0 4 0 30009 148
HORCM Shutdown inst 9 !!!
Please check 'SYS$SYSROOT:[SYSMGR]HORCM9.CONF','SYS$SYSROOT:
[SYSMGR.LOG9.CURLOG]
HORCM_*.LOG', and modify 'ip_address & service'.
HORCM inst 9 finished successfully.
$
$ pipe show device | RAIDSCAN -find
DEVICE_FILE UID S/F PORT TARG LUN SERIAL LDEV PRODUCT_ID
$1$DGA145 0 F CL2-H 0 1 30009 145 OPEN-9-CM
$1$DGA146 0 F CL2-H 0 2 30009 146 OPEN-9
$1$DGA147 0 F CL2-H 0 3 30009 147 OPEN-9
$1$DGA148 0 F CL2-H 0 4 30009 148 OPEN-9
$ pairdisplay -g BCVG -fdc
Group PairVol(L/R) Device_File M ,Seq#,LDEV#..P/S,Status, % ,P-LDEV# M
BCVG oradb1(L) $1$DGA146 0 30009 146..P-VOL PAIR, 100 147 -
BCVG oradb1(R) $1$DGA147 0 30009 147..S-VOL PAIR, 100 146 -
$
$ pairdisplay -dg $1$DGA146
Group PairVol(L/R) (Port#,TID, LU-M) ,Seq#,LDEV#..P/S,Status, Seq#,P-LDEV#
M
BCVG oradb1(L) (CL1-H,0, 2-0) 30009 146..P-VOL PAIR, 30009 147 -
BCVG oradb1(R) (CL1-H,0, 3-0) 30009 147..S-VOL PAIR, ----- 146 -
$
Start-up procedures in bash for OpenVMS
Do not use CCI through bash, because bash is not provided as an official release for
OpenVMS.
Procedure
1. Create the shareable logical name for RAID if it is not defined initially.
You must define the physical device ($1$DGA145…) as DG*, DK*, or GK* by using the
show device command and the DEFINE/SYSTEM command; the device does not need
to be mounted.
$ show device
Device Device Error Volume Free Trans Mnt
Name Status Count Label Blocks Count Cnt
$1$DGA145: (VMS4) Online 0
$1$DGA146: (VMS4) Online 0
:
:
$1$DGA153: (VMS4) Online 0
$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:
2. Dene the CCI environment in LOGIN.COM.
If CCI and HORCM are executing in dierent jobs (dierent terminal), then you must
redene LNM$TEMPORARY_MAILBOX in the LNM$PROCESS_DIRECTORY table as
follows:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
3. Discover the command device and describe it in /etc/horcm0.conf.
bash$ inqraid DKA145-151 -CLI
DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
DKA145 CL1-H 30009 145 - - - - OPEN-9-CM
DKA146 CL1-H 30009 146 - s/S/ss 0004 5:01-11 OPEN-9
DKA147 CL1-H 30009 147 - s/P/ss 0004 5:01-11 OPEN-9
DKA148 CL1-H 30009 148 - s/S/ss 0004 5:01-11 OPEN-9
DKA149 CL1-H 30009 149 - s/P/ss 0004 5:01-11 OPEN-9
DKA150 CL1-H 30009 150 - s/S/ss 0004 5:01-11 OPEN-9
DKA151 CL1-H 30009 151 - s/P/ss 0004 5:01-11 OPEN-9
/etc/horcm0.conf
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
127.0.0.1 52000 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
DKA145
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
HORCM_INST
#dev_group ip_address service
You must start HORCM without entries for HORCM_DEV and HORCM_INST because the
target ID and LUN are unknown. You can easily determine the mapping between a
physical device and a logical name by using the raidscan -find command.
4. Execute 'horcmstart 0' as a background process.
bash$ horcmstart 0 &
18
bash$
starting HORCM inst 0
5. Verify the physical mapping of the logical devices.
bash$ export HORCMINST=0
bash$ raidscan -pi DKA145-151 -find
DEVICE_FILE UID S/F PORT TARG LUN SERIAL LDEV PRODUCT_ID
DKA145 0 F CL1-H 0 1 30009 145 OPEN-9-CM
DKA146 0 F CL1-H 0 2 30009 146 OPEN-9
DKA147 0 F CL1-H 0 3 30009 147 OPEN-9
DKA148 0 F CL1-H 0 4 30009 148 OPEN-9
DKA149 0 F CL1-H 0 5 30009 149 OPEN-9
DKA150 0 F CL1-H 0 6 30009 150 OPEN-9
DKA151 0 F CL1-H 0 7 30009 151 OPEN-9
6. Describe the known HORCM_DEV entries in /etc/horcm*.conf.
For horcm0.conf
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
VG01 oradb1 CL1-H 0 2 0
VG01 oradb2 CL1-H 0 4 0
VG01 oradb3 CL1-H 0 6 0
HORCM_INST
#dev_group ip_address service
VG01 HOSTB horcm1
For horcm1.conf
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
VG01 oradb1 CL1-H 0 3 0
VG01 oradb2 CL1-H 0 5 0
VG01 oradb3 CL1-H 0 7 0
HORCM_INST
#dev_group ip_address service
VG01 HOSTA horcm0
7. Start 'horcmstart 0 1'.
The subprocess (HORCM) created by bash is terminated when bash exits.
bash$ horcmstart 0 &
19
bash$
starting HORCM inst 0
bash$ horcmstart 1 &
20
bash$
starting HORCM inst 1
Using CCI with Hitachi and other storage systems
The following table shows the two related controls between CCI and the RAID storage
system type (Hitachi or HPE). The following figure shows the relationship between the
application, CCI, and the RAID storage system.
Version                            Installation order             RAID system   Common API/CLI   XP API/CLI
CCI 01-08-03/00 or later           CCI                            Hitachi       Allowed          Cannot use (CLI options can be used)
                                                                  HPE           Allowed (1)
                                   Install CCI after              Hitachi       Allowed
                                   installing RAID Manager XP     HPE           Allowed
RAID Manager XP 01.08.00 or later  RAID Manager XP                HPE           Allowed          Allowed
(provided by HPE)                                                 Hitachi       Allowed (1)      Allowed (2)
                                   Install RAID Manager XP        HPE           Allowed          Allowed
                                   after installing CCI           Hitachi       Allowed          Allowed (2)
Notes:
1. The following common API/CLI commands are rejected with EX_ERPERM by the
connectivity check between CCI and the RAID storage system:
horctakeover, paircurchk, paircreate, pairsplit, pairresync,
pairvolchk, pairevtwait, pairdisplay, raidscan (except the -find option),
raidar, raidvchkset, raidvchkdsp, raidvchkscan
2. The following XP API/CLI commands are rejected with EX_ERPERM on the storage
system even when both CCI and RAID Manager XP (provided by HPE) are installed:
pairvolchk -s, pairdisplay -CLI, raidscan -CLI, paircreate -m
noread for TrueCopy/TrueCopy Async/Universal Replicator, paircreate -m
dif/inc for ShadowImage
Chapter 2: Installing and configuring CCI
This chapter describes and provides instructions for installing and configuring CCI.
Installing the CCI hardware
Installation of the hardware required for CCI is performed by the user and the Hitachi
Vantara representative.
Procedure
1. User:
a. Make sure that the UNIX/PC server hardware and software are properly
installed and congured. For specic support information, refer to the
interoperability matrix at https://support.hitachivantara.com.
b. If you will be performing remote replication operations (for example, Universal
Replicator, TrueCopy), identify the primary and secondary volumes, so that the
hardware and software components can be installed and configured properly.
2. Hitachi Vantara representative:
a. Connect the RAID storage systems to the hosts. See the Maintenance Manual
for the storage system and the Open-Systems Host Attachment Guide. Make sure
to set the appropriate system option modes (SOMs) and host mode options
(HMOs) for the operational environment.
b. Congure the RAID storage systems that will contain primary volumes for
replication to report sense information to the hosts.
c. Set the SVP time to the local time so that the time stamps are correct. For VSP
Gx00 models and VSP Fx00 models, use the maintenance utility to set the
system date and time to the local time.
d. Remote replication: Install the remote copy connections between the RAID
storage systems. For detailed information, see the applicable user guide (for
example, Hitachi Universal Replicator User Guide).
3. User and Hitachi Vantara representative:
a. Ensure that the storage systems are accessible via Hitachi Device Manager -
Storage Navigator. For details, see the System Administrator Guide for your
storage system.
b. (Optional) Ensure that the storage systems are accessible by the management
software (for example, Hitachi Storage Advisor, Hitachi Command Suite). For
details, see the user documentation for the software product.
c. Install and enable the applicable license key of your program product (for
example, TrueCopy, ShadowImage, LUN Manager, Universal Replicator for
Mainframe, Data Retention Utility) on the storage systems. For details about
installing license keys, see the System Administrator Guide or Storage Navigator
User Guide.
4. User: Congure the RAID storage systems for operations as described in the user
documentation. For example, before you can create TrueCopy volume pairs using
CCI, you need to congure the ports on the storage systems and establish the MCU-
RCU paths.
Installing the CCI software
To install CCI, log in with "root user" or "administrator" privileges. The login user type is
determined by the operating system. You can install the CCI software on the host servers
with assistance as needed from the Hitachi Vantara representative.
The installation must be done in the following order:
1. Install the CCI software.
2. Set the command device.
3. Create the conguration denition les.
4. Set the environmental variables.
UNIX installation
If you are installing CCI from the media for the program product, use the RMinstsh and
RMuninst scripts on the program product media to automatically install and remove the
CCI software. (For LINUX/IA64 or LINUX/X64, move to the LINUX/IA64 or LINUX/X64
directory and then execute ../../RMinstsh.)
For other media, use one of the two installation methods described below. The
following instructions refer to UNIX commands that might be different on your platform.
Consult your OS documentation (for example, UNIX man pages) for platform-specific
command information.
Installing the CCI software into the root directory
Procedure
1. Insert the installation media into the proper I/O device.
2. Move to the root directory: # cd /
3. Copy all les from the installation media using the cpio command:
# cpio -idmu < /dev/XXXX
where XXXX = I/O device
Preserve the directory structure (d flag) and file modification times (m flag), and
copy unconditionally (u flag).
Installing the CCI software
Chapter 2: Installing and conguring CCI
Command Control Interface Installation and Conguration Guide 42
4. Execute the CCI installation command:
# /HORCM/horcminstall.sh
5. Verify installation of the proper version using the raidqry command:
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-40-03/03
Usage: raidqry [options]
Installing the CCI software into a non-root directory
Procedure
1. Insert the installation media into the proper I/O device.
2. Move to the desired directory for CCI. The specified directory must be mounted on a
partition other than the root disk or on an external disk.
# cd /Specified Directory
3. Copy all les from the installation media using the cpio command:
# cpio -idmu < /dev/XXXX XXXX = I/O device
Preserve the directory structure (d flag) and file modification times (m flag), and
copy unconditionally (u flag).
4. Make a symbolic link for /HORCM:
# ln -s /Specified Directory/HORCM /HORCM
5. Execute the CCI installation command:
# /HORCM/horcminstall.sh
6. Verify installation of the proper version using the raidqry command:
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-40-03/03
Usage: raidqry [options]
Changing the CCI user (UNIX systems)
Just after installation, CCI can be operated only by the root user. To operate CCI with a
different user assigned for CCI management, you need to change the owner of the CCI
directories and the owner's privileges, specify environment variables, and so on. Use the
following procedure to change the configuration so that a different user can operate CCI.
Procedure
1. Change the owner of the following CCI files from the root user to the desired user
name (a minimal command sketch follows this procedure):
/HORCM/etc/horcmgr
All CCI commands in the /HORCM/usr/bin directory
/HORCM/log directory
All CCI log directories in the /HORCM/log* directories
/HORCM/.uds directory
2. Give the newly assigned user the privilege of writing to the following CCI directories:
/HORCM/log
/HORCM/log* (when the /HORCM/log* directory exists)
/HORCM (when the /HORCM/log* directory does not exist)
3. Change the owner of the raw device file of the HORCM_CMD (control device)
command device in the configuration definition file from the root user to the
desired user name.
4. Optional: Establish the HORCM (/etc/horcmgr) start environment: If you designate
the full set of environment variables (HORCM_LOG, HORCM_LOGS), then start the
horcmstart.sh command without an argument. In this case, the HORCM_LOG and
HORCM_LOGS directories must be owned by the CCI administrator. Set the
environment variables (HORCMINST, HORCM_CONF) as needed.
5. Optional: Establish the command execution environment: If you designate the
HORCC_LOG environment variable, then the HORCC_LOG directory must be owned
by the CCI administrator. Set the HORCMINST environment variable as needed.
6. Establish the UNIX domain socket: If the user who executes the CCI commands is
different from the user who started HORCM, a system administrator needs to change
the owner of the following directory, which is created at each HORCM (/etc/horcmgr)
start-up:
/HORCM/.uds/.lcmcl directory
To reset the security of the UNIX domain socket to the OLD version, perform the following:
1. Give write permission to the /HORCM/.uds directory.
2. Set the "HORCM_EVERYCLI=1" environment variable, and then start horcmstart.sh.
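The following is a minimal command sketch of steps 1 through 3, assuming a hypothetical CCI administrator account named rmadmin, a default /HORCM installation, and an example command device /dev/rdsk/c1t66d36s2; adjust the names and device path to your environment.
# Steps 1-2: change ownership of the CCI executables, log directories, and .uds directory,
# and give the new owner write permission to the log directories
chown rmadmin /HORCM/etc/horcmgr
chown rmadmin /HORCM/usr/bin/*
chown -R rmadmin /HORCM/log /HORCM/log* /HORCM/.uds
chmod -R u+w /HORCM/log /HORCM/log*
# Step 3: change ownership of the raw command device defined in HORCM_CMD (example device name)
chown rmadmin /dev/rdsk/c1t66d36s2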
Next steps
Note: A user account on a Linux system must have the "CAP_SYS_ADMIN"
and "CAP_SYS_RAWIO" privileges to use the SCSI class driver (command
device). The system administrator can apply these privileges by using the
PAM_capability module. However, if the system administrator cannot set
those user privileges, use the following method, which starts the HORCM
daemon with the root user while the CCI commands are executed by other
users.
System administrator: Place a script that starts up horcmstart.sh in the
/etc/init.d directory so that the system can start HORCM from /etc/rc.d/rc
(a minimal sketch follows this note).
Users: When the log directory is accessible only by the system
administrator, you cannot use the inqraid or raidscan -find
commands. Therefore, set the command log directory by setting the
HORCC_LOG environment variable before executing CCI commands.
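The following is a minimal sketch of such a start-up script, assuming HORCM instance 0 and the default /HORCM installation path; the script name /etc/init.d/horcm0 and the instance number are illustrative only, and your distribution might require additional init headers.
#!/bin/sh
# /etc/init.d/horcm0 (hypothetical name): start or stop HORCM instance 0 as the root user
case "$1" in
  start) /HORCM/usr/bin/horcmstart.sh 0 ;;
  stop)  /HORCM/usr/bin/horcmshutdown.sh 0 ;;
  *)     echo "Usage: $0 {start|stop}" ;;
esac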
Note: AIX® does not allow ioctl() except for the root user. CCI tries to use
ioctl(DK_PASSTHRU) or SCSI_Path_thru as much as possible; if that fails, it
falls back to RAW_IO in the conventional way. Even so, CCI might encounter
an AIX® FCP driver at the customer site that does not fully support
ioctl(DK_PASSTHRU). For this case, CCI can be forced to use RAW_IO by
defining either the following environment variable or the
/HORCM/etc/USE_OLD_IOCTL file (size=0).
Example
export USE_OLD_IOCTL=1
horcmstart.sh 10
HORCM/etc:
-rw-r--r-- 1 root root 0 Nov 11 11:12 USE_OLD_IOCTL
-r--r--r-- 1 root sys 32651 Nov 10 20:02 horcm.conf
-r-xr--r-- 1 root sys 282713 Nov 10 20:02 horcmgr
Windows installation
Use this procedure to install CCI on a Windows system.
Make sure to install CCI on all servers involved in CCI operations.
Caution:
Installing CCI on multiple drives is not recommended. If you install CCI on
multiple drives, the CCI installed on the smallest drive might be used
preferentially.
If CCI is already installed and you are upgrading the CCI version, you must
remove the installed version first and then install the new version. For
instructions, see Upgrading CCI in a Windows environment (on page 61).
Before you begin
The TCP/IP network for the Windows system must already be installed and
established.
Procedure
1. Insert the media for the product into the proper I/O device.
2. Execute Setup.exe (\program\RM\WIN_NT\RMHORC\Setup.exe or \program\RM
\WIN_NT\RMHORC_X64\Setup.exe on the CD), and follow the instructions on the
screen to complete the installation. The installation directory is HORCM (fixed value)
at the root directory.
3. Reboot the Windows server, and then start up CCI.
A warning message for security might appear at the initial start-up depending on
the OS settings. Specify "Temporarily Allow" or "Always Allow" in the dialog box.
4. Verify that the correct version of the CCI software is running on your system by
executing the raidqry command:
D:\HORCM\etc> raidqry -h
Model: RAID-Manager/WindowsNT
Ver&Rev: 01-41-03/xx
Usage: raidqry [options] for HORC
Next steps
Users who execute CCI commands need "administrator" privileges and the right to
access the log directory and the files in it. For instructions on specifying a CCI
administrator, see Changing the CCI user (Windows systems) (on page 46) .
Changing the CCI user (Windows systems)
Users who execute CCI commands need "administrator" privileges and the right to
access a log directory and the files under it. Use the following procedures to specify a
user who does not have "administrator" privileges as a CCI administrator.
Specifying a CCI administrator: system administrator tasks (on page 46)
Specifying a CCI administrator: CCI administrator tasks (on page 47)
Specifying a CCI administrator: system administrator tasks
Procedure
1. Add a user_name to the PhysicalDrive.
Add the user name of the CCI administrator to the Device objects of the command
device for HORCM_CMD in the configuration definition file. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Phys
PhysicalDrive0 -> \Device\Harddisk0\DR0
\\.\PhysicalDrive0 : changed to allow 'RMadmin'
2. Add a user_name to the Volume{GUID}.
If the CCI administrator needs to use the "-x mount/umount" option for CCI
commands, the system administrator must add the user name of the CCI
administrator to the Device objects of the Volume{GUID}. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Volume
Volume{b0736c01-9b14-11d8-b1b6-806d6172696f} -> \Device\CdRom0
\\.\Volume{b0736c01-9b14-11d8-b1b6-806d6172696f} : changed to allow
'RMadmin'
Volume{b0736c00-9b14-11d8-b1b6-806d6172696f} -> \Device\HarddiskVolume1
\\.\Volume{b0736c00-9b14-11d8-b1b6-806d6172696f} : changed to allow
'RMadmin'
3. Add user_name to the ScsiX.
If the CCI administrator needs to use the "-x portscan" option for CCI commands,
the system administrator must add the user name of the CCI administrator to the
Device objects of the ScsiX. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Scsi
Scsi0: -> \Device\Ide\IdePort0
\\.\Scsi0: : changed to allow 'RMadmin'
Scsi1: -> \Device\Ide\IdePort1
\\.\Scsi1: : changed to allow 'RMadmin '
Result
Because the ACL (Access Control List) of the Device objects is reset every time Windows
starts up, these settings for the Device objects must be applied again at each Windows
startup. The ACL settings are also required when new Device objects are created.
Specifying a CCI administrator: CCI administrator tasks
Procedure
1. Establish the HORCM (/etc/horcmgr) startup environment.
By default, the configuration definition file is placed in the following directory:
%SystemDrive%:\windows\
Because users cannot write to this directory, the CCI administrator must change the
directory by using the HORCM_CONF variable. For example:
C:\HORCM\etc\>set HORCM_CONF=C:\Documents and Settings\RMadmin
\horcm10.conf
C:\HORCM\etc\>set HORCMINST=10
C:\HORCM\etc\>horcmstart [This must be started without arguments]
The mountvol command cannot be used with user privileges; therefore, the "directory
mount" option of CCI commands, which uses the mountvol command, cannot be
executed.
The inqraid "-gvinf" option uses the %SystemDrive%:\windows\ directory, so this
option cannot be used unless the system administrator grants write access to that directory.
However, CCI can be changed from the %SystemDrive%:\windows\ directory to
the %TEMP% directory by setting the "HORCM_USE_TEMP" environment variable.
For example:
C:\HORCM\etc\>set HORCM_USE_TEMP=1
C:\HORCM\etc\>inqraid $Phys -gvinf
2. Ensure that the CCI commands and HORCM run with the same privileges. If the CCI
commands and HORCM are executed with different privileges (different users), then
the CCI commands cannot attach to HORCM (the CCI commands and HORCM are
denied communication through the Mailslot).
However, CCI does permit a HORCM connection through the "HORCM_EVERYCLI"
environment variable, as shown in the following example:
C:\HORCM\etc\>set HORCM_CONF=C:\Documents and Settings\RMadmin
\horcm10.conf
C:\HORCM\etc\>set HORCMINST=10
C:\HORCM\etc\>set HORCM_EVERYCLI=1
C:\HORCM\etc\>horcmstart [This must be started without arguments]
In this example, users who execute CCI commands must be restricted to use only
CCI commands. This can be done using the Windows "explore" or "cacls"
commands.
Installing CCI on the same PC as the storage management software
CCI is supplied with the storage management software for VSP Gx00 models and VSP
Fx00 models. Installing CCI and the storage management software on the same PC
allows you to use CCI of the appropriate version.
Caution: If CCI is already installed and you are upgrading the CCI version, you
must remove the installed version first and then install the new version. For
instructions, see Upgrading CCI installed on the same PC as the storage
management software (on page 62) .
Before you begin
The TCP/IP network for the Windows system must already be installed and
established.
Procedure
1. Right-click <storage-management-software-installation-path>\wk
\supervisor\restapi\uninstall.bat to run as administrator.
2. Install CCI in the same drive as the storage management software as follows:
a. Insert the media for the product into the proper I/O device.
b. Execute Setup.exe (\program\RM\WIN_NT\RMHORC\Setup.exe or
\program\RM\WIN_NT\RMHORC_X64\Setup.exe on the CD), and follow the
instructions on the screen to complete the installation. The installation
directory is HORCM (fixed value) at the root directory.
c. Reboot the Windows server, and then start up CCI.
A warning message for security might appear at the initial start-up depending
on the OS settings. Specify "Temporarily Allow" or "Always Allow" in the dialog
box.
d. Verify that the correct version of the CCI software is running on your system by
executing the raidqry command:
D:\HORCM\etc> raidqry -h
Model: RAID-Manager/WindowsNT
Ver&Rev: 01-41-03/xx
Usage: raidqry [options] for HORC
3. Right-click <storage-management-software-installation-path>\wk
\supervisor\restapi\install.bat to run as administrator.
OpenVMS installation
Make sure to install CCI on all servers involved in CCI operations. Establish the network
(TCP/IP), if not already established. CCI is provided as the following PolyCenter Software
Installation (PCSI) files:
HITACHI-ARMVMS-RM-V0122-2-1.PCSI
HITACHI-I64VMS-RM-V0122-2-1.PCSI
CCI also requires that POSIX_ROOT exist on the system, so you must define POSIX_ROOT
before installing the CCI software. It is recommended that you define the following
logical names for CCI in LOGIN.COM:
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT "Device:
[directory]"
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:[horcm.etc]
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
$ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
$ SET PROCESS/PARSE_STYLE=EXTENDED
where Device:[directory] is defined as SYS$POSIX_ROOT
Follow the steps below to install the CCI software on an OpenVMS system.
Procedure
1. Insert and mount the provided CD or diskette.
2. Execute the following command:
$ PRODUCT INSTALL RM /source=Device:[PROGRAM.RM.OVMS]/LOG -
_$ /destination=SYS$POSIX_ROOT:[000000]
where Device:[PROGRAM.RM.OVMS] is the directory where HITACHI-ARMVMS-RM-V0122-2-1.PCSI exists
3. Verify installation of the proper version using the raidqry command:
$ raidqry -h
Model: RAID-Manager/OpenVMS
Ver&Rev: 01-40-03/03
Usage: raidqry [options]
In-band and out-of-band operations
CCI operations can be performed using either the in-band method (all storage systems)
or the out-of-band method (VSP and later).
In-band (host-based) method. CCI commands are transferred from the client or server
to the command device in the storage system via the host Fibre-Channel or iSCSI
interface. The command device must be defined in the configuration definition file (as
shown in the figure below).
Out-of-band (LAN-based) method. CCI commands are transferred from a client PC via
the LAN. For CCI on USP V/VM, to execute a command from a client PC that is not
connected directly to a storage system, you must write a shell script to log in to a CCI
server (in-band method) via Telnet or SSH.
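For example, the following is a minimal sketch of such a wrapper, assuming a hypothetical CCI server named cciserver reachable over SSH as user rmadmin with HORCM instance 0 running on it; the host, user, group, and command shown are illustrative only.
#!/bin/sh
# Run a CCI command on the remote CCI server (in-band) from a client PC
ssh rmadmin@cciserver "export HORCMINST=0; pairdisplay -g VG01"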
For CCI on VSP and later, you can create a virtual command device on the SVP by
specifying the IP address in the configuration definition file. For CCI on VSP Gx00
models and VSP Fx00 models, you can create a virtual command device on GUM in a
storage system by specifying the IP address of the storage system.
By creating a virtual command device, you can execute the same script as the in-band
method from a client PC that is not connected directly to the storage system. CCI
commands are transferred to the virtual command device from the client PC and then
executed in storage systems.
A virtual command device can also be created on the CCI server, which is a remote CCI
installation that is connected by LAN. The location of the virtual command device
depends on the type of storage system. The following table lists the storage system
types and indicates the allowable locations of the virtual command device.
                                    Location of virtual command device
Storage system type                 SVP    GUM             CCI server
VSP Gx00 models, VSP Fx00 models    OK*    OK              OK
HUS VM                              OK     Not applicable  OK
VSP G1x00, VSP F1500                OK     Not applicable  OK
VSP                                 OK     Not applicable  OK
* CCI on the SVP must be configured as a CCI server in advance.
The following gure shows a sample system conguration with the command device and
virtual command device settings for the in-band and out-of-band methods on VSP Gx00
models, VSP Fx00 models, VSP G1x00, VSP F1500, VSP, and HUS VM.
The following gure shows a sample system conguration with the command device and
virtual command device settings for the in-band and out-of-band methods on VSP Gx00
models and VSP Fx00 models. In the following gure, CCI B is the CCI server for CCI A.
You can issue commands from CCI A to the storage system through the virtual command
device of CCI B. You can also issue commands from CCI B directly to the storage system
(without CCI A). When you issue commands directly from CCI B, CCI A is optional.
The following gure shows a sample system conguration with a CCI server connected
by the in-band method for VSP G1x00, VSP F1500, VSP, and HUS VM.
Setting up UDP ports
This section contains information about setting up strict firewalls.
If you do not have a HORCM_MON IP address in your configuration definition file, CCI
(horcm) opens the following ports on horcmstart:
For in-band or out-of-band: [31000 + horcminstance + 1]
For out-of-band: [34000 + horcminstance + 1]
If you have a HORCM_MON IP address in your configuration definition file, you need to
open the port that is defined in this entry.
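As a worked example, HORCM instance 0 uses UDP port 31001 (31000 + 0 + 1) and, for out-of-band operations, 34001; instance 1 uses 31002 and 34002. The following is a minimal sketch of opening these ports on a Linux host with iptables; it is an illustration only, and the firewall tooling and rules depend on your environment.
# Allow the HORCM UDP ports for instances 0 and 1
iptables -A INPUT -p udp --dport 31001:31002 -j ACCEPT
iptables -A INPUT -p udp --dport 34001:34002 -j ACCEPT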
Setting the command device
For in-band CCI operations, commands are issued to the command device and then
executed on the RAID storage system. The command device is a user-selected, dedicated
logical volume on the storage system that functions as the interface to the CCI software
on the host. The command device is dedicated to CCI operations and cannot be used by
any other applications. The command device accepts read and write commands that are
executed by the storage system and returns read requests to the host.
The command device can be any OPEN-V device that is accessible to the host. A LUSE
volume cannot be used as a command device. The command device uses 16 MB, and the
remaining volume space is reserved for CCI and its utilities. A Virtual LUN volume as
small as 36 MB can be used as a command device.
Note: For Solaris operations, the command device must be labeled.
Setting up UDP ports
Chapter 2: Installing and conguring CCI
Command Control Interface Installation and Conguration Guide 53
First you set the command device using Device Manager - Storage Navigator, and then
you dene the command device in the HORCM_CMD section of the conguration
denition le for the CCI instance on the attached host.
For specifying the command device and the virtual command device, you can enter up to
511 characters on a line.
Procedure
1. Make sure the device that will be set as a command device does not contain any
user data. Once a volume is set as a command device, it is inaccessible to the host.
2. Log on to Storage Navigator, and connect to the storage system on which you want
to set a command device.
3. Congure the device as needed before setting it as a command device. For example,
you can create a custom-size device that has 36 MB of storage capacity for use as a
command device. For instructions, see the Provisioning Guide for your storage
system. For Universal Storage Platform V/VM, see the Hitachi Virtual LVI/LUN User's
Guide.
4. Locate and select the device, and set the device as a command device. For
instructions, see the Provisioning Guide for your storage system. For Universal
Storage Platform V/VM, see the Hitachi LUN Manager User's Guide.
If you plan to use the CCI Data Protection Facility, enable the command device
security attribute of the command device. For details about the CCI Data Protection
Facility, see the Command Control Interface User and Reference Guide.
If you plan to use CCI commands for provisioning (raidcom commands), enable the
user authentication attribute of the command device.
If you plan to use device groups, enable the device group denition attribute of the
command device.
5. Write down the system raw device name (character-type device file name) of the
command device (for example, /dev/rdsk/c0t0d1s2 in Solaris, \\.\CMD-Ser#-
ldev#-Port# in Windows). You will need this information when you define the
command device in the configuration definition file.
6. If you want to set an alternate command device, repeat this procedure for another
volume.
7. If you want to enable dual pathing of the command device under Solaris systems,
include all paths to the command device on a single line in the HORCM_CMD section
of the conguration denition le.
The following example shows the two controller paths (c1 and c2) to the command
device. Putting the path information on separate lines might cause parsing issues,
and failover might not occur unless the HORCM startup script is restarted on the
Solaris system.
Example of dual path for command device for Solaris systems:
HORCM_CMD
#dev_name dev_name dev_name
/dev/rdsk/c1t66d36s2 /dev/rdsk/c2t66d36s2
Specifying the command device and virtual command device in the configuration
definition file
If you will execute commands by the in-band method to a command device on the
storage system, specify the LU path for the command device in the configuration
definition file. The command device in the storage system specified by the LU path
accepts the commands from the client and executes the operation.
If you will execute commands by the out-of-band method, specify the virtual command
device in the conguration denition le. The virtual command device is dened by the
IP address of the SVP or GUM, the UDP communication port number (xed at 31001),
and the storage system unit ID* in the conguration denition le. When a virtual
command device is used, the command is transferred from the client or server via LAN
to the virtual command device specied by the IP address of the SVP, and an operation
instruction is assigned to the storage system.
* The storage system unit ID is required only for configurations with multiple storage
systems.
The following examples show how a command device and a virtual command device are
specied in the conguration denition le. For details, see the Command Control
Interface User and Reference Guide.
Example of command device in configuration definition file (in-band method)
HORCM_CMD
#dev_name dev_name dev_name
\\.\CMD-64015:/dev/rdsk/*
Example of virtual command device in configuration definition file (out-of-band
method with SVP)
Example for SVP IP address 192.168.1.100 and UDP communication port number 31001:
HORCM_CMD
#dev_name dev_name dev_name
\\.\IPCMD-192.168.1.100-31001
Example of virtual command device in configuration definition file (out-of-band
method with GUM)
Example for GUM IP addresses 192.168.0.16, 192.168.0.17 and UDP communication port
numbers 31001, 31002. In this case, enter the IP addresses without line feed.
HORCM_CMD
#dev_name dev_name dev_name
\\.\IPCMD-192.168.0.16-31001 \\.\IPCMD-192.168.0.17-31001 \\.\IPCMD-192.168.0.16-31002 \\.\IPCMD-192.168.0.17-31002
About alternate command devices
If CCI receives an error notification in reply to a read or write request to a command
device, the CCI software can switch to an alternate command device, if one is defined. If a
command device is unavailable (for example, blocked due to online maintenance), you
can switch to an alternate command device manually. If no alternate command device is
defined or available, all commands terminate abnormally, and the host cannot issue CCI
commands to the storage system. To ensure that CCI operations continue when a
command device becomes unavailable, you should set one or more alternate command
devices.
Because the use of alternate I/O pathing depends on the platform, restrictions are
placed upon it. For example, on HP-UX systems only devices subject to the LVM can use
the alternate path PV-LINK. To prevent command device failure, CCI supports an
alternate command device function.
Denition of alternate command devices. To use an alternate command device,
dene two or more command devices for the HORCM_CMD item in the conguration
denition le. When two or more devices are dened, they are recognized as
alternate command devices. If an alternate command device is not dened in the
conguration denition le, CCI cannot switch to the alternate command device.
Timing of alternate command devices. When HORCM receives an error
notification in reply from the operating system via the raw I/O interface, the
command device is alternated. It is also possible to alternate the command device
forcibly by issuing the alternating command provided by TrueCopy (horcctl -C).
Operation of alternating command. If the command device is blocked due to online
maintenance (for example, microcode replacement), the alternating command should
be issued in advance. When the alternating command is issued again after
completion of the online maintenance, the previous command device is activated
again.
Multiple command devices on HORCM startup. If one or more command devices
are specified in the configuration definition file and at least one of them is available,
HORCM starts by using an available command device and writes a warning message
to the startup log. Confirm that all command devices can be switched by using the
horcctl -C command option, or that HORCM started without a warning message in
the HORCM startup log.
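For example, the following HORCM_CMD sketch defines two command devices so that the second serves as an alternate; the Solaris-style device names are illustrative only, and you can force a switch between them with horcctl -C as noted above.
HORCM_CMD
#dev_name               dev_name               dev_name
/dev/rdsk/c1t66d36s2    /dev/rdsk/c1t66d37s2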
The following gure shows the workow for the alternate command device function.
Creating and editing the configuration definition file
The conguration denition le is a text le that is created and edited using any standard
text editor (for example, UNIX vi editor, Windows Notepad). The conguration denition
le denes correspondences between the server and the volumes used by the server.
There is a conguration denition le for each host server. When the CCI software starts
up, it refers to the denitions in the conguration denition le.
The conguration denition le denes the devices in copy pairs and is used for host
management of the copy pairs, including ShadowImage, ShadowImage for Mainframe,
TrueCopy, TrueCopy for Mainframe, Copy-on-Write Snapshot, Thin Image, Universal
Replicator, and Universal Replicator for Mainframe. ShadowImage, ShadowImage for
Mainframe, Copy-on-Write Snapshot, and Thin Image use the same conguration les
and commands, and the RAID storage system determines the type of copy pair based on
the S-VOL characteristics and (for Copy-on-Write Snapshot and Thin Image) the pool
type.
The conguration denition le contains the following sections:
HORCM_MON: Denes information about the local host.
HORCM_CMD: Denes information about the command (CMD) devices.
HORCM_VCMD: Denes information about the virtual storage machine.
HORCM_DEV or HORCM_LDEV: Denes information about the copy pairs.
HORM_INST or INSTP: Denes information about the remote host.
HORCM_LDEVG: Denes information about the device group.
HORCM_ALLOW_INST: Denes information about user permissions.
A sample conguration denition le, HORCM_CONF (/HORCM/etc/horcm.conf), is
included with the CCI software. This le should be used as the basis for creating your
conguration denition les. The system administrator should make a copy of the
sample le, set the necessary parameters in the copied le, and place the le in the
proper directory.
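As a quick orientation, the following minimal sketch shows how the most common sections fit together for one instance; the group name, serial number, LDEV number, and service ports are placeholders, and HORCM_LDEV is shown as the LDEV-based alternative to the HORCM_DEV examples earlier in this guide. See the Command Control Interface User and Reference Guide for the authoritative format of each section.
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
127.0.0.1     52000     1000         3000
HORCM_CMD
#dev_name
\\.\CMD-30009:/dev/rdsk/*
HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
VG01         oradb1     30009     01:40            0
HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        52001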
The following table lists the configuration parameters defined in the horcm.conf file and
specifies the default value, type, and limit for each parameter. For details about
parameters in the configuration file, see the Command Control Interface User and
Reference Guide.
Parameter                 Default   Type                                Limit
ip_address                None      Character string                    63 characters
service                   None      Character string or numeric value   15 characters
poll (10 ms)              1000      Numeric value*                      None
timeout (10 ms)           3000      Numeric value*                      None
dev_name for HORCM_DEV    None      Character string                    31 characters
dev_group                 None      Character string                    31 characters (recommended value = 8 characters or less)
port #                    None      Character string                    31 characters
target ID                 None      Numeric value*                      7 characters
LU#                       None      Numeric value*                      7 characters
MU#                       0         Numeric value*                      7 characters
Serial#                   None      Numeric value*                      12 characters
CU:LDEV(LDEV#)            None      Numeric value                       6 characters
dev_name for HORCM_CMD    None      Character string                    63 characters (recommended value = 8 characters or less)
*Use decimal notation (not hexadecimal) for these numeric values.
Notes on editing configuration definition file
Follow the notes below when editing the configuration definition file.
Do not edit the conguration denition le while CCI is running. Shut down CCI, edit
the conguration le as needed, and then restart CCI. When you change the system
conguration, it is required to shut down CCI once and rewrite the conguration
denition le to match with the change and then restart CCI. When you change the
storage system conguration (microprogram, cache capacity, LU path, and so on), you
must restart CCI regardless of the necessity of the conguration denition le editing.
When you restart CCI, conrm that there is no contradiction in the connection
conguration by using the "-c" option of the pairdisplay command and the
raidqry command. However, you cannot conrm the consistency of the P-VOL and
S-VOL capacity with the "-c" option of pairdisplay command. Conrm the capacity
of each volume by using the raidcom command.
Do not mix pairs created with the "At-Time Split" option (-m grp) and pairs created
without this option in the same group defined in the CCI configuration file. If you do, a
pairsplit operation might end abnormally, or S-VOLs of the P-VOLs in the same
consistency group (CTG) might not be created correctly at the time the pairsplit
request is received.
If the hardware conguration is changed during the time an OS is running in Linux,
the name of a special le corresponding to the command device might be changed. At
this time, if HORCM was started by specifying the special le name in the
conguration denition le, HORCM cannot detect the command device, and the
communication with the storage system might fail.
To prevent this failure, specify the path name allocated by udev to the conguration
denition le before booting HORCM. Use the following procedure to specify the path
name. In this example, the path name for /dev/sdgh can be found.
1. Find the special le name of the command device by using inqraid command.
Command example:
[root@myhost ~]# ls /dev/sd* | /HORCM/usr/bin/inqraid -CLI |
grep CM sda CL1-B 30095 0 - - 0000 A:00000 OPEN-V-CM sdgh CL1-
A 30095 0 - - 0000 A:00000 OPEN-V-CM [root@myhost ~]#
2. Find the path name from the by-path directory. Command example:
[root@myhost ~]# ls -l /dev/disk/by-path/ | grep sdgh
lrwxrwxrwx. 1 root root 10 Jun 11 17:04 2015 pci-0000:08:00.0-fc-0x50060e8010311940-lun-0 -> ../../sdgh
[root@myhost ~]#
In this example, "pci-0000:08:00.0-fc-0x50060e8010311940-lun-0" is the path
name.
3. Enter the path name under HORCM_CMD in the configuration definition file as
follows:
HORCM_CMD
/dev/disk/by-path/pci-0000:08:00.0-fc-0x50060e8010311940-lun-0
4. Boot the HORCM instance as usual.
Notes on editing conguration denition le
Chapter 2: Installing and conguring CCI
Command Control Interface Installation and Conguration Guide 59
Chapter 3: Upgrading CCI
To upgrade the CCI software, use the RMuninst scripts on the media for the program
product. For other media, use the instructions in this chapter to upgrade the CCI
software. The instructions might be different on your platform. Consult your
operating system documentation (for example, UNIX man pages) for platform-specific
command information.
Upgrading CCI in a UNIX environment
Use the RMinstsh script on the media for the program product to upgrade the CCI
software to a later version.
For other media, use the following instructions to upgrade the CCI software to a later
version. The following instructions refer to UNIX commands that might be different on
your platform. Consult your operating system documentation (for example, UNIX
man pages) for platform-specific command information.
Follow the steps below to update the CCI software version on a UNIX system.
Procedure
1. Conrm that HORCM is not running. If it is running, shut it down.
One CCI instance: # horcmshutdown.sh
Two CCI instances: # horcmshutdown.sh 0 1
If CCI commands are running in the interactive mode, terminate the interactive
mode and exit these commands using the -q option.
2. Insert the installation media into the proper I/O device. Use the RMinstsh
(RMINSTSH) under the ./program/RM directory on the CD for the installation. For
LINUX/IA64 and LINUX/X64, execute ../../RMinstsh after moving to LINUX/IA64
or LINUX/X64 directory.
3. Move to the directory containing the HORCM directory (for example, # cd / for the
root directory).
4. Copy all les from the installation media using the cpio command: # cpio -idmu
< /dev/XXXX
where XXXX = I/O device. Preserve the directory structure (d flag) and file
modification times (m flag), and copy unconditionally (u flag).
5. Execute the CCI installation command. # /HORCM/horcminstall.sh
6. Verify installation of the proper version using the raidqry command.
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-29-03/05
Usage: raidqry [options]
Next steps
After upgrading CCI, ensure that the CCI user is appropriately set for the upgraded/
installed les. For instructions, see Changing the CCI user (UNIX systems) (on page 43) .
Upgrading CCI in a Windows environment
Use this procedure to upgrade the CCI software version on a Windows system.
To upgrade the CCI version, you must first remove the installed CCI version and then
install the new CCI version.
Caution: When you upgrade the CCI software, the sample script file is
overwritten. If you have edited the sample script file and want to keep your
changes, first back up the edited sample script file, and then restore the data
of the sample script file using the backup file after the upgrade installation.
For details about the sample script file, see the Command Control Interface
User and Reference Guide.
Procedure
1. You can upgrade the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions.
2. Remove the installed CCI software using the Windows Control Panel.
For example, on a Windows 7 system:
a. Open the Control Panel.
b. Under Programs, click Uninstall a program.
c. In the program list, select RAID Manager for WindowsNT, and then click
Uninstall.
3. Insert the installation media for the product into the proper I/O device.
4. Execute Setup.exe (\program\RM\WIN_NT\RMHORC\Setup.exe or \program\RM
\WIN_NT\RMHORC_X64\Setup.exe on the CD), and follow the instructions on the
screen to complete the installation. The installation directory is HORCM (fixed value)
at the root directory.
5. In the InstallShield window, follow the instructions on screen to install the CCI
software.
6. Reboot the Windows server, and verify that the correct version of the CCI software
is running on your system by executing the raidqry -h command.
Example:
C:\HORCM\etc>raidqry -h
Model : RAID-Manager/WindowsNT
Ver&Rev: 01-40-03/xx
Usage : raidqry [options] for HORC
Next steps
Users who execute CCI commands need "administrator" privileges and the right to
access the log directory and the files in it. For instructions on specifying a CCI
administrator, see Changing the CCI user (Windows systems) (on page 46) .
Upgrading CCI installed on the same PC as the storage
management software
If CCI is installed on the same PC as the storage management software for VSP Gx00
models and VSP Fx00 models, use this procedure to upgrade the CCI software.
To upgrade the CCI version, you must first remove the installed CCI version and then
install the new CCI version.
Note: Installing CCI on the same drive as the storage management software
allows you to use CCI of the appropriate version. If CCI and the storage
management software are installed on dierent drives, remove CCI, and then
install it on the same drive as the storage management software.
Caution: When you upgrade the CCI software, the sample script file is
overwritten. If you have edited the sample script file and want to keep your
changes, first back up the edited sample script file, and then restore the data
of the sample script file using the backup file after the upgrade installation.
For details about the sample script file, see the Command Control Interface
User and Reference Guide.
Procedure
1. You can upgrade the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions.
2. Right-click <storage-management-software-installation-path>\wk
\supervisor\restapi\uninstall.bat to run as administrator.
3. Remove the installed CCI software using the Windows Control Panel.
For example, on a Windows 7 system:
a. Open the Control Panel.
b. Under Programs, click Uninstall a program.
c. In the program list, select RAID Manager for WindowsNT, and then click
Uninstall.
4. Insert the installation media for the product into the proper I/O device.
5. Execute Setup.exe (\program\RM\WIN_NT\RMHORC\Setup.exe or \program\RM
\WIN_NT\RMHORC_X64\Setup.exe on the CD), and follow the instructions on the
screen to complete the installation. The installation directory is HORCM (fixed value)
at the root directory.
Make sure to select the drive on which the storage management software is
installed.
6. In the InstallShield window, follow the instructions on screen to install the CCI
software.
7. Reboot the Windows server, and verify that the correct version of the CCI software
is running on your system by executing the raidqry -h command.
Example:
C:\HORCM\etc>raidqry -h
Model : RAID-Manager/WindowsNT
Ver&Rev: 01-40-03/xx
Usage : raidqry [options] for HORC
8. Right-click <storage-management-software-installation-path>\wk
\supervisor\restapi\install.bat to run as administrator.
Next steps
Users who execute CCI commands need "administrator" privileges and the right to
access the log directory and the files in it. For instructions on specifying a CCI
administrator, see Changing the CCI user (Windows systems) (on page 46) .
Upgrading CCI in an OpenVMS environment
Follow the steps below to update the CCI software version on an OpenVMS system:
Procedure
1. You can upgrade the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions:
One HORCM instance: $ horcmshutdown
Two HORCM instances: $ horcmshutdown 0 1
When a command is being used in interactive mode, terminate it using the -q option.
2. Insert and mount the provided installation media.
3. Execute the following command:
$ PRODUCT INSTALL CCI /source=Device:[PROGRAM.CCI.OVMS]/LOG
where Device:[PROGRAM.CCI.OVMS] is the directory where HITACHI-ARMVMS-CCI-V0122-2-1.PCSI exists
4. Verify installation of the proper version using the raidqry command.
$ raidqry -h
Model: CCI/OpenVMS
Ver&Rev: 01-29-03/05
Usage: raidqry [options]
Chapter 4: Removing CCI
This chapter describes and provides instructions for removing the CCI software.
Removing CCI in a UNIX environment
Removing the CCI software on UNIX using RMuninst
Use this procedure to remove the CCI software on a UNIX system using the RMuninst
script on the installation media.
Before you begin
If you are discontinuing local or remote copy operations (for example, ShadowImage,
TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
If you will continue copy operations (for example, using Storage Navigator), do not
delete any volume pairs.
Procedure
1. If CCI commands are running in the interactive mode, use the -q option to
terminate the interactive mode and exit horcmshutdown.sh commands.
2. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown.sh command to ensure a normal end to
all functions:
One CCI instance: # horcmshutdown.sh
Two CCI instances: # horcmshutdown.sh 0 1
3. Use the RMuninst script on the CCI installation media to remove the CCI software.
4. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Conrm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.
Removing the CCI software manually on UNIX
If you do not have the installation media for CCI, use this procedure to remove the CCI
software manually on a UNIX system.
Before you begin
If you are discontinuing local or remote copy operations (for example, ShadowImage,
TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
If you will continue copy operations (for example, using Storage Navigator), do not
delete any volume pairs.
Procedure
1. If CCI commands are running in the interactive mode, use the -q option to
terminate the interactive mode and exit horcmshutdown.sh commands.
2. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown.sh command to ensure a normal end to
all functions:
One CCI instance: # horcmshutdown.sh
Two CCI instances: # horcmshutdown.sh 0 1
3. When HORCM is installed in the root directory (/HORCM is not a symbolic link),
remove the CCI software as follows:
a. Execute the horcmuninstall command: # /HORCM/horcmuninstall.sh
b. Move to the root directory: # cd /
c. Delete the product using the rm command: # rm -rf /HORCM
Example
#/HORCM/horcmuninstall.sh
#cd /
#rm -rf /HORCM
4. When HORCM is not installed in the root directory (/HORCM is a symbolic link),
remove the CCI software as follows:
a. Execute the horcmuninstall command: # HORCM/horcmuninstall.sh
b. Move to the root directory: # cd /
c. Delete the symbolic link for /HORCM: # rm /HORCM
d. Delete the product using the rm command: # rm -rf /Directory/HORCM
Example
#/HORCM/horcmuninstall.sh
#cd /
#rm /HORCM
#rm -rf /<non-root_directory_name>/HORCM
5. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.
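To decide whether step 3 or step 4 of this procedure applies, check whether /HORCM is a symbolic link. The following is a minimal sketch that uses standard UNIX commands and does not depend on CCI.
# ls -ld /HORCM               <-- a symbolic link is reported as "lrwxrwxrwx ... /HORCM -> <directory>/HORCM"
# test -L /HORCM && echo "symbolic link (use step 4)" || echo "directory (use step 3)"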
Removing CCI on a Windows system
Use this procedure to remove the CCI software on a Windows system.
Before you begin
If you are discontinuing local or remote copy operations (for example, ShadowImage,
TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
If you will continue copy operations (for example, using Storage Navigator), do not
delete any volume pairs.
Procedure
1. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions (a check for remaining HORCM processes is sketched after this procedure):
One CCI instance: D:\HORCM\etc > horcmshutdown
Two CCI instances: D:\HORCM\etc > horcmshutdown 0 1
2. Remove the CCI software using the Windows Control Panel.
For example, perform the following steps on a Windows 7 system:
a. Open the Control Panel.
b. Under Programs, click Uninstall a program.
c. In the program list, select RAID Manager for WindowsNT, and then click
Uninstall.
3. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.
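Before starting the uninstaller, you can confirm that no HORCM processes remain. The following is a sketch using standard Windows commands; filtering on the string "horcm" is an assumption and might need adjusting for your environment.
D:\HORCM\etc > horcmshutdown 0 1              <-- stop the running instances (omit the numbers for one instance)
D:\HORCM\etc > tasklist | findstr /i horcm    <-- no output indicates that no HORCM processes are running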
Removing CCI installed on the same PC as the storage
management software
If CCI is installed on the same PC as the storage management software for VSP Gx00
models and VSP Fx00 models, use this procedure to remove the CCI software.
Before you begin
If you are discontinuing local or remote copy operations (for example, ShadowImage,
TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
If you will continue copy operations (for example, using Storage Navigator), do not
delete any volume pairs.
Procedure
1. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions:
One CCI instance: D:\HORCM\etc > horcmshutdown
Two CCI instances: D:\HORCM\etc > horcmshutdown 0 1
2. Right-click <storage-management-software-installation-path>\wk\supervisor
\restapi\uninstall.bat and select Run as administrator (an equivalent
command-prompt sketch follows this procedure).
3. Remove the CCI software using the Windows Control Panel.
For example, perform the following steps on a Windows 7 system:
a. Open the Control Panel.
b. Under Programs, click Uninstall a program.
c. In the program list, select RAID Manager for WindowsNT, and then click
Uninstall.
4. Perform the procedure for upgrading the storage management software, the SVP
software, and the firmware.
5. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.
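As an alternative to right-clicking the batch file in step 2, you can run it from a command prompt that was opened with Run as administrator. The following sketch keeps the installation path placeholder from step 2; only the prompts shown are illustrative.
C:\> cd <storage-management-software-installation-path>\wk\supervisor\restapi
C:\...\restapi> uninstall.bat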
Removing CCI on an OpenVMS system
Use this procedure to remove the CCI software on an OpenVMS system.
Before you begin
If you are discontinuing local or remote copy operations (for example, ShadowImage,
TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
If you will continue copy operations (for example, using Storage Navigator), do not
delete any volume pairs.
Procedure
1. If CCI commands are running in interactive mode, use the -q option to
terminate the interactive mode and exit the commands before executing the
horcmshutdown command.
2. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions:
For one instance: $ horcmshutdown
For two instances: $ horcmshutdown 0 1
3. Remove the installed CCI software by using the following command (a combined
sketch with an optional verification step follows this procedure):
$ PRODUCT REMOVE RM /LOG
4. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.
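The following DCL sketch combines steps 2 and 3 for two instances and then lists the installed product to confirm the removal. The instance numbers are illustrative, and the listing step is an optional check, not part of the documented procedure.
$ horcmshutdown 0 1
$ PRODUCT REMOVE RM /LOG
$ PRODUCT SHOW PRODUCT RM     <-- after removal, the RM product should no longer be listed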
Chapter 5: Troubleshooting for CCI installation
If you have a problem installing or upgrading the CCI software, make sure that all system
requirements and restrictions have been met (see System requirements for CCI (on
page 13)).
If you are unable to resolve an error condition, contact customer support for assistance.
Contacting support
If you need to call customer support, please provide as much information about the
problem as possible, including:
The circumstances surrounding the error or failure.
The content of any error messages displayed on the host systems.
The content of any error messages displayed by Device Manager - Storage Navigator.
The Device Manager - Storage Navigator configuration information (use the Dump
Tool).
The service information messages (SIMs), including reference codes and severity
levels, displayed by Device Manager - Storage Navigator.
The customer support staff is available 24 hours a day, seven days a week. To contact
technical support, log on to Hitachi Vantara Support Connect for contact information:
https://support.hitachivantara.com/en_us/contact-us.html.
Appendix A: Fibre-to-SCSI address conversion
Disks connected with Fibre Channel appear as SCSI disks on UNIX hosts and can be
used in the same way as SCSI disks. CCI converts Fibre-Channel physical
addresses to SCSI target IDs (TIDs) using a conversion table.
Fibre/FCoE-to-SCSI address conversion
The following figure shows an example of Fibre-to-SCSI address conversion.
For iSCSI, the AL_PA is the fixed value 0xFE.
The following table lists the limits for target IDs (TIDs) and LUNs.
Port    HP-UX, other systems    Solaris systems         Windows systems
        TID        LUN          TID        LUN          TID        LUN
Fibre   0 to 15    0 to 1023    0 to 125   0 to 1023    0 to 31    0 to 1023
SCSI    0 to 15    0 to 7       0 to 15    0 to 7       0 to 15    0 to 7
Conversion table for Windows
The conversion table for Windows is based on conversion by an Emulex driver. If the
Fibre Channel adapter is different (for example, Qlogic, HPE), the target ID that is
indicated by the raidscan command might be different from the target ID on the
Windows host.
The following shows an example of using the raidscan command to display the TID and
LUN of Harddisk6 (HP driver). You must start HORCM without the descriptions of
HORCM_DEV or HORCM_INST in the configuration definition file because the TIDs
and LUNs are not yet known.
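A minimal configuration definition file for this situation contains only the HORCM_MON and HORCM_CMD sections. The following sketch is illustrative: the ip_address, service (or port number), poll and timeout values, and the physical drive number of the command device are assumptions and must match your environment.
HORCM_MON
#ip_address     service     poll(10ms)     timeout(10ms)
localhost       horcm0      1000           3000

HORCM_CMD
#dev_name (the command device; PhysicalDrive0 is an assumption)
\\.\PhysicalDrive0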
Using raidscan to display TID and LUN for FC devices
C:\>raidscan -pd hd6 -x drivescan hd6
Harddisk 6... Port[ 2] PhId[ 4] TId[ 3] Lun[ 5] [HITACHI ] [OPEN-3
]
Port[CL1-J] Ser#[ 30053] LDEV#[ 14(0x00E)]
HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 1- 2] SSID = 0x0004
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-
LDEV#
CL1-J / e2/4, 29, 0.1(9).............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 1.1(10)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 2.1(11)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 3.1(12)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 4.1(13)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 5.1(14)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 6.1(15)............SMPL ---- ------ ----, ----- ----
Specified device is LDEV# 0014
In this case, the target ID indicated by the raidscan command must be used in the
conguration denition le. This can be accomplished using either of the following two
methods:
Using the default conversion table: Use the TID# and LU# indicated by the
raidscan command in the HORCM configuration definition file (TID=29 LUN=5 in the
example above).
Changing the default conversion table: Change the default conversion table using
the HORCMFCTBL environmental variable (TID=3 LUN=5 in the following example).
Using HORCMFCTBL to change the default fibre conversion table
C:\>set HORCMFCTBL=X <-- X=fibre conversion table #
C:\>horcmstart ... <-- Start of HORCM.
:
:
Result of "set HORCMFCTBL=X" command:
C:\>raidscan -pd hd6 -x drivescan hd6
Harddisk 6... Port[ 2] PhId[ 4] TId[ 3] Lun[ 5] [HITACHI ] [OPEN-3
]
Port[CL1-J] Ser#[ 30053] LDEV#[ 14(0x00E)]
HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 1- 2] SSID = 0x0004
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S,Status,Fence,LDEV#,P-Seq#,P-
LDEV#
CL1-J / e2/0, 3, 0.1(9).............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 1.1(10)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 2.1(11)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 3.1(12)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 4.1(13)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 5.1(14)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 6.1(15)............SMPL ---- ------ ----, ----- ----
Specified device is LDEV# 0014
LUN configurations on the RAID storage systems
The RAID storage systems (9900V and later) manage the LUN configuration on a port
through the LUN security as shown in the following figure.
CCI uses absolute LUNs to scan a port, whereas the LUNs on a group are mapped to the
host system so that the TID and LUN indicated by the raidscan command are different
from the TID and LUN displayed by the host system. In this case, the TID and LUN
indicated by the raidscan command should be used.
In the following example, you must start HORCM without a description for HORCM_DEV
and HORCM_INST because the TID and LUN are not known. Use the port, TID, and LUN
displayed by the raidscan -find or raidscan -find conf command for
HORCM_DEV (see the example for displaying the port, TID, and LUN using raidscan).
For details about LUN discovery based on a host group, see Host Group Control in the
Command Control Interface User and Reference Guide.
Displaying the port, TID, and LUN using raidscan
# ls /dev/rdsk/* | raidscan -find
DEVICE_FILE UID S/F PORT TARG LUN SERIAL LDEV PRODUCT_ID
/dev/rdsk/c0t0d4 0 S CL1-M 0 4 31168 216 OPEN-3-CVS-CM
/dev/rdsk/c0t0d1 0 S CL1-M 0 1 31168 117 OPEN-3-CVS
/dev/rdsk/c1t0d1 - - CL1-M - - 31170 121 OPEN-3-CVS
UID: Displays the UnitID for multiple RAID configuration. A hyphen (-) is displayed when
the command device for HORCM_CMD is not found.
S/F: S indicates that the port is SCSI, and F indicates that the port is Fibre Channel.
LUN congurations on the RAID storage systems
Appendix A: Fibre-to-SCSI address conversion
Command Control Interface Installation and Conguration Guide 74
PORT: Displays the RAID storage system port number.
TARG: Displays the target ID (converted by the fibre conversion table).
LUN: Displays the logical unit number (converted by the fibre conversion table).
SERIAL: Displays the production number (serial#) of the RAID storage system.
LDEV: Displays the LDEV# within the RAID storage system.
PRODUCT_ID: Displays the product-id field in the STD inquiry page.
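As an illustration, the second device in the output above (DEVICE_FILE /dev/rdsk/c0t0d1, PORT CL1-M, TARG 0, LUN 1) could be described in the configuration definition file as follows. The group name, device name, remote ip_address, and service name are assumptions chosen for this sketch.
HORCM_DEV
#dev_group     dev_name     port#     TargetID     LU#     MU#
grp0           dev0         CL1-M     0            1       0

HORCM_INST
#dev_group     ip_address     service
grp0           localhost      horcm1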
Fibre address conversion tables
Following are the fibre address conversion tables:
Table number 0 = HP-UX systems
Table number 1 = Solaris systems
Table number 2 = Windows systems
The conversion table for Windows systems is based on the Emulex driver. If a different
Fibre-Channel adapter is used, the target ID indicated by the raidscan command might
be different from the target ID indicated by the Windows system.
Note: Table 3 (for other platforms) is used to indicate the LUN without a target
ID when the FC_AL conversion table is unknown or when a Fibre-Channel fabric
(Fibre-Channel worldwide name) is used. In this case, the target ID is always
zero, so Table 3 is not described in this document. Table 3 is used as the default
for platforms other than those listed above. If the host uses the WWN notation
for the device files, change this table number by using the $HORCMFCTBL
variable.
If the TID displayed on the system is different from the TID indicated in the fibre
conversion table, you must use the TID (or LU#) returned by the raidscan command to
specify the device(s).
Fibre address conversion table for HP-UX systems (Table 0)
C0         C1         C2         C3         C4         C5         C6         C7
AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID
EF 0 CD 0 B2 0 98 0 72 0 55 0 3A 0 25 0
E8 1 CC 1 B1 1 97 1 71 1 54 1 39 1 23 1
E4 2 CB 2 AE 2 90 2 6E 2 53 2 36 2 1F 2
E2 3 CA 3 AD 3 8F 3 6D 3 52 3 35 3 1E 3
E1 4 C9 4 AC 4 88 4 6C 4 51 4 34 4 1D 4
E0 5 C7 5 AB 5 84 5 6B 5 4E 5 33 5 1B 5
DC 6 C6 6 AA 6 82 6 6A 6 4D 6 32 6 18 6
DA 7 C5 7 A9 7 81 7 69 7 4C 7 31 7 17 7
D9 8 C3 8 A7 8 80 8 67 8 4B 8 2E 8 10 8
D6 9 BC 9 A6 9 7C 9 66 9 4A 9 2D 9 0F 9
D5 10 BA 10 A5 10 7A 10 65 10 49 10 2C 10 08 10
D4 11 B9 11 A3 11 79 11 63 11 47 11 2B 11 04 11
D3 12 B6 12 9F 12 76 12 5C 12 46 12 2A 12 02 12
D2 13 B5 13 9E 13 75 13 5A 13 45 13 29 13 01 13
D1 14 B4 14 9D 14 74 14 59 14 43 14 27 14 - -
CE 15 B3 15 9B 15 73 15 56 15 3C 15 26 15 - -
Fibre address conversion table for Solaris systems (Table 1)
C0         C1         C2         C3         C4         C5         C6         C7
AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID  AL-PA TID
EF 0 CD 16 B2 32 98 48 72 64 55 80 3A 96 25 112
E8 1 CC 17 B1 33 97 49 71 65 54 81 39 97 23 113
E4 2 CB 18 AE 34 90 50 6E 66 53 82 36 98 1F 114
E2 3 CA 19 AD 35 8F 51 6D 67 52 83 35 99 1E 115
E1 4 C9 20 AC 36 88 52 6C 68 51 84 34 100 1D 116
E0 5 C7 21 AB 37 84 53 6B 69 4E 85 33 101 1B 117
DC 6 C6 22 AA 38 82 54 6A 70 4D 86 32 102 18 118