Command Control Interface
01-46-03/02

Installation and Configuration Guide
This document describes and provides instructions for installing the Command Control Interface (CCI)
software for the Hitachi RAID storage systems, including upgrading and removing CCI.

MK-90RD7008-22
March 2018

© 2010, 2018 Hitachi, Ltd. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and
recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or
Hitachi Vantara Corporation (collectively “Hitachi”). Licensee may make copies of the Materials provided that any such copy is: (i) created as an
essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not
make any other copies of the Materials. “Materials” mean text, data, photographs, graphics, audio, video and documents.
Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials
contain the most current information available at the time of publication.
Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Vantara Corporation at https://support.hitachivantara.com/en_us/contact-us.html.
Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of
Hitachi products is governed by the terms of your agreements with Hitachi Vantara Corporation.
By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other individuals; and
2. Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.

Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including
the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader
agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or
import the Document and any Compliant Products.
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries.
AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, IBM, Lotus, MVS, OS/390, PowerPC, RS/6000,
S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z/VM, and z/VSE are registered trademarks or trademarks of International Business
Machines Corporation.
Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS,
Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo,
Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks
of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
All other trademarks, service marks, and company names in this document or website are properties of their respective owners.


Contents
Preface..................................................................................................... 7
Intended audience............................................................................................... 7
Product version....................................................................................................7
Release notes......................................................................................................7
Changes in this revision.......................................................................................8
Referenced documents........................................................................................8
Document conventions........................................................................................ 8
Conventions for storage capacity values........................................................... 10
Accessing product documentation..................................................................... 11
Getting help........................................................................................................12
Comments..........................................................................................................12

Chapter 1: Installation requirements for Command Control Interface.................. 13
System requirements for CCI.............................................................................13
CCI operating environment................................................................................17
Platforms that use CCI................................................................................. 17
Applicable platforms for CCI on VM ............................................................ 20
Supported platforms for IPv6........................................................................22
Requirements and restrictions for CCI on z/Linux............................................. 22
Requirements and restrictions for CCI on VM................................................... 25
Restrictions for VMware ESX Server............................................................25
Restrictions for Windows Hyper-V (Windows 2012/2008)............................26
Restrictions for Oracle VM............................................................................28
About platforms supporting IPv6........................................................................29
Library and system call for IPv6................................................................... 29
Environment variables for IPv6.....................................................................29
HORCM start-up log for IPv6........................................................................30


Startup procedures using detached process on DCL for OpenVMS................. 30
Command examples in DCL for OpenVMS..................................................33
Start-up procedures in bash for OpenVMS........................................................37
Using CCI with Hitachi and other storage systems............................................39

Chapter 2: Installing and configuring CCI.......................................... 41
Installing the CCI hardware............................................................................... 41
Installing the CCI software.................................................................................42
UNIX installation...........................................................................................42
Installing the CCI software into the root directory................................... 42
Installing the CCI software into a non-root directory............................... 43
Changing the CCI user (UNIX systems)................................................. 43
Windows installation.....................................................................................45
Changing the CCI user (Windows systems)........................................... 46
Installing CCI on the same PC as the storage management software ........ 48
OpenVMS installation...................................................................................49
In-band and out-of-band operations............................................................. 50
Setting up UDP ports.............................................................................. 53
Setting the command device........................................................................ 53
Specifying the command device and virtual command device in the configuration definition file...................... 55
About alternate command devices..........................................................56
Creating and editing the configuration definition file.....................................57
Notes on editing configuration definition file........................................... 59

Chapter 3: Upgrading CCI.................................................................... 60
Upgrading CCI in a UNIX environment.............................................................. 60
Upgrading CCI in a Windows environment........................................................ 61
Upgrading CCI installed on the same PC as the storage management software............................................. 62
Upgrading CCI in an OpenVMS environment.................................................... 63

Chapter 4: Removing CCI.....................................................................65
Removing CCI in a UNIX environment.............................................................. 65
Removing the CCI software on UNIX using RMuninst............................... 65


Removing the CCI software manually on UNIX........................................... 66
Removing CCI on a Windows system................................................................67
Removing CCI installed on the same PC as the storage management software ............................................ 68
Removing CCI on an OpenVMS system........................................................... 69

Chapter 5: Troubleshooting for CCI installation................................ 71
Contacting support.............................................................................................71

Appendix A: Fibre-to-SCSI address conversion................................ 72
Fibre/FCoE-to-SCSI address conversion...........................................................72
LUN configurations on the RAID storage systems............................................ 74
Fibre address conversion tables........................................................................75

Appendix B: Sample configuration definition files............................79
Sample configuration definition files.................................................................. 79
Configuration file parameters....................................................................... 80
HORCM_MON........................................................................................ 81
HORCM_CMD (in-band method)............................................................ 81
HORCM_CMD (out-of-band method)......................................................86
HORCM_VCMD...................................................................................... 88
HORCM_DEV......................................................................................... 89
HORCM_INST........................................................................................ 92
HORCM_INSTP...................................................................................... 95
HORCM_LDEV....................................................................................... 96
HORCM_LDEVG.................................................................................... 96
HORCM_ALLOW_INST..........................................................................97
Examples of CCI configurations........................................................................ 97
Example of CCI commands for TrueCopy remote configuration.................. 97
Example of CCI commands for TrueCopy local configuration....................102
Example of CCI commands for TrueCopy configuration with two instances................................................ 106
Example of CCI commands for ShadowImage configuration..................... 110
Example of CCI commands for ShadowImage cascade configuration.......118
Example of CCI commands for TC/SI cascade configuration.................... 122


Correspondence of the configuration definition file for cascading volume and mirror descriptors...................... 127
Configuration definition files for cascade configurations..................................129
Configuration definition files for ShadowImage cascade configuration...... 129
Configuration definition files for TrueCopy/ShadowImage cascade configuration ....................................... 131

Index................................................................................................. 135


Preface
This document describes and provides instructions for installing the Command Control
Interface (CCI) software for the Hitachi RAID storage systems, including upgrading and
removing CCI.
Please read this document carefully to understand how to use this product, and maintain
a copy for your reference.

Intended audience
This document is intended for system administrators, Hitachi Vantara representatives,
and authorized service providers who install, configure, and use the Command Control
Interface software for the Hitachi RAID storage systems.
Readers of this document should be familiar with the following:
■ Data processing and RAID storage systems and their basic functions.
■ The Hitachi RAID storage systems and the manual for the storage system (for example, the Hardware Guide for your storage system).
■ The management software for the storage system (for example, Hitachi Command Suite, Hitachi Device Manager - Storage Navigator, Storage Navigator) and the applicable user manuals (for example, Hitachi Command Suite User Guide; System Administrator Guide for VSP, HUS VM, USP V/VM).
■ The host systems attached to the Hitachi RAID storage systems.

Product version
This document revision applies to the Command Control Interface software version
01-46-03/02 or later.

Release notes
Read the release notes before installing and using this product. They may contain
requirements or restrictions that are not fully described in this document or updates or
corrections to this document. Release notes are available on Hitachi Vantara Support
Connect: https://knowledge.hitachivantara.com/Documents.


Changes in this revision
■ Added support information for Windows 8.1 and Windows 10 (Platforms that use CCI (on page 17), Requirements and restrictions for CCI on Windows 8.1 and Windows 10).
■ Added instructions for disabling the command device settings after removing CCI.
■ Removed restrictions for number of instances per command device.

Referenced documents
Command Control Interface documents:
■ Command Control Interface Command Reference, MK-90RD7009
■ Command Control Interface User and Reference Guide, MK-90RD7010

Storage system documents:
■ Hardware Guide or User and Reference Guide for the storage system
■ Open-Systems Host Attachment Guide, MK-90RD7037
■ Hitachi Command Suite User Guide, MK-90HC172
■ System Administrator Guide or Storage Navigator User Guide for the storage system
■ Hitachi Device Manager - Storage Navigator Messages for the storage system
■ Provisioning Guide for the storage system (VSP Gx00 models, VSP Fx00 models, VSP G1x00, VSP F1500, VSP, HUS VM)
■ LUN Manager User Guide and Virtual LVI/LUN User Guide for the storage system (USP V/VM)

Document conventions
This document uses the following storage system terminology conventions:

Convention       Description
VSP G series     Refers to the following storage systems:
                 ■ Hitachi Virtual Storage Platform G1x00
                 ■ Hitachi Virtual Storage Platform G200
                 ■ Hitachi Virtual Storage Platform G400
                 ■ Hitachi Virtual Storage Platform G600
                 ■ Hitachi Virtual Storage Platform G800


VSP F series     Refers to the following storage systems:
                 ■ Hitachi Virtual Storage Platform F1500
                 ■ Hitachi Virtual Storage Platform F400
                 ■ Hitachi Virtual Storage Platform F600
                 ■ Hitachi Virtual Storage Platform F800
VSP Gx00 models  Refers to all of the following models, unless otherwise noted:
                 ■ Hitachi Virtual Storage Platform G200
                 ■ Hitachi Virtual Storage Platform G400
                 ■ Hitachi Virtual Storage Platform G600
                 ■ Hitachi Virtual Storage Platform G800
VSP Fx00 models  Refers to all of the following models, unless otherwise noted:
                 ■ Hitachi Virtual Storage Platform F400
                 ■ Hitachi Virtual Storage Platform F600
                 ■ Hitachi Virtual Storage Platform F800

This document uses the following typographic conventions:

Convention   Description
Bold         ■ Indicates text in a window, including window titles, menus, menu options, buttons, fields, and labels. Example: Click OK.
             ■ Indicates emphasized words in list items.
Italic       ■ Indicates a document title or emphasized words in text.
             ■ Indicates a variable, which is a placeholder for actual text provided by the user or for output by the system. Example: pairdisplay -g group
               (For exceptions to this convention for variables, see the entry for angle brackets.)
Monospace    Indicates text that is displayed on screen or entered by the user. Example: pairdisplay -g oradb


Convention           Description
< > angle brackets   Indicates variables in the following scenarios:
                     ■ Variables are not clearly separated from the surrounding text or from other variables. Example: Status-<report-name>.csv
                     ■ Variables in headings.
[ ] square brackets  Indicates optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.
{ } braces           Indicates required or expected values. Example: { a | b } indicates that you must choose either a or b.
| vertical bar       Indicates that you have a choice between two or more options or arguments. Examples:
                     [ a | b ] indicates that you can choose a, b, or nothing.
                     { a | b } indicates that you must choose either a or b.

This document uses the following icons to draw attention to information:

Label     Description
Note      Calls attention to important or additional information.
Tip       Provides helpful information, guidelines, or suggestions for performing tasks more effectively.
Caution   Warns the user of adverse conditions and/or consequences (for example, disruptive operations, data loss, or a system crash).
WARNING   Warns the user of a hazardous situation which, if not avoided, could result in death or serious injury.

Conventions for storage capacity values
Physical storage capacity values (for example, disk drive capacity) are calculated based
on the following values:


Physical capacity unit   Value
1 kilobyte (KB)          1,000 (10^3) bytes
1 megabyte (MB)          1,000 KB or 1,000^2 bytes
1 gigabyte (GB)          1,000 MB or 1,000^3 bytes
1 terabyte (TB)          1,000 GB or 1,000^4 bytes
1 petabyte (PB)          1,000 TB or 1,000^5 bytes
1 exabyte (EB)           1,000 PB or 1,000^6 bytes

Logical capacity values (for example, logical device capacity, cache memory capacity) are
calculated based on the following values:
Logical capacity unit   Value
1 block                 512 bytes
1 cylinder              Mainframe: 870 KB
                        Open-systems:
                        ■ OPEN-V: 960 KB
                        ■ Others: 720 KB
1 KB                    1,024 (2^10) bytes
1 MB                    1,024 KB or 1,024^2 bytes
1 GB                    1,024 MB or 1,024^3 bytes
1 TB                    1,024 GB or 1,024^4 bytes
1 PB                    1,024 TB or 1,024^5 bytes
1 EB                    1,024 PB or 1,024^6 bytes
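For example (an illustrative calculation, not from the original manual): a logical device of 2,097,152 blocks has a capacity of 2,097,152 × 512 bytes = 1,073,741,824 bytes, which is exactly 1 GB (1,024^3 bytes) in logical capacity units.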

Accessing product documentation
Product user documentation is available on Hitachi Vantara Support Connect: https://knowledge.hitachivantara.com/Documents. Check this site for the most current documentation, including important updates that may have been made after the release of the product.


Getting help
Hitachi Vantara Support Connect is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to Hitachi Vantara Support Connect for contact information: https://support.hitachivantara.com/en_us/contact-us.html.
Hitachi Vantara Community is a global online community for Hitachi Vantara customers,
partners, independent software vendors, employees, and prospects. It is the destination
to get answers, discover insights, and make connections. Join the conversation today!
Go to community.hitachivantara.com, register, and complete your profile.

Comments
Please send us your comments on this document to
doc.comments@hitachivantara.com. Include the document title and number, including
the revision level (for example, -07), and refer to specific sections and paragraphs
whenever possible. All comments become the property of Hitachi Vantara Corporation.
Thank you!


Chapter 1: Installation requirements for Command Control Interface
The installation requirements for the Command Control Interface (CCI) software include
host requirements, storage system requirements, and requirements and restrictions for
specific operational environments.

System requirements for CCI
The following list describes the system requirements for Command Control Interface.

Command Control Interface software product
    The CCI software is supplied on the media for the product (for example, DVD-ROM). The CCI software files require 2.5 MB of space, and the log files require 3 MB of space.


Hitachi RAID storage systems
    The requirements for the RAID storage systems are:
    ■ Microcode. The availability of features and functions depends on the level of microcode installed on the storage system.
    ■ Command device. The CCI command device must be defined and accessed as a raw device (no file system, no mount operation).
    ■ License keys. The software products to be used (for example, Universal Replicator, Dynamic Tiering) must be enabled on the storage system.
    ■ System option modes. Before you begin operations, the system option modes (SOMs) must be set on the storage system by your Hitachi Vantara representative. For details about the SOMs, contact customer support.
      Note: Check the appropriate manuals (for example, Hitachi TrueCopy® for Mainframe User Guide) for SOMs that are required or recommended for your operational environment.
    ■ Hitachi software products. Make sure that your system meets the requirements for operation of the Hitachi software products. For example:
      ● TrueCopy, Universal Replicator, global-active device: Bidirectional swap must be enabled between the primary and secondary volumes. The port attributes (for example, initiator, target, RCU target) and the MCU-RCU paths must be defined.
      ● Copy-on-Write Snapshot: ShadowImage is a prerequisite for Copy-on-Write Snapshot.
      ● Thin Image: Dynamic Provisioning is a prerequisite for Thin Image.
      Note: Check the appropriate manuals (for example, Hitachi Universal Replicator User Guide) for the system requirements for your operational environment.


Host platforms
    CCI operations are supported on the following host platforms:
    ■ AIX®
    ■ HP-UX
    ■ Red Hat Enterprise Linux (RHEL)
    ■ Oracle Linux (OEL)
    ■ Solaris
    ■ SUSE Linux Enterprise Server (SLES)
    ■ Tru64 UNIX
    ■ Windows
    ■ z/Linux
    When a vendor discontinues support of a host OS version, CCI that is released at or after that time will not support that version of the host software.
    For detailed host support information (for example, OS versions), refer to the interoperability matrix at https://support.hitachivantara.com.

I/O interface
    For details about I/O interface support (Fibre, SCSI, iSCSI), refer to the interoperability matrix at https://support.hitachivantara.com.

Host access
    Root/administrator access to the host is required to perform host-based CCI operations.


Host memory
    CCI requires static memory and dynamic memory for executing the load module.
    ■ Static memory capacity: minimum 600 KB, maximum 1200 KB
    ■ Dynamic memory capacity: determined by the description of the configuration file. The minimum is:
      (number_of_unit_IDs × 200 KB) + (number_of_LDEVs × 360 B) + (number_of_entries × 180 B)
      where:
      ● number_of_unit_IDs: number of storage chassis
      ● number_of_LDEVs: number of LDEVs (each instance)
      ● number_of_entries: number of paired entries (pairs)
    Example: For a 1:3 pair configuration, use the following values for number_of_LDEVs and number_of_entries for each instance (see the worked calculation after this list):
    ■ number_of_LDEVs in the primary instance = 1
    ■ number_of_entries (pairs) in the primary instance = 3
    ■ number_of_LDEVs in the secondary instance = 3
    ■ number_of_entries (pairs) in the secondary instance = 3
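    As a worked illustration (a calculation added here, assuming a single storage chassis, that is, number_of_unit_IDs = 1), the minimum dynamic memory for the two instances above is:

    Primary instance:   (1 × 200 KB) + (1 × 360 B) + (3 × 180 B) = 200 KB + 900 B   ≈ 201 KB
    Secondary instance: (1 × 200 KB) + (3 × 360 B) + (3 × 180 B) = 200 KB + 1,620 B ≈ 202 KB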

Host disk
    ■ Capacity required for running CCI: 20 MB (varies depending on the platform: average = 20 MB, maximum = 30 MB)
    ■ Capacity of the log file that is created after CCI starts: 3000 KB (when there are no failures, including command execution errors)

IPv6, IPv4
    The minimum OS platform versions for CCI/IPv6 support are:
    ■ HP-UX: HP-UX 11.23 (PA/IA) or later
    ■ Solaris: Solaris 9/Sparc or later, Solaris 10/x86/64 or later
    ■ AIX®: AIX 5.3 or later
    ■ Windows: Windows 2008(LH)
    ■ Linux: Linux Kernel 2.4 (RH8.0) or later
    ■ Tru64: Tru64 v5.1A or later. Note that v5.1A does not support the getaddrinfo() function, so this must be specified by IP address directly.
    ■ OpenVMS: OpenVMS 8.3 or later


    UDP ports: Contact your network administrator for appropriate UDP port numbers to use in your network. The network administrator must enable these ports to allow traffic between CCI servers.

Supported guest OS for VMware
    CCI needs to use a guest OS that is supported by CCI and also a VMware-supported guest OS (for example, Windows Server 2008, Red Hat Linux, SUSE Linux). For details about guest OS support for VMware, refer to the interoperability matrix at https://support.hitachivantara.com.

Failover
    CCI supports many industry-standard failover products. For details about supported failover products, refer to the interoperability matrix at https://support.hitachivantara.com.

Volume manager
    CCI supports many industry-standard volume manager products. For details about supported volume manager products, refer to the interoperability matrix at https://support.hitachivantara.com.

High availability (HA) configurations
    The system that runs and operates TrueCopy in an HA configuration must be a duplex system having a hot standby or mutual hot standby (mutual takeover) configuration. The remote copy system must be designed for remote backup among servers and configured so that servers cannot share the primary and secondary volumes at the same time. The HA configuration does not include fault-tolerant system configurations such as Oracle Parallel Server (OPS) in which nodes execute parallel accesses. However, two or more nodes can share the primary volumes of the shared OPS database, and must use the secondary volumes as exclusive backup volumes.
    Host servers that are combined when paired logical volumes are defined should run on operating systems of the same architecture. If not, one host might not be able to recognize a paired volume of another host, even though CCI runs properly.

CCI operating environment
This section describes the supported operating systems, failover software, and I/O
interfaces for CCI. For the latest information about CCI host software version support,
refer to the interoperability matrix at https://support.hitachivantara.com.

Platforms that use CCI
The following tables list the host platforms that support CCI.
CCI can run on the OS version listed in the table or later.

For the latest information about host software version and storage system connectivity
support, contact customer support.
Note: When a vendor discontinues support of a host software version, CCI
that is released at or after that time will not support that version of the host
software.
Supported platforms for VSP G1x00, VSP F1500, VSP Gx00 models, and VSP Fx00 models

Vendor      Operating system*                               Failover software   Volume manager   I/O interface
Oracle      Solaris 9                                       First Watch         VxVM             Fibre
            Solaris 10, 11                                  –                   –                Fibre
            Solaris 10 on x86                               –                   VxVM             Fibre
            Solaris 11 on x64                               –                   –                Fibre/iSCSI
            OEL 6.x (6.2 or later)                          –                   –                Fibre/iSCSI
HP          HP-UX 11.1x                                     MC/Service Guard    LVM, SLVM        Fibre
            HP-UX 11.2x/11.3x on IA64                       MC/Service Guard    LVM, SLVM        Fibre
              (IA64: using IA-32EL on IA64, except CCI for Linux/IA64)
            Tru64 UNIX 5.0                                  TruCluster          LSM              Fibre
IBM®        AIX® 5.3, 6.1, 7.1                              HACMP               LVM              Fibre
            z/Linux (SUSE 8)                                –                   –                Fibre (FCP)
              (For details, see Requirements and restrictions for CCI on z/Linux (on page 22).)
Microsoft   Windows Server 2008/2008(R2)/2012/2012(R2)      –                   LDM              Fibre
            Windows Server 2008(R2) on IA64                 –                   LDM              Fibre
            Windows Server 2008/2012 on x64                 –                   LDM              Fibre
            Windows Server 2008(R2)/2012(R2) on x64         –                   LDM              Fibre/iSCSI
            Windows Server 2016 on x64                      –                   LDM              Fibre/iSCSI
Red Hat     RHEL AS/ES 3.0, 4.0, 5.0, 6, 7                  –                   –                Fibre
              (If you use RHEL 4.0 with kernel 2.6.9.xx, see "Deprecated SCSI ioctl" in the
              troubleshooting chapter of the Command Control Interface User and Reference Guide.)
            RHEL AS/ES 3.0 Update2, 4.0, 5.0 on x64/IA64    –                   –                Fibre
              (IA64: using IA-32EL on IA64, except CCI for Linux/IA64)
            RHEL 6 on x64                                   –                   –                Fibre/iSCSI
            RHEL 7 on x64                                   –                   –                Fibre
Novell      SLES 10, 11                                     –                   –                Fibre
(SUSE)      SLES 10 on x64                                  –                   –                Fibre
            SLES 11 on x64                                  –                   –                Fibre/iSCSI
            SLES 12 on x64                                  –                   –                Fibre

* Service packs (SP), update programs, or patch programs are not considered as requirements if they are not listed.

Supported platforms for VSP and HUS VM

Vendor      Operating system*                               Failover software   Volume manager   I/O interface
Oracle      Solaris 9                                       First Watch         VxVM             Fibre
            Solaris 10 on x86                               –                   VxVM             Fibre
            OEL 6.x                                         –                   –                Fibre
HP          HP-UX 11.1x                                     MC/Service Guard    LVM, SLVM        Fibre
            HP-UX 11.2x/11.3x on IA64                       MC/Service Guard    LVM, SLVM        Fibre
              (IA64: using IA-32EL on IA64, except CCI for Linux/IA64)
            Tru64 UNIX 5.0                                  TruCluster          LSM              Fibre
IBM®        AIX® 5.3                                        HACMP               LVM              Fibre
            z/Linux (SUSE 8)                                –                   –                Fibre (FCP)
              (For details, see Requirements and restrictions for CCI on z/Linux (on page 22).)
Microsoft   Windows 2008                                    MSCS                LDM              Fibre
            Windows 2008(R2) on IA64                        MSCS                LDM              Fibre
              (IA64: using IA-32EL on IA64, except CCI for Linux/IA64)
            Windows Server 2008/2012/2012(R2) on EM64T      MSCS                LDM              Fibre
            Windows Server 2016 on x64                      –                   LDM              Fibre
Red Hat     RHEL AS/ES 3.0, 4.0, 5.0                        –                   –                Fibre
              (If you use RHEL 4.0 with kernel 2.6.9.xx, see "Deprecated SCSI ioctl" in the
              troubleshooting chapter of the Command Control Interface User and Reference Guide.)
            RHEL AS/ES 3.0 Update2, 4.0, 5.0 on EM64T/IA64  –                   –                Fibre
              (IA64: using IA-32EL on IA64, except CCI for Linux/IA64)
Novell      SLES 10                                         –                   –                Fibre
(SUSE)

* Service packs (SP), update programs, or patch programs are not considered as requirements if they are not listed.

Applicable platforms for CCI on VM
The following table lists the applicable platforms for CCI on VM.
CCI can run on the guest OS of the version listed in the table or later. For the latest
information on the OS versions and connectivity with storage systems, contact customer
support.

VM vendor(1)                          Layer   Guest OS(2)(3)            Volume mapping    I/O interface
VMware ESX Server 2.5.1 or later      Guest   Windows Server 2008       RDM(4)            Fibre
(Linux Kernel 2.4.9)                          RHEL5.x/6.x               RDM(4)            Fibre
  (For details, see Restrictions             Solaris 10 u3 (x86)       RDM(4)            Fibre
  for VMware ESX Server (on page 25).)        SLES10 SP2                RDM(4)            Fibre
VMware ESXi 5.5                       Guest   Windows Server 2008(R2)   RDM(4)            Fibre/iSCSI
Windows Server 2008/2012 Hyper-V      Child   Windows Server 2008       Path-thru         Fibre
  (For details, see Restrictions for          SLES10 SP2                Path-thru         Fibre
  Windows Hyper-V (Windows 2012/2008)
  (on page 26).)
Hitachi Virtage (58-12)               LPAR    Windows Server 2008(R2)   Use LPAR          Fibre
                                              RHEL5.4
Oracle VM 3.1 or later                Guest   Solaris 11.1              See Restrictions for Oracle VM (on page 28)
(Oracle VM Server for SPARC)
HPVM 6.3 or later                     Guest   HP-UX 11.3                Mapping by NPIV   Fibre
IBM® VIOS 2.2.0.0                     VIOC    AIX® 7.1 TL01             Mapping by NPIV   Fibre

Notes:
1. VM must be versions listed in this table or later.
2. Service packs (SP), update programs, or patch programs are not considered as requirements if they are not listed.
3. Operations on the guest OS that is not supported by VM are not supported.
4. RDM: Raw Device Mapping using Physical Compatibility Mode is used.

Supported platforms for IPv6
The IPv6 functionality for CCI can be used on the OS versions listed in the following table
or later. For details about the latest OS versions, refer to the interoperability matrix at
https://support.hitachivantara.com.
Vendor      OS(1)                                 IPv6(2)     IPv4 mapped to IPv6
Oracle      Solaris 9/10/11                       Supported   Supported
            Solaris 10/11 on x86                  Supported   Supported
            OEL 6.x                               Supported   Supported
HP          HP-UX 11.23 (PA/IA)                   Supported   Supported
            Tru64 UNIX 5.1A(3)                    Supported   Supported
IBM®        AIX® 5.3                              Supported   Supported
            z/Linux (SUSE 8, SUSE 9) on Z990      Supported   Supported
Microsoft   Windows 2008(R2) on x86/EM64T/IA64    Supported   Not supported
Red Hat     RHEL AS/ES3.0, RHEL 5.x/6.x           Supported   Supported

Notes:
1. Service packs (SP), update programs, or patch programs are not considered as requirements if they are not listed.
2. For details about IPv6 support, see About platforms supporting IPv6 (on page 29).
3. Performed by typing the IP address directly.

Requirements and restrictions for CCI on z/Linux
In the following example, z/Linux defines the open volumes that are connected to FCP as /dev/sd*. Also, the mainframe volumes (3390-xx) that are connected to FICON® are defined as /dev/dasd*.
The following figure is an example of a CCI configuration on z/Linux.


The restrictions for using CCI with z/Linux are:
■ SSB information. SSB information might not be displayed correctly.
■ Command device. CCI uses a SCSI Path-through driver to access the command device. As such, the command device must be connected through FCP adaptors.
■ Open Volumes via FCP. Same operation as the other operating systems.

■ Mainframe (3390-9A) Volumes via FICON®. You cannot control the volumes (3390-9A) that are directly connected to FICON® for ShadowImage pair operations. Also, mainframe volumes must be mapped to a CHF(FCP) port to access target volumes using a command device, as shown in the above figure. The mainframe volume does not have to be connected to an FCP adaptor.
  Note: ShadowImage supports only 3390-9A multiplatform volumes. TrueCopy and Universal Replicator do not support multiplatform volumes (including 3390-9A) via FICON®.
■ Volume discovery via FICON®. When you discover volume information, the inqraid command uses SCSI inquiry. Mainframe volumes connected by FICON® do not support the SCSI interface. Because of this, information equivalent to SCSI inquiry is obtained through the mainframe interface (Read_device_characteristics or Read_configuration_data), and the available information is displayed similarly to that of an open volume. As a result, some information normally displayed by the inqraid command cannot be obtained, as shown below. Only the last five digits of the FICON® volume's serial number are displayed by the inqraid command.

sles8z:/HORCM/usr/bin# ls /dev/dasd* | ./inqraid
/dev/dasda  -> [ST] Unknown  Ser =  1920 LDEV =    4 [HTC ] [0704_3390_0A]
/dev/dasdaa -> [ST] Unknown  Ser = 62724 LDEV = 4120 [HTC ] [C018_3390_0A]
/dev/dasdab -> [ST] Unknown  Ser = 62724 LDEV = 4121 [HTC ] [C019_3390_0A]

sles8z:/HORCM/usr/bin# ls /dev/dasd* | ./inqraid -CLI
DEVICE_FILE  PORT  SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
dasda        -       1920     4    -  -       00C0  -        0704_3390_0A
dasdaa       -      62724  4120    -  -       9810  -        C018_3390_0A
dasdab       -      62724  4121    -  -       9810  -        C019_3390_0A

The inqraid command displays only the last five digits of the serial number of the FICON® volume.
In the previous example, the Product_ID C019_3390_0A has the following associations:
■ C019: Serial number
■ 3390: System type
■ 0A: System model

The following commands cannot be used because there is no PORT information:
■ raidscan -pd <raw_device>
■ raidar -pd <raw_device>
■ raidvchkscan -pd <raw_device>
■ raidscan -find
■ raidscan -find conf
■ mkconf

Requirements and restrictions for CCI on VM
Restrictions for VMware ESX Server
Whether CCI can run properly depends on VMware's support of the guest OS. In addition, the guest OS depends on VMware's support of virtual hardware (HBA). Therefore, a guest OS that is supported by both VMware and CCI (such as Windows Server 2003, Red Hat Linux, or SUSE Linux) must be used, and the restrictions below must be followed when using CCI on VMware.
The following figure shows the CCI configuration on guest OS/VMware.

The restrictions for using CCI with VMware are:
■ Guest OS. CCI needs to use a guest OS that is supported by CCI and also a VMware-supported guest OS (for example, Windows, Red Hat Linux). For specific support information, refer to the Hitachi Vantara interoperability matrix at https://support.hitachivantara.com.
■ Command device. CCI uses the SCSI path-through driver to access the command device. Therefore, the command device must be mapped as Raw Device Mapping using Physical Compatibility Mode. At least one command device must be assigned for each guest OS.
  CCI instance numbers among different guest OSs must be different, even if a command device is assigned to each guest OS, because the command device cannot distinguish among the guest OSs: they all present the same WWN as VMHBA.
■ About invisible LUN. An assigned LUN for the guest OS must be visible from SCSI Inquiry when VMware (host OS) is started. For example, the S-VOL on VSS is used as Read Only and Hidden, and this S-VOL is hidden from SCSI Inquiry. If VMware (host OS) is started in this volume state, the host OS will hang.
■ LUN sharing between Guest and Host OS. Sharing a command device or a normal LUN between a guest OS and the host OS is not supported.
■ About running on SVC. The ESX Server 3.0 SVC (service console) is a limited distribution of Linux based on Red Hat Enterprise Linux 3, Update 6 (RHEL 3 U6). The service console provides an execution environment to monitor and administer the entire ESX Server host. The CCI user can run CCI by installing "CCI for Linux" on the SVC. The volume mapping (/dev/sd) on the SVC is a physical connection without converting SCSI Inquiry, so CCI will perform as if running on Linux, regardless of the guest OS. However, VMware protects the service console with a firewall. According to current documentation, the firewall allows only PORT# 902, 80, 443, 22 (SSH) and ICMP (ping), DHCP, and DNS by default, so the CCI user must enable a port for CCI (HORCM) using the iptables command, as shown in the sketch after this list.
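As a minimal sketch (not from the original manual), assuming the HORCM instance on the service console uses UDP port 31001 (the port number is an assumption and must match the service defined for your instance), the firewall could be opened as follows:

# Allow inbound CCI (HORCM) traffic on the assumed UDP port 31001
iptables -I INPUT -p udp --dport 31001 -j ACCEPT
# Allow the instance's outbound replies on the same port
iptables -I OUTPUT -p udp --sport 31001 -j ACCEPT
# Persist the rules across reboots (RHEL 3-based service console)
service iptables save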

Restrictions for Windows Hyper-V (Windows 2012/2008)
Whether CCI can run properly depends on Hyper-V's support of the guest OS; the guest OS in turn depends on how Hyper-V supports front-end SCSI interfaces.
The following figure shows the CCI configuration on Hyper-V.


The restrictions for using CCI on Hyper-V are:
■ Guest OS. CCI needs to use a guest OS that is supported by CCI and also a Hyper-V-supported guest OS (for example, Windows Server 2012, SUSE Linux). For specific support information, refer to the interoperability matrix at https://support.hitachivantara.com.
■ Command device. CCI uses the SCSI path-through driver to access the command device. Therefore, the command device must be mapped as a RAW device of the path-through disk. At least one command device must be assigned for each guest OS (child partition).
  The CCI instance numbers among different guest OSs must be different even if a command device is assigned to each guest OS. This is because the command device cannot distinguish among the guest OSs: the same WWN via Fscsi is used.
■ LUN sharing between guest OS and console OS. It is not possible to share a command device or a normal LUN between a guest OS and the console OS.
■ Running CCI on the console OS. The console OS (management OS) is a limited Windows, like Windows 2008/2012 Server Core, and the Windows standard driver is used. The console OS also provides an execution environment to monitor and administer the entire Hyper-V host.
  Therefore, you can run CCI by installing "CCI for Windows NT" on the console OS. In that case, the CCI instance number on the console OS and on each guest OS must be different, even if a command device is assigned to each console and guest OS; see the sketch after this list.
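As a minimal illustration (the instance numbers 0 and 1 are arbitrary assumptions, not values from this manual), distinct HORCM instances could be started as follows:

REM On the console OS (management OS): start HORCM instance 0
horcmstart 0

REM On a guest OS (child partition): start a different instance number
horcmstart 1

Each instance reads its own configuration file (horcm0.conf and horcm1.conf, respectively), and each file must define its own command device and UDP service port.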

Restrictions for Oracle VM
Whether Command Control Interface can run properly depends on the guest OS supported by Oracle VM.
The restrictions for using CCI with Oracle VM are:
■ Guest OS. CCI must use a guest OS supported by both CCI and Oracle VM.
■ Command device. You cannot connect the command device of Fibre Channel directly to the guest OS. If you have to execute commands by the in-band method, you must configure the system as shown in the following figure.
  In this configuration, CCI on the guest domain (CCI#1 to CCI#n) transfers the command to another CCI on the control domain (CCI#0) by the out-of-band method. CCI#0 executes the command by the in-band method, and then transfers the result to CCI#1 to CCI#n. CCI#0 fulfills the same role as a virtual command device in the SVP/GUM/CCI server. (See the configuration sketch after this list.)
■ Volume mapping. Volumes on the guest OS must be mapped physically to the LDEVs on the disk machine.
■ System disk. If you specify the OS system disk as an object of copying, the OS might not start on the system disk of the copy destination.
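As a minimal sketch of the guest-domain side of this configuration (the IP address 192.168.0.100 and the UDP ports 31001/31002 are assumptions standing in for the control domain's address and the service ports of CCI#0 and the guest instance), a guest-domain instance can point its HORCM_CMD at CCI#0 using the out-of-band \\.\IPCMD notation:

# horcm1.conf on the guest domain (sketch; addresses and ports are assumptions)
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
localhost     31002     1000         3000

HORCM_CMD
# Out-of-band: route commands to CCI#0 on the control domain
\\.\IPCMD-192.168.0.100-31001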

About platforms supporting IPv6
Library and system call for IPv6
CCI uses the following functions of the IPv6 library to get and convert from hostname to IPv6 address:
■ IPv6 library to resolve hostname and IPv6 address:
  ● getaddrinfo()
  ● inet_pton()
  ● inet_ntop()
■ Socket system call to communicate using UDP/IPv6:
  ● socket(AF_INET6)
  ● bind(), sendmsg(), sendto(), rcvmsg(), recvfrom()…
If CCI linked the above functions statically into its object (exe), a core dump might occur on an old platform (for example, Windows NT, HP-UX 10.20, Solaris 5) that does not support them. Therefore, CCI links the above functions dynamically, resolving each symbol after determining whether the shared library and function for IPv6 exist. Whether CCI can support IPv6 depends on the platform's support. If the platform does not support the IPv6 library, then CCI uses its own internal functions corresponding to inet_pton() and inet_ntop(); in this case, an IPv6 address cannot be described as a hostname.
The following figure shows the library and system call for IPv6.

Environment variables for IPv6
CCI loads and links the library for IPv6 by specifying a PATH as follows:
■ For Windows systems: Ws2_32.dll
■ For HP-UX (PA/IA) systems: /usr/lib/libc.sl

However, CCI might need to specify a different PATH to use the library for IPv6. With this in mind, CCI also supports the following environment variables for specifying a PATH:
■ $IPV6_DLLPATH (valid only for HP-UX and Windows): This variable is used to change the default PATH for loading the library for IPv6. For example:
  export IPV6_DLLPATH=/usr/lib/hpux32/lib.so
  horcmstart.sh 10
■ $IPV6_GET_ADDR: This variable is used to change the "AI_PASSIVE" value used as the default when calling the getaddrinfo() function for IPv6. For example:
  export IPV6_GET_ADDR=9
  horcmstart.sh 10

HORCM start-up log for IPv6
Support level of IPv6 feature depends on the platform and OS version. In certain OS
platform environments, CCI cannot perform IPv6 communication completely, so CCI logs
the results of whether the OS environment supports the IPv6 feature or not.
/HORCM/log/curlog/horcm_HOSTNAME.log
*****************************************************************
- HORCM STARTUP LOG - Fri Aug 31 19:09:24 2007
*****************************************************************
19:09:24-cc2ec-02187- horcmgr started on Fri Aug 31 19:09:24 2007
:
19:09:25-3f3f7-02188- ***** starts Loading library for IPv6 ****
    [ AF_INET6 = 26, AI_PASSIVE = 1 ]
19:09:25-47ca1-02188- dlsym() : Symbl = 'getaddrinfo' : dlsym: symbol
    "getaddrinfo" not found in "/etc/horcmgr"
    getaddrinfo() : Unlinked on itself
    inet_pton()   : Linked on itself
    inet_ntop()   : Linked on itself
19:09:25-5ab3e-02188- ****** finished Loading library *******
:
HORCM set to IPv6 ( INET6 value = 26)
:

Startup procedures using detached process on DCL for OpenVMS

Procedure
1. Create the shareable logical name for RAID if it is undefined initially.
   CCI needs to define the physical device ($1$DGA145…) as DG*, DK*, or GK* by using the show device and DEFINE/SYSTEM commands; the device does not need to be mounted in CCI version 01-12-03/03 or earlier.
   $ show device
   Device        Device          Error   Volume   Free     Trans   Mnt
   Name          Status          Count   Label    Blocks   Count   Cnt
   $1$DGA145:    (VMS4) Online   0
   $1$DGA146:    (VMS4) Online   0
   :
   $1$DGA153:    (VMS4) Online   0
   $
   $ DEFINE/SYSTEM DKA145 $1$DGA145:
   $ DEFINE/SYSTEM DKA146 $1$DGA146:
   :
   $ DEFINE/SYSTEM DKA153 $1$DGA153:

2. Define the CCI environment in LOGIN.COM.
   You need to define the path for the CCI commands in DCL$PATH as foreign commands. See the section about Automatic Foreign Commands in the OpenVMS user documentation.
   $ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:[horcm.etc]
   If CCI and HORCM are executing in different jobs (different terminals), then you must redefine LNM$TEMPORARY_MAILBOX in the LNM$PROCESS_DIRECTORY table as follows:
   $ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
3. Discover and describe the command device on SYS$POSIX_ROOT:[etc]horcm0.conf.
   $ inqraid DKA145-151 -CLI
   DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
   DKA145       CL1-H   30009   145    -  -       -     -        OPEN-9-CM
   DKA146       CL1-H   30009   146    -  s/S/ss  0004  5:01-11  OPEN-9
   DKA147       CL1-H   30009   147    -  s/P/ss  0004  5:01-11  OPEN-9
   DKA148       CL1-H   30009   148    -  s/S/ss  0004  5:01-11  OPEN-9
   DKA149       CL1-H   30009   149    -  s/P/ss  0004  5:01-11  OPEN-9
   DKA150       CL1-H   30009   150    -  s/S/ss  0004  5:01-11  OPEN-9
   DKA151       CL1-H   30009   151    -  s/P/ss  0004  5:01-11  OPEN-9

   SYS$POSIX_ROOT:[etc]horcm0.conf
   HORCM_MON
   #ip_address   service   poll(10ms)   timeout(10ms)
   127.0.0.1     30001     1000         3000

   HORCM_CMD
   #dev_name     dev_name     dev_name
   DKA145

   You will have to start HORCM without a description for HORCM_DEV and HORCM_INST because the target ID and LUN are unknown. You can easily determine the mapping of a physical device to a logical name by using the raidscan -find command.
4. Execute 'horcmstart 0'.
   $ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 -
   _$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm0.com -
   _$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.out -
   _$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.err
   %RUN-S-PROC_ID, identification of created process is 00004160
5. Verify the physical mapping of the logical devices.
   $ HORCMINST := 0
   $ raidscan -pi DKA145-151 -find
   DEVICE_FILE  UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PRODUCT_ID
   DKA145         0    F  CL1-H     0    1   30009   145  OPEN-9-CM
   DKA146         0    F  CL1-H     0    2   30009   146  OPEN-9
   DKA147         0    F  CL1-H     0    3   30009   147  OPEN-9
   DKA148         0    F  CL1-H     0    4   30009   148  OPEN-9
   DKA149         0    F  CL1-H     0    5   30009   149  OPEN-9
   DKA150         0    F  CL1-H     0    6   30009   150  OPEN-9
   DKA151         0    F  CL1-H     0    7   30009   151  OPEN-9
   $ horcmshutdown 0
   inst 0:
   HORCM Shutdown inst 0 !!!

6. Describe the known HORCM_DEV on SYS$POSIX_ROOT:[etc]horcm*.conf.
   For horcm0.conf:
   HORCM_DEV
   #dev_group   dev_name   port#   TargetID   LU#   MU#
   VG01         oradb1     CL1-H   0          2     0
   VG01         oradb2     CL1-H   0          4     0
   VG01         oradb3     CL1-H   0          6     0
   HORCM_INST
   #dev_group   ip_address   service
   VG01         HOSTB        horcm1

   For horcm1.conf:
   HORCM_DEV
   #dev_group   dev_name   port#   TargetID   LU#   MU#
   VG01         oradb1     CL1-H   0          3     0
   VG01         oradb2     CL1-H   0          5     0
   VG01         oradb3     CL1-H   0          7     0
   HORCM_INST
   #dev_group   ip_address   service
   VG01         HOSTA        horcm0

   Define the UDP port name for HORCM communication in the SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT file, as in the example below.
   horcm0   30001/udp
   horcm1   30002/udp
7. Start horcm0 and horcm1 as detached processes.
   $ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 -
   _$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm0.com -
   _$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.out -
   _$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.err
   %RUN-S-PROC_ID, identification of created process is 00004160
   $
   $ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm1 -
   _$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm1.com -
   _$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.out -
   _$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.err
   %RUN-S-PROC_ID, identification of created process is 00004166
   You can verify that the HORCM daemon is running as a detached process by using the show process command.
   $ show process horcm0

   25-MAR-2003 23:27:27.72   User: SYSTEM   Process ID:   00004160
                             Node: VMS4     Process name: "HORCM0"

   Terminal:
   User Identifier:      [SYSTEM]
   Base priority:        4
   Default file spec:    Not available
   Number of Kthreads:   1

   Soft CPU Affinity: off
Command examples in DCL for OpenVMS
(1) Setting the environment variable by using Symbol
$ HORCMINST := 0
$ HORCC_MRCF := 1
$ raidqry -l
No  Group  Hostname  HORCM_ver    Uid  Serial#  Micro_ver    Cache(MB)
 1  ---    VMS4      01-29-03/05    0    30009  50-04-00/00       8192
$
$ pairdisplay -g VG01 -fdc
Group  PairVol(L/R)  Device_File  M ,Seq#, LDEV#.P/S, Status,  % , P-LDEV# M
VG01   oradb1(L)     DKA146       0  30009  146..S-VOL  PAIR,  100  147    -
VG01   oradb1(R)     DKA147       0  30009  147..P-VOL  PAIR,  100  146    -
VG01   oradb2(L)     DKA148       0  30009  148..S-VOL  PAIR,  100  149    -
VG01   oradb2(R)     DKA149       0  30009  149..P-VOL  PAIR,  100  148    -
VG01   oradb3(L)     DKA150       0  30009  150..S-VOL  PAIR,  100  151    -
VG01   oradb3(R)     DKA151       0  30009  151..P-VOL  PAIR,  100  150    -
$
(2) Removing the environment variable
$ DELETE/SYMBOL HORCC_MRCF
$ pairdisplay -g VG01 -fdc
Group  PairVol(L/R)  Device_File  ,Seq#, LDEV#.P/S, Status, Fence,  % , P-LDEV# M
VG01   oradb1(L)     DKA146        30009  146..SMPL  ----   ------, -----  ----  -
VG01   oradb1(R)     DKA147        30009  147..SMPL  ----   ------, -----  ----  -
VG01   oradb2(L)     DKA148        30009  148..SMPL  ----   ------, -----  ----  -
VG01   oradb2(R)     DKA149        30009  149..SMPL  ----   ------, -----  ----  -
VG01   oradb3(L)     DKA150        30009  150..SMPL  ----   ------, -----  ----  -
VG01   oradb3(R)     DKA151        30009  151..SMPL  ----   ------, -----  ----  -
$

(3) Changing the default log directory
$ HORCC_LOG := /horcm/horcm/TEST
$ pairdisplay
PAIRDISPLAY: requires '-x xxx' as argument
PAIRDISPLAY: [EX_REQARG] Required Arg list
Refer to the command log (SYS$POSIX_ROOT:[HORCM.HORCM.TEST]HORCC_VMS4.LOG (/HORCM/HORCM/TEST/horcc_VMS4.log)) for details.

(4) Turning back to the default log directory
$ DELETE/SYMBOL HORCC_LOG
(5) Specifying the device described in scandev.LIS
$ define dev_file SYS$POSIX_ROOT:[etc]SCANDEV
$ type dev_file
DKA145-150
$
$ pipe type dev_file | inqraid -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
DKA145       CL1-H   30009   145    -  -       -     -        OPEN-9-CM
DKA146       CL1-H   30009   146    -  s/S/ss  0004  5:01-11  OPEN-9
DKA147       CL1-H   30009   147    -  s/P/ss  0004  5:01-11  OPEN-9
DKA148       CL1-H   30009   148    -  s/S/ss  0004  5:01-11  OPEN-9
DKA149       CL1-H   30009   149    -  s/P/ss  0004  5:01-11  OPEN-9
DKA150       CL1-H   30009   150    -  s/S/ss  0004  5:01-11  OPEN-9

(6) Making the configuration file automatically
You can omit steps from (3) to (6) on the Start-up procedures by using the mkconf
command.
$ type dev_file
DKA145-150
$
$ pipe type dev_file | mkconf -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
starting HORCM inst 9
DEVICE_FILE  Group  PairVol  PORT   TARG  LUN  M  SERIAL  LDEV
DKA145       -      -        -      -     -    -  30009    145
DKA146       URA    URA_000  CL1-H  0     2    0  30009    146
DKA147       URA    URA_001  CL1-H  0     3    0  30009    147
DKA148       URA    URA_002  CL1-H  0     4    0  30009    148
DKA149       URA    URA_003  CL1-H  0     5    0  30009    149
DKA150       URA    URA_004  CL1-H  0     6    0  30009    150
HORCM Shutdown inst 9 !!!
Please check 'SYS$SYSROOT:[SYSMGR]HORCM9.CONF', 'SYS$SYSROOT:[SYSMGR.LOG9.CURLOG]HORCM_*.LOG', and modify 'ip_address & service'.
HORCM inst 9 finished successfully.
$
SYS$SYSROOT:[SYSMGR]horcm9.conf (/sys$sysroot/sysmgr/horcm9.conf)
# Created by mkconf on Thu Mar 13 20:08:41

HORCM_MON
#ip_address     service     poll(10ms)     timeout(10ms)
127.0.0.1       52323       1000           3000

HORCM_CMD
#dev_name       dev_name       dev_name
#UnitID 0 (Serial# 30009)
# ERROR [CMDDEV] DKA145  SER = 30009 LDEV = 145 [ OPEN-9-CM ]
DKA145

HORCM_DEV
#dev_group      dev_name       port#     TargetID     LU#     MU#
# DKA146  SER = 30009 LDEV = 146 [ FIBRE FCTBL = 3 ]
URA             URA_000        CL1-H     0            2       0
# DKA147  SER = 30009 LDEV = 147 [ FIBRE FCTBL = 3 ]
URA             URA_001        CL1-H     0            3       0
# DKA148  SER = 30009 LDEV = 148 [ FIBRE FCTBL = 3 ]
URA             URA_002        CL1-H     0            4       0
# DKA149  SER = 30009 LDEV = 149 [ FIBRE FCTBL = 3 ]
URA             URA_003        CL1-H     0            5       0
# DKA150  SER = 30009 LDEV = 150 [ FIBRE FCTBL = 3 ]
URA             URA_004        CL1-H     0            6       0

HORCM_INST
#dev_group      ip_address     service
URA             127.0.0.1      52323

(7) Using $1$* naming as the native device name
You can use the native device name without the DEFINE/SYSTEM command by specifying the $1$* naming directly.
$ inqraid $1$DGA145-155 -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
$1$DGA145    CL2-H  30009    145    -  -       -     -        OPEN-9-CM
$1$DGA146    CL2-H  30009    146    -  s/P/ss  0004  5:01-11  OPEN-9
$1$DGA147    CL2-H  30009    147    -  s/S/ss  0004  5:01-11  OPEN-9
$1$DGA148    CL2-H  30009    148    0  P/s/ss  0004  5:01-11  OPEN-9

$ pipe show device | INQRAID -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
$1$DGA145    CL2-H  30009    145    -  -       -     -        OPEN-9-CM
$1$DGA146    CL2-H  30009    146    -  s/P/ss  0004  5:01-11  OPEN-9
$1$DGA147    CL2-H  30009    147    -  s/S/ss  0004  5:01-11  OPEN-9
$1$DGA148    CL2-H  30009    148    0  P/s/ss  0004  5:01-11  OPEN-9
$ pipe show device | MKCONF -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
starting HORCM inst 9
DEVICE_FILE  Group  PairVol  PORT   TARG  LUN  M  SERIAL  LDEV
$1$DGA145    -      -        -      -     -    -  30009    145
$1$DGA146    URA    URA_000  CL2-H  0     2    0  30009    146
$1$DGA147    URA    URA_001  CL2-H  0     3    0  30009    147
$1$DGA148    URA    URA_002  CL2-H  0     4    0  30009    148
HORCM Shutdown inst 9 !!!
Please check 'SYS$SYSROOT:[SYSMGR]HORCM9.CONF', 'SYS$SYSROOT:[SYSMGR.LOG9.CURLOG]HORCM_*.LOG', and modify 'ip_address & service'.
HORCM inst 9 finished successfully.
$
$ pipe show device | RAIDSCAN -find
DEVICE_FILE  UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PRODUCT_ID
$1$DGA145      0    F  CL2-H     0    1   30009   145  OPEN-9-CM
$1$DGA146      0    F  CL2-H     0    2   30009   146  OPEN-9
$1$DGA147      0    F  CL2-H     0    3   30009   147  OPEN-9
$1$DGA148      0    F  CL2-H     0    4   30009   148  OPEN-9


$ pairdisplay -g BCVG -fdc
Group  PairVol(L/R)  Device_File  M ,Seq#, LDEV#..P/S,Status,  % ,P-LDEV# M
BCVG   oradb1(L)     $1$DGA146    0  30009  146..P-VOL PAIR,   100     147 -
BCVG   oradb1(R)     $1$DGA147    0  30009  147..S-VOL PAIR,   100     146 -
$
$ pairdisplay -dg $1$DGA146
Group  PairVol(L/R)  (Port#,TID, LU-M)  ,Seq#, LDEV#..P/S,Status,  Seq#,P-LDEV# M
BCVG   oradb1(L)     (CL1-H,0,  2-0)     30009  146..P-VOL PAIR,   30009     147 -
BCVG   oradb1(R)     (CL1-H,0,  3-0)     30009  147..S-VOL PAIR,   -----     146 -
$

Start-up procedures in bash for OpenVMS
Do not use CCI through bash, because bash is not provided as an official release on OpenVMS.
Procedure
1. Create a shareable logical name for the RAID device if one is not already defined.
Define the physical device ($1$DGA145…) as DG*, DK*, or GK* by using the show device and DEFINE/SYSTEM commands; the device does not need to be mounted.
$ show device
Device              Device    Error   Volume    Free    Trans   Mnt
 Name               Status    Count   Label     Blocks  Count   Cnt
$1$DGA145: (VMS4)   Online    0
$1$DGA146: (VMS4)   Online    0
:
:
$1$DGA153: (VMS4)   Online    0
$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:

2. Define the CCI environment in LOGIN.COM.
If CCI commands and HORCM are executing in different jobs (different terminals), then you must
redefine LNM$TEMPORARY_MAILBOX in the LNM$PROCESS_DIRECTORY table as
follows:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP

3. Discover and describe the command device in /etc/horcm0.conf.
bash$ inqraid DKA145-151 -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
DKA145       CL1-H  30009    145    -  -       -     -        OPEN-9-CM
DKA146       CL1-H  30009    146    -  s/S/ss  0004  5:01-11  OPEN-9
DKA147       CL1-H  30009    147    -  s/P/ss  0004  5:01-11  OPEN-9
DKA148       CL1-H  30009    148    -  s/S/ss  0004  5:01-11  OPEN-9
DKA149       CL1-H  30009    149    -  s/P/ss  0004  5:01-11  OPEN-9
DKA150       CL1-H  30009    150    -  s/S/ss  0004  5:01-11  OPEN-9
DKA151       CL1-H  30009    151    -  s/P/ss  0004  5:01-11  OPEN-9

/etc/horcm0.conf
HORCM_MON
#ip_address     service     poll(10ms)     timeout(10ms)
127.0.0.1       52000       1000           3000

HORCM_CMD
#dev_name       dev_name       dev_name
DKA145

HORCM_DEV
#dev_group      dev_name       port#     TargetID     LU#     MU#

HORCM_INST
#dev_group      ip_address     service

You must start HORCM without a description for HORCM_DEV and
HORCM_INST because the target ID and LUN are unknown. You can easily determine the
mapping of a physical device to a logical name by using the raidscan -find command.
4. Execute 'horcmstart 0' in the background.
bash$ horcmstart 0 &
18
bash$
starting HORCM inst 0
5. Verify the physical mapping of the logical devices.
bash$ export HORCMINST=0
bash$ raidscan -pi DKA145-151 -find
DEVICE_FILE  UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PRODUCT_ID
DKA145         0    F  CL1-H     0    1   30009   145  OPEN-9-CM
DKA146         0    F  CL1-H     0    2   30009   146  OPEN-9
DKA147         0    F  CL1-H     0    3   30009   147  OPEN-9
DKA148         0    F  CL1-H     0    4   30009   148  OPEN-9
DKA149         0    F  CL1-H     0    5   30009   149  OPEN-9
DKA150         0    F  CL1-H     0    6   30009   150  OPEN-9
DKA151         0    F  CL1-H     0    7   30009   151  OPEN-9

6. Describe the known HORCM_DEV in /etc/horcm*.conf.
For horcm0.conf
HORCM_DEV
#dev_group      dev_name     port#     TargetID     LU#     MU#
VG01            oradb1       CL1-H     0            2       0
VG01            oradb2       CL1-H     0            4       0
VG01            oradb3       CL1-H     0            6       0

HORCM_INST
#dev_group      ip_address   service
VG01            HOSTB        horcm1

For horcm1.conf
HORCM_DEV
#dev_group      dev_name     port#     TargetID     LU#     MU#
VG01            oradb1       CL1-H     0            3       0
VG01            oradb2       CL1-H     0            5       0
VG01            oradb3       CL1-H     0            7       0

HORCM_INST
#dev_group      ip_address   service
VG01            HOSTA        horcm0

7. Start 'horcmstart 0 1'.
The subprocess (HORCM) created by bash is terminated when bash exits.
bash$ horcmstart 0 &
19
bash$
starting HORCM inst 0
bash$ horcmstart 1 &
20
bash$
starting HORCM inst 1

Using CCI with Hitachi and other storage systems
The following table shows which API/CLI controls can be used between CCI and each RAID
storage system type (Hitachi or HPE), depending on the installed software and the installation
order. The following figure shows the relationship between the application, CCI, and the RAID storage system.


Version             Installation order      RAID storage system   Common API/CLI   XP API/CLI
CCI 01-08-03/00     CCI                     Hitachi               Allowed          Cannot use (CLI
or later                                    HPE                   Allowed1         options can be
                    Install CCI after       Hitachi               Allowed          used)
                    installing RAID
                    Manager XP              HPE                   Allowed

RAID Manager XP     RAID Manager XP         HPE                   Allowed          Allowed
01.08.00 or later                           Hitachi               Allowed1         Allowed2
(provided by HPE)   Install RAID            HPE                   Allowed          Allowed
                    Manager XP after
                    installing CCI          Hitachi               Allowed          Allowed2

Notes:
1. The following common API/CLI commands are rejected with EX_ERPERM, depending on the
connectivity of CCI with the RAID storage system: horctakeover, paircurchk, paircreate,
pairsplit, pairresync, pairvolchk, pairevtwait, pairdisplay, raidscan (except the -find
option), raidar, raidvchkset, raidvchkdsp, raidvchkscan
2. The following XP API/CLI commands are rejected with EX_ERPERM on the storage
system even when both CCI and RAID Manager XP (provided by HPE) are installed:
pairvolchk -s, pairdisplay -CLI, raidscan -CLI, paircreate -m noread for
TrueCopy/TrueCopy Async/Universal Replicator, paircreate -m dif/inc for ShadowImage


Chapter 2: Installing and configuring CCI
This chapter describes and provides instructions for installing and configuring CCI.

Installing the CCI hardware
Installation of the hardware required for CCI is performed by the user and the Hitachi
Vantara representative.
Procedure
1. User:
a. Make sure that the UNIX/PC server hardware and software are properly
installed and configured. For specific support information, refer to the
interoperability matrix at https://support.hitachivantara.com.
b. If you will be performing remote replication operations (for example, Universal
Replicator, TrueCopy), identify the primary and secondary volumes, so that the
hardware and software components can be installed and configured properly.
2. Hitachi Vantara representative:
a. Connect the RAID storage systems to the hosts. See the Maintenance Manual
for the storage system and the Open-Systems Host Attachment Guide. Make sure
to set the appropriate system option modes (SOMs) and host mode options
(HMOs) for the operational environment.
b. Configure the RAID storage systems that will contain primary volumes for
replication to report sense information to the hosts.
c. Set the SVP time to the local time so that the time stamps are correct. For VSP
Gx00 models and VSP Fx00 models, use the maintenance utility to set the
system date and time to the local time.
d. Remote replication: Install the remote copy connections between the RAID
storage systems. For detailed information, see the applicable user guide (for
example, Hitachi Universal Replicator User Guide).
3. User and Hitachi Vantara representative:
a. Ensure that the storage systems are accessible via Hitachi Device Manager - Storage Navigator. For details, see the System Administrator Guide for your storage system.
b. (Optional) Ensure that the storage systems are accessible by the management
software (for example, Hitachi Storage Advisor, Hitachi Command Suite). For
details, see the user documentation for the software product.
c. Install and enable the applicable license key of your program product (for
example, TrueCopy, ShadowImage, LUN Manager, Universal Replicator for
Chapter 2: Installing and configuring CCI
Command Control Interface Installation and Configuration Guide

41

Installing the CCI software
Mainframe, Data Retention Utility) on the storage systems. For details about
installing license keys, see the System Administrator Guide or Storage Navigator
User Guide.
4. User: Configure the RAID storage systems for operations as described in the user
documentation. For example, before you can create TrueCopy volume pairs using
CCI, you need to configure the ports on the storage systems and establish the MCU-RCU paths (see the hedged sketch after this step).
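The following is a hedged sketch only, not a definitive procedure: the port name CL1-A, remote serial number 330009, model ID R800, and path-group ID 0 are placeholder values, it applies only to storage systems that require explicit port attributes, and the exact raidcom syntax for your storage system is documented in the Command Control Interface Command Reference.

# Set the port attribute for the initiator (MCU) side (placeholder port):
raidcom modify port -port CL1-A -port_attribute MCU
# Register the remote storage system and an MCU-RCU path (placeholder values):
raidcom add rcu -cu_free 330009 R800 0 -mcu_port CL1-A -rcu_port CL2-A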

Installing the CCI software
To install CCI, log in with "root user" or "administrator" privileges. The login user type is
determined by the operating system. You can install the CCI software on the host servers
with assistance as needed from the Hitachi Vantara representative.
The installation must be done in the following order:
1. Install the CCI software.
2. Set the command device.
3. Create the configuration definition files.
4. Set the environmental variables.

UNIX installation
If you are installing CCI from the media for the program product, use the RMinstsh and
RMuninst scripts on the program product media to automatically install and remove the
CCI software. (For LINUX/IA64 or LINUX/X64, move to the LINUX/IA64 or LINUX/X64
directory and then execute ../../RMinstsh.)
For other media, use one of the two following methods, depending on whether you install
into the root directory or a non-root directory. The following instructions refer to UNIX
commands that might be different on your platform. Consult your OS documentation (for
example, UNIX man pages) for platform-specific command information.

Installing the CCI software into the root directory
Procedure
1. Insert the installation media into the I/O device properly.
2. Move to the current root directory: # cd /
3. Copy all files from the installation media using the cpio command:
# cpio -idmu < /dev/XXXX

where XXXX = I/O device
Preserve the directory structure (d flag) and file modification times (m flag), and
copy unconditionally (u flag).

4. Execute the CCI installation command:
# /HORCM/horcminstall.sh
5. Verify installation of the proper version using the raidqry command:
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-40-03/03
Usage: raidqry [options]

Installing the CCI software into a non-root directory
Procedure
1. Insert the installation media into the proper I/O device.
2. Move to the desired directory for CCI. The specified directory must be on a mounted
partition other than the root disk, or on an external disk.
# cd /Specified Directory

3. Copy all files from the installation media using the cpio command:

# cpio -idmu < /dev/XXXX

where XXXX = I/O device
Preserve the directory structure (d flag) and file modification times (m flag), and
copy unconditionally (u flag).
4. Make a symbolic link for /HORCM:

# ln -s /Specified Directory/HORCM /HORCM

5. Execute the CCI installation command:
# /HORCM/horcminstall.sh
6. Verify installation of the proper version using the raidqry command:
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-40-03/03
Usage: raidqry [options]

Changing the CCI user (UNIX systems)
Just after installation, CCI can be operated only by the root user. To allow a different
user to manage CCI, you need to change the owner and privileges of the CCI
directories, specify environment variables, and so on. Use the following procedure to
change the configuration so that a different user can operate CCI.

Procedure
1. Change the owner of the following CCI files from the root user to the desired user
name:
■ /HORCM/etc/horcmgr
■ All CCI commands in the /HORCM/usr/bin directory
■ /HORCM/log directory
■ All CCI log directories in the /HORCM/log* directories
■ /HORCM/.uds directory

2. Give the newly assigned user the privilege of writing to the following CCI directories
(see the example sketch after this list):
■ /HORCM/log
■ /HORCM/log* (when the /HORCM/log* directory exists)
■ /HORCM (when the /HORCM/log* directory does not exist)
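A minimal sketch of steps 1 and 2 as shell commands, assuming a hypothetical CCI administrator account named cciadm and instance log directories matching /HORCM/log* (adjust the user name and paths to your installation):

# chown cciadm /HORCM/etc/horcmgr        # HORCM manager
# chown cciadm /HORCM/usr/bin/*          # all CCI commands
# chown -R cciadm /HORCM/log /HORCM/log* # log directories
# chown -R cciadm /HORCM/.uds            # UNIX domain socket directory
# chmod -R u+w /HORCM/log /HORCM/log*    # write privilege for the new user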

3. Change the owner of the raw device file of the HORCM_CMD (control device)
command device in the configuration definition file from the root user to the
desired user name.
4. Optional: Establish the HORCM (/etc/horcmgr) start environment. If you have
designated the full environment variables (HORCM_LOG and HORCM_LOGS), then
start the horcmstart.sh command without an argument. In this case, the
HORCM_LOG and HORCM_LOGS directories must be owned by the CCI
administrator. Set the environment variables (HORCMINST, HORCM_CONF) as needed.
5. Optional: Establish the command execution environment. If you have
designated the environment variable (HORCC_LOG), then the HORCC_LOG
directory must be owned by the CCI administrator. Set the environment variable
(HORCMINST) as needed.
6. Establish the UNIX domain socket. If the user who executes CCI commands is
different from the user who started HORCM, a system administrator needs to change
the owner of the /HORCM/.uds/.lcmcl directory, which is created at each HORCM
(/etc/horcmgr) start-up.
To reset the security of the UNIX domain socket to the OLD version (a hedged sketch of this reset follows below):
1. Give write permission to the /HORCM/.uds directory.
2. Set the "HORCM_EVERYCLI=1" environment variable, and then start horcmstart.sh.
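A minimal sketch of the reset, assuming HORCM instance 0 and a Bourne-style shell (the instance number is an example only):

# chmod a+w /HORCM/.uds
# HORCM_EVERYCLI=1
# export HORCM_EVERYCLI
# horcmstart.sh 0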

Next steps
Note: A user account on a Linux system must have the "CAP_SYS_ADMIN"
and "CAP_SYS_RAWIO" privileges to use the SCSI class driver (command
device). The system administrator can apply these privileges by using the
PAM_capability module. If the system administrator cannot set
those user privileges, use the following method, which starts the
HORCM daemon with the root user while other users execute the
CCI commands:
■ System administrator: Place the script that starts horcmstart.sh in the
/etc/init.d directory so that the system can start HORCM from /etc/rc.d/rc.
■ Users: When the log directory is accessible only by the system
administrator, you cannot use the inqraid or raidscan -find
commands. Therefore, set the command log directory by setting the
environment variable (HORCC_LOG), and then execute the CCI command.
Note: AIX® does not allow ioctl() except for the root user. CCI
tries to use ioctl(DK_PASSTHRU) or SCSI_Path_thru as much as possible;
if that fails, it changes to RAW_IO following conventional ways. Even so, CCI might
encounter an AIX® FCP driver that does not fully support
ioctl(DK_PASSTHRU) at the customer site. For this case, CCI
also supports defining either the following environment variable or the
/HORCM/etc/USE_OLD_IOCTL file (size=0), which forces the use of RAW_IO.
Example
export USE_OLD_IOCTL=1
horcmstart.sh 10
HORCM/etc:
-rw-r--r--  1 root root      0 Nov 11 11:12 USE_OLD_IOCTL
-r--r--r--  1 root sys   32651 Nov 10 20:02 horcm.conf
-r-xr--r--  1 root sys  282713 Nov 10 20:02 horcmgr

Windows installation
Use this procedure to install CCI on a Windows system.
Make sure to install CCI on all servers involved in CCI operations.
Caution:
■ Installing CCI on multiple drives is not recommended. If you install CCI on
multiple drives, the CCI installed on the smallest drive might be used
preferentially.
■ If CCI is already installed and you are upgrading the CCI version, you must
remove the installed version first and then install the new version. For
instructions, see Upgrading CCI in a Windows environment (on page 61).

Before you begin
The Windows network attachment with the TCP/IP protocol must already be installed and
established.
Procedure
1. Insert the media for the product into the proper I/O device.
2. Execute Setup.exe (\program\RM\WIN_NT\RMHORC\Setup.exe or \program\RM
\WIN_NT\RMHORC_X64\Setup.exe on the CD), and follow the instructions on the
screen to complete the installation. The installation directory is HORCM (fixed value)
at the root directory.
3. Reboot the Windows server, and then start up CCI.
A warning message for security might appear at the initial start-up depending on
the OS settings. Specify "Temporarily Allow" or "Always Allow" in the dialog box.
4. Verify that the correct version of the CCI software is running on your system by
executing the raidqry command:
D:\HORCM\etc> raidqry -h
Model: RAID-Manager/WindowsNT
Ver&Rev: 01-41-03/xx
Usage: raidqry [options] for HORC
Next steps
Users who execute CCI commands need "administrator" privileges and the right to
access the log directory and the files in it. For instructions on specifying a CCI
administrator, see Changing the CCI user (Windows systems) (on page 46) .

Changing the CCI user (Windows systems)
Users who execute CCI commands need "administrator" privileges and the right to
access a log directory and the files under it. Use the following procedures to specify a
user who does not have "administrator" privileges as a CCI administrator.
■ Specifying a CCI administrator: system administrator tasks (on page 46)
■ Specifying a CCI administrator: CCI administrator tasks (on page 47)

Specifying a CCI administrator: system administrator tasks
Procedure
1. Add a user_name to the PhysicalDrive.
Add the user name of the CCI administrator to the Device objects of the command
device for HORCM_CMD in the configuration definition file. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Phys
PhysicalDrive0 -> \Device\Harddisk0\DR0
\\.\PhysicalDrive0 : changed to allow 'RMadmin'
2. Add a user_name to the Volume{GUID}.

Chapter 2: Installing and configuring CCI
Command Control Interface Installation and Configuration Guide

46

Specifying a CCI administrator: CCI administrator tasks
If the CCI administrator needs to use the "-x mount/umount" option for CCI
commands, the system administrator must add the user name of the CCI
administrator to the Device objects of the Volume{GUID}. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Volume
Volume{b0736c01-9b14-11d8-b1b6-806d6172696f} -> \Device\CdRom0
\\.\Volume{b0736c01-9b14-11d8-b1b6-806d6172696f} : changed to allow
'RMadmin'
Volume{b0736c00-9b14-11d8-b1b6-806d6172696f} -> \Device\HarddiskVolume1
\\.\Volume{b0736c00-9b14-11d8-b1b6-806d6172696f} : changed to allow
'RMadmin'
3. Add user_name to the ScsiX.
If the CCI administrator needs to use the "-x portscan" option for CCI commands,
the system administrator must add the user name of the CCI administrator to the
Device objects of the ScsiX. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Scsi
Scsi0: -> \Device\Ide\IdePort0
\\.\Scsi0: : changed to allow 'RMadmin'
Scsi1: -> \Device\Ide\IdePort1
\\.\Scsi1: : changed to allow 'RMadmin'
Result
Because the ACL (Access Control List) of the Device objects is set every time Windows
starts up, these settings must be applied again at each Windows startup. The ACL is also
required when new Device objects are created.

Specifying a CCI administrator: CCI administrator tasks
Procedure
1. Establish the HORCM (/etc/horcmgr) startup environment.
By default, the configuration definition file is placed in the following directory:
%SystemDrive%:\windows\
Because users cannot write to this directory, the CCI administrator must change the
directory by using the HORCM_CONF variable. For example:
C:\HORCM\etc\>set HORCM_CONF=C:\Documents and Settings\RMadmin
\horcm10.conf
C:\HORCM\etc\>set HORCMINST=10
C:\HORCM\etc\>horcmstart
[This must be started without arguments]
The mountvol command cannot be used with user privileges; therefore, the directory
mount option of CCI commands, which uses the mountvol command, cannot be
executed.
The inqraid "-gvinf" option uses the %SystemDrive%:\windows\ directory, so this
option cannot be used unless the system administrator grants write permission.

However, CCI can be changed from the %SystemDrive%:\windows\ directory to
the %TEMP% directory by setting the "HORCM_USE_TEMP" environment variable.
For example:
C:\HORCM\etc\>set HORCM_USE_TEMP=1
C:\HORCM\etc\>inqraid $Phys -gvinf
2. Ensure that the CCI command and HORCM have the same privileges. If the CCI
command and HORCM are executing with different privileges (different users), the
CCI command cannot attach to HORCM (the CCI command and HORCM are denied
communication through the Mailslot).
However, CCI does permit a HORCM connection through the "HORCM_EVERYCLI"
environment variable, as shown in the following example:
C:\HORCM\etc\>set HORCM_CONF=C:\Documents and Settings\RMadmin
\horcm10.conf
C:\HORCM\etc\>set HORCMINST=10
C:\HORCM\etc\>set HORCM_EVERYCLI=1
C:\HORCM\etc\>horcmstart [This must be started without arguments]
In this example, users who execute CCI commands must be restricted to use only
CCI commands. This can be done using the Windows "explore" or "cacls"
commands.

Installing CCI on the same PC as the storage management software
CCI is supplied with the storage management software for VSP Gx00 models and VSP
Fx00 models. Installing CCI and the storage management software on the same PC
allows you to use CCI of the appropriate version.
Caution: If CCI is already installed and you are upgrading the CCI version, you
must remove the installed version first and then install the new version. For
instructions, see Upgrading CCI installed on the same PC as the storage
management software (on page 62) .
Before you begin
The network of Windows attachment with TCP/IP protocol must already be installed and
established.
Procedure
1. Right-click \wk\supervisor\restapi\uninstall.bat to run as administrator.
2. Install CCI in the same drive as the storage management software as follows:
a. Insert the media for the product into the proper I/O device.
b. Execute Setup.exe (\program\RM\WIN_NT\RMHORC\Setup.exe or
\program\RM\WIN_NT\RMHORC_X64\Setup.exe on the CD), and follow the

instructions on the screen to complete the installation. The installation
directory is HORCM (fixed value) at the root directory.
c. Reboot the Windows server, and then start up CCI.
A warning message for security might appear at the initial start-up depending
on the OS settings. Specify "Temporarily Allow" or "Always Allow" in the dialog
box.
d. Verify that the correct version of the CCI software is running on your system by
executing the raidqry command:
D:\HORCM\etc> raidqry -h
Model: RAID-Manager/WindowsNT
Ver&Rev: 01-41-03/xx
Usage: raidqry [options] for HORC
3. Right-click \wk\supervisor\restapi\install.bat to run as administrator.

OpenVMS installation
Make sure to install CCI on all servers involved in CCI operations. Establish the network
(TCP/IP), if not already established. CCI is provided as the following PolyCenter Software
Installation (PCSI) files:
HITACHI-ARMVMS-RM-V0122-2-1.PCSI
HITACHI-I64VMS-RM-V0122-2-1.PCSI
CCI also requires that SYS$POSIX_ROOT exist on the system, so you must define
SYS$POSIX_ROOT before installing the CCI software. It is recommended that you define the
following logical names for CCI in LOGIN.COM:
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT "Device:
[directory]"
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:[horcm.etc]
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
$ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
$ SET PROCESS/PARSE_STYLE=EXTENDED
where Device:[directory] is defined as SYS$POSIX_ROOT
Follow the steps below to install the CCI software on an OpenVMS system.
Procedure
1. Insert and mount the provided CD or diskette.
2. Execute the following command:
$ PRODUCT INSTALL RM /source=Device:[PROGRAM.RM.OVMS]/LOG -
_$ /destination=SYS$POSIX_ROOT:[000000]

where Device:[PROGRAM.RM.OVMS] is the directory in which HITACHI-ARMVMS-RM-V0122-2-1.PCSI exists

3. Verify installation of the proper version using the raidqry command:
$ raidqry -h
Model: RAID-Manager/OpenVMS
Ver&Rev: 01-40-03/03
Usage: raidqry [options]

In-band and out-of-band operations
CCI operations can be performed using either the in-band method (all storage systems)
or the out-of-band method (VSP and later).
■ In-band (host-based) method. CCI commands are transferred from the client or server
to the command device in the storage system via the host Fibre-Channel or iSCSI
interface. The command device must be defined in the configuration definition file (as
shown in the figure below).
■ Out-of-band (LAN-based) method. CCI commands are transferred from a client PC via
the LAN. For CCI on USP V/VM, to execute a command from a client PC that is not
connected directly to a storage system, you must write a shell script to log in to a CCI
server (in-band method) via Telnet or SSH (a hedged sketch of such a script follows this list).
For CCI on VSP and later, you can create a virtual command device on the SVP by
specifying the IP address in the configuration definition file. For CCI on VSP Gx00
models and VSP Fx00 models, you can create a virtual command device on the GUM in a
storage system by specifying the IP address of the storage system.
By creating a virtual command device, you can execute the same script as the in-band
method from a client PC that is not connected directly to the storage system. CCI
commands are transferred to the virtual command device from the client PC and then
executed in storage systems.
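A hedged sketch of such a login script, assuming a hypothetical CCI server named ccihost that is reachable via SSH and runs HORCM instance 0 (the host name, instance number, and group name are placeholders, not values from this guide):

#!/bin/sh
# Log in to the CCI server and run the CCI command there (in-band method).
ssh ccihost 'HORCMINST=0; export HORCMINST; pairdisplay -g VG01 -fdc'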
A virtual command device can also be created on the CCI server, which is a remote CCI
installation that is connected by LAN. The location of the virtual command device
depends on the type of storage system. The following table lists the storage system
types and indicates the allowable locations of the virtual command device.
                                  Location of virtual command device
Storage system type               SVP      GUM              CCI server
VSP Gx00 models, VSP Fx00         OK*      OK               OK
models
HUS VM                            OK       Not applicable   OK
VSP G1x00, VSP F1500              OK       Not applicable   OK
VSP                               OK       Not applicable   OK
* CCI on the SVP must be configured as a CCI server in advance.

The following figure shows a sample system configuration with the command device and
virtual command device settings for the in-band and out-of-band methods on VSP Gx00
models, VSP Fx00 models, VSP G1x00, VSP F1500, VSP, and HUS VM.

The following figure shows a sample system configuration with the command device and
virtual command device settings for the in-band and out-of-band methods on VSP Gx00
models and VSP Fx00 models. In the following figure, CCI B is the CCI server for CCI A.
You can issue commands from CCI A to the storage system through the virtual command
device of CCI B. You can also issue commands from CCI B directly to the storage system
(without CCI A). When you issue commands directly from CCI B, CCI A is optional.


The following figure shows a sample system configuration with a CCI server connected
by the in-band method for VSP G1x00, VSP F1500, VSP, and HUS VM.


Setting up UDP ports
This section contains information about setting up strict firewalls.
If you do not have a HORCM_MON IP address in your configuration definition file, CCI
(horcm) opens the following ports on horcmstart:
■ For in-band or out-of-band: [31000 + horcminstance + 1]
■ For out-of-band: [34000 + horcminstance + 1]

If you have a HORCM_MON IP address in your configuration definition file, you need to
open up the port that is defined in this entry.
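As an illustration of the formulas above: HORCM instance 0 opens UDP ports 31001 and 34001, and instance 1 opens 31002 and 34002. Alternatively, to pin a known port for the firewall, define HORCM_MON explicitly in the configuration definition file (the service value 52323 below is an arbitrary example, not a required port):

HORCM_MON
#ip_address     service     poll(10ms)     timeout(10ms)
127.0.0.1       52323       1000           3000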

Setting the command device
For in-band CCI operations, commands are issued to the command device and then
executed on the RAID storage system. The command device is a user-selected, dedicated
logical volume on the storage system that functions as the interface to the CCI software
on the host. The command device is dedicated to CCI operations and cannot be used by
any other applications. The command device accepts read and write commands that are
executed by the storage system and returns read requests to the host.
The command device can be any OPEN-V device that is accessible to the host. A LUSE
volume cannot be used as a command device. The command device uses 16 MB, and the
remaining volume space is reserved for CCI and its utilities. A Virtual LUN volume as
small as 36 MB can be used as a command device.
Note: For Solaris operations, the command device must be labeled.

First you set the command device using Device Manager - Storage Navigator, and then
you define the command device in the HORCM_CMD section of the configuration
definition file for the CCI instance on the attached host.
For specifying the command device and the virtual command device, you can enter up to
511 characters on a line.
Procedure
1. Make sure the device that will be set as a command device does not contain any
user data. Once a volume is set as a command device, it is inaccessible to the host.
2. Log on to Storage Navigator, and connect to the storage system on which you want
to set a command device.
3. Configure the device as needed before setting it as a command device. For example,
you can create a custom-size device that has 36 MB of storage capacity for use as a
command device. For instructions, see the Provisioning Guide for your storage
system. For Universal Storage Platform V/VM, see the Hitachi Virtual LVI/LUN User's
Guide.
4. Locate and select the device, and set the device as a command device. For
instructions, see the Provisioning Guide for your storage system. For Universal
Storage Platform V/VM, see the Hitachi LUN Manager User's Guide.
If you plan to use the CCI Data Protection Facility, enable the command device
security attribute of the command device. For details about the CCI Data Protection
Facility, see the Command Control Interface User and Reference Guide.
If you plan to use CCI commands for provisioning (raidcom commands), enable the
user authentication attribute of the command device.
If you plan to use device groups, enable the device group definition attribute of the
command device.
5. Write down the system raw device name (character-type device file name) of the
command device (for example, /dev/rdsk/c0t0d1s2 in Solaris, \\.\CMD-Ser#-ldev#-Port# in Windows). You will need this information when you define the
command device in the configuration definition file.
6. If you want to set an alternate command device, repeat this procedure for another
volume.
7. If you want to enable dual pathing of the command device under Solaris systems,
include all paths to the command device on a single line in the HORCM_CMD section
of the configuration definition file.
The following example shows the two controller paths (c1 and c2) to the command
device. Putting the path information on separate lines might cause parsing issues,
and failover might not occur unless the HORCM startup script is restarted on the
Solaris system.
Example of dual path for command device for Solaris systems:
HORCM_CMD
#dev_name dev_name dev_name
/dev/rdsk/c1t66d36s2 /dev/rdsk/c2t66d36s2


Specifying the command device and virtual command device in the configuration definition file
If you will execute commands by the in-band method to a command device on the
storage system, specify the LU path for the command device in the configuration
definition file. The command device in the storage system specified by the LU path
accepts the commands from the client and executes the operation.
If you will execute commands by the out-of-band method, specify the virtual command
device in the configuration definition file. The virtual command device is defined by the
IP address of the SVP or GUM, the UDP communication port number (fixed at 31001),
and the storage system unit ID* in the configuration definition file. When a virtual
command device is used, the command is transferred from the client or server via LAN
to the virtual command device specified by the IP address of the SVP, and an operation
instruction is assigned to the storage system.
* The storage system unit ID is required only for configurations with multiple storage
systems.
The following examples show how a command device and a virtual command device are
specified in the configuration definition file. For details, see the Command Control
Interface User and Reference Guide.
Example of command device in configuration definition file (in-band method)
HORCM_CMD
#dev_name                    dev_name     dev_name
\\.\CMD-64015:/dev/rdsk/*

Example of virtual command device in configuration definition file (out-of-band
method with SVP)
Example for SVP IP address 192.168.1.100 and UDP communication port number 31001:
HORCM_CMD
#dev_name                        dev_name     dev_name
\\.\IPCMD-192.168.1.100-31001

Example of virtual command device in configuration definition file (out-of-band
method with GUM)
Example for GUM IP addresses 192.168.0.16 and 192.168.0.17 and UDP communication port
numbers 31001 and 31002. In this case, enter the IP addresses without a line feed.
HORCM_CMD
#dev_name     dev_name     dev_name
\\.\IPCMD-192.168.0.16-31001 \\.\IPCMD-192.168.0.17-31001 \\.\IPCMD-192.168.0.16-31002 \\.\IPCMD-192.168.0.17-31002


About alternate command devices
If CCI receives an error notification in reply to a read or write request to a command
device, the CCI software can switch to an alternate command device, if one is defined. If a
command device is unavailable (for example, blocked due to online maintenance), you
can switch to an alternate command device manually. If no alternate command device is
defined or available, all commands terminate abnormally, and the host cannot issue CCI
commands to the storage system. To ensure that CCI operations continue when a
command device becomes unavailable, you should set one or more alternate command
devices.
Because the use of alternate I/O pathing depends on the platform, restrictions are
placed upon it. For example, on HP-UX systems only devices subject to the LVM can use
the alternate path PV-LINK. To prevent command device failure, CCI supports an
alternate command device function.
■ Definition of alternate command devices. To use an alternate command device,
define two or more command devices for the HORCM_CMD item in the configuration
definition file (see the hedged example after this list). When two or more devices are defined, they are recognized as
alternate command devices. If an alternate command device is not defined in the
configuration definition file, CCI cannot switch to the alternate command device.
■ Timing of alternate command devices. When HORCM receives an error
notification in reply from the operating system via the raw I/O interface, the
command device is alternated. You can also switch the command device forcibly
by issuing the alternating command provided by TrueCopy (horcctl -C).
■ Operation of the alternating command. If the command device is blocked due to online
maintenance (for example, microcode replacement), issue the alternating command
in advance. When the alternating command is issued again after
completion of the online maintenance, the previous command device is activated
again.
■ Multiple command devices on HORCM startup. If at least one command device is
available and one or more command devices are specified in the configuration
definition file, then HORCM starts by using an available command device and writes a
warning message to the startup log. Confirm that all command devices can be switched by
using the horcctl -C command option, or that HORCM started without a warning
message in the HORCM startup log.
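A minimal sketch of an alternate command device definition, using the Windows-style \\.\CMD notation with placeholder serial number, LDEV, and port values (30009, 250, CL1-A, and CL2-B are illustrative only). The two entries on one line are recognized as alternates for the same unit:

HORCM_CMD
#dev_name                  dev_name
\\.\CMD-30009-250-CL1-A    \\.\CMD-30009-250-CL2-B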

The following figure shows the workflow for the alternate command device function.


Creating and editing the configuration definition file
The configuration definition file is a text file that is created and edited using any standard
text editor (for example, UNIX vi editor, Windows Notepad). The configuration definition
file defines correspondences between the server and the volumes used by the server.
There is a configuration definition file for each host server. When the CCI software starts
up, it refers to the definitions in the configuration definition file.
The configuration definition file defines the devices in copy pairs and is used for host
management of the copy pairs, including ShadowImage, ShadowImage for Mainframe,
TrueCopy, TrueCopy for Mainframe, Copy-on-Write Snapshot, Thin Image, Universal
Replicator, and Universal Replicator for Mainframe. ShadowImage, ShadowImage for
Mainframe, Copy-on-Write Snapshot, and Thin Image use the same configuration files
and commands, and the RAID storage system determines the type of copy pair based on
the S-VOL characteristics and (for Copy-on-Write Snapshot and Thin Image) the pool
type.
The configuration definition file contains the following sections:
■ HORCM_MON: Defines information about the local host.
■ HORCM_CMD: Defines information about the command (CMD) devices.
■ HORCM_VCMD: Defines information about the virtual storage machine.
■ HORCM_DEV or HORCM_LDEV: Defines information about the copy pairs.
■ HORCM_INST or HORCM_INSTP: Defines information about the remote host.
■ HORCM_LDEVG: Defines information about the device group.
■ HORCM_ALLOW_INST: Defines information about user permissions.

A sample configuration definition file, HORCM_CONF (/HORCM/etc/horcm.conf), is
included with the CCI software. This file should be used as the basis for creating your
configuration definition files. The system administrator should make a copy of the
sample file, set the necessary parameters in the copied file, and place the file in the
proper directory.
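A minimal sketch of a configuration definition file assembled from these sections, with illustrative values only (the loopback address, service port 52323, command device specification, group VG01, pair oradb1, port CL1-A, and target/LU numbers are placeholders, not values prescribed by this guide):

HORCM_MON
#ip_address     service     poll(10ms)     timeout(10ms)
127.0.0.1       52323       1000           3000

HORCM_CMD
#dev_name
\\.\CMD-30009:/dev/rdsk/*

HORCM_DEV
#dev_group      dev_name     port#     TargetID     LU#     MU#
VG01            oradb1       CL1-A     0            1       0

HORCM_INST
#dev_group      ip_address     service
VG01            127.0.0.1      52323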
The following table lists the configuration parameters defined in the horcm.conf file and
specifies the default value, type, and limit for each parameter. For details about
parameters in the configuration file, see the Command Control Interface User and
Reference Guide.
Parameter                Default    Type                                 Limit
ip_address               None       Character string                     63 characters
service                  None       Character string or numeric value    15 characters
poll (10 ms)             1000       Numeric value*                       None
timeout (10 ms)          3000       Numeric value*                       None
dev_name for HORCM_DEV   None       Character string                     31 characters
dev_group                None       Character string                     31 characters (recommended value = 8 characters or less)
port #                   None       Character string                     31 characters
target ID                None       Numeric value*                       7 characters
LU#                      None       Numeric value*                       7 characters
MU#                      0          Numeric value*                       7 characters
Serial#                  None       Numeric value*                       12 characters
CU:LDEV(LDEV#)           None       Numeric value                        6 characters
dev_name for HORCM_CMD   None       Character string                     63 characters (recommended value = 8 characters or less)

*Use decimal notation (not hexadecimal) for these numeric values.


Notes on editing configuration definition file
Follow these notes when editing the configuration definition file.
■ Do not edit the configuration definition file while CCI is running. Shut down CCI, edit
the configuration file as needed, and then restart CCI. When you change the system
configuration, you must shut down CCI, rewrite the configuration definition file to
match the change, and then restart CCI. When you change the storage system
configuration (microprogram, cache capacity, LU path, and so on), you must restart
CCI regardless of whether the configuration definition file needs to be edited. When
you restart CCI, confirm that there is no contradiction in the connection configuration
by using the "-c" option of the pairdisplay command and the raidqry command.
However, you cannot confirm the consistency of the P-VOL and S-VOL capacity with
the "-c" option of the pairdisplay command. Confirm the capacity of each volume by
using the raidcom command.
■ Do not mix pairs created with the "At-Time Split" option (-m grp) and pairs created
without this option in the same group defined in the CCI configuration file. If you do, a
pairsplit operation might end abnormally, or S-VOLs of the P-VOLs in the same
consistency group (CTG) might not be created correctly when the pairsplit request is
received.
■ If the hardware configuration is changed while the OS is running on Linux, the name
of the special file corresponding to the command device might change. If HORCM was
started by specifying the special file name in the configuration definition file, HORCM
cannot detect the command device, and communication with the storage system
might fail.
To prevent this failure, specify the path name allocated by udev in the configuration
definition file before booting HORCM. Use the following procedure to specify the path
name. In this example, the path name for /dev/sdgh can be found.
1. Find the special file name of the command device by using the inqraid command.
Command example:
[root@myhost ~]# ls /dev/sd* | /HORCM/usr/bin/inqraid -CLI | grep CM
sda   CL1-B 30095    0 -    -    0000 A:00000 OPEN-V-CM
sdgh  CL1-A 30095    0 -    -    0000 A:00000 OPEN-V-CM
[root@myhost ~]#
2. Find the path name from the by-path directory. Command example:
[root@myhost ~]# ls -l /dev/disk/by-path/ | grep sdgh
lrwxrwxrwx. 1 root root 10 Jun 11 17:04 2015 pci-0000:08:00.0-fc-0x50060e8010311940-lun-0 -> ../../sdgh
[root@myhost ~]#
In this example, "pci-0000:08:00.0-fc-0x50060e8010311940-lun-0" is the path name.
3. Enter the path name in HORCM_CMD in the configuration definition file as follows:
HORCM_CMD
/dev/disk/by-path/pci-0000:08:00.0-fc-0x50060e8010311940-lun-0
4. Boot the HORCM instance as usual.


Chapter 3: Upgrading CCI
For upgrading the CCI software, use the RMuninst scripts on the media for the program
product. For other media, please use the instructions in this chapter to upgrade the CCI
software. The instructions might be different on your platform. Please consult your
operating system documentation (for example, UNIX man pages) for platform-specific
command information.

Upgrading CCI in a UNIX environment
Use the RMinstsh script on the media for the program product to upgrade the CCI
software to a later version.
For other media, use the following instructions to upgrade the CCI software to a later
version. The following instructions refer to UNIX commands that might be different on
your platform. Please consult your operating system documentation (for example, UNIX
man pages) for platform-specific command information.
Follow the steps below to update the CCI software version on a UNIX system.
Procedure
1. Confirm that HORCM is not running. If it is running, shut it down.
One CCI instance: # horcmshutdown.sh
Two CCI instances: # horcmshutdown.sh 0 1
If CCI commands are running in the interactive mode, terminate the interactive
mode and exit these commands using the -q option.
2. Insert the installation media into the proper I/O device. Use the RMinstsh
(RMINSTSH) under the ./program/RM directory on the CD for the installation. For
LINUX/IA64 and LINUX/X64, execute ../../RMinstsh after moving to LINUX/IA64
or LINUX/X64 directory.
3. Move to the directory containing the HORCM directory (for example, # cd / for the
root directory).
4. Copy all files from the installation media using the cpio command: # cpio -idmu
< /dev/XXXX
where XXXX = I/O device. Preserve the directory structure (d flag) and file
modification times (m flag), and copy unconditionally (u flag).
5. Execute the CCI installation command. # /HORCM/horcminstall.sh
6. Verify installation of the proper version using the raidqry command.
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-29-03/05
Usage: raidqry [options]
Next steps
After upgrading CCI, ensure that the CCI user is appropriately set for the upgraded/
installed files. For instructions, see Changing the CCI user (UNIX systems) (on page 43) .

Upgrading CCI in a Windows environment
Use this procedure to upgrade the CCI software version on a Windows system.
To upgrade the CCI version, you must first remove the installed CCI version and then
install the new CCI version.
Caution: When you upgrade the CCI software, the sample script file is
overwritten. If you have edited the sample script file and want to keep your
changes, first back up the edited sample script file, and then restore the data
of the sample script file using the backup file after the upgrade installation.
For details about the sample script file, see the Command Control Interface
User and Reference Guide.
Procedure
1. You can upgrade the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions.
2. Remove the installed CCI software using the Windows Control Panel.
For example, on a Windows 7 system:
a. Open the Control Panel.
b. Under Programs, click Uninstall a program.
c. In the program list, select RAID Manager for WindowsNT, and then click
Uninstall.
3. Insert the installation media for the product into the proper I/O device.
4. Execute Setup.exe (\program\RM\WIN_NT\RMHORC\Setup.exe or \program\RM
\WIN_NT\RMHORC_X64\Setup.exe on the CD), and follow the instructions on the
screen to complete the installation. The installation directory is HORCM (fixed value)
at the root directory.
5. In the InstallShield window, follow the instructions on screen to install the CCI
software.
6. Reboot the Windows server, and verify that the correct version of the CCI software
is running on your system by executing the raidqry -h command.
Example:
C:\HORCM\etc>raidqry -h
Model : RAID-Manager/WindowsNT

Ver&Rev: 01-40-03/xx
Usage : raidqry [options] for HORC
Next steps
Users who execute CCI commands need "administrator" privileges and the right to
access the log directory and the files in it. For instructions on specifying a CCI
administrator, see Changing the CCI user (Windows systems) (on page 46) .

Upgrading CCI installed on the same PC as the storage management software
If CCI is installed on the same PC as the storage management software for VSP Gx00
models and VSP Fx00 models, use this procedure to upgrade the CCI software.
To upgrade the CCI version, you must first remove the installed CCI version and then
install the new CCI version.
Note: Installing CCI on the same drive as the storage management software
allows you to use CCI of the appropriate version. If CCI and the storage
management software are installed on different drives, remove CCI, and then
install it on the same drive as the storage management software.
Caution: When you upgrade the CCI software, the sample script file is
overwritten. If you have edited the sample script file and want to keep your
changes, first back up the edited sample script file, and then restore the data
of the sample script file using the backup file after the upgrade installation.
For details about the sample script file, see the Command Control Interface
User and Reference Guide.
Procedure
1. You can upgrade the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions.
2. Right-click \wk\supervisor\restapi\uninstall.bat to run as administrator.
3. Remove the installed CCI software using the Windows Control Panel.
For example, on a Windows 7 system:
a. Open the Control Panel.
b. Under Programs, click Uninstall a program.
c. In the program list, select RAID Manager for WindowsNT, and then click
Uninstall.
4. Insert the installation media for the product into the proper I/O device.
5. Execute Setup.exe (\program\RM\WIN_NT\RMHORC\Setup.exe or \program\RM
\WIN_NT\RMHORC_X64\Setup.exe on the CD), and follow the instructions on the
screen to complete the installation. The installation directory is HORCM (fixed value)
at the root directory.
Make sure to select the drive on which the storage management software is
installed.
6. In the InstallShield window, follow the instructions on screen to install the CCI
software.
7. Reboot the Windows server, and verify that the correct version of the CCI software
is running on your system by executing the raidqry -h command.
Example:
C:\HORCM\etc>raidqry -h
Model : RAID-Manager/WindowsNT
Ver&Rev: 01-40-03/xx
Usage : raidqry [options] for HORC
8. Right-click \wk\supervisor\restapi\install.bat to run as administrator.
Next steps
Users who execute CCI commands need "administrator" privileges and the right to
access the log directory and the files in it. For instructions on specifying a CCI
administrator, see Changing the CCI user (Windows systems) (on page 46) .

Upgrading CCI in an OpenVMS environment
Follow the steps below to update the CCI software version on an OpenVMS system:
Procedure
1. You can upgrade the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions:
One HORCM instance: $ horcmshutdown
Two HORCM instances: $ horcmshutdown 0 1
When a command is being used in interactive mode, terminate it using the -q option.
2. Insert and mount the provided installation media.
3. Execute the following command:
$ PRODUCT INSTALL CCI /source=Device:[PROGRAM.CCI.OVMS]/LOG

where Device:[PROGRAM.CCI.OVMS] is the directory in which HITACHI-ARMVMS-CCI-V0122-2-1.PCSI exists
4. Verify installation of the proper version using the raidqry command.
$ raidqry -h
Model: CCI/OpenVMS
Ver&Rev: 01-29-03/05
Usage: raidqry [options]

Chapter 4: Removing CCI
This chapter describes and provides instructions for removing the CCI software.

Removing CCI in a UNIX environment
Removing the CCI software on UNIX using RMuninst
Use this procedure to remove the CCI software on a UNIX system using the RMuninst
script on the installation media.
Before you begin
■ If you are discontinuing local or remote copy operations (for example, ShadowImage,
TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
If you will continue copy operations (for example, using Storage Navigator), do not
delete any volume pairs.
Procedure
1. If CCI commands are running in the interactive mode, terminate the interactive
mode and exit these commands by using the -q option.
2. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown.sh command to ensure a normal end to
all functions:
One CCI instance: # horcmshutdown.sh
Two CCI instances: # horcmshutdown.sh 0 1
3. Use the RMuninst script on the CCI installation media to remove the CCI software.
4. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.

Removing the CCI software manually on UNIX
If you do not have the installation media for CCI, use this procedure to remove the CCI
software manually on a UNIX system.
Before you begin
■ If you are discontinuing local or remote copy operations (for example, ShadowImage, TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
■ If you will continue copy operations (for example, using Storage Navigator), do not delete any volume pairs.

Procedure
1. If CCI commands are running in interactive mode, terminate them using the -q option before executing the horcmshutdown.sh command.
2. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown.sh command to ensure a normal end to
all functions:
One CCI instance: # horcmshutdown.sh
Two CCI instances: # horcmshutdown.sh 0 1
3. When HORCM is installed in the root directory (/HORCM is not a symbolic link),
remove the CCI software as follows:
a. Execute the horcmuninstall command: # /HORCM/horcmuninstall.sh
b. Move to the root directory: # cd /
c. Delete the product using the rm command: # rm -rf /HORCM
Example
# /HORCM/horcmuninstall.sh
# cd /
# rm -rf /HORCM
4. When HORCM is not installed in the root directory (/HORCM is a symbolic link),
remove the CCI software as follows:
a. Execute the horcmuninstall command: # /HORCM/horcmuninstall.sh
b. Move to the root directory: # cd /
c. Delete the symbolic link for /HORCM: # rm /HORCM
d. Delete the product using the rm command: # rm -rf /Directory/HORCM

Example
# /HORCM/horcmuninstall.sh
# cd /
# rm /HORCM
# rm -rf /Directory/HORCM
5. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.

Removing CCI on a Windows system
Use this procedure to remove the CCI software on a Windows system.
Before you begin
■ If you are discontinuing local or remote copy operations (for example, ShadowImage, TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
■ If you will continue copy operations (for example, using Storage Navigator), do not delete any volume pairs.

Procedure
1. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions:
One CCI instance: D:\HORCM\etc > horcmshutdown
Two CCI instances: D:\HORCM\etc > horcmshutdown 0 1
2. Remove the CCI software using the Windows Control Panel.
For example, perform the following steps on a Windows 7 system:
a. Open the Control Panel.
b. Under Programs, click Uninstall a program.
c. In the program list, select RAID Manager for WindowsNT, and then click
Uninstall.
3. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.

Removing CCI installed on the same PC as the storage
management software
If CCI is installed on the same PC as the storage management software for VSP Gx00
models and VSP Fx00 models, use this procedure to remove the CCI software.
Before you begin
■ If you are discontinuing local or remote copy operations (for example, ShadowImage, TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
■ If you will continue copy operations (for example, using Storage Navigator), do not delete any volume pairs.

Procedure
1. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions:
One CCI instance: D:\HORCM\etc > horcmshutdown
Two CCI instances: D:\HORCM\etc > horcmshutdown 0 1
2. Right-click \wk\supervisor\restapi\uninstall.bat and run it as administrator.
3. Remove the CCI software using the Windows Control Panel.

For example, perform the following steps on a Windows 7 system:
a. Open the Control Panel.
b. Under Programs, click Uninstall a program.
c. In the program list, select RAID Manager for WindowsNT, and then click
Uninstall.
4. Perform the procedure for upgrading the storage management software, the SVP
software, and the firmware.
5. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.

Removing CCI on an OpenVMS system
Use this procedure to remove the CCI software on an OpenVMS system.
Before you begin
■ If you are discontinuing local or remote copy operations (for example, ShadowImage, TrueCopy), delete all volume pairs and wait until the volumes are in simplex status.
■ If you will continue copy operations (for example, using Storage Navigator), do not delete any volume pairs.

Procedure
1. If CCI commands are running in interactive mode, terminate them using the -q option before executing the horcmshutdown command.
2. You can remove the CCI software only when CCI is not running. If CCI is running,
shut down CCI using the horcmshutdown command to ensure a normal end to all
functions:
For one instance: $ horcmshutdown

For two instances: $ horcmshutdown 0 1
3. Remove the installed CCI software by using the following command:
$ PRODUCT REMOVE RM /LOG
4. After the CCI software has been removed, the CCI command devices (used for the
in-band method) are no longer needed. If you want to configure the volumes that
were used by CCI command devices for operations from the connected hosts, you
must disable the command device setting on each volume.
To disable the command device setting:
a. Click Storage Systems, expand the Storage Systems tree, and click Logical
Devices.
On the LDEVs tab, the CCI command devices are identified by Command
Device in the Attribute column.
b. Select the command device, and then click More Actions > Edit Command
Devices.
c. For Command Device, click Disable, and then click Finish.
d. In the Confirm window, verify the settings, and enter the task name.
You can enter up to 32 ASCII characters and symbols, with the exception of:
\ / : , ; * ? " < > |. The value "date-window name" is entered by default.
e. Click Apply.
If Go to tasks window for status is selected, the Tasks window appears.


Chapter 5: Troubleshooting for CCI installation
If you have a problem installing or upgrading the CCI software, make sure that all system
requirements and restrictions have been met (see System requirements for CCI (on page 13)).
If you are unable to resolve an error condition, contact customer support for assistance.

Contacting support
If you need to call customer support, please provide as much information about the
problem as possible, including:
■ The circumstances surrounding the error or failure.
■ The content of any error messages displayed on the host systems.
■ The content of any error messages displayed by Device Manager - Storage Navigator.
■ The Device Manager - Storage Navigator configuration information (use the Dump Tool).
■ The service information messages (SIMs), including reference codes and severity levels, displayed by Device Manager - Storage Navigator.

The customer support staff is available 24 hours a day, seven days a week. To contact
technical support, log on to Hitachi Vantara Support Connect for contact information:
https://support.hitachivantara.com/en_us/contact-us.html.


Appendix A: Fibre-to-SCSI address conversion
Disks connected using Fibre Channel are displayed as SCSI disks on UNIX hosts and can be fully utilized. CCI converts Fibre Channel physical addresses to SCSI target IDs (TIDs) using a conversion table.

Fibre/FCoE-to-SCSI address conversion
The following figure shows an example of Fibre-to-SCSI address conversion.

For iSCSI, the AL_PA is the fixed value 0xFE.
The following table lists the limits for target IDs (TIDs) and LUNs.
Port    HP-UX, other systems      Solaris systems           Windows systems
        TID        LUN            TID        LUN            TID        LUN
Fibre   0 to 15    0 to 1023      0 to 125   0 to 1023      0 to 31    0 to 1023
SCSI    0 to 15    0 to 7         0 to 15    0 to 7         0 to 15    0 to 7

Conversion table for Windows
The conversion table for Windows is based on conversion by an Emulex driver. If the
Fibre Channel adapter is different (for example, Qlogic, HPE), the target ID that is
indicated by the raidscan command might be different from the target ID on the
Windows host.

The following shows an example of using the raidscan command to display the TID and
LUN of Harddisk6 (HP driver). You must start HORCM without the descriptions of
HORCM_DEV and HORCM_INST in the configuration definition file because the TIDs and
LUNs are not yet known.
Using raidscan to display TID and LUN for FC devices
C:\>raidscan -pd hd6 -x drivescan hd6
Harddisk 6... Port[ 2] PhId[ 4] TId[ 3] Lun[ 5]  [HITACHI]  [OPEN-3]
       Port[CL1-J] Ser#[30053] LDEV#[ 14(0x00E)]
       HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
       RAID5[Group 1-2] SSID = 0x0004
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,PLDEV#
CL1-J / e2/4, 29, 0.1(9).............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 1.1(10)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 2.1(11)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 3.1(12)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 4.1(13)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 5.1(14)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 6.1(15)............SMPL ---- ------ ----, ----- ----
Specified device is LDEV# 0014
In this case, the target ID indicated by the raidscan command must be used in the
configuration definition file. This can be accomplished using either of the following two
methods:
■ Using the default conversion table: Use the TID# and LU# indicated by the raidscan command in the HORCM configuration definition file (TID=29 LUN=5 in the example above).
■ Changing the default conversion table: Change the default conversion table using the HORCMFCTBL environment variable (TID=3 LUN=5 in the following example).

Using HORCMFCTBL to change the default fibre conversion table
C:\>set HORCMFCTBL=X          <-- X = fibre conversion table number
C:\>horcmstart ...            <-- Start of HORCM.
:
:
Result of "set HORCMFCTBL=X" command:
C:\>raidscan -pd hd6 -x drivescan hd6
Harddisk 6... Port[ 2] PhId[ 4] TId[ 3] Lun[ 5]  [HITACHI]  [OPEN-3]
       Port[CL1-J] Ser#[30053] LDEV#[ 14(0x00E)]
       HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
       RAID5[Group 1-2] SSID = 0x0004
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S,Status,Fence,LDEV#,P-Seq#,PLDEV#
CL1-J / e2/0,  3, 0.1(9).............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0,  3, 1.1(10)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0,  3, 2.1(11)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0,  3, 3.1(12)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0,  3, 4.1(13)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0,  3, 5.1(14)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0,  3, 6.1(15)............SMPL ---- ------ ----, ----- ----
Specified device is LDEV# 0014

LUN configurations on the RAID storage systems
The RAID storage systems (9900V and later) manage the LUN configuration on a port
through the LUN security as shown in the following figure.

CCI uses absolute LUNs to scan a port, whereas the LUNs on a group are mapped to the
host system so that the TID and LUN indicated by the raidscan command are different
from the TID and LUN displayed by the host system. In this case, the TID and LUN
indicated by the raidscan command should be used.
In the following example, you must start HORCM without a description for HORCM_DEV
and HORCM_INST because the TID and LUN are not known. Use the port, TID, and LUN
displayed by the raidscan -find or raidscan -find conf command for
HORCM_DEV (see the example for displaying the port, TID, and LUN using raidscan).
For details about LUN discovery based on a host group, see Host Group Control in the
Command Control Interface User and Reference Guide.
Displaying the port, TID, and LUN using raidscan
# ls /dev/rdsk/* | raidscan -find
DEVICE_FILE        UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PRODUCT_ID
/dev/rdsk/c0t0d4   0    S    CL1-M  0     4    31168   216   OPEN-3-CVS-CM
/dev/rdsk/c0t0d1   0    S    CL1-M  0     1    31168   117   OPEN-3-CVS
/dev/rdsk/c1t0d1   -    -    CL1-M  -     -    31170   121   OPEN-3-CVS

UID: Displays the UnitID for a multiple RAID configuration. A hyphen (-) is displayed when the command device for HORCM_CMD is not found.
S/F: S indicates that the port is SCSI; F indicates that the port is Fibre Channel.
PORT: Displays the RAID storage system port number.
TARG: Displays the target ID (converted by the fibre conversion table).
LUN: Displays the logical unit number (converted by the fibre conversion table).
SERIAL: Displays the production number (serial#) of the RAID storage system.
LDEV: Displays the LDEV# within the RAID storage system.
PRODUCT_ID: Displays the product-id field in the STD inquiry page.

Fibre address conversion tables
Following are the fibre address conversion tables:
■ Table number 0 = HP-UX systems
■ Table number 1 = Solaris systems
■ Table number 2 = Windows systems

The conversion table for Windows systems is based on the Emulex driver. If a different
Fibre Channel adapter is used, the target ID indicated by the raidscan command might
be different from the target ID indicated by the Windows system.
Note: Table 3, for other platforms, is used to indicate the LUN without a target
ID when the FC_AL conversion table is unknown or a Fibre Channel fabric (Fibre
Channel worldwide name) is used. In this case, the target ID is always zero, so
Table 3 is not described in this document. Table 3 is used as the default for
platforms other than those listed above. If the host will use the WWN notation
for the device files, change this table number by using the $HORCMFCTBL
variable.
If the TID displayed on the system is different from the TID indicated in the fibre
conversion table, you must use the TID (or LU#) returned by the raidscan command to
specify the device(s).
Fibre address conversion table for HP-UX systems (Table 0)
In each address range (C0 to C7), the AL_PA shown converts to the TID in the left column.

TID   C0   C1   C2   C3   C4   C5   C6   C7
 0    EF   CD   B2   98   72   55   3A   25
 1    E8   CC   B1   97   71   54   39   23
 2    E4   CB   AE   90   6E   53   36   1F
 3    E2   CA   AD   8F   6D   52   35   1E
 4    E1   C9   AC   88   6C   51   34   1D
 5    E0   C7   AB   84   6B   4E   33   1B
 6    DC   C6   AA   82   6A   4D   32   18
 7    DA   C5   A9   81   69   4C   31   17
 8    D9   C3   A7   80   67   4B   2E   10
 9    D6   BC   A6   7C   66   4A   2D   0F
10    D5   BA   A5   7A   65   49   2C   08
11    D4   B9   A3   79   63   47   2B   04
12    D3   B6   9F   76   5C   46   2A   02
13    D2   B5   9E   75   5A   45   29   01
14    D1   B4   9D   74   59   43   27   -
15    CE   B3   9B   73   56   3C   26   -
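For example, reading Table 0: a device whose AL_PA is EF is reported as TID 0 (range C0), and a device whose AL_PA is CE is reported as TID 15.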

Fibre address conversion table for Solaris systems (Table 1)
C0          C1          C2          C3          C4          C5          C6          C7
ALPA TID    ALPA TID    ALPA TID    ALPA TID    ALPA TID    ALPA TID    ALPA TID    ALPA TID
EF   0      CD   16     B2   32     98   48     72   64     55   80     3A   96     25   112
E8   1      CC   17     B1   33     97   49     71   65     54   81     39   97     23   113
E4   2      CB   18     AE   34     90   50     6E   66     53   82     36   98     1F   114
E2   3      CA   19     AD   35     8F   51     6D   67     52   83     35   99     1E   115
E1   4      C9   20     AC   36     88   52     6C   68     51   84     34   100    1D   116
E0   5      C7   21     AB   37     84   53     6B   69     4E   85     33   101    1B   117
DC   6      C6   22     AA   38     82   54     6A   70     4D   86     32   102    18   118
DA   7      C5   23     A9   39     81   55     69   71     4C   87     31   103    17   119
D9   8      C3   24     A7   40     80   56     67   72     4B   88     2E   104    10   120
D6   9      BC   25     A6   41     7C   57     66   73     4A   89     2D   105    0F   121
D5   10     BA   26     A5   42     7A   58     65   74     49   90     2C   106    08   122
D4   11     B9   27     A3   43     79   59     63   75     47   91     2B   107    04   123
D3   12     B6   28     9F   44     76   60     5C   76     46   92     2A   108    02   124
D2   13     B5   29     9E   45     75   61     5A   77     45   93     29   109    01   125
D1   14     B4   30     9D   46     74   62     59   78     43   94     27   110    -    -
CE   15     B3   31     9B   47     73   63     56   79     3C   95     26   111    -    -

Fibre address conversion table for Windows systems (Table 2)
C5 (PhId5)   C4 (PhId4)                C3 (PhId3)                C2 (PhId2)                C1 (PhId1)
ALPA  TID    ALPA  TID    ALPA  TID    ALPA  TID    ALPA  TID    ALPA  TID    ALPA  TID    ALPA  TID    ALPA  TID
-     -      -     -      CC    15     -     -      98    15     -     -      56    15     -     -      27    15
-     -      E4    30     CB    14     B1    30     97    14     72    30     55    14     3C    30     26    14
-     -      E2    29     CA    13     AE    29     90    13     71    29     54    13     3A    29     25    13
-     -      E1    28     C9    12     AD    28     8F    12     6E    28     53    12     39    28     23    12
-     -      E0    27     C7    11     AC    27     88    11     6D    27     52    11     36    27     1F    11
-     -      DC    26     C6    10     AB    26     84    10     6C    26     51    10     35    26     1E    10
-     -      DA    25     C5    9      AA    25     82    9      6B    25     4E    9      34    25     1D    9
-     -      D9    24     C3    8      A9    24     81    8      6A    24     4D    8      33    24     1B    8
-     -      D6    23     BC    7      A7    23     80    7      69    23     4C    7      32    23     18    7
-     -      D5    22     BA    6      A6    22     7C    6      67    22     4B    6      31    22     17    6
-     -      D4    21     B9    5      A5    21     7A    5      66    21     4A    5      2E    21     10    5
-     -      D3    20     B6    4      A3    20     79    4      65    20     49    4      2D    20     0F    4
-     -      D2    19     B5    3      9F    19     76    3      63    19     47    3      2C    19     08    3
-     -      D1    18     B4    2      9E    18     75    2      5C    18     46    2      2B    18     04    2
EF    1      CE    17     B3    1      9D    17     74    1      5A    17     45    1      2A    17     02    1
E8    0      CD    16     B2    0      9B    16     73    0      59    16     43    0      29    16     01    0
Appendix B: Sample configuration definition files
This appendix describes sample configuration definition files for typical CCI configurations.

Sample configuration definition files
The following figure illustrates the configuration definition of paired volumes.

The following example shows a sample configuration file for a UNIX-based operating
system.
Configuration file example – UNIX-based servers (# indicates a comment)
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000

HORCM_CMD
#unitID 0... (seq#30014)
#dev_name          dev_name   dev_name
/dev/rdsk/c0t0d0
#unitID 1... (seq#30015)
#dev_name          dev_name   dev_name
/dev/rdsk/c1t0d0

HORCM_DEV
#dev_group   dev_name   port#    TargetID   LU#   MU#
oradb        oradb1     CL1-A    3          1     0
oradb        oradb2     CL1-A    3          1     1
oralog       oralog1    CL1-A    5          0
oralog       oralog2    CL1-A1   5          0
oralog       oralog3    CL1-A1   5          1
oralog       oralog4    CL1-A1   5          1     h1

HORCM_INST
#dev_group   ip_address   service
oradb        HST2         horcm
oradb        HST3         horcm
oralog       HST3         horcm
The following figure shows a sample configuration file for a Windows operating system.

Configuration file parameters
The configuration file sets the following parameters:
■ HORCM_MON (on page 81)
■ HORCM_CMD (in-band method) (on page 81)
■ HORCM_CMD (out-of-band method) (on page 86)
■ HORCM_VCMD (on page 88)
■ HORCM_DEV (on page 89)
■ HORCM_INST (on page 92)
■ HORCM_INSTP (on page 95)
■ HORCM_LDEV (on page 96)
■ HORCM_LDEVG (on page 96)
■ HORCM_ALLOW_INST (on page 97)

HORCM_MON
The monitor parameter (HORCM_MON) in the CCI configuration definition file defines the
following values:
■ ip_address: Specifies the local host name or the IP address of the local host. When you specify the name of a local host that has multiple IP addresses, one of the IP addresses is selected at random and used. If you want to use all IP addresses, specify NONE for IPv4 or NONE6 for IPv6.
■ service: Specifies the UDP port name assigned to the HORCM communication path, which is registered in /etc/services in UNIX (%windir%\system32\drivers\etc\services in Windows, SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT in OpenVMS). If a port number is specified instead of a port name, the port number is used.
■ poll: Specifies the interval for monitoring paired volumes in increments of 10 ms. To reduce the HORCM daemon load, make this interval longer. When the interval is set to -1, the paired volumes are not monitored. The value of -1 is specified when two or more CCI instances run on a single machine.
■ timeout: The time-out period of communication with the remote server.
If HORCM_MON is not specified, the following defaults are set:
#ip_address   service        poll(10ms)   timeout(10ms)
NONE          default_port   1000         3000
default_port:
■ For no specified HORCM instance: 31000 + 0
■ For instance HORCM X: 31000 + X + 1
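For example, an instance started as horcmstart 2 with no service entry in HORCM_MON listens on UDP port 31000 + 2 + 1 = 31003, while an instance started without an instance number uses port 31000.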

HORCM_CMD (in-band method)
When the in-band method is used, the command device parameter (HORCM_CMD)
defines the UNIX device path or Windows physical device number of each command
device that can be accessed by CCI. You can specify multiple command devices in
HORCM_CMD to provide failover in case the primary command device becomes
unavailable.


Tip:
■ To enhance redundancy, you can make multiple command devices available for a single storage system. This configuration is called an alternate command device configuration. For this configuration, command devices are listed horizontally on a line in the configuration definition file. In the following example, CMD1 and CMD2 are command devices in the same storage system:
HORCM_CMD
CMD1 CMD2
■ To control multiple storage systems in one configuration definition file, you can list the command devices for each storage system in the configuration definition file. In this case, the command devices are listed vertically. CMD1 and CMD2 in the following example are command devices in different storage systems:
HORCM_CMD
CMD1
CMD2
■ When you specify a command device, you can enter a maximum of 511 characters for each line.

The command device must be mapped to the SCSI/fibre using LUN Manager first. The
mapped command devices are identified by "-CM" appended to the PRODUCT_ID
displayed by the inqraid command, as shown in the following examples.
Viewing the command device using inqraid (UNIX host)
# ls /dev/rdsk/c1t0* | /HORCM/usr/bin/inqraid -CLI -sort
DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
c1t0d0s2 CL2-E 63502 576 - - - - OPEN-V-CM
c1t0d1s2 CL2-E 63502 577 - s/s/ss 0006 1:02-01 OPEN-V -SUN
c1t0d2s2 CL2-E 63502 578 - s/s/ss 0006 1:02-01 OPEN-V -SUN
In this example, the command device is /dev/rdsk/c1t0d2s2.
Viewing the command device using inqraid (Windows host)
D:\HORCM\etc>inqraid $Phys -CLI
\\.\PhysicalDrive1:
# Harddisk1 -> [VOL61459_449_DA7C0D92] [OPEN-3 ]
\\.\PhysicalDrive2:
# Harddisk2 -> [VOL61459_450_DA7C0D93] [OPEN-3-CM ]
In this example, the command device is \\.\PhysicalDrive2.
After mapping the command device, set the HORCM_CMD parameter in the
configuration definition file as follows:

\\.\CMD-<Serial#>:<Device special file name>
■ <Serial#>: Specifies the serial number of the storage system. For VSP G1x00 and VSP F1500, add a "3" at the beginning of the serial number. For example, for serial number 12345, enter 312345.
■ <Device special file name>: Specifies the device special file name of the command device.
For example, specify the following for serial number 64015 and device special file name /dev/rdsk/*:
HORCM_CMD
#dev_name dev_name dev_name
\\.\CMD-64015:/dev/rdsk/*
Caution: To enable dual path of the command device under UNIX systems,
make sure to include all paths to the command device on a single line in the
HORCM_CMD section of the configuration definition file. Entering path
information on separate lines might cause syntax parsing issues, and failover
might not occur unless the HORCM startup script is restarted on the UNIX
system.
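For example, a minimal sketch of a dual-path entry with both paths on a single line (the device file names are hypothetical; substitute the actual paths to your command device):
HORCM_CMD
#dev_name dev_name dev_name
/dev/rdsk/c1t0d2s2 /dev/rdsk/c2t0d2s2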
When two or more storage systems are connected, CCI identifies each storage system
using unit IDs. The unit ID is assigned sequentially in the order described in
HORCM_CMD of the configuration definition file. For a command device alternative
configuration, a special file for multiple command devices is written.
Caution: When storage systems are shared by two or more servers, unit IDs
and serial numbers must be consistent among the servers. List serial
numbers of the storage systems in HORCM_CMD of the configuration
definition file in the same order. The following figure illustrates unit IDs when
multiple servers share multiple storage systems.
The following figure shows the configuration and unit IDs for multiple storage systems.

For Windows 2000, 2003, 2008, and 2012
Normally, physical drives are specified for command devices in storage systems.
However, CCI provides a method that is not affected by changes of physical drives in
Windows 2000, 2003, 2008, and 2012 by using the following naming format to specify the
serial number, LDEV number, and port number in that order:
\\.\CMD-Ser#-ldev#-Port#
Note: For VSP G1x00 and VSP F1500, add a "3" to the beginning of the serial
number (for example, enter "312345" for serial number "12345").
The following example specifies 30095 for the storage system's serial number, 250 for
the LDEV number, and CL1-A for the port number:
HORCM_CMD
#dev_name dev_name dev_name
\\.\CMD-30095-250-CL1-A
■ Minimum specification
For the command device with serial number 30095, specify as follows:
\\.\CMD-30095
■ Command devices in the multi-path environment
Specify serial number 30095 and LDEV number 250 as follows:
\\.\CMD-30095-250
■ Other specifications
Specify serial number 30095, LDEV number 250, and port number CL1-A as follows:
\\.\CMD-30095-250-CL1-A
or
\\.\CMD-30095-250-CL1

For UNIX
Device files are specified for command devices in UNIX. However, CCI provides a method
that is not affected by changes of device files in UNIX by using the following naming
format specifying the serial number, LDEV number, and port number in that order:
\\.\CMD-Ser#-ldev#-Port#:HINT
Note: For VSP G1x00 and VSP F1500, add a "3" to the beginning of the serial
number (for example, enter "312345" for serial number "12345").
The following example specifies 30095 for the storage system's serial number, 250 for
the LDEV number, and CL1-A for the port number:
HORCM_CMD
#dev_name dev_name dev_name
\\.\CMD-30095-250-CL1-A:/dev/rdsk/
HINT provides a path to scan and specifies a directory ending with a slash (/) or a name
pattern including the directory. Device files are searched using a name filter similar to
the inqraid command.
■ To find command devices from /dev/rdsk/*, enter /dev/rdsk/.
■ To find command devices from /dev/rdsk/c10*, enter /dev/rdsk/c10.
■ To find command devices from /dev/rhdisk*, enter /dev/rhdisk.

For an alternate command device configuration, HINT of the second command device
can be omitted. In this case, command devices are searched from the device file that was
scanned first.
HORCM_CMD
#dev_name dev_name dev_name
\\.\CMD-30095-CL1:/dev/rdsk/ \\.\CMD-30095-CL2
■ Minimum specification
For the command device of a storage system with serial number 30095, specify as follows:
\\.\CMD-30095:/dev/rdsk/
■ Command devices in a multi-path environment
Specify storage system serial number 30095 and LDEV number 250 as follows:
\\.\CMD-30095-250:/dev/rdsk/
■ Other specifications
Specify an alternate path with storage system serial number 30095 and LDEV number 250 as follows:
\\.\CMD-30095-250-CL1:/dev/rdsk/ \\.\CMD-30095-250-CL2
\\.\CMD-30095:/dev/rdsk/c1 \\.\CMD-30095:/dev/rdsk/c2

For Linux
Note the following important information when using CCI on a Linux host.


Note: If the hardware configuration is changed while an OS is running in
Linux, the name of a special file corresponding to the command device might
be changed. At this time, if HORCM was started by specifying the special file
name in the configuration definition file, HORCM cannot detect the command
device, and the communication with the storage system might fail.
To prevent this failure, specify the path name allocated by udev to the
configuration definition file before booting HORCM. Use the following
procedure to specify the path name. In this example, the path name
for /dev/sdgh can be found.
1. Find the special file name of the command device by using the inqraid command:
[root@myhost ~]# ls /dev/sd* | /HORCM/usr/bin/inqraid -CLI | grep CM
sda  CL1-B 30095 0 - - 0000 A:00000 OPEN-V-CM
sdgh CL1-A 30095 0 - - 0000 A:00000 OPEN-V-CM
[root@myhost ~]#
2. Find the path name from the by-path directory:
[root@myhost ~]# ls -l /dev/disk/by-path/ | grep sdgh
lrwxrwxrwx. 1 root root 10 Jun 11 17:04 2015 pci-0000:08:00.0-fc-0x50060e8010311940-lun-0 -> ../../sdgh
[root@myhost ~]#
In this example, pci-0000:08:00.0-fc-0x50060e8010311940-lun-0 is the path name.
3. Enter the path name in HORCM_CMD in the configuration definition file as follows:
HORCM_CMD
/dev/disk/by-path/pci-0000:08:00.0-fc-0x50060e8010311940-lun-0
4. Boot the HORCM instance as usual.

HORCM_CMD (out-of-band method)
For the out-of-band method, a virtual command device is used instead of a command
device. By specifying the location of the virtual command device in HORCM_CMD, you
can create a virtual command device.
The location where the virtual command device can be created is different according to
the type of the storage system. For details about locations, see the section System
configuration using CCI in the Command Control Interface User and Reference Guide.
Tip: When you specify a virtual command device, you can enter a maximum
of 511 characters for each line.

Create a virtual command device on an SVP (VSP, HUS VM, VSP G1x00, VSP F1500)
Specify the following in HORCM_CMD of the configuration definition file:
\\.\IPCMD-<SVP IP address>-<UDP communication port number>[-unit ID]
■ <SVP IP address>: Sets an IP address of the SVP.
■ <UDP communication port number>: Sets the UDP communication port number. This value (31001) is fixed.
■ [-unit ID]: Sets the unit ID of the storage system for the multiple units connection configuration. This can be omitted.

Create a virtual command device on a GUM (VSP Gx00 models and VSP Fx00 models)
Specify the following in HORCM_CMD of the configuration definition file:
\\.\IPCMD-<GUM IP address>-<UDP communication port number>[-unit ID]
■ <GUM IP address>: Sets an IP address of the GUM.
■ <UDP communication port number>: Sets the UDP communication port number. These values (31001 and 31002) are fixed.
■ [-unit ID]: Sets the unit ID of the storage system for the multiple units connection configuration. This can be omitted.
Note: To use GUM, we recommend that you set the combination of all GUM IP addresses in the storage system and the UDP communication port numbers in an alternate command device configuration. See the following examples for how to set the combination.

Use a CCI server port as a virtual command device
Specify the following in HORCM_CMD of the configuration definition file:
\\.\IPCMD-<CCI server IP address>-<CCI port number>[-Unit ID]
■ <CCI server IP address>: Sets the IP address of the CCI server.
■ <CCI port number>: Sets the CCI port number.
■ [-Unit ID]: Sets the unit ID of the storage system for the multiple units connection configuration. This can be omitted.

Examples
This example shows the case of IPv4.
HORCM_CMD
#dev_name dev_name dev_name
\\.\IPCMD-158.214.135.113-31001

This example shows the case of IPv6.
HORCM_CMD
#dev_name dev_name dev_name
\\.\IPCMD-fe80::209:6bff:febe:3c17-31001
This example shows the case when both the in-band and out-band methods are used:
HORCM_CMD
#dev_name dev_name dev_name
\\.\CMD-64015:/dev/rdsk/* \\.\IPCMD-158.214.135.113-31001
This example shows the case when both the in-band and out-band methods are used in
an alternate command device configuration:
HORCM_CMD
#dev_name dev_name
\\.\CMD-64015:/dev/rdsk/* \\.\IPCMD-158.214.135.113-31001
HORCM_CMD
#dev_name dev_name
\\.\IPCMD-158.214.135.113-31001 \\.\CMD-64015:/dev/rdsk/*
This example shows the case of virtual command devices in a cascade configuration
(three units):
HORCM_CMD
#dev_name dev_name dev_name
\\.\IPCMD-158.214.135.113-31001
\\.\IPCMD-158.214.135.114-31001
\\.\IPCMD-158.214.135.115-31001
(VSP Gx00 models, VSP Fx00 models) This example shows an alternate command device
configuration that combines all GUM IP addresses in the storage system with the UDP
communication port numbers. In this case, enter the IP addresses on a single line
without a line feed.
HORCM_CMD
#dev_name dev_name dev_name
\\.\IPCMD-192.168.0.16-31001 \\.\IPCMD-192.168.0.17-31001 \\.\IPCMD-192.168.0.16-31002 \\.\IPCMD-192.168.0.17-31002
An IP address and a port number can be expressed using a host name and a service
name.
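For example (hypothetical names), if the host name gum01 and the service name horcm0 are defined on the system, \\.\IPCMD-158.214.135.113-31001 could also be written as \\.\IPCMD-gum01-horcm0.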

HORCM_VCMD
The HORCM_VCMD parameter specifies the serial number of the virtual storage machine
to be operated by this CCI instance.

You can only use virtual storage machines whose serial numbers are specified in
HORCM_VCMD. To use more than one virtual storage machine from a CCI instance,
specify each serial number on a separate line in HORCM_VCMD.
Note: If you want to use the virtual storage machine specified on the second
or subsequent line of HORCM_VCMD, you must use the command options
(for example, -s <seq#> or -u <unit IDs>). If you omit these command options,
the virtual storage machine specified on the first line is used. If you specify a
virtual storage machine whose serial number is not specified in
HORCM_VCMD using the command option (-s <seq#> or -u <unit IDs>), the
EX_ENOUNT error occurs.
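A minimal sketch of the parameter (the serial numbers are hypothetical; list one virtual storage machine serial number per line):
HORCM_VCMD
# serial number of the virtual storage machine
312345
323456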

HORCM_DEV
The device parameter (HORCM_DEV) defines the RAID storage system device addresses
for the paired logical volume names. When the server is connected to two or more
storage systems, the unit ID is expressed by port number extension. Each group name is
a unique name chosen by the server that uses the volumes, typically reflecting the data
attributes of the volumes (such as database data, log file, UNIX file), recovery level, and
so on. The group and paired logical volume names described in this item must also be
defined on the remote server. The hardware SCSI/fibre port, target ID, and LUN as
hardware components need not be the same.
The following values are defined in the HORCM_DEV parameter:
■ dev_group: Names a group of paired logical volumes. A command is executed for all corresponding volumes according to this group name.
■ dev_name: Names the paired logical volume within a group (that is, the name of the special file or unique logical volume). The name of the paired logical volume must be different from the "dev name" on another group.
■ Port#: Defines the RAID storage system port number of the volume that corresponds with the dev_name volume. For details about specifying Port#, see Specifying Port# (on page 90) below.
■ Target ID: Defines the SCSI/fibre target ID number of the physical volume on the specified port.
■ LU#: Defines the SCSI/fibre logical unit number (LU#) of the physical volume on the specified target ID and port. For Fibre Channel, if the TID and LU# displayed on the system are different from the TID in the fibre address conversion table, use the TID and LU# indicated by the raidscan command in the CCI configuration definition file.
■ MU# for ShadowImage/Copy-on-Write Snapshot: Defines the mirror unit number (0 to 2) if using a redundant mirror for the identical LU on ShadowImage. If this number is omitted, it is assumed to be zero (0). The cascaded mirroring of the S-VOL is expressed as virtual volumes using the mirror descriptors (MU#1 to 2) in the configuration definition file. MU#0 of a mirror descriptor is used for connection of the S-VOL. The mirror descriptors (MU#0 to 2) can be used on ShadowImage and Copy-on-Write Snapshot. MU#3 to 63 can be used only on Copy-on-Write Snapshot.
Note: When you enter the MU number for a ShadowImage/Copy-on-Write
Snapshot pair into the configuration definition file, enter only the number,
for example, “0” or “1”.

Feature                  SMPL                        P-VOL                       S-VOL
                         MU#0 to 2   MU#3 to 63      MU#0 to 2   MU#3 to 63      MU#0    MU#1 to 63
ShadowImage              Valid       Not valid       Valid       Not valid       Valid   Not valid
Copy-on-Write Snapshot   Valid       Valid           Valid       Valid           Valid   Not valid

■ MU# for TrueCopy/Universal Replicator/global-active device: Defines the mirror unit number (0 to 3) if using a redundant mirror for the identical LU on TC/UR/GAD. If this number is omitted, it is assumed to be MU#0. You can specify only MU#0 for TrueCopy, and 4 MU numbers (MU#0 to 3) for Universal Replicator and global-active device.
Note: When you enter the MU number for a TC/UR/GAD pair into the configuration definition file, add an "h" before the number, for example, "h0" or "h1".

Feature                                      SMPL                  P-VOL                 S-VOL
                                             MU#0    MU#1 to 3     MU#0    MU#1 to 3     MU#0    MU#1 to 3
TrueCopy                                     Valid   Not valid     Valid   Not valid     Valid   Not valid
Universal Replicator/global-active device    Valid   Valid         Valid   Valid         Valid   Valid
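For example, a minimal sketch of HORCM_DEV entries showing both MU# notations (group, device, and address values are hypothetical):
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
# ShadowImage/Copy-on-Write Snapshot pair, mirror unit 0
oradb        oradev1    CL1-A   3          1     0
# TrueCopy/UR/GAD pair, mirror unit 1 (note the "h" prefix)
oradb        oradev2    CL1-A   3          2     h1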

Specifying Port#
The following "n" shows unit ID when the server is connected to two or more storage
systems (for example, CL1-A1 = CL1-A in unit ID 1). If the "n" option is omitted, the unit ID
is 0. The port is not case sensitive (for example, CL1-A = cl1-a = CL1-a = cl1-A).

Port   Basic          Option         Option         Option
CL1    An Bn Cn Dn    En Fn Gn Hn    Jn Kn Ln Mn    Nn Pn Qn Rn
CL2    An Bn Cn Dn    En Fn Gn Hn    Jn Kn Ln Mn    Nn Pn Qn Rn

The following ports can only be specified for 9900V:
Port   Basic          Option         Option         Option
CL3    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL4    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn

For 9900V, CCI supports four types of port names for host groups:
■ Specifying the port name without a host group:
CL1-A for a RAID storage system
CL1-An, where n = unit ID for multiple RAID storage systems
■ Specifying the port with a host group:
CL1-A-g, where g = host group
CL1-An-g, where n-g = host group g on CL1-A in unit ID n
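For example, a minimal sketch of a HORCM_DEV entry using the host group notation (all values hypothetical); here CL1-A1-5 addresses host group 5 on port CL1-A of unit ID 1:
HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
oradb        oradev1    CL1-A1-5   1          1     0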

The following ports can only be specified for TagmaStore USP/TagmaStore NSC and USP
V/VM:
Port   Basic          Option         Option         Option
CL5    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL6    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL7    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL8    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL9    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLA    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLB    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLC    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLD    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLE    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLF    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLG    an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn

HORCM_INST
The instance parameter (HORCM_INST) defines the network address (IP address) of the
remote server (active or standby). It is used to refer to or change the status of a paired
volume in the remote server (active or standby). When the primary volume is shared by
two or more servers, there are two or more remote servers using the secondary volume,
so the addresses of all of these servers must be described.
The following values are defined in the HORCM_INST parameter:
■ dev_group: The group name described in dev_group of HORCM_DEV.
■ ip_address: The network address of the specified remote server.
■ service: The port name assigned to the HORCM communication path (registered in the /etc/services file). If a port number is specified instead of a port name, the port number is used.
A configuration for multiple networks can be found using the raidqry -r <group>
command option on each host. The current network address of HORCM can be changed
using horcctl -NC <group> on each host.
When you use all IP addresses of the local host in the configuration for multiple
networks, specify NONE (IPv4) or NONE6 (IPv6) as the ip_address of the HORCM_MON
parameter.
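A minimal sketch of the parameter, reusing the group and host names from the UNIX sample file earlier in this appendix:
HORCM_INST
#dev_group   ip_address   service
oradb        HST2         horcm
oradb        HST3         horcm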
The following figure shows the configuration for multiple networks.


# horcctl -ND -g IP46G
Current network address = 158.214.135.106, services = 50060
# horcctl -NC -g IP46G
Changed network address(158.214.135.106,50060 -> fe80::39e7:7667:9897:2142,50060)
For IPv6 only, the configuration must be defined as HORCM/IPv6. The following figure
shows the network configuration for IPv6.

It is possible to communicate between HORCM/IPv4 and HORCM/IPv6 using IPv4
mapped to IPv6. The following figure shows the network configuration for mapped IPv6.

In the case of mixed IPv4 and IPv6, HORCM/IPv4 and HORCM/IPv6 can be connected via
IPv4 mapped IPv6, and native IPv6 is used for connecting HORCM/IPv6 and HORCM/IPv6.
The following figure shows the network configuration for mixed IPv4 and IPv6.


HORCM_INSTP
The HORCM_INSTP parameter is used to specify a path ID for TrueCopy, Universal
Replicator, and global-active device links, in addition to the information specified in
HORCM_INST. The value for pathID must be from 1 to 255. If you do not specify the
pathID, the behavior is the same as when HORCM_INST is used.
HORCM_INSTP
#dev_group   ip_address   service   pathID
VG01         HSTA         horcm     1
VG02         HSTA         horcm     2


Note: The path ID can be specified for TrueCopy, Universal Replicator,
Universal Replicator for Mainframe, and global-active device. However, the
path ID cannot be specified for UR/URz when connecting TagmaStore USP/
TagmaStore NSC or USP V/VM.
The same path ID must be specified between the site of P-VOL and S-VOL
because the path ID is used by the paircreate command.

HORCM_LDEV
The HORCM_LDEV parameter is used for specifying stable LDEV# and Serial# as the
physical volumes corresponding to the paired logical volume names. Each group name is
unique and typically has a name fitting its use (for example, database data, Redo log file,
UNIX file). The group and paired logical volume names described in this item must also
be known to the remote server.
■ dev_group: (Same as the HORCM_DEV parameter.) Names a group of paired logical volumes. The command is executed for all corresponding volumes according to this group name.
■ dev_name: (Same as the HORCM_DEV parameter.) Names the paired logical volume within a group (that is, the name of the special file or unique logical volume). The name of the paired logical volume must be different from the "dev name" on another group.
■ MU#: (Same as the HORCM_DEV parameter.)
■ Serial#: Describes the serial number of the RAID storage system. For VSP G1x00 and VSP F1500, add a "3" at the beginning of the serial number (for example, enter "312345" for serial number 12345).
■ CU:LDEV(LDEV#): Describes the LDEV number in the RAID storage system, and supports three types of format as LDEV#:
● Specifying "CU:LDEV" in hex. Example for LDEV# 260: 01:04
● Specifying "LDEV" in decimal, as used by the inqraid command. Example for LDEV# 260: 260
● Specifying "LDEV" in hex, as used by the inqraid command. Example for LDEV# 260: 0x104

#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
oradb        dev1       30095     02:40            0
oradb        dev2       30095     02:41            0

HORCM_LDEVG
The HORCM_LDEVG parameter defines the device group information that the CCI
instance reads. For details about device groups, see the Command Control Interface User
and Reference Guide.

The following values are defined:
■ Copy_Group: Specifies the name of the copy group. This is equivalent to dev_group of the HORCM_DEV and HORCM_LDEV parameters. CCI operates by using the information defined here.
■ ldev_group: Specifies the name of the device group that the CCI instance reads.
■ Serial#: Specifies the storage system serial number. For VSP G1x00 and VSP F1500, add a "3" at the beginning of the serial number (for example, enter "312345" for serial number 12345).

HORCM_LDEVG
#Copy_Group   ldev_group   Serial#
ora           grp1         64034

HORCM_ALLOW_INST
The HORCM_ALLOW_INST parameter is used to restrict the users using the virtual
command device. The following IP addresses and port numbers are allowed:
For IPv4:
HORCM_ALLOW_INST
#ip_address       service
158.214.135.113   34000
158.214.135.114   34000
For IPv6:
HORCM_ALLOW_INST
#ip_address                service
fe80::209:6bff:febe:3c17   34000
service in the above example means the initiator port number of HORCM.
If CCI clients are not defined in HORCM_ALLOW_INST, HORCM instance startup is
rejected by SCSI check condition (SKEY=0x05, ASX=0xfe) and CCI cannot be started.

Examples of CCI configurations
The following examples show CCI configurations, the configuration definition file(s) for
each configuration, and examples of CCI command use for each configuration.

Example of CCI commands for TrueCopy remote configuration
The following figure shows the TrueCopy remote configuration that is used in the
following examples.

Example of CCI commands with HOSTA
■ Designate a group name (Oradb) and a local host as P-VOL.
# paircreate -g Oradb -f never -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in the above figure).
■ Designate a volume name (oradev1) and a local host as P-VOL.
# paircreate -g Oradb -d oradev1 -f never -vl
This command creates pairs for all LUs designated as oradev1 in the configuration definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■ Designate a group name and display pair status.
# pairdisplay -g Oradb
Group  PairVol(L/R)  (P,T#,L#),    Seq#,  LDEV#..P/S,  Status,  Fence,  Seq#,   P-LDEV#  M
oradb  oradev1(L)    (CL1-A, 1,1)  30053  18...P-VOL   COPY     NEVER,  30054   19       -
oradb  oradev1(R)    (CL1-D, 2,1)  30054  19...S-VOL   COPY     NEVER,  -----   18       -
oradb  oradev2(L)    (CL1-A, 1,2)  30053  20...P-VOL   COPY     NEVER,  30054   21       -
oradb  oradev2(R)    (CL1-D, 2,2)  30054  21...S-VOL   COPY     NEVER,  -----   20       -

Example of CCI commands with HOSTB
■ Designate a group name and a remote host as P-VOL.
# paircreate -g Oradb -f never -vr
This command creates pairs for all LUs designated as Oradb in the configuration definition file (two pairs for the configuration in the above figure).
■ Designate a volume name (oradev1) and a remote host as P-VOL.
# paircreate -g Oradb -d oradev1 -f never -vr
This command creates pairs for all LUs designated as oradev1 in the configuration definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).

■ Designate a group name and display pair status.
# pairdisplay -g Oradb
Group  PairVol(L/R)  (P,T#,L#),    Seq#,  LDEV#..P/S,  Status,  Fence,  Seq#,   P-LDEV#  M
oradb  oradev1(L)    (CL1-D, 2,1)  30054  19...S-VOL   COPY     NEVER,  -----   18       -
oradb  oradev1(R)    (CL1-A, 1,1)  30053  18...P-VOL   COPY     NEVER,  30054   19       -
oradb  oradev2(L)    (CL1-D, 2,2)  30054  21...S-VOL   COPY     NEVER,  -----   20       -
oradb  oradev2(R)    (CL1-A, 1,2)  30053  20...P-VOL   COPY     NEVER,  30054   21       -

The command device is defined using the system raw device name (character-type
device file name). For example, the command devices for the following figure would be:
■ HP-UX:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1
■ Solaris:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1s2
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1s2
For Solaris operations with CCI version 01-09-03/04 or later, the command device does not need to be labeled during the format command.
■ AIX®:
HORCM_CMD of HOSTA = /dev/rhdiskXX
HORCM_CMD of HOSTB = /dev/rhdiskXX
where XX = device number assigned by AIX
■ Tru64 UNIX:
HORCM_CMD of HOSTA = /dev/rdisk/dskXXc
HORCM_CMD of HOSTB = /dev/rdisk/dskXXc
where XX = device number assigned by Tru64 UNIX
■ Windows:
HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#
■ Linux, z/Linux:
HORCM_CMD of HOSTA = /dev/sdX
HORCM_CMD of HOSTB = /dev/sdX
where X = disk number assigned by Linux, z/Linux

Example of CCI commands for TrueCopy local configuration
The following figure shows the TrueCopy local configuration example.
Note: Input the raw device (character device) name of UNIX/Windows system
for command device.

Example of CCI commands with HOSTA
■ Designate a group name (Oradb) and a local host as P-VOL.
# paircreate -g Oradb -f never -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in the above figure).
■ Designate a volume name (oradev1) and a local host as P-VOL.
# paircreate -g Oradb -d oradev1 -f never -vl
This command creates pairs for all LUs designated as oradev1 in the configuration definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■ Designate a group name and display pair status.
# pairdisplay -g Oradb
Group  PairVol(L/R)  (P,T#,L#),    Seq#,  LDEV#..P/S,  Status,  Fence,  Seq#,   P-LDEV#  M
oradb  oradev1(L)    (CL1-A, 1,1)  30053  18..P-VOL    COPY     NEVER,  30053   19       -
oradb  oradev1(R)    (CL1-D, 2,1)  30053  19..S-VOL    COPY     NEVER,  -----   18       -
oradb  oradev2(L)    (CL1-A, 1,2)  30053  20..P-VOL    COPY     NEVER,  30053   21       -
oradb  oradev2(R)    (CL1-D, 2,2)  30053  21..S-VOL    COPY     NEVER,  -----   20       -

Example of CCI commands with HOSTB
■

Designate a group name and a remote host as P-VOL.
# paircreate

-g Oradb

-f

never

-vr

This command creates pairs for all LU designated as Oradb in the configuration
definition file (two pairs for the configuration in figure above).
■

Designate a volume name (oradev1) and a remote host as P-VOL.
# paircreate -g Oradb -d oradev1 -f never -vr

This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).

■

Designate a group name and display pair status.
# pairdisplay -g Oradb
Group  PairVol(L/R)  (P,T#,L#),    Seq#,  LDEV#..P/S,  Status,  Fence,  Seq#,  P-LDEV#  M
oradb  oradev1(L)    (CL1-D, 2,1)  30053  19..S-VOL    COPY     NEVER,  -----  18       -
oradb  oradev1(R)    (CL1-A, 1,1)  30053  18..P-VOL    COPY     NEVER,  30053  19       -
oradb  oradev2(L)    (CL1-D, 2,2)  30053  21..S-VOL    COPY     NEVER,  -----  20       -
oradb  oradev2(R)    (CL1-A, 1,2)  30053  20..P-VOL    COPY     NEVER,  30053  21       -


The command device is defined using the system raw device name (character-type
device file name). For example, the command devices can be defined as follows:
●

HP-UX:
HORCM_CMD of HORCMINST0 = /dev/rdsk/c0t0d1
HORCM_CMD of HORCMINST1 = /dev/rdsk/c1t0d1

●

Solaris:
HORCM_CMD of HORCMINST0 = /dev/rdsk/c0t0d1s2
HORCM_CMD of HORCMINST1 = /dev/rdsk/c1t0d1s2
For Solaris operations with CCI version 01-09-03/04 or later, the command device
does not need to be labeled during the format command.

●

AIX®:
HORCM_CMD of HORCMINST0 = /dev/rhdiskXX
HORCM_CMD of HORCMINST1 = /dev/rhdiskXX

where XX = device number assigned by AIX
●

Tru64 UNIX:
HORCM_CMD of HORCMINST0 = /dev/rrzbXXc
HORCM_CMD of HORCMINST1 = /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX

●

Windows:
HORCM_CMD of HORCMINST0 = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HORCMINST1 = \\.\CMD-Ser#-ldev#-Port#

●

Linux, z/Linux:


HORCM_CMD of HORCMINST0 = /dev/sdX
HORCM_CMD of HORCMINST1 = /dev/sdX
where X = device number assigned by Linux, z/Linux

Example of CCI commands for TrueCopy configuration with two
instances
The following figure shows the TrueCopy configuration example for two instances.
Note: Enter the raw device (character device) name of the UNIX/Windows system
for the command device.

Example of CCI commands with Instance-0 on HOSTA
■

When the command execution environment is not set, set an instance number.
For C shell: # setenv HORCMINST 0
For Windows: set HORCMINST=0
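Both instances must be running before pair commands are issued. As a sketch, assuming
the instance configuration files /etc/horcm0.conf and /etc/horcm1.conf already exist,
the bundled scripts can start and stop both instances together:

# horcmstart.sh 0 1        (starts HORCM instances 0 and 1)
# setenv HORCMINST 0       (address subsequent commands to instance 0)
...
# horcmshutdown.sh 0 1     (stops both instances)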

■

Designate a group name (Oradb) and a local instance as P-VOL.
# paircreate -g Oradb -f never -vl

This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in the above figure).
■

Designate a volume name (oradev1) and a local instance as P-VOL.
# paircreate -g Oradb -d oradev1 -f never -vl

This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■

Designate a group name and display pair status.
# pairdisplay -g Oradb
Group  PairVol(L/R)  (P,T#,L#),    Seq#,  LDEV#..P/S,  Status,  Fence,  Seq#,  P-LDEV#  M
oradb  oradev1(L)    (CL1-A, 1,1)  30053  18..P-VOL    COPY     NEVER,  30053  19       -
oradb  oradev1(R)    (CL1-D, 2,1)  30053  19..S-VOL    COPY     NEVER,  -----  18       -
oradb  oradev2(L)    (CL1-A, 1,2)  30053  20..P-VOL    COPY     NEVER,  30053  21       -
oradb  oradev2(R)    (CL1-D, 2,2)  30053  21..S-VOL    COPY     NEVER,  -----  20       -


Example of CCI commands with Instance-1 on HOSTA
■

When the command execution environment is not set, set an instance number.
For C shell: # setenv HORCMINST 1
For Windows: set HORCMINST=1

■

Designate a group name and a remote instance as P-VOL.
# paircreate -g Oradb -f never -vr

This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in the above figure).

■

Designate a volume name (oradev1) and a remote instance as P-VOL.
# paircreate -g Oradb -d oradev1 -f never -vr

This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■

Designate a group name and display pair status.
# pairdisplay -g Oradb
Group  PairVol(L/R)  (P,T#,L#),    Seq#,  LDEV#..P/S,  Status,  Fence,  Seq#,  P-LDEV#  M
oradb  oradev1(L)    (CL1-D, 2,1)  30053  19..S-VOL    COPY     NEVER,  -----  18       -
oradb  oradev1(R)    (CL1-A, 1,1)  30053  18..P-VOL    COPY     NEVER,  30053  19       -
oradb  oradev2(L)    (CL1-D, 2,2)  30053  21..S-VOL    COPY     NEVER,  -----  20       -
oradb  oradev2(R)    (CL1-A, 1,2)  30053  20..P-VOL    COPY     NEVER,  30053  21       -


The command device is defined using the system raw device name (character-type
device file name) of the UNIX/Windows system. For example, the command devices for this
configuration would be:
■

HP-UX:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1
HORCM_CMD of HOSTC = /dev/rdsk/c1t0d1
HORCM_CMD of HOSTD = /dev/rdsk/c1t0d1

■

Solaris:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1s2
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1s2
HORCM_CMD of HOSTC = /dev/rdsk/c1t0d1s2
HORCM_CMD of HOSTD = /dev/rdsk/c1t0d1s2
For Solaris operations with CCI version 01-09-03/04 or later, the command device
does not need to be labeled during the format command.

■

AIX®:
HORCM_CMD of HOSTA = /dev/rhdiskXX
HORCM_CMD of HOSTB = /dev/rhdiskXX
HORCM_CMD of HOSTC = /dev/rhdiskXX
HORCM_CMD of HOSTD = /dev/rhdiskXX
where XX = device number created automatically by AIX

■

Tru64 UNIX:
HORCM_CMD of HOSTA = /dev/rrzbXXc
HORCM_CMD of HOSTB = /dev/rrzbXXc
HORCM_CMD of HOSTC = /dev/rrzbXXc
HORCM_CMD of HOSTD = /dev/rrzbXXc
where XX = device number defined by Tru64 UNIX

■

Windows:
HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTC = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTD = \\.\CMD-Ser#-ldev#-Port#

■

Linux, z/Linux:
HORCM_CMD of HOSTA = /dev/sdX
HORCM_CMD of HOSTB = /dev/sdX
HORCM_CMD of HOSTC = /dev/sdX
HORCM_CMD of HOSTD = /dev/sdX
where X = disk number defined by Linux, z/Linux

Example of CCI commands for ShadowImage configuration
The following figure shows the ShadowImage configuration example.


Example of CCI commands with HOSTA (group Oradb)
■

When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
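With HORCC_MRCF set, the session operates on ShadowImage pairs. A typical
create/split/resync cycle might look like the following sketch (group name taken from
this example; the timeout value is an assumption):

# setenv HORCC_MRCF 1
# paircreate -g Oradb -vl
# pairevtwait -g Oradb -s pair -t 600    (wait for the initial copy to finish)
# pairsplit -g Oradb                     (freeze the S-VOLs for backup or testing)
# pairresync -g Oradb                    (re-synchronize the pairs afterward)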

■

Designate a group name (Oradb) and a local host as P-VOL.
# paircreate -g Oradb -vl

This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in the above figure).

■

Designate a volume name (oradev1) and a local host as P-VOL.
# paircreate -g Oradb -d oradev1 -vl

This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■

Designate a group name and display pair status.
# pairdisplay -g Oradb
Group  PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb  oradev1(L)    (CL1-A, 1,1 - 0)   30053  18..P-VOL    COPY     30053  20       -
oradb  oradev1(R)    (CL2-B, 2,1 - 0)   30053  20..S-VOL    COPY     -----  18       -
oradb  oradev2(L)    (CL1-A, 1,2 - 0)   30053  19..P-VOL    COPY     30053  21       -
oradb  oradev2(R)    (CL2-B, 2,2 - 0)   30053  21..S-VOL    COPY     -----  19       -


Example of CCI commands with HOSTB (group Oradb)
■

When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1

■

Designate a group name and a remote host as P-VOL.
# paircreate -g Oradb -vr

This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in the above figure).

■

Designate a volume name (oradev1) and a remote host as P-VOL.
# paircreate -g Oradb -d oradev1 -vr

This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■

Designate a group name and display pair status.
# pairdisplay -g Oradb
Group  PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb  oradev1(L)    (CL2-B, 2,1 - 0)   30053  20..S-VOL    COPY     -----  18       -
oradb  oradev1(R)    (CL1-A, 1,1 - 0)   30053  18..P-VOL    COPY     30053  20       -
oradb  oradev2(L)    (CL2-B, 2,2 - 0)   30053  21..S-VOL    COPY     -----  19       -
oradb  oradev2(R)    (CL1-A, 1,2 - 0)   30053  19..P-VOL    COPY     30053  21       -


Example of CCI commands with HOSTA (group Oradb1)
■

When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1

■

Designate a group name (Oradb1) and a local host as P-VOL.
# paircreate -g Oradb1 -vl

This command creates pairs for all LUs assigned to group Oradb1 in the configuration
definition file (two pairs for the configuration in the above figure).

■

Designate a volume name (oradev1-1) and a local host as P-VOL.
# paircreate -g Oradb1 -d oradev1-1 -vl

This command creates pairs for all LUs designated as oradev1-1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■

Designate a group name and display pair status.
# pairdisplay -g Oradb1
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb1  oradev1-1(L)  (CL1-A, 1, 1 - 1)  30053  18..P-VOL    COPY     30053  22       -
oradb1  oradev1-1(R)  (CL2-C, 2, 1 - 0)  30053  22..S-VOL    COPY     -----  18       -
oradb1  oradev1-2(L)  (CL1-A, 1, 2 - 1)  30053  19..P-VOL    COPY     30053  23       -
oradb1  oradev1-2(R)  (CL2-C, 2, 2 - 0)  30053  23..S-VOL    COPY     -----  19       -


Example of CCI commands with HOSTC (group Oradb1)
■

When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1

■

Designate a group name and a remote host as P-VOL.
# paircreate -g Oradb1 -vr

This command creates pairs for all LUs assigned to group Oradb1 in the configuration
definition file (two pairs for the configuration in the above figure).

■

Designate a volume name (oradev1-1) and a remote host as P-VOL.
# paircreate -g Oradb1 -d oradev1-1 -vr

This command creates pairs for all LUs designated as oradev1-1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■

Designate a group name and display pair status.
# pairdisplay -g Oradb1
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb1  oradev1-1(L)  (CL2-C, 2, 1 - 0)  30053  22..S-VOL    COPY     -----  18       -
oradb1  oradev1-1(R)  (CL1-A, 1, 1 - 1)  30053  18..P-VOL    COPY     30053  22       -
oradb1  oradev1-2(L)  (CL2-C, 2, 2 - 0)  30053  23..S-VOL    COPY     -----  19       -
oradb1  oradev1-2(R)  (CL1-A, 1, 2 - 1)  30053  19..P-VOL    COPY     30053  23       -


Example of CCI commands with HOSTA (group Oradb2)
■

When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1

■

Designate a group name (Oradb2) and a local host as P-VOL.
# paircreate -g Oradb2 -vl

This command creates pairs for all LUs assigned to group Oradb2 in the configuration
definition file (two pairs for the configuration in the above figure).

■

Designate a volume name (oradev2-1) and a local host as P-VOL.
# paircreate -g Oradb2 -d oradev2-1 -vl

This command creates pairs for all LUs designated as oradev2-1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■

Designate a group name and display pair status.
# pairdisplay -g Oradb2
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb2  oradev2-1(L)  (CL1-A, 1, 1 - 2)  30053  18..P-VOL    COPY     30053  24       -
oradb2  oradev2-1(R)  (CL2-D, 2, 1 - 0)  30053  24..S-VOL    COPY     -----  18       -
oradb2  oradev2-2(L)  (CL1-A, 1, 2 - 2)  30053  19..P-VOL    COPY     30053  25       -
oradb2  oradev2-2(R)  (CL2-D, 2, 2 - 0)  30053  25..S-VOL    COPY     -----  19       -


Example of CCI commands with HOSTD (group Oradb2)
■

When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1

■

Designate a group name and a remote host as P-VOL.
# paircreate -g Oradb2 -vr

This command creates pairs for all LUs assigned to group Oradb2 in the configuration
definition file (two pairs for the configuration in the above figure).

■

Designate a volume name (oradev2-1) and a remote host as P-VOL.
# paircreate -g Oradb2 -d oradev2-1 -vr

This command creates pairs for all LUs designated as oradev2-1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in the above figure).
■

Designate a group name and display pair status.
# pairdisplay -g Oradb2
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb2  oradev2-1(L)  (CL2-D, 2, 1 - 0)  30053  24..S-VOL    COPY     -----  18       -
oradb2  oradev2-1(R)  (CL1-A, 1, 1 - 2)  30053  18..P-VOL    COPY     30053  24       -
oradb2  oradev2-2(L)  (CL2-D, 2, 2 - 0)  30053  25..S-VOL    COPY     -----  19       -
oradb2  oradev2-2(R)  (CL1-A, 1, 2 - 2)  30053  19..P-VOL    COPY     30053  25       -


The command device is defined using the system raw device name (character-type
device file name) of the UNIX/Windows system. For example, the command devices for this
configuration would be:
■

HP-UX:
HORCM_CMD of HORCMINST0 = /dev/rdsk/c0t0d1
HORCM_CMD of HORCMINST1 = /dev/rdsk/c1t0d1

■

Solaris:
HORCM_CMD of HORCMINST0 = /dev/rdsk/c0t0d1s2
HORCM_CMD of HORCMINST1 = /dev/rdsk/c1t0d1s2
For Solaris operations with CCI version 01-09-03/04 or later, the command device
does not need to be labeled during the format command.

■

AIX®:
HORCM_CMD of HORCMINST0 = /dev/rhdiskXX
HORCM_CMD of HORCMINST1 = /dev/rhdiskXX

where XX = device number assigned by AIX
■

Tru64 UNIX:
HORCM_CMD of HORCMINST0 = /dev/rrzbXXc
HORCM_CMD of HORCMINST1 = /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX

■

Windows:
HORCM_CMD of HORCMINST0 = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HORCMINST1 = \\.\CMD-Ser#-ldev#-Port#

■

Linux, z/Linux:
HORCM_CMD of HORCMINST0 = /dev/sdX
HORCM_CMD of HORCMINST1 = /dev/sdX
where X = disk number defined by Linux, z/Linux

Example of CCI commands for ShadowImage cascade configuration
The following figure shows the ShadowImage configuration example with cascade pairs.

Example of CCI commands with Instance-0 on HOSTA
■

When the command execution environment is not set, set an instance number.
For C shell:
# setenv HORCMINST 0
# setenv HORCC_MRCF 1
For Windows:
set HORCMINST=0
set HORCC_MRCF=1
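As an alternative to exporting these variables for the whole shell, most CCI commands
accept the -I option to select the execution mode and instance per command. For example,
the following sketch (verify the option form against the command reference for your CCI
version) queries the ShadowImage view of instance 0 without changing the environment:

# pairdisplay -g oradb -IM0 -m cas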

■

Designate a group name (Oradb) and a local instance as P-VOL.
# paircreate -g Oradb -vl
# paircreate -g Oradb1 -vr

These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the
configuration definition file.
■

Designate a group name and display pair status.
# pairdisplay -g oradb -m cas
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb   oradev1(L)    (CL1-A , 1, 1-0)   30053  266..P-VOL   PAIR,    30053  268      -
oradb   oradev1(R)    (CL1-D , 2, 1-0)   30053  268..S-VOL   PAIR,    -----  266      -
oradb1  oradev11(R)   (CL1-D , 2, 1-1)   30053  268..P-VOL   PAIR,    30053  270      -
oradb2  oradev21(R)   (CL1-D , 2, 1-2)   30053  268..SMPL    ----,    -----  ----     -
oradb   oradev2(L)    (CL1-A , 1, 2-0)   30053  267..P-VOL   PAIR,    30053  269      -
oradb   oradev2(R)    (CL1-D , 2, 2-0)   30053  269..S-VOL   PAIR,    -----  267      -
oradb1  oradev12(R)   (CL1-D , 2, 2-1)   30053  269..P-VOL   PAIR,    30053  271      -
oradb2  oradev22(R)   (CL1-D , 2, 2-2)   30053  269..SMPL    ----,    -----  ----     -

Example of CCI commands with Instance-1 on HOSTA
■

When the command execution environment is not set, set an instance number.
For C shell:
# setenv HORCMINST 1
# setenv HORCC_MRCF 1
For Windows:
set HORCMINST=1
set HORCC_MRCF=1

■

Designate a group name and a remote instance as P-VOL.
# paircreate -g Oradb -vr
# paircreate -g Oradb1 -vl

These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the
configuration definition file.
■

Designate a group name and display pair status.
# pairdisplay -g oradb -m cas
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb   oradev1(L)    (CL1-D , 2, 1-0)   30053  268..S-VOL   PAIR,    -----  266      -
oradb1  oradev11(L)   (CL1-D , 2, 1-1)   30053  268..P-VOL   PAIR,    30053  270      -
oradb2  oradev21(L)   (CL1-D , 2, 1-2)   30053  268..SMPL    ----,    -----  ----     -
oradb   oradev1(R)    (CL1-A , 1, 1-0)   30053  266..P-VOL   PAIR,    30053  268      -
oradb   oradev2(L)    (CL1-D , 2, 2-0)   30053  269..S-VOL   PAIR,    -----  267      -
oradb1  oradev12(L)   (CL1-D , 2, 2-1)   30053  269..P-VOL   PAIR,    30053  271      -
oradb2  oradev22(L)   (CL1-D , 2, 2-2)   30053  269..SMPL    ----,    -----  ----     -
oradb   oradev2(R)    (CL1-A , 1, 2-0)   30053  267..P-VOL   PAIR,    30053  269      -

The command device is defined using the system raw device name (character-type
device file name) of the UNIX/Windows system. For example, the command devices for this
configuration would be:
■

HP-UX:
HORCM_CMD of HOSTA (/etc/horcm.conf) ... /dev/rdsk/c0t0d1
HORCM_CMD of HOSTB (/etc/horcm.conf) ... /dev/rdsk/c1t0d1
HORCM_CMD of HOSTB (/etc/horcm0.conf) ... /dev/rdsk/c1t0d1

■

Solaris:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rdsk/c0t0d1s2
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rdsk/c1t0d1s2
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... /dev/rdsk/c1t0d1s2
For Solaris operations with CCI version 01-09-03/04 or later, the command device
does not need to be labeled during the format command.

■

AIX®:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rhdiskXX
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rhdiskXX
HORCM_CMD of HOSTB(/etc/horcm0.conf)... /dev/rhdiskXX

where XX = device number assigned by AIX
■

Tru64 UNIX:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rrzbXXc
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rrzbXXc
HORCM_CMD of HOSTB(/etc/horcm0.conf)... /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX

■

Windows:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... \\.\CMD-Ser#-ldev#-Port#

■

Linux, z/Linux:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/sdX
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/sdX
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... /dev/sdX
where X = device number assigned by Linux, z/Linux

Example of CCI commands for TC/SI cascade configuration
The following figure shows the TC/SI configuration example with cascade pairs.

Example of CCI commands with HOSTA and HOSTB
■

Designate a group name (Oradb) on the TrueCopy environment of HOSTA.
# paircreate -g Oradb -vl

■

Designate a group name (Oradb1) on the ShadowImage environment of HOSTB. When
the command execution environment is not set, set HORCC_MRCF.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
# paircreate -g Oradb1 -vl

These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the
configuration definition file (four pairs for the configuration in the above figures).

■

Designate a group name and display pair status on HOSTA.
# pairdisplay -g oradb -m cas
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb   oradev1(L)    (CL1-A , 1, 1-0)   30052  266..SMPL    ----,    -----  ----     -
oradb   oradev1(L)    (CL1-A , 1, 1)     30052  266..P-VOL   COPY,    30053  268      -
oradb1  oradev11(R)   (CL1-D , 2, 1-0)   30053  268..P-VOL   COPY,    30053  270      -
oradb2  oradev21(R)   (CL1-D , 2, 1-1)   30053  268..SMPL    ----,    -----  ----     -
oradb   oradev1(R)    (CL1-D , 2, 1)     30053  268..S-VOL   COPY,    -----  266      -
oradb   oradev2(L)    (CL1-A , 1, 2-0)   30052  267..SMPL    ----,    -----  ----     -
oradb   oradev2(L)    (CL1-A , 1, 2)     30052  267..P-VOL   COPY,    30053  269      -
oradb1  oradev12(R)   (CL1-D , 2, 2-0)   30053  269..P-VOL   COPY,    30053  271      -
oradb2  oradev22(R)   (CL1-D , 2, 2-1)   30053  269..SMPL    ----,    -----  ----     -
oradb   oradev2(R)    (CL1-D , 2, 2)     30053  269..S-VOL   COPY,    -----  267      -

Example of CCI commands with HOSTB
■

Designate a group name (Oradb) on the TrueCopy environment of HOSTB.
# paircreate -g Oradb -vr

■

Designate a group name (Oradb1) on the ShadowImage environment of HOSTB. When
the command execution environment is not set, set HORCC_MRCF.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
# paircreate -g Oradb1 -vl

This command creates pairs for all LUs assigned to group Oradb1 in the configuration
definition file (four pairs for the configuration in the above figures).

■

Designate a group name and display pair status on the TrueCopy environment of HOSTB.
# pairdisplay -g oradb -m cas
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb1  oradev11(L)   (CL1-D , 2, 1-0)   30053  268..P-VOL   PAIR,    30053  270      -
oradb2  oradev21(L)   (CL1-D , 2, 1-1)   30053  268..SMPL    ----,    -----  ----     -
oradb   oradev1(L)    (CL1-D , 2, 1)     30053  268..S-VOL   PAIR,    -----  266      -
oradb   oradev1(R)    (CL1-A , 1, 1-0)   30052  266..SMPL    ----,    -----  ----     -
oradb   oradev1(R)    (CL1-A , 1, 1)     30052  266..P-VOL   PAIR,    30053  268      -
oradb1  oradev12(L)   (CL1-D , 2, 2-0)   30053  269..P-VOL   PAIR,    30053  271      -
oradb2  oradev22(L)   (CL1-D , 2, 2-1)   30053  269..SMPL    ----,    -----  ----     -
oradb   oradev2(L)    (CL1-D , 2, 2)     30053  269..S-VOL   PAIR,    -----  267      -
oradb   oradev2(R)    (CL1-A , 1, 2-0)   30052  267..SMPL    ----,    -----  ----     -
oradb   oradev2(R)    (CL1-A , 1, 2)     30052  267..P-VOL   PAIR,    30053  269      -

■

Designate a group name and display pair status on the ShadowImage environment of
HOSTB.
# pairdisplay -g oradb1 -m cas
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb1  oradev11(L)   (CL1-D , 2, 1-0)   30053  268..P-VOL   PAIR,    30053  270      -
oradb2  oradev21(L)   (CL1-D , 2, 1-1)   30053  268..SMPL    ----,    -----  ----     -
oradb   oradev1(L)    (CL1-D , 2, 1)     30053  268..S-VOL   PAIR,    -----  266      -
oradb1  oradev11(R)   (CL1-D , 3, 1-0)   30053  270..S-VOL   PAIR,    -----  268      -
oradb1  oradev12(L)   (CL1-D , 2, 2-0)   30053  269..P-VOL   PAIR,    30053  271      -
oradb2  oradev22(L)   (CL1-D , 2, 2-1)   30053  269..SMPL    ----,    -----  ----     -
oradb   oradev2(L)    (CL1-D , 2, 2)     30053  269..S-VOL   PAIR,    -----  267      -
oradb1  oradev12(R)   (CL1-D , 3, 2-0)   30053  271..S-VOL   PAIR,    -----  269      -

■

Designate a group name and display pair status on the ShadowImage environment of
HOSTB (HORCMINST0).
# pairdisplay -g oradb1 -m cas
Group   PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,  P-LDEV#  M
oradb1  oradev11(L)   (CL1-D , 3, 1-0)   30053  270..S-VOL   PAIR,    -----  268      -
oradb1  oradev11(R)   (CL1-D , 2, 1-0)   30053  268..P-VOL   PAIR,    30053  270      -
oradb2  oradev21(R)   (CL1-D , 2, 1-1)   30053  268..SMPL    ----,    -----  ----     -
oradb   oradev1(R)    (CL1-D , 2, 1)     30053  268..S-VOL   PAIR,    -----  266      -
oradb1  oradev12(L)   (CL1-D , 3, 2-0)   30053  271..S-VOL   PAIR,    -----  269      -
oradb1  oradev12(R)   (CL1-D , 2, 2-0)   30053  269..P-VOL   PAIR,    30053  271      -
oradb2  oradev22(R)   (CL1-D , 2, 2-1)   30053  269..SMPL    ----,    -----  ----     -
oradb   oradev2(R)    (CL1-D , 2, 2)     30053  269..S-VOL   PAIR,    -----  267      -


Correspondence of the configuration definition file for
cascading volume and mirror descriptors
The CCI software (HORCM) can keep a record of multiple pair configurations per LDEV,
and it distinguishes the record of each pair configuration by MU#. As shown in the
following figure, you can assign 64 MU#s (MU#0 to 63) for local copy products and
4 MU#s (MU#0 to 3) for remote copy products, so you can define up to 68 device groups
(records of pair configurations) in the configuration definition file.
The following figure shows the management of pair configuration by mirror descriptors.

The group name and MU# noted in the HORCM_DEV section of the configuration definition
file are assigned to the corresponding mirror descriptors, as outlined in the following
table. An omitted MU# is handled as MU#0, and the specified group is registered to MU#0
on ShadowImage/Copy-on-Write Snapshot and TrueCopy/Universal Replicator/global-active
device. Also, when you note the MU# in HORCM_DEV, the sequence of the MU#s can be
random (for example, 2, 1, 0).
Each HORCM_DEV example below is followed by the mirror descriptors to which its
entries are assigned:

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1
  MU#0 (SI/Copy-on-Write Snapshot only): oradev1
  MU#0 (TC/UR/GAD): oradev1
  SI MU#1 to 2 (MU#3 to 63): -
  UR/GAD MU#1 to 3: -

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1
Oradb1       oradev11   CL1-D   2          1     1
Oradb2       oradev21   CL1-D   2          1     2
  MU#0 (SI/Copy-on-Write Snapshot only): oradev1
  MU#0 (TC/UR/GAD): oradev1
  SI MU#1 to 2 (MU#3 to 63): oradev11, oradev21
  UR/GAD MU#1 to 3: -

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1
Oradb1       oradev11   CL1-D   2          1     0
Oradb2       oradev21   CL1-D   2          1     1
Oradb3       oradev31   CL1-D   2          1     2
  MU#0 (SI/Copy-on-Write Snapshot only): oradev11
  MU#0 (TC/UR/GAD): oradev1
  SI MU#1 to 2 (MU#3 to 63): oradev21, oradev31
  UR/GAD MU#1 to 3: -

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0
  MU#0 (SI/Copy-on-Write Snapshot only): oradev1
  MU#0 (TC/UR/GAD): -
  SI MU#1 to 2 (MU#3 to 63): -
  UR/GAD MU#1 to 3: -

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     h0
  MU#0 (SI/Copy-on-Write Snapshot only): -
  MU#0 (TC/UR/GAD): oradev1
  SI MU#1 to 2 (MU#3 to 63): -
  UR/GAD MU#1 to 3: -

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0
Oradb1       oradev11   CL1-D   2          1     1
Oradb2       oradev21   CL1-D   2          1     2
  MU#0 (SI/Copy-on-Write Snapshot only): oradev1
  MU#0 (TC/UR/GAD): -
  SI MU#1 to 2 (MU#3 to 63): oradev11, oradev21
  UR/GAD MU#1 to 3: -

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1
Oradb1       oradev11   CL1-D   2          1     0
Oradb2       oradev21   CL1-D   2          1     h1
Oradb3       oradev31   CL1-D   2          1     h2
Oradb4       oradev41   CL1-D   2          1     h3
  MU#0 (SI/Copy-on-Write Snapshot only): oradev11
  MU#0 (TC/UR/GAD): oradev1
  SI MU#1 to 2 (MU#3 to 63): -
  UR/GAD MU#1 to 3: oradev21, oradev31, oradev41

Configuration definition files for cascade configurations
Each volume in a cascading connection is described by an entry in the configuration
definition file on each HORCM instance, and each connection of the volume is specified
by a mirror descriptor. In the case of a ShadowImage/TrueCopy cascading connection, too,
the volume is described in the configuration definition file on the same instance. The
following topics present examples of ShadowImage and ShadowImage/TrueCopy
cascading configurations.

Configuration definition files for ShadowImage cascade configuration
The following figure shows an example of a ShadowImage cascade configuration and the
associated entries in the configuration definition files. ShadowImage is a mirror
configuration within one storage system, so the volumes are described in the
configuration definition file for each HORCM instance: volumes T3L0, T3L4, and T3L6 in
HORCMINST0, and volume T3L2 in HORCMINST1. As shown in this ShadowImage
cascading connection example, the specified dev group is assigned to the ShadowImage
mirror descriptor: MU#0 in HORCMINST0, and MU#0, MU#1, and MU#2 in HORCMINST1.
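As a sketch of what those entries could look like (the port name, target IDs, and LU
numbers are patterned on the figure described above and are illustrative), the two
instance files might contain:

/etc/horcm0.conf (HORCMINST0)
HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb        oradev1     CL1-A   3          0     0
Oradb1       oradev11    CL1-A   3          4     0
Oradb2       oradev21    CL1-A   3          6     0

/etc/horcm1.conf (HORCMINST1)
HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb        oradev1     CL1-A   3          2     0
Oradb1       oradev11    CL1-A   3          2     1
Oradb2       oradev21    CL1-A   3          2     2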


The following figures show the pairdisplay information for this example of a
ShadowImage cascading configuration.

Figure 1 Pairdisplay -g on HORCMINST0


Figure 2 Pairdisplay -g on HORCMINST1

Figure 3 Pairdisplay -d on HORCMINST0

Configuration definition files for TrueCopy/ShadowImage cascade
configuration
The cascading connections for TrueCopy/ShadowImage can be set up by using three
configuration definition files, with the cascading volume entity described in the
configuration definition file on the same instance. The mirror descriptor for
ShadowImage explicitly specifies "0" as the MU#, whereas the mirror descriptor for
TrueCopy omits the MU# (an omitted MU# is handled as MU#0).
The following figure shows the TC/SI cascading connection and configuration file.


The following figures show the cascading configurations and the pairdisplay information
for each configuration.

Figure 4 Pairdisplay for TrueCopy on HOST1


Figure 5 Pairdisplay for TrueCopy on HOST2 (HORCMINST)

Figure 6 Pairdisplay for ShadowImage on HOST2 (HORCMINST)


Figure 7 Pairdisplay for ShadowImage on HOST2 (HORCMINST0)


Index

A
alternate command devices 56

C
CCI
  installing on Windows 45
CCI administrator, specifying on Windows 46
CCI and RAID Manager XP 39
changing the user
  UNIX environment 43
command devices
  alternate 56
  requirements 14
  setting 53
  specifying in configuration definition file 55
  virtual 55
configuration definition file
  cascade examples 129
  HORCM_ALLOW_INST parameter 97
  HORCM_CMD parameter for in-band method 81
  HORCM_CMD parameter for out-of-band method 86
  HORCM_DEV parameter 89
  HORCM_INST parameter 92
  HORCM_INSTP parameter 95
  HORCM_LDEV parameter 96
  HORCM_LDEVG parameter 96
  HORCM_MON parameter 81
  HORCM_VCMD parameter 88
  specifying the command devices 55
configuration examples 97
configuration file
  creating 57
  editing 57
  examples 79
  parameters 57
  sample file 57
configuration file parameters 57, 80
contacting support 71
conversion tables, fibre-to-SCSI addresses 75
creating the configuration definition file 57

D
definition file, configuration
  creating 57
  editing 57
  examples 79
  parameters 57
  sample file 57
definition file, configuration parameters 57, 80

E
editing the configuration definition file 57
example configuration files 79

F
failover software support 17
FCP, z/Linux restrictions 22
fibre-to-SCSI address conversion
  example 72
  table for HP-UX 75
  table for Solaris 75
  table for Windows 75
FICON, z/Linux restrictions 22

H
hardware installation 41
HORCM_ALLOW_INST 97
HORCM_CMD (in-band method) 81
HORCM_CMD (out-of-band) 86
HORCM_CONF 57
HORCM_DEV 89
HORCM_INST 92
HORCM_INSTP 95
HORCM_LDEV 96
HORCM_LDEVG 96
HORCM_MON 81
HORCM_VCMD 88
HORCMFCTBL 72
host platform support 17

I
I/O interface support 17
in-band command execution 50
installation requirements 13
installing CCI
  Windows system 45
installing CCI software
  UNIX environment 42
  UNIX root directory 42
installing hardware 41
installing software
  OpenVMS environment 49
IPv6
  environment variables 29
  library and system call 29
  supported platforms 22

L
license key requirements 14
LUN configurations 74

M
mirror descriptors
  configuration file correspondence 127

O
OpenVMS
  bash start-up 37
  DCL command examples 33
  DCL detached process start-up 30
  installation 49
Oracle VM
  restrictions 28
OS support 17
out-of-band command execution 50

P
parameters, configuration 57
program product requirements 14

R
RAID Manager XP and CCI 39
removing CCI
  manually on UNIX 66
  OpenVMS 69
  PC with storage management software 68
  using script on UNIX 65
  Windows 67
requirements and restrictions
  Oracle VM 28
  system 13
  VMWare ESX Server 25
  Windows 2012/2008 Hyper-V 26
  z/Linux 22

S
sample configuration files 79
sample definition file 57
setting the command devices 53
software installation
  OpenVMS environment 49
  UNIX environment 42
software upgrade
  OpenVMS environment 63
  UNIX environment 60
  Windows environment 61
SVC, VMWare restrictions 25
system option modes 14
system requirements 13

T
tables, fibre-to-SCSI address conversion 75

U
uninstalling CCI
  manually on UNIX 66
  OpenVMS 69
  PC with storage management software 68
  using script on UNIX 65
  Windows 67
upgrading software
  OpenVMS environment 63
  UNIX environment 60
  Windows environment 61
user, changing
  UNIX environment 43

V
virtual command devices 55
VM
  applicable platforms 20
VMWare ESX Server, restrictions 25
volume manager support 17

W
Windows 2012/2008 Hyper-V, restrictions 26

Z
z/Linux, restrictions 22

Hitachi Vantara Corporation
Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.HitachiVantara.com | community.HitachiVantara.com

Regional Contact Information
Americas: +1 866 374 5822 or info@hitachivantara.com
Europe, Middle East, and Africa: +44 (0) 1753 618000 or info.emea@hitachivantara.com
Asia Pacific: +852 3189 7900 or info.marketing.apac@hitachivantara.com


