Dell SCv3000 and SCv3020 Storage System
Deployment Guide

Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the
problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Copyright © 2017 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
2017 - 11
Rev. B

Contents
About This Guide................................................................................................................7
Revision History.................................................................................................................................................................. 7
Audience.............................................................................................................................................................................7
Contacting Dell................................................................................................................................................................... 7
Related Publications............................................................................................................................................................7

1 About the SCv3000 and SCv3020 Storage System......................................................... 9
Storage Center Hardware Components..............................................................................................................................9
SCv3000 and SCv3020 Storage System......................................................................................................................9
Expansion Enclosures................................................................................................................................................... 9
Switches.......................................................................................................................................................................9
Storage Center Communication........................................................................................................................................ 10
Front-End Connectivity...............................................................................................................................................10
Back-End Connectivity............................................................................................................................................... 15
System Administration................................................................................................................................................ 15
Storage Center Replication......................................................................................................................................... 15
SCv3000 and SCv3020 Storage System Hardware.......................................................................................................... 15
SCv3000 and SCv3020 Storage System Front-Panel View........................................................................................ 15
SCv3000 and SCv3020 Storage System Back-Panel View......................................................................................... 17
Expansion Enclosure Overview................................................................................................................................... 21

2 Install the Storage Center Hardware............................................................................. 28
Unpacking Storage Center Equipment............................................................................................................................. 28
Safety Precautions........................................................................................................................................................... 28
Installation Safety Precautions....................................................................................................................................28
Electrical Safety Precautions......................................................................................................................................29
Electrostatic Discharge Precautions........................................................................................................................... 29
General Safety Precautions........................................................................................................................................ 29
Prepare the Installation Environment................................................................................................................................ 30
Install the Storage System in a Rack................................................................................................................................ 30

3 Connect the Front-End Cabling.....................................................................................32
Types of Redundancy for Front-End Connections............................................................................................................ 32
Port Redundancy........................................................................................................................................................32
Storage Controller Redundancy..................................................................................................................................32
Connecting to Host Servers with Fibre Channel HBAs..................................................................................................... 33
Fibre Channel Zoning..................................................................................................................................................33
Cable the Storage System with 2-Port Fibre Channel IO Cards..................................................................................34
Cable the Storage System with 4-Port Fibre Channel IO Cards..................................................................................35
Labeling the Front-End Cables................................................................................................................................... 35
Connecting to Host Servers with iSCSI HBAs or Network Adapters.................................................................................37
Cable the Storage System with 2–Port iSCSI IO Cards.............................................................................................. 37
Cable the Storage System with 4–Port iSCSI IO Cards..............................................................................................38

Connect a Storage System to a Host Server Using an iSCSI Mezzanine Card........................................................... 39
Labeling the Front-End Cables................................................................................................................................... 40
Connecting to Host Servers with SAS HBAs.....................................................................................................................41
Cable the Storage System with 4-Port SAS HBAs to Host Servers with One SAS HBA per Server............................ 41
Labeling the Front-End Cables................................................................................................................................... 42
Attach Host Servers (Fibre Channel)............................................................................................................................... 43
Attach the Host Servers (iSCSI)...................................................................................................................................... 44
Attach the Host Servers (SAS)........................................................................................................................................ 44
Connect the Management Ports to the Management Network....................................................................................... 45
Labeling the Ethernet Management Cables................................................................................................................45

4 Connect the Back-End Cabling......................................................................................47
Expansion Enclosure Cabling Guidelines........................................................................................................................... 47
Back-End SAS Redundancy........................................................................................................................................47
Back-End Connections for an SCv3000 and SCv3020 Storage System With Expansion Enclosures................................47
SCv3000 and SCv3020 and One SCv300 and SCv320 Expansion Enclosure............................................................ 48
SCv3000 and SCv3020 and Two SCv300 and SCv320 Expansion Enclosures...........................................................49
SCv3000 and SCv3020 Storage System and One SCv360 Expansion Enclosure.......................................................49
SCv3000 and SCv3020 Storage System and Two SCv360 Expansion Enclosures.....................................................50
Label the Back-End Cables............................................................................................................................................... 51

5 Discover and Configure the Storage Center.................................................................. 53
Connect Power Cables and Turn On the Storage System................................................................................................ 53
Locate Your Service Tag................................................................................................................................................... 54
Record System Information.............................................................................................................................................. 54
Supported Operating Systems for Storage Center Automated Setup ..............................................................................54
Install and Use the Dell Storage Manager......................................................................................................................... 54
Discover and Select an Uninitialized Storage Center........................................................................................................ 55
Deploy the Storage Center Using the Direct Connect Method.........................................................................................55
Customer Installation Authorization..................................................................................................................................56
Set System Information....................................................................................................................................................56
Set Administrator Information.......................................................................................................................................... 56
Confirm the Storage Center Configuration....................................................................................................................... 57
Initialize the Storage Center..............................................................................................................................................57
Configure Key Management Server Settings....................................................................................................................57
Create a Storage Type...................................................................................................................................................... 57
Configure Ports................................................................................................................................................................58
Configure Fibre Channel Ports................................................................................................................................... 58
Configure iSCSI Ports................................................................................................................................................ 58
Configure SAS Ports.................................................................................................................................................. 59
Configure Time Settings...................................................................................................................................................59
Configure SMTP Server Settings..................................................................................................................................... 59
Using Dell SupportAssist.................................................................................................................................................. 59
Enable SupportAssist ................................................................................................................................................ 60
Update the Storage Center...............................................................................................................................................61
Complete the Configuration and Continue With Setup......................................................................................................61

Modify iDRAC Interface Settings for a Storage System.............................................................................................. 61
Unconfigure Unused I/O Ports................................................................................................................................... 62

6 Perform Post-Setup Tasks............................................................................................ 63
Update Storage Center Using Dell Storage Manager........................................................................................................63
Check the Status of the Update.......................................................................................................................................63
Change the Operation Mode of a Storage Center............................................................................................................ 63
Verify Connectivity and Failover....................................................................................................................................... 64
Create Test Volumes.................................................................................................................................................. 64
Test Basic Connectivity.............................................................................................................................................. 64
Test Storage Controller Failover................................................................................................................................. 64
Test MPIO..................................................................................................................................................................65
Clean Up Test Volumes.............................................................................................................................................. 65
Send Diagnostic Data Using Dell SupportAssist................................................................................................................65

7 Adding or Removing Expansion Enclosures................................................................... 66
Adding Expansion Enclosures to a Storage System Deployed Without Expansion Enclosures.......................................... 66
Install New SCv300 and SCv320 Expansion Enclosures in a Rack..............................................................................66
Add the SCv300 and SCv320 Expansion Enclosures to the A-Side of the Chain........................................................67
Add the SCv300 and SCv320 Expansion Enclosures to the B-Side of the Chain....................................................... 68
Install New SCv360 Expansion Enclosures in a Rack..................................................................................................68
Add the SCv360 Expansion Enclosures to the A-Side of the Chain............................................................................69
Add an SCv360 Expansion Enclosure to the B-Side of the Chain............................................................................... 70
Adding a Single Expansion Enclosure to a Chain Currently in Service............................................................................... 72
Check the Drive Count............................................................................................................................................... 72
Add an SCv300 and SCv320 Expansion Enclosure to the A-Side of the Chain...........................................................72
Add an SCv300 and SCv320 Expansion Enclosure to the B-Side of the Chain...........................................................73
Add an SCv360 Expansion Enclosure to the A-Side of the Chain............................................................................... 75
Add an SCv360 Expansion Enclosure to the B-Side of the Chain............................................................................... 76
Removing an Expansion Enclosure from a Chain Currently in Service............................................................................... 77
Release the Drives in the Expansion Enclosure........................................................................................................... 78
Disconnect the SCv300 and SCv320 Expansion Enclosure from the A-Side of the Chain..........................................78
Disconnect the SCv300 and SCv320 Expansion Enclosure from the B-Side of the Chain..........................................79
Disconnect the SCv360 Expansion Enclosure from the A-Side of the Chain...............................................................81
Disconnect the SCv360 Expansion Enclosure from the B-Side of the Chain.............................................................. 82

8 Troubleshooting Storage Center Deployment................................................................ 85
Troubleshooting Storage Controllers.................................................................................................................................85
Troubleshooting Hard Drives.............................................................................................................................................85
Troubleshooting Expansion Enclosures............................................................................................................................. 85
Troubleshooting With Lasso............................................................................................................................................. 86
Lasso Application........................................................................................................................................................86
Lasso Documentation.................................................................................................................................................86
Lasso Requirements................................................................................................................................................... 86

A Set Up a Local Host or VMware Host............................................................................ 87
Set Up a VMware ESXi Host from Initial Setup.................................................................................................................87


Set Up a Local host from Initial Setup...............................................................................................................................87
Set Up Multiple VMware ESXi Hosts in a VMware vSphere Cluster................................................................................. 88

B Initialize the Storage Center Using the USB Serial Port................................................ 89
Install the USB Serial Port Driver .....................................................................................................................................89
Establish a Terminal Session............................................................................................................................................. 89
Discover the Storage Center Using the Setup Utility Tool................................................................................................ 90

C Worksheet to Record System Information..................................................................... 91
Storage Center Information...............................................................................................................................................91
iSCSI Fault Domain Information.........................................................................................................................................91
Additional Storage Center Information..............................................................................................................................92
Fibre Channel Zoning Information.....................................................................................................................................92

D HBA Server Settings.....................................................................................................94
Settings by HBA Manufacturer.........................................................................................................................................94
Dell 12 Gb SAS HBAs..................................................................................................................................................94
Cisco Fibre Channel HBAs..........................................................................................................................................94
Emulex HBAs..............................................................................................................................................................94
QLogic HBAs..............................................................................................................................................................95
Settings by Server Operating System.............................................................................................................................. 95
Citrix XenServer......................................................................................................................................................... 96
Microsoft Windows Server.........................................................................................................................................96
Novell Netware ..........................................................................................................................................................97
Red Hat Enterprise Linux............................................................................................................................................ 97

E iSCSI Settings.............................................................................................................. 99
Flow Control Settings.......................................................................................................................................................99
Ethernet Flow Control................................................................................................................................................99
Switch Ports and Flow Control...................................................................................................................................99
Flow Control...............................................................................................................................................................99
Jumbo Frames and Flow Control................................................................................................................................99
Other iSCSI Settings.......................................................................................................................................................100


About This Guide
This guide describes the features and technical specifications of the SCv3000 and SCv3020 storage system.

Revision History
Document Number: 680-136-001
| Revision | Date          | Description                   |
|----------|---------------|-------------------------------|
| A        | October 2017  | Initial release               |
| B        | November 2017 | Corrections to SCv360 cabling |

Audience
The information provided in this guide is intended for storage or network administrators and deployment personnel.

Contacting Dell
Dell provides several online and telephone-based support and service options. Availability varies by country and product, and some
services might not be available in your area.
To contact Dell for sales, technical support, or customer service issues, go to www.dell.com/support.
•

For customized support, type your system service tag on the support page and click Submit.

•

For general support, browse the product list on the support page and select your product.

Related Publications
The following documentation provides additional information about the SCv3000 and SCv3020 storage system.
•

Dell SCv3000 and SCv3020 Storage System Getting Started Guide
Provides information about an SCv3000 and SCv3020 storage system, such as installation instructions and technical specifications.

•

Dell SCv3000 and SCv3020 Storage System Owner’s Manual
Provides information about an SCv3000 and SCv3020 storage system, such as hardware features, replacing customer-replaceable components, and technical specifications.

•

Dell SCv3000 and SCv3020 Storage System Service Guide
Provides information about SCv3000 and SCv3020 storage system hardware, system component replacement, and system troubleshooting.

•

Dell Storage Center Release Notes
Provides information about new features and known and resolved issues for the Storage Center software.

•

Dell Storage Center Update Utility Administrator’s Guide
Describes how to use the Storage Center Update Utility to install Storage Center software updates. Updating Storage Center software using the Storage Center Update Utility is intended for use only by sites that cannot update Storage Center using standard methods.

•

Dell Storage Center Software Update Guide
Describes how to update Storage Center software from an earlier version to the current version.

•

Dell Storage Center Command Utility Reference Guide
Provides instructions for using the Storage Center Command Utility. The Command Utility provides a command-line interface (CLI) to enable management of Storage Center functionality on Windows, Linux, Solaris, and AIX platforms.

•

Dell Storage Center Command Set for Windows PowerShell
Provides instructions for getting started with Windows PowerShell cmdlets and scripting objects that interact with the Storage Center using the PowerShell interactive shell, scripts, and PowerShell hosting applications. Help for individual cmdlets is available online.

•

Storage Manager Installation Guide
Contains installation and setup information.

•

Storage Manager Administrator’s Guide
Contains in-depth feature configuration and usage information.

•

Dell Storage Manager Release Notes
Provides information about Storage Manager releases, including new features and enhancements, open issues, and resolved issues.

•

Dell TechCenter
Provides technical white papers, best practice guides, and frequently asked questions about Dell Storage products. Go to http://en.community.dell.com/techcenter/storage/.

1 About the SCv3000 and SCv3020 Storage System
The SCv3000 and SCv3020 storage system provides the central processing capabilities for the Storage Center Operating System
(OS), application software, and management of RAID storage.
The SCv3000 and SCv3020 storage system holds the physical drives that provide storage for the Storage Center. If additional
storage is needed, the SCv3000 and SCv3020 supports SCv300 and SCv320 and SCv360 expansion enclosures.

Storage Center Hardware Components
The Storage Center described in this document consists of an SCv3000 and SCv3020 storage system, expansion enclosures, and
enterprise-class switches.
To allow for storage expansion, the SCv3000 and SCv3020 storage system supports multiple SCv300 and SCv320 and SCv360
expansion enclosures.
NOTE: The cabling between the storage system, switches, and host servers is referred to as front‐end connectivity. The
cabling between the storage system and expansion enclosures is referred to as back-end connectivity.

SCv3000 and SCv3020 Storage System
The SCv3000 and SCv3020 storage systems contain two redundant power supply/cooling fan modules, and two storage controllers
with multiple I/O ports. The I/O ports provide communication with host servers and expansion enclosures. The SCv3000 storage
system contains up to 16 3.5-inch drives and the SCv3020 storage system contains up to 30 2.5-inch drives.
The SCv3000 Series Storage Center supports up to 222 drives per Storage Center system. This total includes the drives in the
storage system chassis and the drives in the expansion enclosures. The SCv3000 and SCv3020 require a minimum of seven hard
disk drives (HDDs) or four solid-state drives (SSDs) installed in the storage system chassis or an expansion enclosure.
| Configuration | Number of Drives Supported |
|---------------|----------------------------|
| SCv3000 storage system with SCv300 or SCv320 expansion enclosure | 208 |
| SCv3000 storage system with SCv360 expansion enclosure | 196 |
| SCv3020 storage system with SCv300 or SCv320 expansion enclosure | 222 |
| SCv3020 storage system with SCv360 expansion enclosure | 210 |
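These maximums follow from the internal chassis capacity plus the drives in the expansion enclosures on the chain. As a quick sanity check, here is a minimal sketch that reproduces the table, assuming the per-enclosure capacities and chain limits described elsewhere in this chapter (SCv300: 12 drives, SCv320: 24 drives, SCv360: 60 drives):

```python
# Sanity check of the maximum drive counts in the table above.
# Assumptions from this chapter: up to 16 SCv300 (12 drives each) or
# eight SCv320 (24 drives each) per system, or three SCv360 (60 drives each).
internal_drives = {"SCv3000": 16, "SCv3020": 30}
expansion_drives = {
    "SCv300 or SCv320": max(16 * 12, 8 * 24),  # 192 drives either way
    "SCv360": 3 * 60,                          # 180 drives
}

for system, internal in internal_drives.items():
    for chain, expansion in expansion_drives.items():
        print(f"{system} with {chain}: {internal + expansion} drives")
# SCv3000 with SCv300 or SCv320: 208 drives
# SCv3000 with SCv360: 196 drives
# SCv3020 with SCv300 or SCv320: 222 drives
# SCv3020 with SCv360: 210 drives
```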

Expansion Enclosures
Expansion enclosures allow the data storage capabilities of the SCv3000 and SCv3020 storage system to be expanded beyond the
16 or 30 drives in the storage system chassis.
The SCv3000 and SCv3020 support up to 16 SCv300 expansion enclosures, up to eight SCv320 expansion enclosures, and up to three SCv360 expansion enclosures.

Switches
Dell offers enterprise-class switches as part of the total Storage Center solution.
The SCv3000 and SCv3020 storage system supports Fibre Channel (FC) and Ethernet switches, which provide robust connectivity
to servers and allow for the use of redundant transport paths. Fibre Channel (FC) or Ethernet switches can provide connectivity to a


remote Storage Center to allow for replication of data. In addition, Ethernet switches provide connectivity to a management network
to allow configuration, administration, and management of the Storage Center.

Storage Center Communication
A Storage Center uses multiple types of communication for both data transfer and administrative functions.
Storage Center communication is classified into three types: front end, back end, and system administration.

Front-End Connectivity
Front-end connectivity provides I/O paths from servers to a storage system and replication paths from one Storage Center to
another Storage Center. The SCv3000 and SCv3020 storage system provides the following types of front-end connectivity:
•

Fibre Channel – Hosts, servers, or network-attached storage (NAS) appliances access storage by connecting to the storage system Fibre Channel ports through one or more Fibre Channel switches. Connecting host servers directly to the storage system, without using Fibre Channel switches, is not supported.

•

iSCSI – Hosts, servers, or network-attached storage (NAS) appliances access storage by connecting to the storage system iSCSI ports through one or more Ethernet switches. Connecting host servers directly to the storage system, without using Ethernet switches, is not supported.

•

SAS – Hosts or servers access storage by connecting directly to the storage system SAS ports.

When replication is licensed, the SCv3000 and SCv3020 can use the front-end Fibre Channel or iSCSI ports to replicate data to
another Storage Center.

SCv3000 and SCv3020 Storage System With Fibre Channel Front-End Connectivity
A storage system with Fibre Channel front-end connectivity can communicate with the following components of a Storage Center
system.

Figure 1. Storage System With Fibre Channel Front-End Connectivity


| Item | Description | Speed | Communication Type |
|------|-------------|-------|--------------------|
| 1 | Server with Fibre Channel host bus adapters (HBAs) | 8 Gbps or 16 Gbps | Front End |
| 2 | Remote Storage Center connected via Fibre Channel for replication | 8 Gbps or 16 Gbps | Front End |
| 3 | Fibre Channel switch (a pair of Fibre Channel switches is recommended for optimal redundancy and connectivity) | 8 Gbps or 16 Gbps | Front End |
| 4 | Ethernet switch for the management network | 1 Gbps | System Administration |
| 5 | SCv3000 and SCv3020 with FC front-end connectivity | 8 Gbps or 16 Gbps | Front End |
| 6 | Storage Manager (installed on a computer connected to the storage system through the Ethernet switch) | Up to 1 Gbps | System Administration |
| 7 | SCv300 and SCv320 expansion enclosures | 12 Gbps per channel | Back End |

SCv3000 and SCv3020 Storage System With iSCSI Front-End Connectivity
A storage system with iSCSI front-end connectivity can communicate with the following components of a Storage Center system.

Figure 2. Storage System With iSCSI Front-End Connectivity

| Item | Description | Speed | Communication Type |
|------|-------------|-------|--------------------|
| 1 | Server with Ethernet (iSCSI) ports or iSCSI host bus adapters (HBAs) | 1 GbE or 10 GbE | Front End |
| 2 | Remote Storage Center connected via iSCSI for replication | 1 GbE or 10 GbE | Front End |
| 3 | Ethernet switch (a pair of Ethernet switches is recommended for optimal redundancy and connectivity) | 1 GbE or 10 GbE | Front End |
| 4 | Ethernet switch for the management network | 1 Gbps | System Administration |
| 5 | SCv3000 and SCv3020 with iSCSI front-end connectivity | 1 GbE or 10 GbE | Front End |
| 6 | Storage Manager (installed on a computer connected to the storage system through the Ethernet switch) | Up to 1 Gbps | System Administration |
| 7 | SCv300 and SCv320 expansion enclosures | 12 Gbps per channel | Back End |

SCv3000 and SCv3020 Storage System With Front-End SAS Connectivity
The SCv3000 and SCv3020 storage system with front-end SAS connectivity can communicate with the following components of a
Storage Center system.

Figure 3. Storage System With Front-End SAS Connectivity

| Item | Description | Speed | Communication Type |
|------|-------------|-------|--------------------|
| 1 | Server with SAS host bus adapters (HBAs) | 12 Gbps per channel | Front End |
| 2 | Remote Storage Center connected via iSCSI for replication | 1 GbE or 10 GbE | Front End |
| 3 | Ethernet switch (a pair of Ethernet switches is recommended for optimal redundancy and connectivity) | 1 GbE or 10 GbE | Front End |
| 4 | Ethernet switch for the management network | Up to 1 GbE | System Administration |
| 5 | SCv3000 and SCv3020 with front-end SAS connectivity | 12 Gbps per channel | Front End |
| 6 | Storage Manager (installed on a computer connected to the storage system through the Ethernet switch) | Up to 1 Gbps | System Administration |
| 7 | SCv300 and SCv320 expansion enclosures | 12 Gbps per channel | Back End |

Using SFP+ Transceiver Modules
You can connect to the front-end port of a storage controller using a direct-attached SFP+ cable or an SFP+ transceiver module. An SCv3000 and SCv3020 storage system with 16 Gb Fibre Channel or 10 GbE iSCSI storage controllers uses short-range small-form-factor pluggable (SFP+) transceiver modules.

Figure 4. SFP+ Transceiver Module With a Bail Clasp Latch

The SFP+ transceiver modules are installed into the front-end ports of a storage controller.

Guidelines for Using SFP+ Transceiver Modules
Before installing SFP+ transceiver modules and fiber-optic cables, read the following guidelines.
CAUTION: When handling static-sensitive devices, take precautions to avoid damaging the product from static
electricity.
•

Use only Dell-supported SFP+ transceiver modules with the Storage Center. Other generic SFP+ transceiver modules are not supported and might not work with the Storage Center.

•

The SFP+ transceiver module housing has an integral guide key that is designed to prevent you from inserting the transceiver module incorrectly.

•

Use minimal pressure when inserting an SFP+ transceiver module into a Fibre Channel port. Forcing the SFP+ transceiver module into a port could damage the transceiver module or the port.

•

The SFP+ transceiver module must be installed into a port before you connect the fiber-optic cable.

•

The fiber-optic cable must be removed from the SFP+ transceiver module before you remove the transceiver module from the port.

Install an SFP+ Transceiver Module
Use the following procedure to install an SFP+ transceiver module into a storage controller.
About this task
Read the following cautions and information before installing an SFP+ transceiver module.
WARNING: To reduce the risk of injury from laser radiation or damage to the equipment, take the following precautions:
•

Do not open any panels, operate controls, make adjustments, or perform procedures to a laser device other than those specified in this document.

•

Do not stare into the laser beam.

CAUTION: Transceiver modules can be damaged by electrostatic discharge (ESD). To prevent ESD damage to the transceiver module, take the following precautions:
•

Wear an antistatic discharge strap while handling transceiver modules.

•

Place transceiver modules in antistatic packing material when transporting or storing them.

Steps
1. Position the transceiver module so that the key is oriented correctly to the port in the storage controller.


Figure 5. Install the SFP+ Transceiver Module

1. SFP+ transceiver module    2. Fiber-optic cable connector

2. Insert the transceiver module into the port until it is firmly seated and the latching mechanism clicks.
The transceiver modules are keyed so that they can be inserted only with the correct orientation. If a transceiver module does
not slide in easily, ensure that it is correctly oriented.
CAUTION: To reduce the risk of damage to the equipment, do not use excessive force when inserting the transceiver
module.

3. Position the fiber-optic cable so that the key (the ridge on one side of the cable connector) is aligned with the slot in the transceiver module.
   CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiber-optic cable is not connected, replace the protective covers on the ends of the cable.
4. Insert the fiber-optic cable into the transceiver module until the latching mechanism clicks.

Remove an SFP+ Transceiver Module
Complete the following steps to remove an SFP+ transceiver module from a storage controller.
Prerequisite
Use failover testing to make sure that the connection between host servers and the Storage Center remains up if the port is
disconnected.
About this task
Read the following cautions and information before beginning the removal and replacement procedures.
WARNING: To reduce the risk of injury from laser radiation or damage to the equipment, take the following precautions:
•

Do not open any panels, operate controls, make adjustments, or perform procedures to a laser device other than those specified in this document.

•

Do not stare into the laser beam.

CAUTION: Transceiver modules can be damaged by electrostatic discharge (ESD). To prevent ESD damage to the transceiver module, take the following precautions:
•

Wear an antistatic discharge strap while handling modules.

•

Place modules in antistatic packing material when transporting or storing them.

Steps
1. Remove the fiber-optic cable that is inserted into the transceiver.
a. Make sure the fiber-optic cable is labeled before removing it.
b. Press the release clip on the bottom of the cable connector to remove the fiber-optic cable from the transceiver.
CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiber-optic cable is not
connected, replace the protective covers on the ends of the cables.
2. Open the transceiver module latching mechanism.


3. Grasp the bail clasp latch on the transceiver module and pull the latch out and down to eject the transceiver module from the socket.
4. Slide the transceiver module out of the port.

Figure 6. Remove the SFP+ Transceiver Module

1. SFP+ transceiver module    2. Fiber-optic cable connector

Back-End Connectivity
Back-end connectivity is strictly between the storage system and expansion enclosures.
The SCv3000 and SCv3020 storage system supports back-end connectivity to multiple SCv300, SCv320, and SCv360 expansion
enclosures.

System Administration
To perform system administration, the Storage Center communicates with computers using the Ethernet management (MGMT)
port on the storage controllers.
The Ethernet management port is used for Storage Center configuration, administration, and management.

Storage Center Replication
Storage Center sites can be collocated or remotely connected and data can be replicated between sites. Storage Center replication
can duplicate volume data to another site in support of a disaster recovery plan or to provide local access to a remote data volume.
Typically, data is replicated remotely as part of an overall disaster avoidance or recovery plan.
The SCv3000 and SCv3020 supports replication to other SCv3000/SCv3020, SC5020, SC7020, SC8000, SC9000, and SC4020
storage systems. However, a Storage Manager Data Collector must be used to replicate data between the storage systems. For
more information about installing and managing the Data Collector, and setting up replications, see the Storage Manager
Administrator’s Guide.

SCv3000 and SCv3020 Storage System Hardware
The SCv3000 and SCv3020 storage system ships with Dell Enterprise Plus Value drives, two redundant power supply/cooling fan
modules, and two redundant storage controllers.
Each storage controller contains the front-end, back-end, and management communication ports of the storage system.

SCv3000 and SCv3020 Storage System Front-Panel View
The front panel of the storage system contains power and status indicators, and a system identification button.
In addition, the hard drives are installed and removed through the front of the storage system chassis.


Figure 7. SCv3000 and SCv3020 Storage System Front-Panel View

Item 1 – Power indicator
  Lights when the storage system power is on.
  • Off – No power
  • On steady green – At least one power supply is providing power to the storage system

Item 2 – Status indicator
  Lights when the startup process for both storage controllers is complete with no faults detected.
  NOTE: The startup process can take 5–10 minutes or more.
  • Off – One or both storage controllers are running startup routines, or a fault has been detected during startup
  • On steady blue – Both storage controllers have completed the startup process and are in normal operation
  • Blinking amber – Fault detected

Item 3 – Identification button
  Blinking blue continuously – A user sent a command to the storage system to make the LED blink so that the user can identify the storage system in the rack.
  • The identification LED blinks on the control panel of the chassis, to allow users to find the storage system when looking at the front of the rack.
  • The identification LEDs on the storage controllers also blink, which allows users to find the storage system when looking at the back of the rack.

Item 4 – Hard drives
  Can have up to 30 internal 2.5-inch SAS hard drives

SCv3000 and SCv3020 Storage System Drives
The SCv3000 and SCv3020 storage system supports Dell Enterprise Plus Value drives.
The drives in an SCv3000 storage system are installed horizontally. The drives in an SCv3020 storage system are installed vertically.
The indicators on the drives provide status and activity information.

Figure 8. SCv300 and SCv320 Expansion Enclosure Drive Indicators


Item 1 – Drive activity indicator
  • Blinking green – Drive has I/O activity
  • Steady green – Drive is detected and has no faults

Item 2 – Drive status indicator
  • Steady green – Normal operation
  • Blinking green – A command was sent by Dell Storage Manager to the drive to make the LED blink so that users can identify the drive in the rack
  • Blinking amber – Hardware or firmware fault

SCv3000 and SCv3020 Storage System Drive Numbering
The storage system holds up to 16 or 30 drives, which are numbered starting from 0 at the top-left drive. Drive numbers increment from left to right, and then top to bottom, such that the first row of drives is numbered from 0 to 4 from left to right, and the second row of drives is numbered from 5 to 9 from left to right.
Dell Storage Manager identifies drives as XX-YY, where XX is the unit ID of the storage system and YY is the drive position inside the storage system.
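For example, under this scheme drive 01-06 is the drive in position 6 of the storage system with unit ID 1. The following is a minimal illustrative sketch of how such an identifier breaks down (the helper function and zero-padded formatting are hypothetical, not part of Dell Storage Manager):

```python
def parse_drive_id(drive_id: str) -> tuple[int, int]:
    """Split a drive identifier of the form XX-YY into the unit ID of the
    enclosure (XX) and the drive position within that enclosure (YY)."""
    unit_id, position = drive_id.split("-")
    return int(unit_id), int(position)

print(parse_drive_id("01-06"))  # (1, 6)
```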

Figure 9. SCv3000 Storage System Drive Numbering

Figure 10. SCv3020 Storage System Drive Numbering

SCv3000 and SCv3020 Storage System Back-Panel View
The back panel of the storage system contains the storage controller indicators and power supply indicators.

Figure 11. SCv3000 and SCv3020 Storage System Back-Panel View


Item 1 – Power supply/cooling fan module (2)
  Contains a 1485 W power supply and fans that provide cooling for the storage system, with AC input to the power supply of 200–240 V. In Dell Storage Manager, the power supply/cooling fan module on the left side of the back panel is Power Supply 1 and the power supply/cooling fan module on the right side of the back panel is Power Supply 2.

Item 2 – Storage controller (2)
  Each storage controller contains:
  • Optional 10 GbE iSCSI mezzanine card with four SFP+ ports or four RJ45 10GBASE-T ports
  • One expansion slot for a front-end I/O card: Fibre Channel, iSCSI, or SAS
  • SAS expansion ports – Two 12 Gbps SAS ports for back-end connectivity to expansion enclosures
  • USB port – Single USB 2.0 port
  • MGMT port – Embedded Ethernet port for system management
  • Serial port – Micro-USB serial port used for an alternative initial configuration and support-only functions

Item 3 – Power switch (2)
  Controls power for the storage system. Each power supply/cooling fan module has one power switch.

Item 4 – Power supply/cooling fan module LED handle
  The handle of the power supply/cooling fan module indicates the DC power status of the power supply and the fans.
  • Not lit – No power
  • Solid green – Power supply has valid power source and is operational
  • Blinking amber – Error condition in the power supply
  • Blinking green – Firmware is being updated
  • Blinking green then off – Power supply mismatch

Item 5 – Power socket (2)
  Accepts the following standard computer power cords:
  • IEC320-C13 for deployments worldwide
  • IEC60320-C19 for deployments in Japan

Power Supply and Cooling Fan Modules
The SCv3000 and SCv3020 storage system supports two hot-swappable power supply/cooling fan modules.
The cooling fans and the power supplies are integrated into the power supply/cooling fan module and cannot be replaced separately.
If one power supply/cooling fan module fails, the second module continues to provide power to the storage system.
NOTE: When a power supply/cooling fan module fails, the cooling fan speed in the remaining module increases
significantly to provide adequate cooling. The cooling fan speed decreases gradually when a new power supply/cooling
fan module is installed.
CAUTION: A single power supply/cooling fan module can be removed from a powered on storage system for no more
than 90 seconds. If a power supply/cooling fan module is removed for longer than 90 seconds, the storage system might
shut down automatically to prevent damage.


SCv3000 and SCv3020 Storage Controller Features and Indicators
The SCv3000 and SCv3020 storage system includes two storage controllers in two interface slots.

SCv3000 and SCv3020 Storage Controller
The following figure shows the features and indicators on the storage controller.

Figure 12. SCv3000 and SCv3020 Storage Controller

Item 1 – I/O card slot
  Fibre Channel I/O card – Ports are numbered 1 to 4 from left to right
  • The LEDs on the 16 Gb Fibre Channel ports have the following meanings:
    – All off – No power
    – All on – Booting up
    – Blinking amber – 4 Gbps activity
    – Blinking green – 8 Gbps activity
    – Blinking yellow – 16 Gbps activity
    – Blinking amber and yellow – Beacon
    – All blinking (simultaneous) – Firmware initialized
    – All blinking (alternating) – Firmware fault
  • The LEDs on the 32 Gb Fibre Channel ports have the following meanings:
    – All off – No power
    – All on – Booting up
    – Blinking amber – 8 Gbps activity
    – Blinking green – 16 Gbps activity
    – Blinking yellow – 32 Gbps activity
    – Blinking amber and yellow – Beacon
    – All blinking (simultaneous) – Firmware initialized
    – All blinking (alternating) – Firmware fault
  iSCSI I/O card – Ports are numbered 1 to 4 from left to right
  NOTE: The iSCSI I/O card supports Data Center Bridging (DCB), but the mezzanine card does not support DCB.
  • The LEDs on the iSCSI ports have the following meanings:
    – Off – No power
    – Steady Amber – Link
    – Blinking Green – Activity
  SAS I/O card – Ports are numbered 1 to 4 from left to right
  The SAS ports on SAS I/O cards do not have LEDs.

Item 2 – Identification LED
  Blinking blue continuously – A command was sent by Dell Storage Manager to the storage system to make the LED blink so that users can identify the storage system in the rack.
  The identification LED blinks on the control panel of the chassis, which allows users to find the storage system when looking at the front of the rack.
  The identification LEDs on the storage controllers also blink, which allows users to find the storage system when looking at the back of the rack.

Item 3 – Cache to Flash (C2F)
  • Off – Running normally
  • Blinking green – Running on battery (shutting down)

Item 4 – Health status
  • Off – Unpowered
  • Blinking amber
    – Slow blinking amber (2s on, 1s off) – Controller hardware fault was detected. Use Dell Storage Manager to view specific details about the hardware fault.
    – Fast blinking amber (4x per second) – Power good and the pre-operating system is booting
  • Blinking green
    – Slow blinking green (2s on, 1s off) – Operating system is booting
    – Blinking green (1s on, 1s off) – System is in safe mode
    – Fast blinking green (4x per second) – Firmware is updating
  • Solid green – Running normal operation

Item 5 – Serial port (micro USB)
  Used under the supervision of Dell Technical Support to troubleshoot and support systems.

Item 6 – MGMT port
  Ethernet port used for storage system management and access to Dell Storage Manager.
  Two LEDs with the port indicate link status (left LED) and activity status (right LED):
  • Link and activity indicators are off – Not connected to the network
  • Link indicator is green – The NIC is connected to a valid network at its maximum port speed.
  • Link indicator is amber – The NIC is connected to a valid network at less than its maximum port speed.
  • Activity indicator is blinking green – Network data is being sent or received.

Item 7 – USB port
  One USB 2.0 connector that is used for SupportAssist diagnostic files when the storage system is not connected to the Internet.

Item 8 – Mini-SAS (ports 1 and 2)
  Back-end expansion ports 1 and 2. LEDs with the ports indicate connectivity information between the storage controller and the expansion enclosure:
  • Steady green indicates the SAS connection is working properly.
  • Steady yellow indicates the SAS connection is not working properly.

Item 9 – Mezzanine card
  The iSCSI ports on the mezzanine card are either 10 GbE SFP+ ports or 1 GbE/10 GbE RJ45 ports.
  The LEDs on the iSCSI ports have the following meanings:
  • Off – No connectivity
  • Steady green, left LED – Link (full speed)
  • Steady amber, left LED – Link (degraded speed)
  • Blinking green, right LED – Activity
  NOTE: The mezzanine card does not support DCB.

Expansion Enclosure Overview
Expansion enclosures allow the data storage capabilities of the SCv3000 and SCv3020 storage system to be expanded beyond the
30 internal drives in the storage system chassis.
•

The SCv300 is a 2U expansion enclosure that supports up to 12 3.5‐inch hard drives installed in a four‐column, three-row configuration.

•

The SCv320 is a 2U expansion enclosure that supports up to 24 2.5‐inch hard drives installed vertically side by side.

•

The SCv360 is a 4U expansion enclosure that supports up to 60 3.5‐inch hard drives installed in a twelve‐column, five-row configuration.

SCv300 and SCv320 Expansion Enclosure Front-Panel Features and Indicators
The front panel shows the expansion enclosure status and power supply status.

Figure 13. SCv300 Front-Panel Features and Indicators

Figure 14. SCv320 Front-Panel Features and Indicators

Item 1 – System identification button
  The system identification button on the front control panel can be used to locate a particular expansion enclosure within a rack. When the button is pressed, the system status indicators on the control panel and the Enclosure Management Module (EMM) blink blue until the button is pressed again.

Item 2 – Power LED
  The power LED lights when at least one power supply unit is supplying power to the expansion enclosure.

Item 3 – Expansion enclosure status LED
  The expansion enclosure status LED lights when the expansion enclosure power is on.
  • Solid blue during normal operation.
  • Blinks blue when a host server is identifying the expansion enclosure or when the system identification button is pressed.
  • Blinks amber or remains solid amber for a few seconds and then turns off when the EMMs are starting or resetting.
  • Blinks amber for an extended time when the expansion enclosure is in a warning state.
  • Remains solid amber when the expansion enclosure is in the fault state.

Item 4 – Hard disk drives
  • SCv300 – Up to 12 3.5-inch SAS hot-swappable hard disk drives.
  • SCv320 – Up to 24 2.5-inch SAS hot-swappable hard disk drives.

SCv300 and SCv320 Expansion Enclosure Drives
Dell Enterprise Plus Value drives are the only drives that can be installed in SCv300 and SCv320 expansion enclosures. If a non-Dell Enterprise Plus Value drive is installed, the Storage Center prevents the drive from being managed.
The drives in an SCv300 expansion enclosure are installed horizontally.

Figure 15. SCv300 Expansion Enclosure Drive Indicators

The drives in an SCv320 expansion enclosure are installed vertically.

Figure 16. SCv320 Expansion Enclosure Drive Indicators

Item 1 – Drive activity indicator
  • Blinking green – Drive activity
  • Steady green – Drive is detected and has no faults

Item 2 – Drive status indicator
  • Steady green – Normal operation
  • Blinking green (on 1 sec. / off 1 sec.) – Drive identification is enabled
  • Steady amber – Drive is safe to remove
  • Off – No power to the drive

SCv300 and SCv320 Expansion Enclosure Drive Numbering
The Storage Center identifies drives as XX-YY, where XX is the unit ID of the expansion enclosure that contains the drive, and YY is
the drive position inside the expansion enclosure.
An SCv300 holds up to 12 drives, which are numbered from left to right in rows starting from 0.


Figure 17. SCv300 Drive Numbering

An SCv320 holds up to 24 drives, which are numbered from left to right starting from 0.

Figure 18. SCv320 Drive Numbering
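The XX-YY format is easy to compute. The following minimal sketch (illustrative only, not a Dell utility; the zero-padding shown is an assumption made for readability) maps an enclosure unit ID and a zero-based slot index to the identifier form described above:

# Illustrative sketch only, not a Dell utility: formats the XX-YY drive
# identifiers described above from an enclosure unit ID and a zero-based slot.
# The zero-padding is an assumption; the actual display format may differ.
SLOT_COUNTS = {"SCv300": 12, "SCv320": 24}  # drives per enclosure model

def drive_identifier(unit_id: int, slot: int, model: str) -> str:
    """Return the XX-YY identifier the Storage Center would use for a drive."""
    if not 0 <= slot < SLOT_COUNTS[model]:
        raise ValueError(f"{model} slots are numbered 0 to {SLOT_COUNTS[model] - 1}")
    return f"{unit_id:02d}-{slot:02d}"

# Example: the drive in slot 5 of the enclosure with unit ID 2 -> "02-05"
print(drive_identifier(2, 5, "SCv320"))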

SCv300 and SCv320 Expansion Enclosure Back-Panel Features and Indicators
The back panel provides controls to power up and reset the expansion enclosure, indicators to show the expansion enclosure status,
and connections for back-end cabling.

Figure 19. SCv300 and SCv320 Expansion Enclosure Back Panel Features and Indicators

1. Power supply unit and cooling fan module (PS1) – 600 W power supply
2. Enclosure management module (EMM 0) – The EMM provides a data path between the expansion enclosure and the storage controllers. The EMM also provides the management functions for the expansion enclosure.
3. Enclosure management module (EMM 1) – The EMM provides a data path between the expansion enclosure and the storage controllers. The EMM also provides the management functions for the expansion enclosure.
4. Information tag – A slide-out label panel that records system information such as the Service Tag.
5. Power switches (2) – Control power for the expansion enclosure. There is one switch for each power supply.
6. Power supply unit and cooling fan module (PS2) – 600 W power supply


SCv300 and SCv320 Expansion Enclosure EMM Features and Indicators
The SCv300 and SCv320 expansion enclosure includes two enclosure management modules (EMMs) in two interface slots.

Figure 20. SCv300 and SCv320 Expansion Enclosure EMM Features and Indicators

1. SAS port status (1–4)
   • Green – All the links to the port are connected
   • Amber – One or more links are not connected
   • Off – Expansion enclosure is not connected
2. EMM status indicator
   • On steady green – Normal operation
   • Amber – Expansion enclosure did not boot or is not properly configured
   • Blinking green – Automatic update in progress
   • Blinking amber (two times per sequence) – Expansion enclosure is unable to communicate with other expansion enclosures
   • Blinking amber (four times per sequence) – Firmware update failed
   • Blinking amber (five times per sequence) – Firmware versions are different between the two EMMs
3. SAS ports 1–4 (Input or Output) – Provide SAS connections for cabling the storage controller to the next expansion enclosure in the chain (single-port, redundant, and multichain configurations).
4. USB Mini-B (serial debug port) – Not for customer use
5. Unit ID display – Displays the expansion enclosure ID

SCv360 Expansion Enclosure Front-Panel Features and Indicators
The SCv360 front panel shows the expansion enclosure status and power supply status.

Figure 21. SCv360 Front-Panel Features and Indicators


1. Power LED – The power LED lights when at least one power supply unit is supplying power to the expansion enclosure.
2. Expansion enclosure status LED – The expansion enclosure status LED indicates when the system is being identified or when the expansion enclosure is in the fault state.
   • Off during normal operation.
   • Blinks blue when a host server is identifying the expansion enclosure or when the system identification button is pressed.
   • Remains solid blue when the expansion enclosure is in the fault state.

SCv360 Expansion Enclosure Drives
Dell Enterprise Plus drives are the only drives that can be installed in SCv360 expansion enclosures. If a non-Dell Enterprise Plus
drive is installed, the Storage Center prevents the drive from being managed.
The drives in an SCv360 expansion enclosure are installed horizontally.

Figure 22. SCv360 Drive Indicators

1. Drive activity indicator
   • Blinking blue – Drive activity
   • Steady blue – Drive is detected and has no faults
2. Drive status indicator
   • Off – Normal operation
   • Blinking amber (on 1 sec. / off 1 sec.) – Drive identification is enabled
   • Steady amber – Drive has a fault

SCv360 Expansion Enclosure Drive Numbering
The Storage Center identifies drives as XX-YY, where XX is the unit ID of the expansion enclosure that contains the drive, and YY is
the drive position inside the expansion enclosure.
An SCv360 holds up to 60 drives, which are numbered from left to right in rows starting from 0.


Figure 23. SCv360 Drive Numbering

SCv360 Expansion Enclosure Back Panel Features and Indicators
The SCv360 back panel provides controls to power up and reset the expansion enclosure, indicators to show the expansion
enclosure status, and connections for back-end cabling.

Figure 24. SCv360 Back Panel Features and Indicators


1. Power supply unit and cooling fan module (PS1) – Contains redundant 900 W power supplies and fans that provide cooling for the expansion enclosure.
2. Power supply indicators (AC power indicator for power supply 1, power supply/cooling fan indicator, and AC power indicator for power supply 2)
   AC power indicators:
   • Green – Normal operation. The power supply module is supplying AC power to the expansion enclosure.
   • Off – The power switch is off, the power supply is not connected to AC power, or the power supply has a fault condition.
   • Flashing green – AC power is applied but is out of spec.
   Power supply/cooling fan indicator:
   • Amber – A power supply/cooling fan fault is detected.
   • Off – Normal operation.
3. Power supply unit and cooling fan module (PS2) – Contains redundant 900 W power supplies and fans that provide cooling for the expansion enclosure.
4. Enclosure management module 1 – EMMs provide the data path and management functions for the expansion enclosure.
5. Enclosure management module 2 – EMMs provide the data path and management functions for the expansion enclosure.

SCv360 Expansion Enclosure EMM Features and Indicators
The SCv360 includes two enclosure management modules (EMMs) in two interface slots.

Figure 25. SCv360 EMM Features and Indicators

1. EMM status indicator
   • Off – Normal operation
   • Amber – A fault has been detected
   • Blinking amber (two times per sequence) – Expansion enclosure is unable to communicate with other expansion enclosures
   • Blinking amber (four times per sequence) – Firmware update failed
   • Blinking amber (five times per sequence) – Firmware versions are different between the two EMMs
2. SAS port status indicator
   • Blue – All the links to the port are connected
   • Blinking blue – One or more links are not connected
   • Off – Expansion enclosure is not connected
3. Unit ID display – Displays the expansion enclosure ID
4. EMM power indicator
   • Blue – Normal operation
   • Off – Power is not connected
5. SAS ports 1–4 (Input or Output) – Provide SAS connections for cabling the storage controller to the next expansion enclosure in the chain (single-port, redundant, and multichain configurations).


2
Install the Storage Center Hardware
This section describes how to unpack the Storage Center equipment, prepare for the installation, mount the equipment in a rack, and
install the drives.

Unpacking Storage Center Equipment
Unpack the storage system and identify the items in your shipment.

Figure 26. SCv3000 and SCv3020 Storage System Components

1. Documentation
2. Storage system
3. Rack rails
4. USB cables (2)
5. Power cables (2)
6. Front bezel

Safety Precautions
Always follow these safety precautions to avoid injury and damage to Storage Center equipment.
If equipment described in this section is used in a manner not specified by Dell, the protection provided by the equipment could be
impaired. For your safety and protection, observe the rules described in the following sections.
NOTE: See the safety and regulatory information that shipped with each Storage Center component. Warranty
information is included within this document or as a separate document.

Installation Safety Precautions
Follow these safety precautions:
• Dell recommends that only individuals with rack-mounting experience install the storage system in a rack.
• Make sure the storage system is always fully grounded to prevent damage from electrostatic discharge.
• When handling the storage system hardware, use an electrostatic wrist guard (not included) or a similar form of protection.

The chassis must be mounted in a rack. The following safety requirements must be considered when the chassis is being mounted:
• The rack construction must be capable of supporting the total weight of the installed chassis. The design should incorporate stabilizing features suitable to prevent the rack from tipping or being pushed over during installation or in normal use.
• When loading a rack with chassis, fill from the bottom up; empty from the top down.
• To avoid danger of the rack toppling over, slide only one chassis out of the rack at a time.

Electrical Safety Precautions
Always follow electrical safety precautions to avoid injury and damage to Storage Center equipment.
WARNING: Disconnect power from the storage system when removing or installing components that are not hot-swappable. When disconnecting power, first power down the storage system using the Storage Manager and then unplug the power cords from all the power supplies in the storage system.
• Provide a suitable power source with electrical overload protection. All Storage Center components must be grounded before applying power. Make sure that a safe electrical earth connection can be made to power supply cords. Check the grounding before applying power.
• The plugs on the power supply cords are used as the main disconnect device. Make sure that the socket outlets are located near the equipment and are easily accessible.
• Know the locations of the equipment power switches and the room's emergency power-off switch, disconnection switch, or electrical outlet.
• Do not work alone when working with high-voltage components.
• Use rubber mats specifically designed as electrical insulators.
• Do not remove covers from the power supply unit. Disconnect the power connection before removing a power supply from the storage system.
• Do not remove a faulty power supply unless you have a replacement model of the correct type ready for insertion. A faulty power supply must be replaced with a fully operational power supply module within 24 hours.
• Unplug the storage system chassis before you move it or if you think it has become damaged in any way. When powered by multiple AC sources, disconnect all power sources for complete isolation.

Electrostatic Discharge Precautions
Always follow electrostatic discharge (ESD) precautions to avoid injury and damage to Storage Center equipment.
Electrostatic discharge (ESD) is generated by two objects with different electrical charges coming into contact with each other. The
resulting electrical discharge can damage electronic components and printed circuit boards. Follow these guidelines to protect your
equipment from ESD:
• Dell recommends that you always use a static mat and static strap while working on components in the interior of the chassis.
• Observe all conventional ESD precautions when handling plug-in modules and components.
• Use a suitable ESD wrist or ankle strap.
• Avoid contact with backplane components and module connectors.
• Keep all components and printed circuit boards (PCBs) in their antistatic bags until ready for use.

General Safety Precautions
Always follow general safety precautions to avoid injury and damage to Storage Center equipment.
• Keep the area around the storage system chassis clean and free of clutter.
• Place any system components that have been removed away from the storage system chassis or on a table so that they are not in the way of other people.
• While working on the storage system chassis, do not wear loose clothing such as neckties and unbuttoned shirt sleeves. These items can come into contact with electrical circuits or be pulled into a cooling fan.
• Remove any jewelry or metal objects from your body. These items are excellent metal conductors that can create short circuits and harm you if they come into contact with printed circuit boards or areas where power is present.
• Do not lift the storage system chassis by the handles of the power supply units (PSUs). They are not designed to hold the weight of the entire chassis, and the chassis cover could become bent.
• Before moving the storage system chassis, remove the PSUs to minimize weight.
• Do not remove drives until you are ready to replace them.
NOTE: To ensure proper storage system cooling, hard drive blanks must be installed in any hard drive slot that is not occupied.

Prepare the Installation Environment
Make sure that the environment is ready for installing the Storage Center.
• Rack Space – The rack must have enough space to accommodate the storage system chassis, expansion enclosures, and switches.
• Power – Power must be available in the rack, and the power delivery system must meet the requirements of the Storage Center. AC input to the power supply is 200–240 V.
• Connectivity – The rack must be wired for connectivity to the management network and any networks that carry front-end I/O from the Storage Center to servers.

Install the Storage System in a Rack
Install the storage system and other Storage Center system components in a rack.
About this task
Mount the storage system and expansion enclosures in a manner that allows for expansion in the rack and prevents the rack from
becoming top‐heavy.
The SCv3000 and SCv3020 storage system ships with a ReadyRails II kit. The rails come in two different styles: tool-less and tooled.
Follow the detailed installation instructions located in the rail kit box for your particular style of rails.
NOTE: Dell recommends using two people to install the rails, one at the front of the rack and one at the back.
Steps
1. Position the left and right rail end pieces labeled FRONT facing inward.
2. Align each end piece with the top and bottom holes of the appropriate U space.

Figure 27. Attach the Rails to the Rack

3. Engage the back end of the rail until it fully seats and the latch locks into place.
4. Engage the front end of the rail until it fully seats and the latch locks into place.
5. Align the system with the rails and slide the storage system into the rack.

Figure 28. Slide the Storage System Onto the Rails

6. Lift the latches on each side of the front panel and tighten the screws to the rack.

Figure 29. Tighten the Screws

If the Storage Center system includes expansion enclosures, mount the expansion enclosures in the rack. See the instructions included with the expansion enclosure for detailed steps.


3
Connect the Front-End Cabling
Front-end cabling refers to the connections between the storage system and external devices such as host servers or another
Storage Center.
Dell recommends connecting the storage system to host servers using the most redundant option available. In addition, make sure that the speed of the HBAs in the storage controller matches the speed of the host server.

Types of Redundancy for Front-End Connections
Front-end redundancy is achieved by eliminating single points of failure that could cause a server to lose connectivity to the Storage
Center.
Depending on how the Storage Center is cabled and configured, the following types of redundancy are available.

Port Redundancy
If a port becomes unavailable because it is disconnected or a hardware failure has occurred, the port moves over to another port in
the same fault domain.

Storage Controller Redundancy
To allow for storage controller redundancy, a front-end port on each storage controller must be connected to the same switch or
server.
If a storage controller becomes unavailable, the front-end ports on the offline storage controller move over to the ports (in the same
fault domain) on the available storage controller.

Multipath I/O (MPIO)
MPIO allows a server to use multiple paths for I/O if they are available.
MPIO software offers redundancy at the path level. MPIO typically operates in a round-robin manner by sending packets first down
one path and then the other. If a path becomes unavailable, MPIO software continues to send packets down the functioning path.
NOTE: MPIO is operating-system specific, and it loads as a driver on the server or it is part of the server operating
system.

MPIO Behavior
The server must have at least two FC or iSCSI ports to use MPIO.
When MPIO is configured, a server can send I/O to multiple ports on the same storage controller.
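To make the round-robin behavior described above concrete, here is a minimal illustrative sketch (not actual MPIO driver code; the path names are placeholders) that cycles I/O across the available paths and keeps sending I/O down the surviving path after a failure:

from itertools import cycle

# Illustrative sketch of round-robin multipath behavior (not real MPIO code).
class RoundRobinPaths:
    def __init__(self, paths):
        self._all = list(paths)      # every configured path
        self.healthy = set(paths)    # paths currently available
        self._cycle = cycle(self._all)

    def next_path(self):
        """Return the next healthy path, skipping any that have failed."""
        for _ in range(len(self._all)):
            path = next(self._cycle)
            if path in self.healthy:
                return path
        raise RuntimeError("no paths available")

    def fail(self, path):
        self.healthy.discard(path)   # I/O continues on the remaining paths

mpio = RoundRobinPaths(["controller1:port1", "controller2:port1"])
print(mpio.next_path())   # controller1:port1
print(mpio.next_path())   # controller2:port1
mpio.fail("controller1:port1")
print(mpio.next_path())   # controller2:port1 (the surviving path)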

MPIO Configuration Instructions for Host Servers
If a Dell Storage Manager wizard is used to configure host server access to the Storage Center, the Dell Storage Manager attempts
to automatically configure MPIO with best practices.
NOTE: Compare the host server settings applied by the Dell Storage Manager wizard against the latest Dell Storage
Center Best Practices documents (listed in the following table) on the Dell TechCenter site (http://
en.community.dell.com/techcenter/storage/).


Table 1. MPIO Configuration Documents

Linux:
• Dell Storage Center with Red Hat Enterprise Linux (RHEL) 6x Best Practices
• Dell Storage Center with Red Hat Enterprise Linux (RHEL) 7x Best Practices
• Dell Compellent Best Practices: Storage Center with SUSE Linux Enterprise Server 11

VMware vSphere:
• Dell Storage Center Best Practices with VMware vSphere 5.x
• Dell Storage Center Best Practices with VMware vSphere 6.x

Windows Server 2008, 2008 R2, 2012, and 2012 R2:
• Dell Storage Center: Microsoft Multipath IO Best Practices

Connecting to Host Servers with Fibre Channel HBAs
A storage system with Fibre Channel front-end ports connects to one or more FC switches, which connect to one or more host
servers with Fibre Channel HBAs.

Fibre Channel Zoning
When using Fibre Channel for front-end connectivity, zones must be established to ensure that storage is visible to the servers. Use
the zoning concepts discussed in this section to plan the front-end connectivity before starting to cable the storage system.
Dell recommends creating zones using a single initiator host port and multiple Storage Center ports.

WWN Zoning Guidelines
When WWN zoning is configured, a device may reside on any port, or change physical ports and still be visible, because the switch is
seeking a WWN.
Use the following guidelines for WWN zoning:
• Include all Storage Center virtual World Wide Port Names (WWPNs) in a single zone.
• Include all Storage Center physical World Wide Port Names (WWPNs) in a single zone.
• For each host server HBA port, create a zone that includes the HBA port WWPN and multiple Storage Center virtual WWPNs on the same switch.
• For Fibre Channel replication from Storage Center system A to Storage Center system B:
   – Include all Storage Center physical WWPNs from system A and system B in a single zone.
   – Include all Storage Center physical WWPNs of system A and the virtual WWPNs of system B on the particular fabric.
   – Include all Storage Center physical WWPNs of system B and the virtual WWPNs of system A on the particular fabric.
NOTE: Some ports may not be used or dedicated for replication; however, ports that are used must be in these zones.

Fibre Channel Replication
Storage Center System A (Virtual Port Mode) to Storage Center System B (Virtual Port Mode):
• Include all Storage Center physical WWPNs from system A and system B in a single zone.
• Include all Storage Center physical WWPNs of system A and the virtual WWPNs of system B on the particular fabric.
• Include all Storage Center physical WWPNs of system B and the virtual WWPNs of system A on the particular fabric.
NOTE: Some ports may not be used or dedicated for replication; however, ports that are used must be in these zones.
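To make the zoning guidelines concrete, the following hypothetical sketch assembles zone member lists according to the single-initiator rule above. All WWPNs and zone names are placeholders, and actual zoning is configured in your Fibre Channel switch management tools, not with this code:

# Hypothetical sketch: build zone member lists per the guidelines above.
# All WWPNs and names are placeholders; real zoning is done on the FC switch.
sc_virtual_wwpns = ["50:00:d3:10:00:00:00:01", "50:00:d3:10:00:00:00:02"]
sc_physical_wwpns = ["50:00:d3:10:00:00:01:01", "50:00:d3:10:00:00:01:02"]
host_hba_wwpns = ["21:00:00:24:ff:00:00:01", "21:00:00:24:ff:00:00:02"]

zones = {
    # One zone containing all Storage Center virtual WWPNs.
    "sc_virtual": list(sc_virtual_wwpns),
    # One zone containing all Storage Center physical WWPNs.
    "sc_physical": list(sc_physical_wwpns),
}
# One zone per host HBA port: that port plus the Storage Center virtual WWPNs.
for i, hba in enumerate(host_hba_wwpns, start=1):
    zones[f"host_hba_{i}"] = [hba] + sc_virtual_wwpns

for name, members in zones.items():
    print(name, "->", ", ".join(members))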


Cable the Storage System with 2-Port Fibre Channel IO Cards
Connect the Fibre Channel ports on the storage controllers to host servers with Fibre Channel HBAs. The Fibre Channel ports of the
storage controllers connect to the host servers through the Fibre Channel switches in the SAN.
About this task

Figure 30. Connect the Storage System to Host Servers with Fibre Channel HBAs

1. Host server
2. Host server
3. Fibre Channel switch 1 (member of fault domain 1)
4. Fibre Channel switch 2 (member of fault domain 2)
5. SCv3000 and SCv3020 storage system

Steps
1. Connect each host server to both Fibre Channel fabrics.
2. Connect Storage Center fault domain 1 (shown in orange) to fabric 1.
   • Connect port 1 of the Fibre Channel HBA in the top storage controller to switch 1.
   • Connect port 1 of the Fibre Channel HBA in the bottom storage controller to switch 1.
3. Connect Storage Center fault domain 2 (shown in blue) to fabric 2.
   • Connect port 2 of the Fibre Channel HBA in the top storage controller to switch 2.
   • Connect port 2 of the Fibre Channel HBA in the bottom storage controller to switch 2.

Cable the Storage System with 4-Port Fibre Channel IO Cards
Connect the Fibre Channel ports on the storage controllers to host servers with Fibre Channel HBAs. The Fibre Channel ports of the
storage controllers connect to the host servers through the Fibre Channel switches in the SAN.
About this task

Figure 31. Connect the Storage System to Fibre Channel Host Servers

1. Host server
2. Host server
3. Fibre Channel switch 1 (member of fault domain 1)
4. Fibre Channel switch 2 (member of fault domain 2)
5. SCv3000 and SCv3020 storage system

Steps
1. Connect each host server to both Fibre Channel fabrics.
2. Connect fault domain 1 (shown in orange) to fabric 1.
   • Connect port 1 of the Fibre Channel HBA in the top storage controller to switch 1.
   • Connect port 3 of the Fibre Channel HBA in the top storage controller to switch 1.
   • Connect port 1 of the Fibre Channel HBA in the bottom storage controller to switch 1.
   • Connect port 3 of the Fibre Channel HBA in the bottom storage controller to switch 1.
3. Connect fault domain 2 (shown in blue) to fabric 2.
   • Connect port 2 of the Fibre Channel HBA in the top storage controller to switch 2.
   • Connect port 4 of the Fibre Channel HBA in the top storage controller to switch 2.
   • Connect port 2 of the Fibre Channel HBA in the bottom storage controller to switch 2.
   • Connect port 4 of the Fibre Channel HBA in the bottom storage controller to switch 2.

Labeling the Front-End Cables
Label the front-end cables to indicate the storage controller and port to which they are connected.
Prerequisite
Locate the front-end cable labels that shipped with the storage system.
About this task
Apply cable labels to both ends of each cable that connects a storage controller to a front-end fabric or network, or directly to host
servers.


Steps
1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 32. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 33. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Connecting to Host Servers with iSCSI HBAs or Network Adapters
A storage system with iSCSI front-end ports connects to one or more Ethernet switches, which connect to one or more host
servers with iSCSI HBAs or network adapters.

Cable the Storage System with 2–Port iSCSI IO Cards
Connect the iSCSI ports on the storage controllers to host servers with iSCSI HBAs. The iSCSI ports of the storage controllers
connect to the host servers through the Ethernet switches in the SAN.
About this task

Figure 34. Connect the Storage System to Host Servers with iSCSI HBAs

1. Host server
2. Host server
3. Ethernet switch 1 (fault domain 1)
4. Ethernet switch 2 (fault domain 2)
5. SCv3000 and SCv3020 storage system

Steps
1. Connect each host server to both iSCSI networks.
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.
   • Connect port 1 of the iSCSI HBA in the top storage controller to switch 1.
   • Connect port 1 of the iSCSI HBA in the bottom storage controller to switch 1.
3. Connect iSCSI fault domain 2 (shown in blue) to iSCSI network 2.
   • Connect port 2 of the iSCSI HBA in the top storage controller to switch 2.
   • Connect port 2 of the iSCSI HBA in the bottom storage controller to switch 2.

Cable the Storage System with 4–Port iSCSI IO Cards
Connect the iSCSI ports on the storage controllers to host servers with iSCSI HBAs. The iSCSI ports of the storage controllers
connect to the host servers through the Ethernet switches in the SAN.
About this task

Figure 35. Connect the Storage System to Host Servers with iSCSI HBAs

1. Host server
2. Host server
3. Ethernet switch 1 (member of fault domain 1)
4. Ethernet switch 2 (member of fault domain 2)
5. SCv3000 and SCv3020 storage system

Steps
1. Connect each host server to both iSCSI networks.
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.
   • Connect port 1 of the iSCSI HBA in the top storage controller to switch 1.
   • Connect port 3 of the iSCSI HBA in the top storage controller to switch 1.
   • Connect port 1 of the iSCSI HBA in the bottom storage controller to switch 1.
   • Connect port 3 of the iSCSI HBA in the bottom storage controller to switch 1.
3. Connect iSCSI fault domain 2 (shown in blue) to iSCSI network 2.
   • Connect port 2 of the iSCSI HBA in the top storage controller to switch 2.
   • Connect port 4 of the iSCSI HBA in the top storage controller to switch 2.
   • Connect port 2 of the iSCSI HBA in the bottom storage controller to switch 2.
   • Connect port 4 of the iSCSI HBA in the bottom storage controller to switch 2.

Connect a Storage System to a Host Server Using an iSCSI Mezzanine Card
When connecting to multiple front-end protocols, you can use a mezzanine card to connect to iSCSI hosts. The iSCSI ports on the
mezzanine card connect to the iSCSI host servers through the Ethernet switches in the SAN. The mezzanine card is only used when
the HBA card slot in the storage controller is already populated with an HBA.
About this task
NOTE: The ports on the mezzanine cards are numbered 1 to 4 from left to right.

Figure 36. Connect iSCSI Ports to Host Servers with iSCSI HBAs

1. Host server
2. Host server
3. Ethernet switch 1 (member of fault domain 1)
4. Ethernet switch 2 (member of fault domain 2)
5. SCv3000 and SCv3020 storage system
NOTE: The mezzanine card is only used when the HBA card slot in the storage controller is already populated with an
HBA. Cabling for the initial HBA card is not shown in the diagram above.

To connect the iSCSI host server to iSCSI networks:
Steps
1. Connect each iSCSI host server to both iSCSI networks.
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.
   • Connect port 1 of the mezzanine card in the top storage controller to switch 1.
   • Connect port 3 of the mezzanine card in the top storage controller to switch 1.
   • Connect port 1 of the mezzanine card in the bottom storage controller to switch 1.
   • Connect port 3 of the mezzanine card in the bottom storage controller to switch 1.
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.
   • Connect port 2 of the mezzanine card in the top storage controller to switch 2.
   • Connect port 4 of the mezzanine card in the top storage controller to switch 2.
   • Connect port 2 of the mezzanine card in the bottom storage controller to switch 2.
   • Connect port 4 of the mezzanine card in the bottom storage controller to switch 2.

Labeling the Front-End Cables
Label the front-end cables to indicate the storage controller and port to which they are connected.
Prerequisite
Locate the pre-made front-end cable labels that shipped with the storage system.
About this task
Apply cable labels to both ends of each cable that connects a storage controller to a front-end fabric or network, or directly to host
servers.
Steps
1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 37. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 38. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Connecting to Host Servers with SAS HBAs
An SCv3000 and SCv3020 storage system with front-end SAS ports connects directly to host servers with SAS HBAs.

Cable the Storage System with 4-Port SAS HBAs to Host Servers with One SAS HBA per
Server
A storage system with four front-end SAS ports on each storage controller can connect to up to four host servers, if each host
server has one SAS HBA with two ports.
About this task
This configuration includes four fault domains spread across both storage controllers. The storage controllers are connected to each
host server using two SAS connections.
If a storage controller becomes unavailable, all of the standby paths on the other storage controller become active.
Steps
1. Connect fault domain 1 (shown in orange) to host server 1.
   a. Connect a SAS cable from storage controller 1: port 1 to the SAS HBA on host server 1.
   b. Connect a SAS cable from storage controller 2: port 1 to the SAS HBA on host server 1.
2. Connect fault domain 2 (shown in blue) to host server 2.
   a. Connect a SAS cable from storage controller 1: port 2 to the SAS HBA on host server 2.
   b. Connect a SAS cable from storage controller 2: port 2 to the SAS HBA on host server 2.
3. Connect fault domain 3 (shown in gray) to host server 3.
   a. Connect a SAS cable from storage controller 1: port 3 to the SAS HBA on host server 3.
   b. Connect a SAS cable from storage controller 2: port 3 to the SAS HBA on host server 3.
4. Connect fault domain 4 (shown in red) to host server 4.
   a. Connect a SAS cable from storage controller 1: port 4 to the SAS HBA on host server 4.
   b. Connect a SAS cable from storage controller 2: port 4 to the SAS HBA on host server 4.

Example

Figure 39. Storage System with Two 4-Port SAS Storage Controllers Connected to Four Host Servers with One SAS HBA per Server

Next step
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Dell Storage Center Best Practices document on the Dell TechCenter site
(http://en.community.dell.com/techcenter/storage/).

Labeling the Front-End Cables
Label the front-end cables to indicate the storage controller and port to which they are connected.
Prerequisite
Locate the pre-made front-end cable labels that shipped with the storage system.
About this task
Apply cable labels to both ends of each cable that connects a storage controller to a front-end fabric or network, or directly to host
servers.
Steps
1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 40. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 41. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Attach Host Servers (Fibre Channel)
Install the Fibre Channel host bus adapters (HBAs), install the drivers, and make sure that the latest supported firmware is installed.
About this task
• Contact your solution provider for a list of supported Fibre Channel HBAs.
• Refer to the Dell Storage Compatibility Matrix for a list of supported Fibre Channel HBAs.

Steps
1. Install Fibre Channel HBAs in the host servers.
   NOTE: Do not install Fibre Channel HBAs from different vendors in the same server.
2. Install supported drivers for the HBAs and make sure that the HBAs have the latest supported firmware.
3. Use the Fibre Channel cabling diagrams to cable the host servers to the switches. Connecting host servers directly to the storage system without using Fibre Channel switches is not supported.


Attach the Host Servers (iSCSI)
Install the iSCSI host bus adapters (HBAs) or iSCSI network adapters, install the drivers, and make sure that the latest supported
firmware is installed.
• Contact your solution provider for a list of supported iSCSI HBAs.
• Refer to the Dell Storage Compatibility Matrix for a list of supported HBAs.
• If the host server is a Windows or Linux host:
   a. Install the iSCSI HBAs or network adapters dedicated for iSCSI traffic in the host servers.
      NOTE: Do not install iSCSI HBAs or network adapters from different vendors in the same server.
   b. Install supported drivers for the HBAs or network adapters and make sure that the HBAs or network adapters have the latest supported firmware.
   c. Use the host operating system to assign IP addresses for each iSCSI port. The IP addresses must match the subnets for each fault domain (a quick way to check the assignments is sketched after this list).
      CAUTION: Correctly assign IP addresses to the HBAs or network adapters. Assigning IP addresses to the wrong ports can cause connectivity issues.
      NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data path: adapter ports, switches, and storage system.
   d. Use the iSCSI cabling diagrams to cable the host servers to the switches. Connecting host servers directly to the storage system without using Ethernet switches is not supported.
• If the host server is a vSphere host:
   a. Install the iSCSI HBAs or network adapters dedicated for iSCSI traffic in the host servers.
   b. Install supported drivers for the HBAs or network adapters and make sure that the HBAs or network adapters have the latest supported firmware.
   c. If the host uses network adapters for iSCSI traffic, create a VMkernel port for each network adapter (1 VMkernel per vSwitch).
   d. Use the host operating system to assign IP addresses for each iSCSI port. The IP addresses must match the subnets for each fault domain.
      CAUTION: Correctly assign IP addresses to the HBAs or network adapters. Assigning IP addresses to the wrong ports can cause connectivity issues.
      NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data path: adapter ports, switches, and storage system.
   e. If the host uses network adapters for iSCSI traffic, add the VMkernel ports to the iSCSI software initiator.
   f. Use the iSCSI cabling diagrams to cable the host servers to the switches. Connecting host servers directly to the storage system without using Ethernet switches is not supported.
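As referenced in step c above, the following sketch shows one way to sanity-check that each adapter address falls in its fault domain's subnet before troubleshooting cabling. The subnets, port names, and addresses are examples only:

import ipaddress

# Example check: each iSCSI adapter IP must fall in its fault domain's subnet.
# All subnets and addresses below are placeholders for your environment.
fault_domains = {
    "fault_domain_1": ipaddress.ip_network("192.168.10.0/24"),
    "fault_domain_2": ipaddress.ip_network("192.168.20.0/24"),
}
adapter_ips = {
    "host1_port1": ("fault_domain_1", "192.168.10.21"),
    "host1_port2": ("fault_domain_2", "192.168.20.21"),
    "host2_port1": ("fault_domain_1", "192.168.20.22"),  # wrong subnet, flagged below
}

for port, (domain, ip) in adapter_ips.items():
    subnet = fault_domains[domain]
    if ipaddress.ip_address(ip) not in subnet:
        print(f"{port}: {ip} is not in {domain} subnet {subnet} - reassign this port")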

Attach the Host Servers (SAS)
On each host server, install the SAS host bus adapters (HBAs), install the drivers, and make sure that the latest supported firmware
is installed.
About this task
NOTE: Refer to the Dell Storage Compatibility Matrix for a list of supported SAS HBAs.
Steps
1. Install the SAS HBAs in the host servers.
   NOTE: Do not install SAS HBAs from different vendors in the same server.
2. Install supported drivers for the HBAs and make sure that the HBAs have the latest supported firmware installed.
3. Use the SAS cabling diagram to cable the host servers directly to the storage system.
NOTE: If deploying vSphere hosts, configure only one host at a time.

Connect the Management Ports to the Management Network
Connect the management port on each storage controller to the management network.
About this task

Figure 42. Connect the Management Ports to the Management Network

1. Management network
2. Ethernet switch
3. SCv3000 and SCv3020 storage system

Steps
1. Connect the Ethernet switch to the management network.
2. Connect the management ports to the Ethernet switch.
   • Connect the management port on the top storage controller to the Ethernet switch.
   • Connect the management port on the bottom storage controller to the Ethernet switch.

Labeling the Ethernet Management Cables
Label the Ethernet management cables that connect each storage controller to an Ethernet switch.
Prerequisite
Locate the Ethernet management cable labels that shipped with the SCv3000 and SCv3020 storage system.
About this task
Apply cable labels to both ends of each Ethernet management cable.
Steps
1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 43. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 44. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

4
Connect the Back-End Cabling
Back-end cabling refers to the connections between the storage system and expansion enclosures.
An SCv3000 and SCv3020 storage system can be deployed with or without expansion enclosures.
NOTE: When expansion enclosures are not used, there is no need to interconnect the SAS ports of the storage controllers.

Expansion Enclosure Cabling Guidelines
The connection between a storage system and expansion enclosures is referred to as a SAS chain. A SAS chain is made up of two paths, which are referred to as the A side and B side. Each side of the SAS chain starts at a SAS port on one storage controller and ends at a SAS port on the other storage controller.
You can connect multiple SCv300 and SCv320 expansion enclosures to an SCv3000 and SCv3020 by cabling the expansion enclosures in series.

Back-End SAS Redundancy
Use redundant SAS cabling to make sure that an unavailable I/O port or storage controller does not cause a Storage Center outage.
If an I/O port or storage controller becomes unavailable, the Storage Center I/O continues on the redundant path.

Back-End Connections for an SCv3000 and SCv3020 Storage System
With Expansion Enclosures
The SCv3000 and SCv3020 supports up to 16 SCv300 expansion enclosures, up to eight SCv320 expansion enclosures, and up to
three SCv360 expansion enclosures per SAS chain.
The following sections show common cabling between the SCv3000 and SCv3020 and expansion enclosures. Locate the scenario
that most closely matches the Storage Center that you are configuring and follow the instructions, modifying them as necessary.


SCv3000 and SCv3020 and One SCv300 and SCv320 Expansion Enclosure
This figure shows an SCv3000 and SCv3020 storage system cabled to one SCv300 and SCv320 expansion enclosure.

Figure 45. SCv3000 and SCv3020 and One SCv300 and SCv320 Expansion Enclosure

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure

The following table describes the back-end SAS connections from an SCv3000 and SCv3020 storage system to one SCv300 and
SCv320 expansion enclosure.
Table 2. SCv3000 and SCv3020 Connected to One SCv300 and SCv320 Expansion Enclosure

Chain 1: Side A (orange)
1. Storage controller 1: port 1 to the expansion enclosure: top EMM, port 1
2. Expansion enclosure: top EMM, port 2 to storage controller 2: port 2

Chain 1: Side B (blue)
1. Storage controller 2: port 1 to the expansion enclosure: bottom EMM, port 1
2. Expansion enclosure: bottom EMM, port 2 to storage controller 1: port 2

SCv3000 and SCv3020 and Two SCv300 and SCv320 Expansion Enclosures
This figure shows an SCv3000 and SCv3020 storage system cabled to two SCv300 and SCv320 expansion enclosures.

Figure 46. SCv3000 and SCv3020 and Two SCv300 and SCv320 Expansion Enclosures

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5. Expansion enclosure 2

The following table describes the back-end SAS connections from an SCv3000 and SCv3020 storage system to two SCv300 and
SCv320 expansion enclosures.
Table 3. SCv3000 and SCv3020 Connected to Two SCv300 and SCv320 Expansion Enclosures

Chain 1: Side A (orange)
1. Storage controller 1: port 1 to expansion enclosure 1: top EMM, port 1
2. Expansion enclosure 1: top EMM, port 2 to expansion enclosure 2: top EMM, port 1
3. Expansion enclosure 2: top EMM, port 2 to storage controller 2: port 2

Chain 1: Side B (blue)
1. Storage controller 2: port 1 to expansion enclosure 1: bottom EMM, port 1
2. Expansion enclosure 1: bottom EMM, port 2 to expansion enclosure 2: bottom EMM, port 1
3. Expansion enclosure 2: bottom EMM, port 2 to storage controller 1: port 2
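The cabling pattern in Tables 2 and 3 generalizes to longer chains: side A runs from storage controller 1: port 1 through the top EMMs and back to storage controller 2: port 2, and side B mirrors the path through the bottom EMMs. The following illustrative sketch (for SCv300/SCv320 chains) prints that plan for any chain length:

# Illustrative sketch: generate the A/B-side cabling plan for a chain of n
# SCv300/SCv320 expansion enclosures, following the pattern in Tables 2 and 3.
def sas_chain_plan(n_enclosures: int):
    sides = [
        ("A", "top", ("storage controller 1", "port 1"), ("storage controller 2", "port 2")),
        ("B", "bottom", ("storage controller 2", "port 1"), ("storage controller 1", "port 2")),
    ]
    plan = []
    for side, emm, (start_ctrl, start_port), (end_ctrl, end_port) in sides:
        plan.append(f"Side {side}: {start_ctrl}: {start_port} -> enclosure 1: {emm} EMM, port 1")
        for i in range(1, n_enclosures):  # daisy-chain the enclosures in series
            plan.append(f"Side {side}: enclosure {i}: {emm} EMM, port 2 -> enclosure {i + 1}: {emm} EMM, port 1")
        plan.append(f"Side {side}: enclosure {n_enclosures}: {emm} EMM, port 2 -> {end_ctrl}: {end_port}")
    return plan

for connection in sas_chain_plan(2):  # reproduces Table 3
    print(connection)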

SCv3000 and SCv3020 Storage System and One SCv360 Expansion Enclosure
This figure shows an SCv3000 and SCv3020 storage system cabled to one SCv360 expansion enclosure.


Figure 47. SCv3000 and SCv3020 and One SCv360 Expansion Enclosure

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure

The following table describes the back-end SAS connections from an SCv3000 and SCv3020 storage system to one SCv360 expansion enclosure.
Table 4. SCv3000 and SCv3020 and One SCv360 Expansion Enclosure

Chain 1: Side A (orange)
1. Storage controller 1: port 1 to the expansion enclosure: left EMM, port 1
2. Expansion enclosure: left EMM, port 3 to storage controller 2: port 2

Chain 1: Side B (blue)
1. Storage controller 2: port 1 to the expansion enclosure: right EMM, port 1
2. Expansion enclosure: right EMM, port 3 to storage controller 1: port 2

SCv3000 and SCv3020 Storage System and Two SCv360 Expansion Enclosures
This figure shows an SCv3000 and SCv3020 storage system cabled to two SCv360 expansion enclosures.


Figure 48. SCv3000 and SCv3020 and Two SCv360 Expansion Enclosures

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5. Expansion enclosure 2

The following table describes the back-end SAS connections from an SCv3000 and SCv3020 storage system to two SCv360
expansion enclosures.
Table 5. SCv3000 and SCv3020 and Two SCv360 Expansion Enclosures

Chain 1: Side A (orange)
1. Storage controller 1: port 1 to expansion enclosure 1: left EMM, port 1
2. Expansion enclosure 1: left EMM, port 3 to expansion enclosure 2: left EMM, port 1
3. Expansion enclosure 2: left EMM, port 3 to storage controller 2: port 2

Chain 1: Side B (blue)
1. Storage controller 2: port 1 to expansion enclosure 1: right EMM, port 1
2. Expansion enclosure 1: right EMM, port 3 to expansion enclosure 2: right EMM, port 1
3. Expansion enclosure 2: right EMM, port 3 to storage controller 1: port 2

Label the Back-End Cables
Label the back-end cables that interconnect the storage controllers or label the back-end cables that connect the storage system to
the expansion enclosures.
Prerequisite
Locate the cable labels provided with the expansion enclosures.
About this task
Apply cable labels to both ends of each SAS cable to indicate the chain number and side (A or B).
Steps
1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 49. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 50. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

5
Discover and Configure the Storage Center
The Discover and Configure Uninitialized Storage Centers wizard sets up a Storage Center to make it ready for volume creation.
Use the Dell Storage Manager to discover and configure the Storage Center. After configuring a Storage Center, you can set up a
localhost or VMware vSphere host using the host setup wizards.
The storage system hardware must be installed and cabled before the Storage Center can be configured.

Connect Power Cables and Turn On the Storage System
Connect power cables to the storage system components and turn on the hardware.
About this task
• If the storage system is installed without expansion enclosures, connect power cables to the storage system chassis and turn on the storage system.
• If the storage system is installed with expansion enclosures, connect power cables to the expansion enclosure chassis. Make sure you power on each expansion enclosure before turning on the storage system.
NOTE: When powering on expansion enclosures with spinning hard drives, wait approximately three minutes for the drives to spin up before powering on the storage system.

Steps
1. Make sure that the power switches are in the OFF position before connecting the power cables.
2. Connect the power cables securely to both power supply/cooling fan modules in the storage system chassis.

Figure 51. Connect the Power Cables

3. Connect the power cables plugged into the left power supply to one power distribution unit (PDU).
4. Connect the power cables plugged into the right power supply to a second power distribution unit (PDU).
5. Turn on the storage system by pressing the power switches on both power supply/cooling fan modules to the ON position.

Figure 52. Turn On the Storage System

CAUTION: Do not power off the storage system until it can be discovered with Storage Manager. During the initial
power up, the storage system might take up to twenty minutes to boot completely.
NOTE:
• If the LEDs on a storage controller do not turn on, the storage controller might not be fully seated in the storage system chassis. If this issue occurs, press both power buttons to turn off the storage system. Reseat the storage controller, and then press both power buttons again to turn on the storage system.
• If the power supply units do not power on, confirm that the power source is 200 to 240 volts (V). The 200 to 240 V power supply units do not display any LED indications if they are plugged into a 110 V outlet.

Locate Your Service Tag
Your storage system is identified by a unique service tag and Express Service Code.
The Service Tag and Express Service Code are found on the front of the system by pulling out the information tag. Alternatively, the information might be on a sticker on the back of the storage system chassis. This information is used by Dell to route support calls to the appropriate personnel.
NOTE: The Quick Resource Locator (QRL) code on the information tag is unique to your system. Scan the QRL to get
immediate access to your system information using your smart phone or tablet.

Record System Information
Use the worksheet found in the appendix of this guide to record the information you will need to install the SCv3000 and SCv3020
storage system.

Supported Operating Systems for Storage Center Automated Setup
Setting up a Storage Center using the Discover and Configure Uninitialized Storage Centers wizard and the host setup wizards
requires 64-bit versions of the following operating systems:
• Red Hat Enterprise Linux 6 or later
• SUSE Linux Enterprise 12 or later
• Windows Server 2008 R2 or later

Install and Use the Dell Storage Manager
You must start the Dell Storage Manager as an Administrator to run the Discover and Configure Uninitialized Storage Centers wizard.
1. Install the Dell Storage Manager on a host server.
   To discover and configure a Storage Center, the software must be installed on a host server that is on the same subnet as the storage system.
2. To start the software on a Windows computer, right-click the Dell Storage Manager shortcut and select Run as administrator. To start the software on a Linux computer, run the ./Client command from the /var/lib/dell/bin directory.
3. Click Discover and Configure Uninitialized Storage Centers. The Discover and Configure Uninitialized Storage Centers wizard appears.

Discover and Select an Uninitialized Storage Center
The first page of the Discover and Configure Uninitialized Storage Centers wizard provides a list of prerequisite actions and
information required before setting up a Storage Center.
Prerequisites
• The host server on which the Storage Manager software is installed must be on the same subnet or VLAN as the Storage Center.
• Temporarily disable any firewall on the host server that is running the Storage Manager.
• Layer 2 multicast must be allowed on the network.
• Make sure that IGMP snooping is disabled on the switch ports connected to the Storage Center.

Steps
1. Make sure that you have the required information that is listed on the first page of the wizard. This information is needed to configure the Storage Center.
2. Click Next. The Select a Storage Center to Initialize page appears and lists the uninitialized Storage Centers discovered by the wizard.
   NOTE: If the wizard does not discover the Storage Center that you want to initialize, perform one of the following actions:
   • Make sure that the Storage Center hardware is physically attached to all necessary networks.
   • Click Rediscover.
   • Click Troubleshoot Storage Center Hardware Issue to learn more about reasons why the Storage Center is not discoverable.
   • Follow the steps in Deploy the Storage Center Using the Direct Connect Method.
3. Select the Storage Center to initialize.
4. (Optional) Click Enable Storage Center Indicator to turn on the indicator light for the selected Storage Center. You can use the indicator to verify that you have selected the correct Storage Center.
5. Click Next.
6. If the Storage Center is partially configured, the Storage Center login pane appears. Enter the management IPv4 address and the Admin password for the Storage Center, then click Next to continue.

Deploy the Storage Center Using the Direct Connect Method
Use the direct connect method to manually deploy the Storage Center when it is not discoverable.
1. Use an Ethernet cable to connect the computer running the Dell Storage Manager to the management port of the top controller.
2. Cable the bottom controller to the management network switch.
3. Click Discover and Configure Uninitialized Storage Centers. The Discover and Configure Uninitialized Storage Centers wizard opens.
4. Fill out the information on the initial configuration pages and stop when the Confirm Configuration page is displayed.
5. At this point, recable the management port of the top controller to the management network.
6. Connect the computer to the same subnet or VLAN as the Storage Center.
   a. Click Next.
   b. If the cable is not properly connected or the host cannot access the controller, an Error setting up connection message is displayed. Correct the connection, and click OK.
   c. If the deployment wizard is closed, click Discover and Configure Uninitialized Storage Centers to relaunch the deployment wizard.
   d. Type Admin in the User Name field, type the password entered on the Set Administrator Information page in the Password field, and click Next.

Customer Installation Authorization
Authorize the installation of the storage system.
1. Type your name in the Approving Customer Name field.
2. Type your title in the Approving Customer Title field.
3. Click OK.

Set System Information
Use the Set System Information page to provide Storage Center and storage controller configuration information. This information
is needed when connecting to the Storage Center using Storage Manager.
1. Type a descriptive name for the Storage Center in the Storage Center Name field.
2. Type the system management IPv4 address for the Storage Center in the Virtual Management IPv4 Address field.
   The virtual management IPv4 address is the IP address used to manage the Storage Center. The virtual management IPv4 address is different from the storage controller management IPv4 addresses.
3. Type the management IPv4 address for the top storage controller in the Top Controller Management IPv4 Address field.
4. Type the management IPv4 address for the bottom storage controller in the Bottom Controller Management IPv4 Address field.
   NOTE: The storage controller management IPv4 addresses and virtual management IPv4 address must be within the same subnet.
5. Type the subnet mask of the management network in the Subnet Mask field.
6. Type the gateway address of the management network in the Gateway IPv4 Address field.
7. Type the domain name of the management network in the Domain Name field.
8. Type the DNS server addresses of the management network in the DNS Server and Secondary DNS Server fields.
9. Click Next.
   The Set Administrator Information page opens.
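Because the wizard requires the virtual and controller management addresses to share a subnet (see the note in step 4), a quick check such as the following sketch can validate worksheet values before they are entered. All addresses and the mask are examples only:

import ipaddress

# Example check: the virtual and both controller management IPv4 addresses
# must sit in the same subnet. All values below are placeholders.
subnet = ipaddress.ip_network("10.10.5.0/255.255.255.0")  # network + subnet mask
management_ips = {
    "virtual": "10.10.5.10",
    "top controller": "10.10.5.11",
    "bottom controller": "10.10.5.12",
}

for name, ip in management_ips.items():
    assert ipaddress.ip_address(ip) in subnet, f"{name} address {ip} is outside {subnet}"
print("All management addresses are in", subnet)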

Set Administrator Information
Use the Set Administrator Information page to set a new password and an email address for the Admin user.
1. Type a new password for the default Storage Center administrator user in the New Admin Password and Confirm Password fields.
2. Type the email address of the default Storage Center administrator user in the Admin Email Address field.
3. Click Next.
   • For a storage system with Fibre Channel ports, the Confirm Configuration page opens.
   • For a storage system with iSCSI ports, the Configure iSCSI Fault Tolerance page opens.
   • For a storage system with front-end SAS ports, the Confirm Configuration page opens.
4. Verify the information and click Apply Configuration. After you click Apply Configuration, you will not be able to change the information until after the Storage Center is fully configured.

Confirm the Storage Center Configuration
Make sure that the configuration information shown on the Confirm Configuration page is correct before continuing.
1. Verify that the Storage Center settings are correct.
2. If the configuration information is correct, click Apply Configuration.
   If the configuration information is incorrect, click Back and provide the correct information.
   NOTE: After you click the Apply Configuration button, the configuration cannot be changed until after the Storage Center is fully configured.

Initialize the Storage Center
The Storage Center sets up the storage system using the information provided on the previous pages.
1. The Storage Center performs system setup tasks. The Initialize Storage Center page displays the status of these tasks.
   To learn more about the initialization process, click More information about Initialization.
   • If one or more of the system setup tasks fails, click Troubleshoot Initialization Error to learn how to resolve the issue.
   • If the Configuring Disks task fails, click View Disks to see the status of the drives detected by the Storage Center.
   • If any of the Storage Center front-end ports are down, the Storage Center Front-End Ports Down dialog box opens. Select the ports that are not connected to the storage network, then click OK.
2. When all of the Storage Center setup tasks are complete, click Next.

Configure Key Management Server Settings
The Key Management Server Settings page opens if the Storage Center is licensed for SEDs. Use this page to specify the key
management server network settings and select the SSL certificate files.
1. Specify the network settings for the key management server.
2. If the key management server is configured to verify client certificates against credentials, type the user name and password of the certificates.
3. Select the key management server certificate files.
4. Click Next.

Create a Storage Type
Select the datapage size and redundancy level for the Storage Center.
1. Select a datapage size. (A small sketch after these steps makes the overhead tradeoff concrete.)
   • Standard (2 MB Datapage Size): The default datapage size, appropriate for most applications.
   • High Performance (512 KB Datapage Size): Appropriate for applications with high performance needs, or for environments in which snapshots are taken frequently under heavy I/O. Selecting this size increases overhead and reduces the maximum available space in the Storage Type. Flash Optimized storage types use 512 KB by default.
   • High Density (4 MB Datapage Size): Appropriate for systems that use a large amount of disk space and take snapshots infrequently.
2. Modify the redundancy for each tier as needed.
   • For single-redundant RAID levels, select Redundant.
   • For dual-redundant RAID levels, select Dual Redundant.
3. To have the system attempt to keep existing drives on the same redundancy level when adding new drives, select the Attempt to maintain redundancy when adding or removing disks check box.
4. Click Next.
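The datapage sizes above trade metadata overhead against granularity, and the difference is easy to see with a little arithmetic. The following illustration-only Python sketch (the 1 TiB volume is a hypothetical example, not a value from this guide) counts how many datapages each size implies for the same volume:

    # Illustration only: datapage counts for a hypothetical 1 TiB volume
    # at the three sizes offered on the Create a Storage Type page.
    volume_bytes = 1 * 1024**4  # 1 TiB

    for label, page_bytes in [("High Performance", 512 * 1024),
                              ("Standard", 2 * 1024**2),
                              ("High Density", 4 * 1024**2)]:
        pages = volume_bytes // page_bytes
        print(f"{label:16} {page_bytes // 1024:>4} KiB -> {pages:,} pages to track")

Smaller pages mean more pages to track for the same capacity, which is the overhead that the High Performance option warns about; larger pages reduce that overhead at the cost of coarser allocation.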


Configure Ports
Set up Fibre Channel, iSCSI and SAS ports.
1. Select the check box of each type of port you want to configure. You must select at least one type to continue.
   NOTE: If a port type is grayed out, no ports of that type have been detected.
2. Click Next.

Configure Fibre Channel Ports
For a Storage Center with Fibre Channel front-end ports, the Review Fault Domains page displays information about the fault
domains that were created by the Storage Center.
Prerequisite
One port from each controller within the same fault domain must be cabled.
NOTE: If the Storage Center is not cabled correctly to create fault domains, the Cable Ports page opens and explains the
issue. Click Refresh after cabling more ports.
Steps
1. Review the fault domains that have been created.
2. (Optional) Click Copy to clipboard to copy the fault domain information.
3. (Optional) Review the information on the Zoning, Hardware, and Cabling Diagram tabs.
   NOTE: The ports must already be zoned.
4. Click Next.

Configure iSCSI Ports
For a Storage Center with iSCSI front-end ports, enter network information for the fault domains and ports.
Prerequisite
One port from each controller within the same fault domain must be cabled.
NOTE: If the Storage Center is not cabled correctly to create fault domains, the Cable Ports page opens and explains the
issue. Click Refresh after cabling more ports.
Steps
1. On the Set IPv4 Addresses for iSCSI Fault Domain 1 page, enter network information for the fault domain and its ports.
   NOTE: Make sure that all the IP addresses for iSCSI Fault Domain 1 are in the same subnet. (A quick offline check is sketched after these steps.)
2. Click Next.
3. On the Set IPv4 Addresses for iSCSI Fault Domain 2 page, enter network information for the fault domain and its ports. Then click Next.
   NOTE: Make sure that all the IP addresses for iSCSI Fault Domain 2 are in the same subnet.
4. Click Next.
5. Review the fault domain information.
6. (Optional) Click Copy to clipboard to copy the fault domain information.
7. (Optional) Review the information on the Hardware and Cabling Diagram tabs.
8. Click Next.
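Because every address in an iSCSI fault domain must be in the same subnet, it is worth validating the planned addresses before typing them into the wizard. The following minimal sketch uses only Python's standard ipaddress module; the addresses and mask are hypothetical placeholders, to be replaced with the values from your deployment worksheet.

    import ipaddress

    # Hypothetical worksheet values for iSCSI Fault Domain 1.
    subnet_mask = "255.255.255.0"
    target_ip = "10.10.1.10"
    port_ips = ["10.10.1.11", "10.10.1.12", "10.10.1.13", "10.10.1.14"]

    # Derive the fault domain subnet from the target address and mask.
    fault_domain = ipaddress.ip_network(f"{target_ip}/{subnet_mask}", strict=False)

    # Every port address must fall inside the same subnet as the target.
    for ip in port_ips:
        status = "OK" if ipaddress.ip_address(ip) in fault_domain else "WRONG SUBNET"
        print(f"{status}: {ip} (fault domain {fault_domain})")

Repeat the check with the Fault Domain 2 values; the two fault domains are normally on separate subnets.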


Configure SAS Ports
For a Storage Center with SAS front-end ports, the Review Fault Domains page displays information about the fault domains that
were created by the Storage Center.
Prerequisites
• One port from each controller within the same fault domain must be cabled.
• The ports for each fault domain must be cabled to the same server.
NOTE: If the Storage Center is not cabled correctly to create fault domains, the Cable Ports page opens and explains the issue. Click Refresh after cabling more ports.
Steps
1. Review the fault domains that have been created.
2. (Optional) Click Copy to clipboard to copy the fault domain information.
3. (Optional) Review the information on the Hardware and Cabling Diagram tabs.
4. Click Next.

Configure Time Settings
Configure an NTP server to set the time automatically, or set the time and date manually.
1. From the Region and Time Zone drop-down menus, select the region and time zone used to set the time.
2. Select Use NTP Server and type the host name or IP address of the NTP server, or select Set Current Time and set the time and date manually. (A reachability check for an NTP server is sketched after these steps.)
3. Click Next.
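If you choose Use NTP Server, you can confirm that the server answers from the management network before committing the setting. This is a minimal SNTP query using only the Python standard library; the server name is a placeholder, not a value from this guide.

    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"    # placeholder; use your NTP server
    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    # Minimal SNTP client request: LI=0, VN=3, Mode=3 (client).
    request = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(request, (NTP_SERVER, 123))
        data, _ = s.recvfrom(512)

    # The transmit timestamp (seconds field) starts at byte 40 of the reply.
    server_seconds = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
    print("NTP server time:", time.ctime(server_seconds))
    print("Local time:     ", time.ctime())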

Configure SMTP Server Settings
Enable SMTP email to receive information from the Storage Center about errors, warnings, and events.
1. Select Enable SMTP Email.
2. Configure the SMTP server settings.
   a. In the Recipient Email Address field, type the email address where the information will be sent.
   b. In the SMTP Mail Server field, type the IP address or fully qualified domain name of the SMTP email server. Click Test Server to verify connectivity to the SMTP server. (A manual connectivity check is sketched after these steps.)
   c. (Optional) In the Backup SMTP Server field, type the IP address or fully qualified domain name of a backup SMTP email server. Click Test Server to verify connectivity to the SMTP server.
   d. If the SMTP server requires emails to contain a MAIL FROM address, specify an email address in the Sender Email Address field.
   e. (Optional) In the Common Subject Line field, type a subject line to use for all emails sent by the Storage Center.
   f. If the SMTP server requires clients to authenticate before sending email, select the Use Authorized Login (AUTH LOGIN) checkbox, then type a user name and password in the Login ID and Password fields.
3. Click Next.
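If the Test Server button reports a failure, the same check can be reproduced from any host on the management network. This is a minimal sketch using Python's standard smtplib; the server name and port are placeholders for the values entered in the wizard.

    import smtplib

    SMTP_SERVER = "mail.example.com"  # placeholder; use your SMTP server
    SMTP_PORT = 25

    # Connect and issue an SMTP NOOP to confirm the server is responding.
    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT, timeout=10) as smtp:
        code, message = smtp.noop()
        print(f"NOOP response: {code} {message.decode(errors='replace')}")
        # If the server requires AUTH LOGIN, also verify the credentials:
        # smtp.login("login_id", "password")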

Using Dell SupportAssist
As an integral part of Dell’s ability to provide best-of-class support for your Enterprise-class products, Dell SupportAssist proactively
provides the information required to diagnose support issues, enabling the most efficient support possible and reducing the effort
required by you.
A few key benefits of SupportAssist are:
• Enables proactive service requests and real-time troubleshooting
• Supports automatic case creation based on event alerting
• Enables ProSupport Plus and optimizes service delivery
• Provides automatic health checks
• Enables remote Storage Center updates
Dell strongly recommends enabling comprehensive support service at time of incident and proactive service with SupportAssist.

Enable SupportAssist
The SupportAssist Data Collection and Storage page displays the text of the SupportAssist data agreement and allows you to accept
or opt out of using SupportAssist.
1. To allow SupportAssist to collect diagnostic data and send this information to Dell Technical Support, select By checking this box you accept the above terms.
2. Click Next.
3. If you did not select By checking this box you accept the above terms, the SupportAssist Recommended pane opens.
   • Click No to return to the SupportAssist Data Collection and Storage page and accept the agreement.
   • Click Yes to opt out of using SupportAssist and proceed to the Update Storage Center page.

Review the SupportAssist Data Collection and Storage Agreement
The SupportAssist Data Collection and Storage page displays the text of the SupportAssist data agreement and allows you to
accept or opt out of using SupportAssist.
1. To allow SupportAssist to collect diagnostic data and send this information to Dell Technical Support, select By checking this box you accept the above terms.
2. Click Next.
3. If you did not select By checking this box you accept the above terms, the SupportAssist Recommended pane opens.
   • Click No to return to the SupportAssist Data Collection and Storage page and accept the agreement.
   • Click Yes to opt out of using SupportAssist and proceed to the Update Storage Center page.
4. If the SupportAssist data agreement is not accepted, the Storage Center cannot check for updates. To proceed without checking for updates, click Next.
   You will have to use the Storage Center Update Utility to update the Storage Center software before continuing. See the Dell Storage Center Update Utility Administrator’s Guide or contact Dell Technical Support for detailed instructions about using the Storage Center Update Utility.

Provide Contact Information
Enter contact information for technical support to use when sending support-related communications from SupportAssist.
1. Specify the contact information.
2. To receive SupportAssist email messages, select Yes, I would like to receive emails from SupportAssist when issues arise, including hardware failure notifications.
3. Select the preferred contact method, language, and available times.
4. Type a shipping address where replacement Storage Center components can be sent.
5. Click Next.


Update the Storage Center
The Storage Center attempts to contact the SupportAssist Update Server to check for updates. If you are not using SupportAssist,
you must use the Storage Center Update Utility to update the Storage Center software before continuing.
NOTE:
• If no update is available, the Storage Center Up to Date page opens. Click Next.
• If an update is available, the current and available versions are listed.
• If you cannot update the Storage Center using standard methods (for example, if you have no Internet access), use the Storage Center Update Utility to install Storage Center software updates. See the Storage Center Update Utility Administrator’s Guide or contact Dell Technical Support for instructions on how to proceed.
• If the site uses a web proxy to access the Internet, configure the proxy settings:
  a. In the Setup SupportAssist Proxy Settings dialog box, select Enabled.
  b. Specify the proxy settings.
  c. Click OK.

Complete the Configuration and Continue With Setup
The Storage Center is now configured. The Configuration Complete page provides links to a Dell Storage Manager tutorial and
wizards to perform the next setup tasks.
About this task
Configure iDRAC, configure a VMware host, or create volumes to complete setup tasks.
Steps
1. (Optional) Click one of the Next Steps to configure a localhost, configure a VMware host, configure iDRAC, or create a volume.
   When you have completed the step, you are returned to the Configuration Complete page.
2. Click Finish. When the wizard is complete, continue to step 3.
3. If no expansion enclosures are attached to the storage system, unconfigure the four back-end ports.

Modify iDRAC Interface Settings for a Storage System
The iDRAC interface provides functions to help deploy, update, monitor and maintain the storage system.
About this task
Configure the iDRAC so it can be used to perform out-of-band system management.
Steps
1. When you reach the Configuration Complete page, scroll down to Advanced Steps.
2. Click Modify BMC Settings. The Edit BMC Settings dialog box opens.
3. Select how to assign an IP address to the iDRAC from the Configure via drop-down menu.
   • To specify a static IP address for the iDRAC, select Static.
   • To allow a DHCP server to assign an IP address to the iDRAC, select DHCP.
4. If you selected Static, specify the iDRAC IP address for the bottom storage controller and the top storage controller. (A quick validation of these values is sketched after these steps.)
   a. In the BMC IP Address field, type an IP address for the iDRAC.
   b. In the BMC Net Mask field, type the network mask.
   c. In the BMC Gateway IP Address field, type the default route for the iDRAC.
5. Click OK.
6. Log in to the iDRAC and configure the iDRAC password. You will be prompted to change the iDRAC password when you log in. The default credentials are root/calvin.
   NOTE: Any hardware errors reported in the iDRAC can be ignored. Storage Manager is the official interface for checking hardware status.
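Before saving static iDRAC settings, a quick sanity check that the gateway actually lies inside the BMC subnet can catch a transposed octet. A minimal sketch using Python's standard ipaddress module; all three values are hypothetical placeholders.

    import ipaddress

    # Hypothetical static settings planned for one iDRAC.
    bmc_ip = "192.168.0.21"
    bmc_netmask = "255.255.255.0"
    bmc_gateway = "192.168.0.1"

    network = ipaddress.ip_network(f"{bmc_ip}/{bmc_netmask}", strict=False)
    gateway = ipaddress.ip_address(bmc_gateway)

    # The default route must be reachable from the BMC subnet.
    if gateway not in network:
        raise SystemExit(f"Gateway {gateway} is not inside {network}")
    print(f"OK: {bmc_ip}/{network.prefixlen} with gateway {gateway}")

Run the check once for the bottom storage controller's values and once for the top storage controller's values.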


Unconfigure Unused I/O Ports
Unconfigure a port when it is down and will not be used.
Prerequisites
• The Storage Center must be an SCv3000 or SCv3020 storage system.
• The I/O port must be down.
Steps
1. Click the Storage view.
2. In the Storage pane, select a Storage Center.
3. Click the Hardware tab.
4. In the Hardware tab navigation pane, expand Controllers → storage controller → IO Ports.
5. Right-click the down I/O port and select Unconfigure Port. Storage Manager unconfigures the port.


6 Perform Post-Setup Tasks
Perform connectivity and failover tests to make sure that the Storage Center deployment was successful.
NOTE: Before testing failover, use Storage Manager to place the storage system in Maintenance mode. When you are
finished, use Storage Manager to place the storage system back into normal operational mode.

Update Storage Center Using Dell Storage Manager
Use this procedure to update the Storage Center using Dell Storage Manager.
1. Click Storage and select a Storage Center.
2. In the Summary tab, select Actions → System → Check for Updates.
3. Click Install to update to the latest version.
4. If the update fails, click Retry Update to try again.
   The Setup SupportAssist Proxy Settings dialog box opens if the Storage Center cannot connect to the Dell SupportAssist Update Server. If the site does not have direct access to the Internet but uses a web proxy, configure the proxy settings:
   a. Select Enabled.
   b. Specify the proxy settings.
   c. Click OK. The Storage Center attempts to contact the SupportAssist Update Server to check for updates.
5. When the update is complete, click Next.

Check the Status of the Update
Return to Dell Storage Manager to determine whether the update has completed.
About this task
NOTE: The update process should take between 60 and 90 minutes to complete. During the update, Dell Storage
Manager might disconnect from the Storage Center. You will be able to reconnect to the Storage Center after the update
completes.
Steps
1. Click Storage, and select a Storage Center.
2. In the Summary tab, select Actions → System → Check for Updates.

Change the Operation Mode of a Storage Center
Change the operation mode of a Storage Center before performing maintenance or installing software updates so that you can
isolate alerts from those events.
About this task
NOTE: Do not change the mode of the Storage Center from Pre-production mode until setup and testing is complete.
Steps
1. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens.
2. Click the General tab.
3. In the Operation Mode field, select Maintenance. Selecting Maintenance isolates alerts from those that would occur during normal operation.
4. Click OK.

Verify Connectivity and Failover
This section describes how to verify that the Storage Center is set up properly and performs failover correctly.
The process includes creating test volumes, copying data to verify connectivity, and shutting down a storage controller to verify
failover and MPIO functionality.

Create Test Volumes
Connect a server to the Storage Center, create one or more test volumes, and map them to the server to prepare for connectivity
and failover testing.
Prerequisite
NOTE: The localhost must have a network connection to both the iSCSI connection and the Data Collector host IP.
Steps
1. Configure a localhost to access the Storage Center using the Set up localhost on Storage Center wizard.
   a. In the Storage view, select a Storage Center.
   b. Click the Storage tab, then click Servers → Create Server from Localhost.
2. Connect to the Storage Center using the Dell Storage Manager.
3. Create two small test volumes (TestVol1 and TestVol2) on the server.
4. Map TestVol1 to storage controller 1 and TestVol2 to storage controller 2.
5. Partition and format the test volumes on the server.

Test Basic Connectivity
Verify basic connectivity by copying data to the test volumes.
1. Connect to the server to which the volumes are mapped.
2. Create a folder on the TestVol1 volume, copy at least 2 GB of data to the folder, and verify that the data copied successfully. (A scripted write-and-verify check is sketched after these steps.)
3. Create a folder on the TestVol2 volume, copy at least 2 GB of data to the folder, and verify that the data copied successfully.
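The copy-and-verify step can be automated with a checksum comparison. The following is a minimal sketch, assuming TestVol1 is mounted at a hypothetical path; it writes about 2 GB of random data, reads it back, and compares SHA-256 digests.

    import hashlib
    import os
    from pathlib import Path

    # Hypothetical mount point; substitute the path where TestVol1 is mounted.
    dest = Path("/mnt/testvol1/testdata.bin")
    SIZE_BYTES = 2 * 1024**3   # at least 2 GB, per the test procedure
    CHUNK = 1024 * 1024        # write and hash in 1 MiB chunks

    # Write random data, hashing it as it is written.
    write_digest = hashlib.sha256()
    with dest.open("wb") as f:
        remaining = SIZE_BYTES
        while remaining > 0:
            chunk = os.urandom(min(CHUNK, remaining))
            write_digest.update(chunk)
            f.write(chunk)
            remaining -= len(chunk)

    # Read the file back and confirm the checksum matches what was written.
    read_digest = hashlib.sha256()
    with dest.open("rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            read_digest.update(chunk)

    assert read_digest.hexdigest() == write_digest.hexdigest(), "data mismatch"
    print("Copy verified:", read_digest.hexdigest())

Run it once against each test volume's mount point.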

Test Storage Controller Failover
Test the Storage Center to make sure that a storage controller failover does not interrupt I/O.
About this task
NOTE: Before restarting a storage controller, use Storage Manager to change the operation mode to Maintenance mode.
When you are finished, use Storage Manager to place the storage system back into normal operational mode.
Steps
1. Connect to the server, create a Test folder on the server, and copy at least 2 GB of data into it.
2. Restart storage controller 1 while copying data to verify that the failover event does not interrupt I/O.
   a. Copy the Test folder to the TestVol1 volume.
   b. During the copy process, restart the storage controller (through which TestVol1 is mapped) by selecting it from the Hardware tab and clicking Shutdown/Restart Controller.
   c. Verify that the copy process continues while the storage controller restarts. (A scripted probe for observing this is sketched after these steps.)
   d. Wait several minutes and verify that the storage controller has finished restarting.
3. Restart storage controller 2 while copying data to verify that the failover event does not interrupt I/O.
   a. Copy the Test folder to the TestVol2 volume.
   b. During the copy process, restart the storage controller (through which TestVol2 is mapped) by selecting it from the Hardware tab and clicking Shutdown/Restart Controller.
   c. Verify that the copy process continues while the storage controller restarts.
   d. Wait several minutes and verify that the storage controller has finished restarting.
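One way to observe whether I/O truly continues while a storage controller restarts is to run a probe that writes continuously and reports any long pause. A minimal sketch, assuming the test volume is mounted at a hypothetical path; the 2-second stall threshold is an arbitrary illustration, not a Dell specification.

    import os
    import time
    from pathlib import Path

    # Hypothetical mount point for the volume under test (e.g., TestVol1).
    target = Path("/mnt/testvol1/failover_probe.bin")
    CHUNK = b"\0" * (1024 * 1024)  # 1 MiB per write
    STALL_THRESHOLD = 2.0          # seconds; report pauses longer than this

    # Write continuously; restart the controller from the Hardware tab while
    # this runs. Long gaps between writes indicate interrupted I/O.
    with target.open("wb") as f:
        last = time.monotonic()
        for i in range(2048):      # about 2 GiB total
            f.write(CHUNK)
            os.fsync(f.fileno())   # force each chunk out to the volume
            now = time.monotonic()
            if now - last > STALL_THRESHOLD:
                print(f"I/O stalled for {now - last:.1f}s at chunk {i}")
            last = now
    print("Probe finished; no stall messages means I/O was not interrupted.")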


Test MPIO
Perform the following tests for a Storage Center with Fibre Channel or iSCSI front-end connectivity if the network environment and
servers are configured for MPIO.
1. Create a Test folder on the server and copy at least 2 GB of data into it.
2. Make sure that the server is configured to use load-balancing MPIO (round-robin).
3. Manually disconnect a path while copying data to TestVol1 to verify that MPIO is functioning correctly.
   a. Copy the Test folder to the TestVol1 volume.
   b. During the copy process, disconnect one of the paths and verify that the copy process continues.
   c. Reconnect the path.
4. Repeat the previous steps as necessary to test additional paths.
5. Restart the storage controller that contains the active path while I/O is being transferred and verify that the I/O process continues.
6. If the front-end connectivity of the Storage Center is Fibre Channel or iSCSI and the Storage Center is not in a production environment, restart the switch that contains the active path while I/O is being transferred, and verify that the I/O process continues.

Clean Up Test Volumes
After testing is complete, delete the volumes used for testing.
About this task
NOTE: During deployment, a Storage Type is created for each tier that defines the redundancy level. If you delete all test volumes before creating new volumes, the Storage Type for each tier reverts to the default redundancy level, and you must then set the redundancy level for each Storage Type manually. Before deleting any test volumes, create at least one volume in each Storage Type required by the customer.
Steps
1. Use the Dell Storage Manager to connect to the Storage Center.
2. Click the Storage tab.
3. From the Storage tab navigation pane, select the Volumes node.
4. Create new volumes for the customer in each tier as required by their application.
5. Select the test volumes to delete.
6. Right-click the selected volumes and select Delete. The Delete dialog box opens.
7. Click OK.

Send Diagnostic Data Using Dell SupportAssist
After replacing components, use Dell SupportAssist to send diagnostic data to Dell Technical Support.
1. Use Dell Storage Manager to connect to the Storage Center.
2. In the Summary tab, click Send SupportAssist Information Now, which is located under SupportAssist Actions in the Status pane.
   The Send SupportAssist Information Now dialog box opens.
3. Select Storage Center Configuration and Detailed Logs.
4. Click OK.


7 Adding or Removing Expansion Enclosures
This section describes how to add an expansion enclosure to a storage system and how to remove an expansion enclosure from a
storage system.

Adding Expansion Enclosures to a Storage System Deployed Without
Expansion Enclosures
Install the expansion enclosures in a rack, but do not connect the expansion enclosures to the storage system.
For more information, see the Dell SCv300 and SCv320 Expansion Enclosure Getting Started Guide or the Dell SCv360 Expansion
Enclosure Getting Started Guide.
NOTE: To preserve the integrity of the existing data, use caution when adding expansion enclosures to a storage system.

Install New SCv300 and SCv320 Expansion Enclosures in a Rack
Prerequisite
Install the expansion enclosures in a rack, but do not connect the expansion enclosures to the storage system. For more information, see the Dell SCv300 and SCv320 Expansion Enclosure Getting Started Guide.
Steps
1. Cable the expansion enclosures together to form a chain.
   a. Connect a SAS cable from expansion enclosure 1: top, port 2 to expansion enclosure 2: top, port 1.
   b. Connect a SAS cable from expansion enclosure 1: bottom, port 2 to expansion enclosure 2: bottom, port 1.
   c. Repeat the previous steps to connect additional expansion enclosures to the chain.

   Figure 53. Cable the Expansion Enclosures Together
   1. Expansion enclosure 1        2. Expansion enclosure 2

2. Connect to the Storage Center using the Dell Storage Manager.
3. Check the drive count of the Storage Center system before adding the expansion enclosure. Make sure that the number of drives installed plus the drives in the new expansion enclosure does not exceed 500 drives.
   a. Select the Storage tab.
   b. In the Storage tab navigation pane, select the Disks node.
   c. On the Disks tab, record the number of drives that are accessible by the Storage Center.
   Compare this value to the number of drives accessible by the Storage Center after adding expansion enclosures to the storage system.
4. Click the Hardware tab and select the Enclosures node in the Hardware tab navigation pane.
5. Click Add Enclosure. The Add New Enclosure wizard starts.
   a. Click Next to validate the existing cabling.
   b. Select the expansion enclosure type and click Next.
   c. If the drives are not installed, install the drives in the expansion enclosures.
   d. Turn on the expansion enclosure. When the drives spin up, make sure that the front panel and power status LEDs show normal operation.
   e. Click Next.
   f. Add the expansion enclosure to the A-side chain. Click Next to validate the cabling.
   g. Add the expansion enclosure to the B-side chain. Click Next to validate the cabling.
   h. Click Finish.
6. To manually manage new unassigned drives:
   a. Click the Storage tab.
   b. In the Storage tab navigation pane, select the Disks node.
   c. Click Manage Unassigned Disks. The Manage Unassigned Disks dialog box opens.
   d. From the Disk Folder drop-down menu, select the drive folder for the unassigned drives.
   e. Select Perform RAID rebalance immediately.
   f. Click OK.
7. Label the back-end cables.

Add the SCv300 and SCv320 Expansion Enclosures to the A-Side of the Chain
Connect the expansion enclosures to one side of the chain at a time to maintain drive availability.
1. Cable the expansion enclosures to the A-side of the chain.
   a. Connect a SAS cable from storage controller 1: port 1 to the first expansion enclosure in the chain, top EMM, port 1.
   b. Connect a SAS cable from storage controller 2: port 2 to the last expansion enclosure in the chain, top EMM, port 2.

   Figure 54. Connect the A-Side Cables to the Expansion Enclosures
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

2. Label the back-end cables.

Add the SCv300 and SCv320 Expansion Enclosures to the B-Side of the Chain
Connect the expansion enclosures to one side of the chain at a time to maintain drive availability.
1. Cable the expansion enclosures to the B-side of the chain.
   a. Connect a SAS cable from storage controller 1: port 2 to expansion enclosure 2: bottom EMM, port 2.
   b. Connect a SAS cable from storage controller 2: port 1 to expansion enclosure 1: bottom EMM, port 1.

   Figure 55. Connect the B-Side Cables to the Expansion Enclosures
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

2. Label the back-end cables.

Install New SCv360 Expansion Enclosures in a Rack
Install the expansion enclosures in a rack, but do not connect the expansion enclosures to the storage system. For more information, see the Dell SCv360 Expansion Enclosure Getting Started Guide.
Steps
1. Cable the expansion enclosures together to form a chain.
   a. Connect a SAS cable from expansion enclosure 1: left, port 2 to expansion enclosure 2: left, port 1.
   b. Connect a SAS cable from expansion enclosure 1: right, port 2 to expansion enclosure 2: right, port 1.
   c. Repeat the previous steps to connect additional expansion enclosures to the chain.

   Figure 56. Cable the Expansion Enclosures Together
   1. Expansion enclosure 1        2. Expansion enclosure 2

2. Connect to the Storage Center using the Dell Storage Manager.
3. Check the drive count of the Storage Center system before adding the expansion enclosure. Make sure that the number of drives installed plus the drives in the new expansion enclosure does not exceed 500 drives.
   a. Select the Storage tab.
   b. In the Storage tab navigation pane, select the Disks node.
   c. On the Disks tab, record the number of drives that are accessible by the Storage Center.
   Compare this value to the number of drives accessible by the Storage Center after adding expansion enclosures to the storage system.
4. Click the Hardware tab and select the Enclosures node in the Hardware tab navigation pane.
5. Click Add Enclosure. The Add New Enclosure wizard starts.
   a. Click Next to validate the existing cabling.
   b. Select the expansion enclosure type and click Next.
   c. If the drives are not installed, install the drives in the expansion enclosures.
   d. Turn on the expansion enclosure. When the drives spin up, make sure that the front panel and power status LEDs show normal operation.
   e. Click Next.
   f. Add the expansion enclosure to the A-side chain. Click Next to validate the cabling.
   g. Add the expansion enclosure to the B-side chain. Click Next to validate the cabling.
   h. Click Finish.
6. To manually manage new unassigned drives:
   a. Click the Storage tab.
   b. In the Storage tab navigation pane, select the Disks node.
   c. Click Manage Unassigned Disks. The Manage Unassigned Disks dialog box opens.
   d. From the Disk Folder drop-down menu, select the drive folder for the unassigned drives.
   e. Select Perform RAID rebalance immediately.
   f. Click OK.
7. Label the back-end cables.

Add the SCv360 Expansion Enclosures to the A-Side of the Chain
Connect the expansion enclosures to one side of the chain at a time to maintain drive availability.
1. Cable the expansion enclosures to the A-side of the chain.
   a. Connect a SAS cable from storage controller 1: port 1 to the first expansion enclosure in the chain, left EMM, port 1.
   b. Connect a SAS cable from storage controller 2: port 2 to the last expansion enclosure in the chain, left EMM, port 2.

   Figure 57. Connect the A-Side Cables to the Expansion Enclosures
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

2. Label the back-end cables.

Add an SCv360 Expansion Enclosure to the B-Side of the Chain
Connect the expansion enclosure to one side of the chain at a time to maintain drive availability.
1. Disconnect the B-side cable (shown in blue) from the expansion enclosure: right EMM, port 2. The A-side cables continue to carry I/O while the B-side is disconnected.

   Figure 58. Disconnect B-Side Cable from the Existing Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. New expansion enclosure (2)

2. Use a new SAS cable to connect expansion enclosure 1: right EMM, port 2 to the new expansion enclosure (2): right EMM, port 1.
3. Connect the B-side cable that was disconnected in step 1 to the new expansion enclosure (2): right EMM, port 2.

   Figure 59. Connect B-Side Cables to the New Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. New expansion enclosure (2)

Adding a Single Expansion Enclosure to a Chain Currently in Service
To preserve the integrity of the existing data, use caution when adding an expansion enclosure to a live Storage Center system.
Prerequisites
Install the expansion enclosure in a rack, but do not connect the expansion enclosure to the storage system. For more information, see the Dell SCv300 and SCv320 Expansion Enclosure Getting Started Guide or the Dell SCv360 Expansion Enclosure Getting Started Guide.
To add an expansion enclosure to an existing chain, connect the expansion enclosure to the end of the chain.
Steps
1. Connect to the Storage Center using the Dell Storage Manager.
2. Check the drive count of the Storage Center system before adding the expansion enclosure.
3. Click the Hardware tab and select Enclosures in the Hardware tab navigation pane.
4. Click Add Enclosure. The Add New Enclosure wizard starts.
   a. Confirm the details of your current installation and click Next to validate the existing cabling.
   b. Turn on the expansion enclosure. When the drives spin up, make sure that the front panel and power status LEDs show normal operation.
   c. Click Next.
   d. Add the expansion enclosure to the A-side chain. Click Next to validate the cabling.
   e. Add the expansion enclosure to the B-side chain. Click Next to validate the cabling.
   f. Click Finish.
5. To manually manage new unassigned drives:
   a. Click the Storage tab.
   b. In the Storage tab navigation pane, select the Disks node.
   c. Click Manage Unassigned Disks. The Manage Unassigned Disks dialog box opens.
   d. From the Disk Folder drop-down menu, select the drive folder for the unassigned drives.
   e. Select Perform RAID rebalance immediately.
   f. Click OK.
6. Label the new back-end cables.

Check the Drive Count
Use the Dell Storage Manager to determine the number of drives that are currently accessible to the Storage Center.
1. Connect to the Storage Center using the Dell Storage Manager.
2. Select the Storage tab.
3. In the Storage tab navigation pane, select the Disks node.
4. On the Disks tab, record the number of drives that are accessible by the Storage Center.
   Compare this value to the number of drives accessible by the Storage Center after adding an expansion enclosure to the storage system. (A small bookkeeping sketch follows these steps.)
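The bookkeeping around the 500-drive system limit is simple arithmetic, sketched below with hypothetical counts; substitute the number recorded from the Disks tab and the drive population of the enclosures being added.

    # Hypothetical values; use the counts from your own system.
    current_drive_count = 120       # drives accessible before the change
    drives_per_new_enclosure = 30   # drives in each expansion enclosure
    new_enclosures = 2
    MAX_DRIVES = 500                # system limit noted in this guide

    expected = current_drive_count + drives_per_new_enclosure * new_enclosures
    if expected > MAX_DRIVES:
        raise SystemExit(f"{expected} drives would exceed the {MAX_DRIVES}-drive limit")
    print(f"OK: expected drive count after expansion is {expected}")
    print("After cabling, confirm the Disks tab reports this same number.")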

Add an SCv300 and SCv320 Expansion Enclosure to the A-Side of the Chain
Connect the expansion enclosure to one side of the chain at a time to maintain drive availability.
1. Turn on the expansion enclosure being added. When the drives spin up, make sure that the front panel and power status LEDs show normal operation.
2. Disconnect the A-side cable (shown in orange) from the expansion enclosure: top EMM, port 2. The B-side cables continue to carry I/O while the A-side is disconnected.

   Figure 60. Disconnect A-Side Cable from the Existing Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1

3. Use a new SAS cable to connect expansion enclosure 1: top EMM, port 2 to the new expansion enclosure (2): top EMM, port 1.
4. Connect the A-side cable that was disconnected in step 2 to the new expansion enclosure (2): top EMM, port 2.

   Figure 61. Connect A-Side Cables to the New Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. New expansion enclosure (2)

5. Label the back-end cables.

Add an SCv300 and SCv320 Expansion Enclosure to the B-Side of the Chain
Connect the expansion enclosure to one side of the chain at a time to maintain drive availability.
1. Disconnect the B-side cable (shown in blue) from the expansion enclosure: bottom EMM, port 2. The A-side cables continue to carry I/O while the B-side is disconnected.

   Figure 62. Disconnect B-Side Cable from the Existing Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. New expansion enclosure (2)

2. Use a new SAS cable to connect expansion enclosure 1: bottom EMM, port 2 to the new expansion enclosure (2): bottom EMM, port 1.
3. Connect the B-side cable that was disconnected in step 1 to the new expansion enclosure (2): bottom EMM, port 2.

   Figure 63. Connect B-Side Cables to the New Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. New expansion enclosure (2)


Add an SCv360 Expansion Enclosure to the A-Side of the Chain
Connect the expansion enclosure to one side of the chain at a time to maintain drive availability.
1. Turn on the expansion enclosure being added. When the drives spin up, make sure that the front panel and power status LEDs show normal operation.
2. Disconnect the A-side cable (shown in orange) from the expansion enclosure: left EMM, port 2. The B-side cables continue to carry I/O while the A-side is disconnected.

   Figure 64. Disconnect A-Side Cable from the Existing Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1

3. Use a new SAS cable to connect expansion enclosure 1: left EMM, port 2 to the new expansion enclosure (2): left EMM, port 1.
4. Connect the A-side cable that was disconnected in step 2 to the new expansion enclosure (2): left EMM, port 2.

   Figure 65. Connect A-Side Cables to the New Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. New expansion enclosure (2)

5. Label the back-end cables.

Add an SCv360 Expansion Enclosure to the B-Side of the Chain
Connect the expansion enclosure to one side of the chain at a time to maintain drive availability.
1. Disconnect the B-side cable (shown in blue) from the expansion enclosure: right EMM, port 2. The A-side cables continue to carry I/O while the B-side is disconnected.

   Figure 66. Disconnect B-Side Cable from the Existing Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. New expansion enclosure (2)

2. Use a new SAS cable to connect expansion enclosure 1: right EMM, port 2 to the new expansion enclosure (2): right EMM, port 1.
3. Connect the B-side cable that was disconnected in step 1 to the new expansion enclosure (2): right EMM, port 2.

   Figure 67. Connect B-Side Cables to the New Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. New expansion enclosure (2)

Removing an Expansion Enclosure from a Chain Currently in Service
To remove an expansion enclosure, you disconnect the expansion enclosure from one side of the chain at a time.
About this task
During this process, one side of the chain is disconnected. The Storage Center directs all I/O to the other side of the chain, which
remains connected.
CAUTION: Make sure that your data is backed up before removing an expansion enclosure.
Before physically removing an expansion enclosure, make sure that none of the drives in the expansion enclosure are managed by
the Storage Center software.
Steps
1. Connect to the Storage Center using the Dell Storage Manager.
2. Use the Dell Storage Manager to release the drives in the expansion enclosure.
3. Select the expansion enclosure to remove and click Remove Enclosure. The Remove Enclosure wizard starts.
4. Confirm the details of your current installation and click Next to validate the cabling.
5. Locate the expansion enclosure in the rack. Click Next.
6. Disconnect the A-side chain.
   a. Disconnect the A-side cables that connect the expansion enclosure to the storage system. Click Next.
   b. Reconnect the A-side cables to exclude the expansion enclosure from the chain. Click Next to validate the cabling.
7. Disconnect the B-side chain.
   a. Disconnect the B-side cables that connect the expansion enclosure to the storage system. Click Next.
   b. Reconnect the B-side cables to exclude the expansion enclosure from the chain. Click Next to validate the cabling.
8. Click Finish.


Release the Drives in the Expansion Enclosure
Use the Dell Storage Manager to release the drives in an expansion enclosure before removing the expansion enclosure.
About this task
Because releasing drives causes all of the data to move off the drives, this procedure might take some time.
NOTE: Do not release drives unless the remaining drives have enough free space for the restriped data.
Steps
1. Connect to the Storage Center using the Dell Storage Manager.
2. Click the Hardware tab.
3. In the Hardware tab navigation pane, expand the expansion enclosure to remove.
4. Select the Disks node.
5. Select all of the drives in the expansion enclosure.
6. Right-click the selected drives and select Release Disk. The Release Disk dialog box opens.
7. Select Perform RAID rebalance immediately.
8. Click OK.
When all of the drives in the expansion enclosure are in the Unassigned drive folder, the expansion enclosure is safe to remove.

Disconnect the SCv300 and SCv320 Expansion Enclosure from the A-Side of the Chain
Disconnect the A-side cables from the expansion enclosure that you want to remove.
1. Disconnect the A-side cable (shown in orange) from expansion enclosure 1: top EMM, port 1. The B-side cables continue to carry I/O while the A-side is disconnected.
2. Remove the A-side cable between expansion enclosure 1: top EMM, port 2 and expansion enclosure 2: top EMM, port 1.

   Figure 68. Disconnecting the A-Side Cables from the Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

3. Connect the A-side cable to expansion enclosure 2: top EMM, port 1.

   Figure 69. Reconnecting the A-Side Cable to the Remaining Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

Disconnect the SCv300 and SCv320 Expansion Enclosure from the B-Side of the Chain
Disconnect the B-side cables from the expansion enclosure that you want to remove.
1. Disconnect the B-side cable (shown in blue) from expansion enclosure 1: bottom EMM, port 1. The A-side cables continue to carry I/O while the B-side is disconnected.
2. Remove the B-side cable between expansion enclosure 1: bottom EMM, port 2 and expansion enclosure 2: bottom EMM, port 1.

   Figure 70. Disconnecting the B-Side Cables from the Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

3. Connect the B-side cable to expansion enclosure 2: bottom EMM, port 1.
   The expansion enclosure is now disconnected and can be removed.

   Figure 71. Reconnecting the B-Side Cable to the Remaining Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Disconnected expansion enclosure
   5. Expansion enclosure 1

Disconnect the SCv360 Expansion Enclosure from the A-Side of the Chain
Disconnect the A-side cables from the expansion enclosure that you want to remove.
1. Disconnect the A-side cable (shown in orange) from expansion enclosure 1: left EMM, port 1. The B-side cables continue to carry I/O while the A-side is disconnected.
2. Remove the A-side cable between expansion enclosure 1: left EMM, port 2 and expansion enclosure 2: left EMM, port 1.

   Figure 72. Disconnecting the A-Side Cables from the Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

3. Connect the A-side cable to expansion enclosure 2: left EMM, port 1.

   Figure 73. Reconnecting the A-Side Cable to the Remaining Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

Disconnect the SCv360 Expansion Enclosure from the B-Side of the Chain
Disconnect the B-side cables from the expansion enclosure that you want to remove.
1. Disconnect the B-side cable (shown in blue) from expansion enclosure 1: right EMM, port 1. The A-side cables continue to carry I/O while the B-side is disconnected.
2. Remove the B-side cable between expansion enclosure 1: right EMM, port 2 and expansion enclosure 2: right EMM, port 1.

   Figure 74. Disconnecting the B-Side Cables from the Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

3. Connect the B-side cable to expansion enclosure 2: right EMM, port 1.
   The expansion enclosure is now disconnected and can be removed.

   Figure 75. Reconnecting the B-Side Cable to the Remaining Expansion Enclosure
   1. Storage system               2. Storage controller 1
   3. Storage controller 2         4. Expansion enclosure 1
   5. Expansion enclosure 2

8 Troubleshooting Storage Center Deployment
This section contains troubleshooting steps for common Storage Center deployment issues.

Troubleshooting Storage Controllers
To troubleshoot storage controllers:
1. Check the status of the storage controller using the Dell Storage Manager.
2. Check the position of the storage controllers.
3. Check the pins and reseat the storage controller.
   a. Remove the storage controller.
   b. Verify that the pins on the storage system backplane and the storage controller are not bent.
   c. Reinstall the storage controller.
4. Determine the status of the storage controller link status indicators. If the indicators are not green, check the cables.
   a. Shut down the storage controller.
   b. Reseat the cables on the storage controller.
   c. Restart the storage controller.
   d. Recheck the link status indicators. If the link status indicators are not green, replace the cables.

Troubleshooting Hard Drives
To troubleshoot hard drives:
1. Check the status of the hard drive using the Dell Storage Manager.
2. Determine the status of the hard drive indicators.
   • If the hard drive status indicator blinks amber (on for two seconds, off for one second), the hard drive has failed.
   • If the hard drive status indicator is not lit, proceed to the next step.
3. Check the connectors and reseat the hard drive.
   CAUTION: Perform this step only on unmanaged drives or after you confirm that the particular drive contains no user data. The Fault LED alone is not an indication that you can safely remove the drive.
   a. Remove the hard drive.
   b. Check the hard drive and the backplane to ensure that the connectors are not damaged.
   c. Reinstall the hard drive. Make sure that the hard drive makes contact with the backplane.

Troubleshooting Expansion Enclosures
To troubleshoot expansion enclosures:
1. Check the status of the expansion enclosure using the Dell Storage Manager.
2. If an expansion enclosure or its drives are missing from the Dell Storage Manager, you might need to check for and install Storage Center updates to use the expansion enclosure or drives.
3. If an expansion enclosure firmware update fails, check the back-end cabling and ensure that redundant connections are used.


Troubleshooting With Lasso
Lasso is a Dell application used to collect diagnostic information from one central location for a SAN environment. You can gather
information from storage arrays, attached hosts, and switches to accurately analyze the storage area network.

Lasso Application
Download the Lasso application from this link: http://www.dell.com/support/contents/US/en/04/category/product-support/self-support-knowledgebase/enterprise-resource-center/enterprise-tools

Lasso Documentation
Download the Lasso User’s Guide and Release Notes from this link: dell.com/support/home/us/en/19/product-support/product/dell-lasso-v4.7.2/manuals

Lasso Requirements
Before installing Lasso, make sure that the following prerequisites are met:
• The user account has Administrator privileges.
• The system has one of the following Windows (32-bit or 64-bit) operating systems:
  – Windows 7, 8, 8.1, or 10
  – Windows Server 2008, 2012, or 2012 R2
  NOTE: Windows Server Core is not supported.
• IP connectivity to all defined devices
• Java 1.6 or later
• Microsoft .NET Framework 2.0 or later is installed on Windows hosts. The Microsoft .NET Framework 2.0 can be downloaded from this link: Microsoft .NET 2.0 download
For Lasso to successfully collect data from the SCv3000 and SCv3020, the following conditions must be met:
• SupportAssist is configured with Dell Storage Manager and has been tested.
• Port 443 is available (a quick way to check is sketched after this list).
• The server on which Lasso is installed is not:
  – A virtual center server
  – An SCVMM server
  – An IIS server, and does not have the IIS Microsoft service running
  – Actively browsing to an https:// site
• If the Dell Storage Manager Data Collector is installed and running, close the application. All of these applications use port 443 and will cause a conflict with your collection.
The following credentials are required for collecting diagnostic information from the Storage Center:
• SCOS
  – SCOS management IP
  – SCOS Administrator user and password
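Because Lasso and the applications listed above all contend for port 443, the port can be checked before installation. A minimal sketch using Python's standard socket module; note that binding to port 443 may require elevated privileges on some operating systems.

    import socket

    def port_is_free(port: int = 443) -> bool:
        """Return True if nothing on this host is already bound to the port."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("0.0.0.0", port))
                return True    # bind succeeded, so the port is free
            except OSError:
                return False   # something (IIS, Data Collector, ...) owns it

    print("Port 443 is free" if port_is_free() else "Port 443 is in use")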


A Set Up a Local Host or VMware Host
After configuring a Storage Center, you can set up block-level storage for a local host running the Dell Storage Manager, VMware
ESXi host, or multiple VMware ESXi hosts in a vSphere cluster.

Set Up a VMware ESXi Host from Initial Setup
Configure a VMware ESXi host to access block-level storage on the Storage Center.
Prerequisites
• The client must be running on a system with a 64-bit operating system.
• The Dell Storage Manager must be run by a Dell Storage Manager user with the Administrator privilege.
• On a Storage Center with Fibre Channel I/O ports, configure Fibre Channel zoning before starting this procedure.
Steps
1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click Configure VMware vSphere to access a Storage Center.
   The Set up VMware Host on Storage Center wizard opens.
2. Type the vCenter or ESXi IP address or host name, user name, and password. Then click Next.
   • If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into Storage Center via iSCSI page opens. Select the target fault domains, and then click Log In.
   • In all other cases, the Verify vSphere Information page opens. Proceed to the next step.
3. Select an available port, and then click Create Server.
   The server definition is created on the Storage Center.
4. The Host Setup Successful page displays the best practices that were set by the wizard and the best practices that were not set. Make a note of any best practices that were not set by the wizard. It is recommended that these updates be applied manually before starting I/O to the Storage Center.
5. (Optional) Select Create a Volume for this host to create a volume after finishing host setup.
6. Click Finish.

Set Up a Local Host from Initial Setup
Configure the local host from Initial Setup to access block-level storage on the Storage Center.
Prerequisites
• The client must be running on a system with a 64-bit operating system.
• The Dell Storage Manager must be run by a Dell Storage Manager user with the Administrator privilege.
• On a Storage Center with Fibre Channel I/O ports, configure Fibre Channel zoning before starting this procedure.
Steps
1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click Set up block level storage for this host.
   The Set up localhost for Storage Center wizard opens.
   • If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into Storage Center via iSCSI page opens. Select the target fault domains, and then click Log In.
   • In all other cases, the Verify localhost Information page opens. Proceed to the next step.
2. On the Verify localhost Information page, verify that the information is correct. Then click Create Server.
   The server definition is created on the Storage Center for the connected and partially connected initiators.
3. The Host Setup Successful page displays the best practices that were set by the wizard and the best practices that were not set. Make a note of any best practices that were not set. It is recommended that these updates be applied manually before starting I/O to the Storage Center.
4. (Optional) Select Create a Volume for this host to create a volume after finishing host setup.
5. Click Finish.

Set Up Multiple VMware ESXi Hosts in a VMware vSphere Cluster
Configure multiple VMware ESXi hosts that are part of the vSphere cluster from initial setup to access block-level storage on the
Storage Center.
Prerequisites
• The client must be running on a system with a 64-bit operating system.
• The Dell Storage Manager must be run by a Dell Storage Manager user with the Administrator privilege.
• On a Storage Center with Fibre Channel I/O ports, configure Fibre Channel zoning before starting this procedure.
Steps
1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click Configure VMware vSphere to access a Storage Center.
   The Set up VMware Host on Storage Center wizard opens.
2. Type the vCenter IP address or host name, user name, and password. Then click Next.
   • If the Storage Center has iSCSI ports and the hosts are not connected to any interface, the Log into Storage Center via iSCSI page opens. Select the hosts and target fault domains, and then click Log In.
   • In all other cases, the Verify vSphere Information page appears. Proceed to the next step.
3. Select an available port, and then click Create Servers.
   The server definition is created on the Storage Center for each of the connected or partially connected hosts.
4. The Host Setup Successful page displays the best practices that were set by the wizard and the best practices that were not set. Make a note of any best practices that were not set. It is recommended that these updates be applied manually before starting I/O to the Storage Center.
5. (Optional) Select Create a Volume for this host to create a volume after finishing host setup.
6. Click Finish.


B Initialize the Storage Center Using the USB Serial Port
If the Discover and Configure Storage Center wizard does not discover the Storage Center that you want to initialize, use the serial
port on a storage controller to find it.

Install the USB Serial Port Driver
Use this procedure to install a serial port driver that gives you access to the console ports on the storage system.
About this task
Install the serial port driver as a standard Windows driver.
The serial port for the SCv3000 and SCv3020 storage controller is an FT4232 USB device, which has four console ports. Only two ports are used when working with SCv3000 and SCv3020 storage systems:
• The first port is used to communicate with the Storage Center software.
• The second port is used to communicate with the Integrated Dell Remote Access Controller (iDRAC).
NOTE: Depending on the computer you are using, the port numbers might not always be 1 for the Storage Center software and 2 for the iDRAC. Communication with the Storage Center software always occurs over the port with the lower number.
Steps
1. Use the micro-USB serial port cable to connect a Windows computer to the micro-USB connector on the top storage controller.
2. Wait for the Windows computer to identify the USB serial port.
3. Install the serial port driver.
   • If the Windows operating system opens a hardware wizard, follow the instructions to install the USB serial port driver.
   • If a hardware wizard does not open:
     1. Log in to the Dell Digital Locker at www.dell.com/support/licensing, and download the FTDI D2XX driver .zip file.
     2. Extract the contents of the FTDI D2XX driver .zip file to a folder on the computer.
     3. Start the Device Manager.
     4. Find the hardware in the Device Manager.
     5. Right-click the hardware, select Update Driver Software, and browse to the folder where the driver files were extracted.
     6. Select the folder with the driver files, and click Next.
4. Reboot the computer.
5. Verify that the driver was successfully installed:
   • Check for the drivers manually:
     – Start the Device Manager. The four COM ports appear in the list of devices.
     – Note the number of the first COM port. This number is the port used to connect to the Storage Center software.
   • If you cannot install the USB serial port driver, contact Dell Technical Support.

Establish a Terminal Session
Use this procedure to establish a terminal session with the Storage Center software on the top storage controller.
Prerequisites
Install the serial port driver that enables you to work with SCv3000 and SCv3020 storage system ports.
• Port 1 is used to communicate with the Storage Center software.
• Port 2 is used to communicate with the iDRAC.
Steps
1. Make sure that the micro-USB serial port cable is connected to the computer and the micro-USB connector on the top storage controller.
2. Open a terminal emulator program on the computer.
3. To facilitate troubleshooting, enable logging in the terminal emulator.
4. Configure the serial connection in the terminal emulator as shown in the following table.

   Table 6. Serial Connection Settings

   Setting               Value
   Connection Type       Serial
   Serial Line           COM1
   Baud Rate (Speed)     115200
   Data Bits             8
   Stop Bits             1
   Parity                None
   Flow Control          XON/XOFF
   Column Mode           132
   Line Wrapping         Off

5. Press Enter several times to initiate the connection.
   The terminal echoes back to indicate that connectivity is established. (A scripted alternative using the same settings is sketched after these steps.)
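A terminal emulator is the documented tool, but the same Table 6 settings can also be expressed in code, which is convenient for scripted log captures. A minimal sketch using the third-party pyserial package (pip install pyserial); the COM1 port name is an assumption and should be replaced with the lower-numbered COM port noted in Device Manager.

    import serial  # third-party pyserial package

    # Open the console port with the settings from Table 6.
    console = serial.Serial(
        port="COM1",               # assumption; may be COM3, /dev/ttyUSB0, ...
        baudrate=115200,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        xonxoff=True,              # XON/XOFF software flow control
        timeout=5,
    )

    # As in step 5, send Enter a few times and show whatever echoes back.
    for _ in range(3):
        console.write(b"\r\n")
    print(console.read(256).decode(errors="replace"))
    console.close()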

Discover the Storage Center Using the Setup Utility Tool
Use the Setup Utility tool to discover the Storage Center.
1. Log in to the top storage controller.
   • The login is __setup__
     NOTE: The login contains two underscore characters before the word setup and two underscore characters after the word setup.
   • The password is StorageCenterSetup
   The Setup Utility tool discovers the uninitialized Storage Center.
2. Type Y and press Enter to continue.
3. Type the system information, controller information, and user information in the prompts that are displayed.
4. After entering the information about the storage system, review the information. At the prompt Do you wish to continue with this configuration?, press Enter.
   Configuration of the Storage Center begins, and messages appear to show the progress of the configuration.
5. After the configuration successfully completes, open the Dell Storage Manager and connect to the Storage Center using the management address.
   The Discover and Configure Uninitialized Storage Center wizard opens to the Create Storage Type page.
   NOTE: If you cannot discover the Storage Center using the Setup Utility tool, contact Dell Technical Support.


C Worksheet to Record System Information
Use the following worksheet to record the information that is needed to install the SCv3000 and SCv3020 storage system.

Storage Center Information
Gather and record the following information about the Storage Center network and the administrator user.
Table 7. Storage Center Network

Service Tag                                                    ________________
Management IPv4 address (Storage Center management address)    ___ . ___ . ___ . ___
Top Controller IPv4 address (Controller 1 MGMT port)           ___ . ___ . ___ . ___
Bottom Controller IPv4 address (Controller 2 MGMT port)        ___ . ___ . ___ . ___
Subnet mask                                                    ___ . ___ . ___ . ___
Gateway IPv4 address                                           ___ . ___ . ___ . ___
Domain name                                                    ________________
DNS server address                                             ___ . ___ . ___ . ___
Secondary DNS server address                                   ___ . ___ . ___ . ___

Table 8. Storage Center Administrator

Password for the default Storage Center Admin user         ________________
Email address of the default Storage Center Admin user     ________________

iSCSI Fault Domain Information
For a storage system with iSCSI front-end ports, gather and record network information for the iSCSI fault domains. This information
is needed to complete the Discover and Configure Uninitialized Storage Centers wizard.
NOTE: For a storage system deployed with two Ethernet switches, Dell recommends setting up each fault domain on a
separate subnet.
Table 9. iSCSI Fault Domain 1

Target IPv4 address                                                        ___ . ___ . ___ . ___
Subnet mask                                                                ___ . ___ . ___ . ___
Gateway IPv4 address                                                       ___ . ___ . ___ . ___
IPv4 address for storage controller module 1: port 1                       ___ . ___ . ___ . ___
IPv4 address for storage controller module 2: port 1                       ___ . ___ . ___ . ___
(4-port HBA only) IPv4 address for storage controller module 1: port 3     ___ . ___ . ___ . ___
(4-port HBA only) IPv4 address for storage controller module 2: port 3     ___ . ___ . ___ . ___


Table 10. iSCSI Fault Domain 2

Target IPv4 address                                                        ___ . ___ . ___ . ___
Subnet mask                                                                ___ . ___ . ___ . ___
Gateway IPv4 address                                                       ___ . ___ . ___ . ___
IPv4 address for storage controller module 1: port 2                       ___ . ___ . ___ . ___
IPv4 address for storage controller module 2: port 2                       ___ . ___ . ___ . ___
(4-port HBA only) IPv4 address for storage controller module 1: port 4     ___ . ___ . ___ . ___
(4-port HBA only) IPv4 address for storage controller module 2: port 4     ___ . ___ . ___ . ___

Additional Storage Center Information
The Network Time Protocol (NTP) and Simple Mail Transfer Protocol (SMTP) server information is optional. The proxy server
information is also optional, but it may be required to complete the Discover and Configure Uninitialized Storage Centers wizard.
Table 11. NTP, SMTP, and Proxy Servers

NTP server IPv4 address                ___ . ___ . ___ . ___
SMTP server IPv4 address               ___ . ___ . ___ . ___
Backup SMTP server IPv4 address        ___ . ___ . ___ . ___
SMTP server login ID                   ________________
SMTP server password                   ________________
Proxy server IPv4 address              ___ . ___ . ___ . ___

Fibre Channel Zoning Information
For a storage system with Fibre Channel front-end ports, record the physical and virtual WWNs of the Fibre Channel ports in Fault
Domain 1 and Fault Domain 2. This information is displayed on the Review Front-End page of the Discover and Configure
Uninitialized Storage Centers wizard. Use this information to configure zoning on each Fibre Channel switch.
Table 12. Physical WWNs in Fault Domain 1

Physical WWN of storage controller 1: port 1                        ________________
Physical WWN of storage controller 2: port 1                        ________________
(4-port HBA only) Physical WWN of storage controller 1: port 3      ________________
(4-port HBA only) Physical WWN of storage controller 2: port 3      ________________

Table 13. Virtual WWNs in Fault Domain 1

Virtual WWN of storage controller 1: port 1                         ________________
Virtual WWN of storage controller 2: port 1                         ________________
(4-port HBA only) Virtual WWN of storage controller 1: port 3       ________________
(4-port HBA only) Virtual WWN of storage controller 2: port 3       ________________


Table 14. Physical WWNs in Fault Domain 2

Physical WWN of storage controller 1: port 2                        ________________
Physical WWN of storage controller 2: port 2                        ________________
(4-port HBA only) Physical WWN of storage controller 1: port 4      ________________
(4-port HBA only) Physical WWN of storage controller 2: port 4      ________________

Table 15. Virtual WWNs in Fault Domain 2

Virtual WWN of storage controller 1: port 2                         ________________
Virtual WWN of storage controller 2: port 2                         ________________
(4-port HBA only) Virtual WWN of storage controller 1: port 4       ________________
(4-port HBA only) Virtual WWN of storage controller 2: port 4       ________________


D
HBA Server Settings
This appendix lists the recommended HBA card settings for the most effective communication between the server and the
Storage Center.

Settings by HBA Manufacturer
Storage Center has been tested to work with servers using Dell, Cisco, Emulex, and QLogic HBAs.
NOTE: Cisco, Emulex, and QLogic HBAs require additional configuration to improve the connection speeds between the
server and the Storage Center. For more information regarding the compatibility of an HBA, see the Dell Storage
Compatibility Matrix.

Dell 12 Gb SAS HBAs
Dell 12 Gb SAS HBAs are fully compatible with Storage Center and do not require further configuration.

Cisco Fibre Channel HBAs
Cisco manufactures Fibre Channel HBAs that are compatible with Storage Centers.
NOTE: For more information regarding the compatibility of a Cisco Fibre Channel HBA, see the Dell Storage Compatibility
Matrix.
Configure a Cisco Fibre Channel HBA with the following settings:

Field                  Setting
FCP Error Recovery     Disabled (default)
Flogi Retries          60
Flogi Timeout          4000 (default)
Plogi Retries          60
Plogi Timeout          20000 (default)
Port Down Timeout      10000 (default)
Port Down IO Retry     60 (default)
Link Down Timeout      30000 (default)

Emulex HBAs
Emulex manufactures HBAs for iSCSI and Fibre Channel connections that are compatible with Storage Centers.
NOTE: For more information regarding the compatibility of an HBA, see the Dell Storage Compatibility Matrix. For more
information about Emulex, see www.emulex.com.

Configure Emulex HBA Settings
Configure Emulex HBA settings to enable the HBA to communicate more effectively with the Storage Center. Configure Emulex
HBA settings with the Emulex HBAnyware utility or the Emulex LightPulse BIOS. After configuring the settings based on the
manufacturer of the HBA, configure the settings that apply to the operating system running on the server.
Configure an Emulex HBA to match the following settings:


Table 16. Emulex HBA Settings

Field          Setting
NodeTimeOut    60
QueueDepth     254
Topology       1

QLogic HBAs
QLogic manufactures HBAs that are compatible with Storage Centers.
NOTE: For more information regarding the compatibility of an HBA, see the Dell Storage Compatibility Matrix. For more
information about QLogic, see www.qlogic.com.

Configure QLogic HBA Settings
Configure QLogic HBA settings to enable the HBA to communicate more effectively with the Storage Center. The following settings
can be configured on any of the compatible QLogic HBAs from the QLogic Fast!UTIL BIOS or the QLogic SANsurfer. After
configuring the settings based on the manufacturer of the HBA, configure the settings that apply to the operating system running on
the server.

QLogic Fibre Channel HBAs
Configure a QLogic Fibre Channel HBA to match the following settings:

Table 17. Fibre Channel HBA Settings

Field                    Setting
Connection options       1 for point-to-point only
Login retry count        60 attempts
Port down retry count    60 attempts
Link down timeout        30 seconds
Execution Throttle       256

QLogic iSCSI HBAs
Configure a QLogic iSCSI HBA to match the following settings:

Table 18. iSCSI HBA Settings

Field          Setting
ARP Redirect   Enabled

Settings by Server Operating System
To ensure effective communication with the Storage Center, configure the HBA settings from the server operating system. The
following server operating systems can be configured to provide more effective communication with Storage Center.
• Citrix XenServer
• Microsoft Windows Server
• Novell Netware
• Red Hat Enterprise Linux


Citrix XenServer
Configure the server HBA settings for Citrix XenServer to ensure that the server performs a proper storage system failover
when working with Storage Center.
NOTE: If the server is configured in a high-availability cluster, contact Citrix for best practices for setting high-availability
timeout values.

Versions 5.x to 6.2
For Citrix XenServer versions 5.x through 6.2, apply the following timeout values to ensure that XenServer volumes persist
after a Storage Center controller failover. These settings are in the mpathHBA file in the /opt/xensource/sm/ directory.
When finished, save the file and reboot the server.

Table 19. Citrix XenServer HBA Settings for Versions 5.x to 6.2

Field              Setting
DEFAULT_TIMEOUT    60
MPATH_TIMEOUT      60
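
As an illustration, if the mpathHBA file stores these values as shell-style NAME=value assignments (an assumption; inspect the file format before editing), they could be updated in place:

# Assumes NAME=value lines in mpathHBA; verify the file format first.
sed -i 's/^DEFAULT_TIMEOUT=.*/DEFAULT_TIMEOUT=60/' /opt/xensource/sm/mpathHBA
sed -i 's/^MPATH_TIMEOUT=.*/MPATH_TIMEOUT=60/' /opt/xensource/sm/mpathHBA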

Version 6.5
For Citrix XenServer version 6.5 and later, the multipath configuration file has been relocated. To ensure that XenServer
volumes persist after a Storage Center controller failover, apply the following timeout value. This setting is located in
the defaults section of the multipath.conf configuration file in the /etc directory. When finished, save the file and
reboot the server.
The following code provides an example:
defaults {
    user_friendly_names no
    replace_wwid_whitespace yes
    dev_loss_tmo 60
}
NOTE: The default value for the dev_loss_tmo setting is 30. Dell recommends changing it to 60, as shown in the example
above.
Table 20. Citrix XenServer HBA Settings for Version 6.5 and Later

Field           Setting
dev_loss_tmo    60

Microsoft Windows Server
Verify that the disk timeout value for a Microsoft Windows Server is set to 60 seconds.
Make sure that TimeoutValue is set to 60 in the following Registry Editor location:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk
NOTE: It is recommended that the latest service pack be installed prior to installing the clustering service.
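
For example, the value can be set from an elevated command prompt using the standard reg.exe tool (a sketch; a reboot may be required for the change to take effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeoutValue /t REG_DWORD /d 60 /f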

Microsoft MPIO Settings
The following settings are recommended for Microsoft Windows Servers with MPIO installed.

Recommended MPIO Registry Settings
Configure the MPIO registry settings in the following registry location:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\mpio\Parameters


Table 21. MPIO Registry Settings

Field                            Setting
PDORemovePeriod                  120
PathRecoveryInterval             25
UseCustomPathRecoveryInterval    1
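
As an illustration, the same values can be applied from an elevated command prompt using reg.exe (a sketch; a reboot may be required for the MPIO driver to pick up the changes):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v PDORemovePeriod /t REG_DWORD /d 120 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v PathRecoveryInterval /t REG_DWORD /d 25 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v UseCustomPathRecoveryInterval /t REG_DWORD /d 1 /f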

Recommended iSCSI Initiator Settings
Configure the iSCSI initiator settings in the following registry location:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters
Table 22. iSCSI Initiator Settings

Field                 Setting
MaxRequestHoldTime    90
LinkDownTime          35
EnableNOPOut          1
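
The instance number varies by system. As a sketch, the instance subkeys (typically 0000, 0001, and so on) can be listed with reg query, and the values then set under the matching Parameters key; the 0000 instance below is a hypothetical example:

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}"
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" /v MaxRequestHoldTime /t REG_DWORD /d 90 /f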

Novell Netware
Servers running Novell Netware require that the portdown value be reconfigured to allow enough time for the storage system
to fail over.
Add the following to the end of the Fibre Channel driver load line in nwserver/startup.ncf:
/LUNS /ALLPATHS /ALLPORTS /PORTDOWN=60
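
A hypothetical resulting load line is shown below; the driver name and slot number are placeholders, so use the values already present in your startup.ncf:

LOAD QL2300.HAM SLOT=2 /LUNS /ALLPATHS /ALLPORTS /PORTDOWN=60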

Red Hat Enterprise Linux
Timeout values determine how long a server waits before destroying a connection after losing connectivity. With a single-path
configuration, set the value to 60 seconds to allow the WWN of the failed port to transfer to a port on the other storage
controller. With a multipath configuration, set the timeout value to 5 seconds because the ports fail over immediately.
Configure the timeout values as shown below, based on the manufacturer of the HBA card and the path configuration.

Version 5.x
Configure these timeout values for servers running RHEL version 5.x. Add one of the following settings to the end of the
/etc/modprobe.conf file, based on the manufacturer of the HBA card.

QLogic HBA Settings

Path Configuration    Timeout Setting
Single Path           options qla2xxx qlport_down_retry=60
Multipath             options qla2xxx qlport_down_retry=5

Emulex HBA Settings

Path Configuration    Timeout Setting
Single Path           options lpfc lpfc_devloss_tmo=60
Multipath             options lpfc lpfc_devloss_tmo=5
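
For example, the multipath Emulex setting can be appended to the file as follows (a sketch; reload the lpfc module or reboot the server for the option to take effect):

echo "options lpfc lpfc_devloss_tmo=5" >> /etc/modprobe.conf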


Version 6.x
Changing HBA settings in RHEL version 6.x requires creating a new configuration file that contains the settings shown below.
For QLogic HBA cards, create a configuration file in /etc/modprobe.d/ named qla2xxx.conf that contains one of the following
parameters.

QLogic HBA Settings

Path Configuration    Timeout Setting
Single Path           options qla2xxx qlport_down_retry=60
Multipath             options qla2xxx qlport_down_retry=5

For Emulex HBA cards, create a configuration file in /etc/modprobe.d/ named lpfc.conf that contains one of the following
parameters.

Emulex HBA Settings

Path Configuration    Timeout Setting
Single Path           options lpfc lpfc_devloss_tmo=60
Multipath             options lpfc lpfc_devloss_tmo=5
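
As a sketch, the multipath version of the QLogic file could be created as follows (reboot the server, or reload the driver, so it picks up the new option; the same pattern applies to lpfc.conf):

cat > /etc/modprobe.d/qla2xxx.conf <<'EOF'
options qla2xxx qlport_down_retry=5
EOF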


E
iSCSI Settings
This appendix lists recommended and required settings when using iSCSI cards.

Flow Control Settings
This section provides information about flow control and the recommended flow control settings.

Ethernet Flow Control
802.3x flow control is a mechanism for temporarily pausing data transmission when a NIC, an HBA port, or a switch port is
transmitting data faster than its target port can accept the data.
Ethernet flow control allows a switch port to stop network traffic between two nodes by sending a PAUSE frame to another switch
port or edge device. The PAUSE frame temporarily pauses transmission until the port is again able to service requests.

Switch Ports and Flow Control
Recommendations for using Ethernet Flow Control depend on the switch port hardware.
• Ethernet Flow Control should be set to ON for switch ports connected to Storage Center storage system card ports.
• Switch port settings for server NICs and other switch ports in the switch network should be set to ON.

Flow Control
Dell recommends the following settings as best practice when enabling flow control:
• A minimum of receive (RX) flow control should be enabled for all switch interfaces used by servers or storage systems for
iSCSI traffic (see the example after the following note).
• Symmetric flow control should be enabled for all server interfaces used for iSCSI traffic. Storage Center automatically
enables this feature.
NOTE: To find best practices for iSCSI SAN switch configuration, go to the Switch Configuration Guides wiki page.
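
As an illustration only, on many switches receive flow control is enabled per interface from the CLI. The following sketch uses Cisco IOS-style syntax with a hypothetical interface name; consult the configuration guide for your specific switch:

interface TenGigabitEthernet1/0/1
 flowcontrol receive on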

Jumbo Frames and Flow Control
Some switches have limited buffer sizes and can support either jumbo frames or flow control, but cannot support both at the same
time. If you must choose between the two features, Dell recommends choosing flow control.
NOTE: All the switches listed in the Dell Storage Compatibility Matrix support both jumbo frames and flow control at the
same time.
However, if you use jumbo frames, be aware of the following:
• To simplify troubleshooting initial deployments, make sure that all servers, switches, and storage are fully operational
before enabling jumbo frames.
• All devices connected through iSCSI must support 9K jumbo frames or larger.
• All devices used to connect iSCSI devices must support 9K jumbo frames. Every switch, router, WAN accelerator, and any
other network device that handles iSCSI traffic must support 9K jumbo frames. If you are not sure that every device in your
iSCSI network supports 9K jumbo frames, do not turn on jumbo frames.
• Devices on both sides (server and SAN) must have jumbo frames enabled. It is recommended that any change to the jumbo
frames setting be made during a maintenance window.
• If the MTU is not set correctly on the data paths, devices cannot communicate. Packets that are larger than the MTU size
are discarded and do not reach the destination.
• QLogic 4010 series cards do not support jumbo frames.
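
A quick way to confirm that a 9000-byte MTU is honored end to end is a non-fragmenting ping from a Linux host. This is a sketch: 8972 bytes of ICMP payload plus 28 bytes of IP and ICMP headers equals 9000, and the target address is a placeholder:

ping -M do -s 8972 192.168.10.20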

Perform the following steps in Dell Storage Manager to display the model number of an iSCSI I/O card:
1. Use Dell Storage Manager to connect to the Storage Center.
2. Click the Hardware tab.
3. From the Hardware tab navigation pane, click the Controllers node.
4. In the right pane, click the IO Ports tab.
5. In the iSCSI area of the IO Ports tab, the Description column displays the model numbers of the iSCSI I/O cards.

Other iSCSI Settings
The following table lists Dell recommended iSCSI settings and best practices.
Table 23. Recommended iSCSI HBA Settings

Full Duplex
• Use auto-negotiate for all interfaces that negotiate at full-duplex and at the maximum speed of the connected port
(1 GbE or 10 GbE).
• If a switch cannot correctly auto-negotiate at full-duplex or at the maximum speed of the connection, it should be hard
set at full-duplex and at the maximum speed of the connected port (1 GbE or 10 GbE).

MTU
Verify the optimal MTU setting for replications. The default is 1500, but WAN circuits or VPNs sometimes create additional
overhead that can cause packet fragmentation. This fragmentation may result in iSCSI replication failure and/or suboptimal
performance. Adjust the MTU setting using Dell Storage Manager.

Switch
• Configure switch interfaces that connect directly to servers or storage systems to forward using PortFast or Edgeport.
Go to the Switch Configuration Guides wiki page and refer to the guide for the current switch.
• Ensure that any switches used for iSCSI are of a non-blocking design.
• When deciding which switches to use, remember that you are running iSCSI traffic over the switch. Use only quality,
managed, enterprise-class networking equipment. It is not recommended to use SBHO (small business/home office) class
equipment outside of lab/test environments. Check the Dell Storage Compatibility Matrix to ensure the switch has been
fully tested to work in a SAN.

VLAN
• To find best practices for a VLAN, go to the Switch Configuration Guides wiki page and refer to the guide for the
current switch.
• Maintain two separate VLANs when using multipathed iSCSI.
• Disable unicast storm control on every switch that handles iSCSI traffic.
• Disable multicast at the switch level for all iSCSI VLANs. Set multicast storm control to enabled (if available) when
multicast cannot be disabled.


