Remote Boot and Remote Storage for Intel®
Ethernet Adapters and Devices

Overview
Welcome to the Remote Boot and Storage Guide for Intel® Ethernet Adapters and devices. This guide covers
initiator (client) hardware setup, software configuration on initiators and targets, and troubleshooting tips for
remote boot and remote storage configurations.

Intel® Boot Agent
The Intel® Boot Agent is a software product that allows your networked client computer to boot using a
program code image supplied by a remote server. Intel Boot Agent complies with the Pre-boot eXecution
Environment (PXE) Version 2.1 Specification. It is compatible with legacy boot agent environments that use the
BOOTP protocol.

Intel® Ethernet iSCSI Boot
Intel® Ethernet iSCSI Boot provides the capability to boot a client system from a remote iSCSI disk volume
located on an iSCSI-based Storage Area Network (SAN).

Intel® Ethernet FCoE Boot
Intel® Ethernet FCoE Boot provides the capability to boot a client system from a remote disk volume located
on a Fibre Channel Storage Area Network (SAN).

Using Intel® PROSet for Windows Device Manager
There are two ways to navigate to the FCoE properties in Windows Device Manager: by using the "Data
Center" tab on the adapter property sheet or by using the Intel® "Ethernet Virtual Storage Miniport Driver for
FCoE Storage Controllers" property sheet.

DCB (Data Center Bridging)
Data Center Bridging (DCB) is a collection of standards-based extensions to classical Ethernet. It provides a
lossless data center transport layer that enables the convergence of LANs and SANs onto a single unified
fabric.
Furthermore, DCB is a Quality of Service implementation in hardware. It uses the VLAN priority tag (802.1p) to
filter traffic, which means traffic can be filtered into 8 different priorities. It also enables priority flow control
(802.1Qbb), which can limit or eliminate the number of dropped packets during network stress. Bandwidth can
be allocated to each of these priorities and is enforced at the hardware level (802.1Qaz).
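The 8 priorities come from the 3-bit Priority Code Point (PCP) field of the 802.1Q VLAN tag. As a minimal sketch, not specific to Intel hardware, this shows how the 16-bit Tag Control Information field packs that 802.1p priority:

```python
def build_tci(priority: int, dei: int, vlan_id: int) -> int:
    """Pack the 802.1Q Tag Control Information (TCI) field.

    priority: 3-bit 802.1p Priority Code Point (0-7), hence 8 traffic classes
    dei:      1-bit Drop Eligible Indicator
    vlan_id:  12-bit VLAN identifier (0-4095)
    """
    if not 0 <= priority <= 7:
        raise ValueError("802.1p priority must be 0-7")
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID must be 0-4095")
    return (priority << 13) | ((dei & 1) << 12) | vlan_id

def pcp_of(tci: int) -> int:
    """Extract the 3-bit 802.1p priority back out of a TCI value."""
    return (tci >> 13) & 0x7

# Example: priority 5 on VLAN 100 (values chosen for illustration only)
tci = build_tci(priority=5, dei=0, vlan_id=100)
print(hex(tci))     # 0xa064
print(pcp_of(tci))  # 5
```

DCB-capable hardware filters traffic by exactly this 3-bit field, which is why there are 8 priority classes and no more.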
Adapter firmware implements LLDP and DCBX protocol agents as per 802.1AB and 802.1Qaz, respectively.
The firmware-based DCBX agent runs in willing mode only and can accept settings from a DCBX-capable
peer. Software configuration of DCBX parameters via dcbtool/lldptool is not supported.

NOTES:
- On X710 based devices running Microsoft Windows, Data Center Bridging (DCB) is only supported on NVM version 4.52 and newer. Older NVM versions must be updated before the adapter is capable of DCB support in Windows.
- XL710 based devices running Microsoft Windows do not support Data Center Bridging (DCB).

iSCSI Over DCB
Intel® Ethernet adapters support iSCSI software initiators that are native to the underlying operating system.
In the case of Windows, the Microsoft iSCSI Software Initiator enables connection of a Windows host to an
external iSCSI storage array using an Intel Ethernet adapter.
In the case of open source distributions, virtually all distributions include support for an Open-iSCSI software
initiator, and Intel® Ethernet adapters support them. Please consult your distribution documentation for
additional configuration details on its particular Open-iSCSI initiator.
Intel® 82599 and X540-based adapters support iSCSI within a Data Center Bridging cloud. Used in
conjunction with switches and targets that support the iSCSI/DCB application TLV, this solution can provide
guaranteed minimum bandwidth for iSCSI traffic between the host and target. This solution enables storage
administrators to segment iSCSI traffic from LAN traffic, similar to how they can currently segment FCoE
from LAN traffic. Previously, iSCSI traffic within a DCB supported environment was treated as LAN traffic by
switch vendors. Please consult your switch and target vendors to ensure that they support the iSCSI/DCB
application TLV.

Intel® Ethernet FCoE (Fibre Channel over Ethernet)
Fibre Channel over Ethernet (FCoE) is the encapsulation of standard Fibre Channel (FC) protocol frames as
data within standard Ethernet frames. This link-level encapsulation, teamed with an FCoE-aware Ethernet-to-FC gateway, acts to extend an FC fabric to include Ethernet-based host connectivity. The FCoE specification
focuses on encapsulation of FC frames specific to storage class traffic, as defined by the Fibre Channel FC-4
FCP specification.

Jumbo Frames
The base driver supports FCoE mini-Jumbo Frames (2.5k bytes) independent of the LAN Jumbo Frames
setting.

FCoE VN to VN (VN2VN) Support
FCoE VN to VN, also called VN2VN, is a standard for connecting two end-nodes (ENodes) directly using
FCoE. An ENode can create a VN2VN virtual link with another remote ENode without connecting to FC or
FCoE switches (FCFs) in between, so neither port zoning nor advanced Fibre Channel services are required. The
storage software controls access to, and security of, LUNs using LUN masking. The VN2VN fabric may have
a lossless Ethernet switch between the ENodes. This allows multiple ENodes to participate in creating more
than one VN2VN virtual link in the VN2VN fabric. VN2VN has two operational modes: Point to Point
(PT2PT) and Multipoint.
NOTE: The mode of operation is used only during initialization.

Point to Point (PT2PT) Mode
In Point to Point mode, there are only two ENodes, and they are connected either directly or through a
lossless Ethernet switch.

MultiPoint Mode
If more than two ENodes are detected in the VN2VN fabric, then all nodes should operate in Multipoint mode.

Enabling VN2VN in Microsoft Windows
To enable VN2VN in Microsoft Windows:
1. Start Windows Device Manager.
2. Open the appropriate FCoE miniport property sheet (generally under Storage controllers) and click on
the Advanced tab.
3. Select the VN2VN setting and choose "Enable."

UEFI
The UEFI network driver for Intel® Ethernet Network Connection enables network connectivity under UEFI. It
can be used in conjunction with UEFI software components available from other sources to perform network
functions in the UEFI environment. Intel's UEFI network driver supports Intel's FLB3 file format. This format
extends the header information in the FLB file, enabling more than 16 image types, including a combined
Option ROM and NVM image.
NOTE: If you update your adapter's NVM image, you must completely power cycle your system,
including removing main power, for the update to take effect.

Supported UEFI Implementations
The UEFI network driver supports UEFI platforms based on the following UEFI/EFI specifications:
- UEFI 2.3 (http://www.uefi.org)
- UEFI 2.2
- UEFI 2.1
- UEFI 2.0
- EFI 1.10 (http://www.intel.com/technology/efi)

UEFI driver binaries are provided for 64-bit (x86-64) and Itanium processor family platforms.

Supported Adapters and Devices
Intel Boot Agent
Intel Boot Agent supports all Intel 10 Gigabit Ethernet, 1 Gigabit Ethernet, and PRO/100 Ethernet Adapters.

FCoE
A list of Intel Ethernet Adapters that support FCoE can be found at
http://www.intel.com/support/go/network/adapter/fcoefaq.htm

iSCSI
A list of Intel Ethernet Adapters that support iSCSI can be found at
http://www.intel.com/support/go/network/adapter/iscsiadapters.htm

Flash Images
"Flash" is a generic term for nonvolatile RAM (NVRAM), firmware, and option ROM (OROM). Depending on
the device, it can be on the NIC or on the system board.

Enabling the Flash
If you have an Intel Desktop Adapter installed in your client computer, the flash ROM device is already
available in your adapter, and no further installation steps are necessary. For Intel Server Adapters, the flash
ROM can be enabled using the BootUtil utility. For example, from the command line type:
BOOTUTIL -E
BOOTUTIL -NIC=1 -FLASHENABLE
The first line will enumerate the ports available in your system. Choose a port. Then type the second line,
selecting the port you wish to enable. For more details, see the bootutil.txt file.

Updating the Flash in Microsoft Windows
Intel® PROSet for Windows* Device Manager can flash the Boot ROM. However, if you need to enable or
disable the Boot ROM, use BootUtil.
Intel® PROSet for Windows Device Manager can only be used to program add-in Intel PCI, PCI-X, and PCIe
network adapters. Use BootUtil to program LOM (LAN On Motherboard) network connections and other
devices.

Using Intel PROSet to flash the UEFI Network Driver Option ROM
Intel® PROSet for Windows Device Manager can install the UEFI network driver on an Intel network
adapter's option ROM. The UEFI network driver will load automatically during system UEFI boot when
installed in the option ROM. UEFI specific *.FLB images are included on the CD release media. The "Boot
Options" tab in Intel® PROSet for Windows Device Manager will allow the UEFI *.FLB image to be installed
on the network adapter.

Installing/Updating the Flash in MS-DOS Environments
Use BootUtil to install or update the device's flash in an MS-DOS environment:
1. Obtain or create an MS-DOS-bootable USB drive (or other bootable media) and copy the BootUtil utility
to it. You can obtain the most up-to-date version from Intel's website at www.intel.com/support/
2. Boot your computer to an MS-DOS prompt.
CAUTION: The next several steps require that your computer be booted only to MS-DOS.
These steps cannot be performed from an MS-DOS Command Prompt window or using an
MS-DOS task within Windows.
3. Type BOOTUTIL -nic=x -upgrade (where x is the number of the adapter you wish to update) and
press Enter. Refer to the bootutil.txt file for more information.
-- or --
Type BOOTUTIL -all -upgrade (to update all the adapters).

A message similar to the one below appears showing a list of all compatible network adapters found in
your system, assuming both the adapter and the flash ROM device are properly installed.
NOTE: Actual adapter-related data may vary depending upon the adapters installed.

Adapter Choices
=== ================ ======= ==== ============== =======
NIC Network Address  Series  WOL  Boot ROM Type  Version
=== ================ ======= ==== ============== =======
1   00D0B7D36018     Gigabit No   PXE            4.1.17
2   000347003E35     Gigabit No   PXE            4.1.17

4. Type Y (yes) to create a backup of the current contents of the flash ROM device (not yet updated) onto
a file. If such a file already exists, you'll be asked if you want to overwrite the file. If you type Y (yes),
the flash image file is overwritten with the current contents of the flash ROM. The new Intel Boot Agent
image is then written into the flash ROM device used by the adapter. The process takes approximately
one minute.
NOTE: The BootUtil utility automatically names the flash image file (backup file) with a
.IBA extension.
-- or --
Type N (no) to cause BootUtil to proceed without first saving a copy of the current contents of
the flash ROM device onto a file. BootUtil asks you to confirm your choice as follows:
Continue Update without Restore Image? (Y)es or (N)o:
If you type N (no), BootUtil cancels the update, leaving the flash contents unchanged, and
returns to the DOS prompt. If you type Y (yes), a new Intel Boot Agent image is written into the
flash ROM device used by the adapter.
5. You may need to go into the BIOS to change the boot order.

Updating the Flash from Linux
The BootUtil command line utility can update the flash on an Intel network adapter. Run BootUtil with the
following command line options to update the flash on all supported Intel network adapters. For example, enter
the following command line:
bootutil64e -up=efi -all
BootUtil can only be used to program add-in Intel network adapters. LOM (LAN On Motherboard) network
connections cannot be programmed with the UEFI network driver option ROM.
See the bootutil.txt file for details on using BootUtil.

Installing the UEFI Network Driver Option ROM from the UEFI Shell
The BootUtil command line utility can install the UEFI network driver on an Intel network adapter's option
ROM. The UEFI network driver will load automatically during system UEFI boot when installed into the option
ROM. Run BootUtil with the following command line options to install the UEFI network driver on all
supported Intel network adapters:
For x64 systems:
FS0:\>bootutil64e -up=efi -all
For ia64 systems:
FS0:\>bootutil64 -up=efi64 -all
BootUtil can only be used to program add-in Intel PCI, PCI-X, and PCIe network adapters. LOM (LAN On
Motherboard) network connections cannot be programmed with the UEFI network driver option ROM.
See the bootutil.txt file for details on using BootUtil.

UEFI Network Device Driver for Intel® Ethernet Network Connections
UEFI Network Stack
As of UEFI 2.1 there are two network stack configurations under UEFI. The most common configuration is the
PXE based network stack. The alternate network stack provides IPv4 TCP, UDP, and MTFTP network
protocol support. As of UEFI 2.1 the PXE and IP-based network stacks cannot be loaded or operate
simultaneously. The following two sections describe each UEFI network stack configuration.
Reference implementations of the PXE and IP based network stack source code are available for download at
www.tianocore.org.

Loading the UEFI Network Driver
The network driver can be loaded using the UEFI shell "load" command:
load e3040e2.efi

Configuring UEFI Network Stack for PXE
The PXE (Preboot eXecution Environment) based UEFI network stack provides support for UEFI network
boot loaders downloaded from a WFM compliant PXE server. Services which can be enabled include
Windows Deployment Services (WDS), Linux network installation (Elilo), and TFTP file transfers. To enable
UEFI PXE services, the following network protocol drivers must be loaded: snp.efi, bc.efi, and pxedhcp4.efi.
These drivers can be loaded with the UEFI "load" shell command, but are often included as part
of the UEFI system firmware. The UEFI shell command "drivers" can be used to determine if the UEFI PXE
drivers are included in the UEFI implementation. The drivers command will output a table listing drivers loaded
in the system. The following entries must be present in order to network boot a UEFI system over PXE:
DRV  VERSION   TYPE  CFG  DIAG  #D  #C  DRIVER NAME                         IMAGE NAME
F5   00000010  D     -    -     2   -   Simple Network Protocol Driver      SNP
F7   00000010  D     -    -     2   -   PXE Base Code Driver                BC
F9   00000010  D     -    -     2   -   PXE DHCPv4 Driver                   PxeDhcp4
FA   03004000  B     X    X     2   2   Intel(R) Network Connection 3.0.00  /e3000e2.efi

A network boot option will appear in the boot options menu when the UEFI PXE network stack and Intel UEFI
network driver have been loaded. Selecting this boot option will initiate a PXE network boot.

Configuring UEFI Network Stack for TCP/UDP/MTFTP
An IP-based network stack is available to applications requiring IP-based network protocols such as TCP,
UDP, or MTFTP. The following UEFI network drivers must be built into the UEFI platform implementation to
enable this stack: SNP (Simple Network Protocol), MNP (Managed Network Protocol), ARP, DHCP4, IPv4,
ip4config, TCPv4, UDPv4, and MTFTPv4. These drivers will show up in the UEFI "drivers" command output
if they are included in the platform UEFI implementation:
DRV  VERSION   TYPE  CFG  DIAG  #D  #C  DRIVER NAME                         IMAGE NAME
F5   00000010  D     -    -     2   -   IP4 CONFIG Network Service Driver   Ip4Config
F7   00000010  D     -    -     2   -   Simple Network Protocol Driver      SNP
F8   00000010  D     -    -     2   -   ARP Network Service Driver          Arp
F9   00000010  D     -    -     2   -   Tcp Network Service Driver          Tcp4
FA   00000010  D     -    -     2   -   IP4 Network Service Driver          Ip4
FB   00000010  D     -    -     2   -   DHCP Protocol Driver                Dhcp4
FC   00000010  D     -    -     6   -   UDP Network Service Driver          Udp4
FD   00000010  D     -    -     2   -   MTFTP4 Network Service              Mtftp4
FE   00000010  B     -    -     2   6   MNP Network Service Driver          /mnp.efi
FF   03099900  B     X    X     2   2   Intel(R) Network Connection 3.0.00  /e3000e2.efi

The ifconfig UEFI shell command must be used to configure each network interface. Running "ifconfig -?"
from the UEFI shell will display usage instructions for ifconfig.

Unloading the UEFI Network Driver
To unload a network driver from memory the UEFI "unload" command is used. The syntax for using the unload
command is as follows: "unload [driver handle]", where driver handle is the number assigned to the driver in
the far left column of the "drivers" output screen.

Force Speed and Duplex
The UEFI network driver supports forced speed and duplex capability. The force speed and duplex menu can
be accessed with UEFI shell command "drvcfg":
drvcfg -s [driver handle] [control handle]
The following speed and duplex configurations can be selected:
- Autonegotiate (recommended)
- 100 Mbps, full duplex
- 100 Mbps, half duplex
- 10 Mbps, full duplex
- 10 Mbps, half duplex

The speed and duplex setting selected must match the speed and duplex setting of the connecting network
port. A speed and duplex mismatch between ports will result in dropped packets and poor network
performance. It is recommended to set all ports on a network to autonegotiate. Connected ports must be set
to autonegotiate in order to establish a 1 gigabit per second connection.
Fiber-optic and 10 Gigabit Ethernet adapters do not support forced speed and duplex.

Diagnostic Capability
The UEFI network driver features built-in hardware diagnostic tests. The diagnostic tests are called with the
UEFI shell drvdiag command.
drvdiag -s    Performs a basic hardware register test.
drvdiag -e    Performs an internal loopback transmit and receive test.

Client/Initiator Setup
To set up your client system:
1. Enable the Flash on the selected port or adapter.
2. Update the Flash with the latest Flash Image.
3. Configure the boot protocol you wish to use.

Boot Agent Client Configuration Setup
The Intel® Boot Agent software provides configuration options that allow you to customize the behavior of the
Intel Boot Agent software. You can configure the Intel Boot Agent in any of the following environments:
- A Microsoft* Windows* environment
- A Microsoft* MS-DOS* environment
- A pre-boot environment (before an operating system is loaded)

The Intel Boot Agent supports PXE in pre-boot, Microsoft Windows*, and DOS environments. In each of
these environments, a single user interface allows you to configure PXE protocols on Intel® Ethernet
Adapters.
To enter the Intel Boot Agent setup menu, press Ctrl-S during system start-up.

Configuring the Intel® Boot Agent in a Microsoft Windows Environment
If you use the Windows operating system on your client computer, you can use Intel® PROSet for Windows*
Device Manager to configure and update the Intel Boot Agent software. Intel PROSet is available through the
device manager. Intel PROSet provides a special tab, called the Boot Options tab, used for configuring and
updating the Intel Boot Agent software.
To access the Boot Options tab:
1. Open Intel PROSet for Windows Device Manager by opening the System Control Panel. On the
Hardware tab, click Device Manager.
2. Select the appropriate adapter and click the Boot Options tab. If the tab does not appear, update your
network driver.
3. The Boot Options tab shows a list of current configuration parameters and their corresponding values.
Corresponding configuration values appear for the selected setting in a drop-down box. A brief
description of the setting’s function appears in the Description box below it. See Boot Agent
Configuration Settings for a list of configuration parameters, their possible values, and detailed
descriptions.
4. Select a setting you want to change from the Settings selection box.
5. Select a value for that setting from the Value drop-down list.
6. Repeat the preceding two steps to change any additional settings.
7. Once you have completed your changes, click Apply Changes to update the adapter with the new
values.

Configuring the Intel® Boot Agent in an MS-DOS Environment
Intel provides a utility, the Intel® Ethernet Flash Firmware Utility (BootUtil), for installing and configuring the
Intel Boot Agent in the DOS environment. See bootutil.txt for complete information.

Configuring the Intel® Boot Agent in a Pre-Boot PXE Environment
NOTE: Intel Boot Agent may be disabled in the BIOS.

You can customize the behavior of the Intel Boot Agent software through a pre-boot (operating system
independent) configuration setup program contained within the adapter's flash ROM. You can access this
pre-boot configuration setup program each time the client computer cycles through the boot process.
When the boot process begins, the screen clears and the computer begins its Power On Self Test (POST)
sequence. Shortly after completion of the POST, the Intel Boot Agent software stored in flash ROM executes.
The Intel Boot Agent then displays an initialization message, similar to the one below, indicating that it is
active:
Initializing Intel(R) Boot Agent Version X.X.XX
PXE 2.0 Build 083
NOTE: This display may be hidden by the manufacturer's splash screen. Consult your manufacturer's documentation for details.
The configuration setup menu shows a list of configuration settings on the left and their corresponding values
on the right. Key descriptions near the bottom of the menu indicate how to change values for the configuration
settings. For each selected setting, a brief "mini-Help" description of its function appears just above the key
descriptions.
1. Highlight the setting you need to change by using the arrow keys.
2. Once you have accessed the setting you want to change, press the spacebar until the desired value
appears.
3. Once you have completed your changes, press F4 to update the adapter with the new values. Any
changed configuration values are applied as the boot process resumes.
The table below provides a list of configuration settings, their possible values, and their detailed descriptions:
Intel Boot Agent Configuration Settings

Configuration Setting: Network Boot Protocol
Possible Values: PXE (Preboot eXecution Environment)
Description: Select PXE for use with network management programs, such as LANDesk* Management Suite.

Configuration Setting: Boot Order
Possible Values:
- Use BIOS Setup Boot Order
- Try network first, then local drives
- Try local drives first, then network
- Try network only
- Try local drives only
Description: Sets the boot order in which devices are selected during boot up if the computer does not have its own control method.
NOTE: Depending on the configuration of the Intel Boot Agent, this parameter may not be changeable.
If your client computer's BIOS supports the BIOS Boot Specification (BBS), or allows PnP-compliant selection of the boot order in the BIOS setup program, then this setting will always be Use BIOS Setup Boot Order and cannot be changed. In this case, refer to the BIOS setup manual specific to your client computer to set up boot options.
If your client computer does not have a BBS- or PnP-compliant BIOS, you can select any one of the other possible values listed for this setting except for Use BIOS Setup Boot Order.

Configuration Setting: Legacy OS Wakeup Support (for 82559-based adapters only)
Possible Values: 0 = Disabled (default), 1 = Enabled
Description: If set to 1, the Intel Boot Agent will enable PME in the adapter's PCI configuration space during initialization. This allows remote wakeup under legacy operating systems that don't normally support it. Note that enabling this makes the adapter technically non-compliant with the ACPI specification, which is why the default is disabled.

NOTE: If, during PXE boot, more than one adapter is installed in a computer and you want to boot
from the boot ROM located on a specific adapter, you can do so by moving the adapter to the top
of the BIOS Boot Order or by disabling the flash on the other adapters.
While the configuration setup menu is displayed, diagnostics information is also displayed in the lower half of
the screen. This information can be helpful during interaction with Intel Customer Support personnel or your IT
team members. For more information about how to interpret the information displayed, refer to Diagnostics
Information for Pre-boot PXE Environments.

iSCSI Initiator Setup
Configuring Intel® Ethernet iSCSI Boot on a Microsoft* Windows* Client Initiator
Requirements
1. Make sure the iSCSI initiator system starts the iSCSI Boot firmware. The firmware should be configured properly, be able to connect to the iSCSI target, and detect the boot disk.
2. You will need the Microsoft* iSCSI Software Initiator with integrated software boot support.
3. To enable crash dump support, follow the steps in Crash Dump Support.

Configuring Intel® Ethernet iSCSI Boot on a Linux* Client Initiator
1. Install the Open-iSCSI initiator utilities.
#yum -y install iscsi-initiator-utils
2. Refer to www.open-iscsi.org/docs/README.

3. Configure your iSCSI array to allow access.
a. Examine /etc/iscsi/initiatorname.iscsi for the Linux host initiator name.
b. Update your volume manager with this host initiator name.
4. Set the iSCSI services to start on boot.
#chkconfig iscsid on
#chkconfig iscsi on
5. Start the iSCSI service and discover targets (192.168.x.x is the IP address of your target).
#iscsiadm -m discovery -t st -p 192.168.x.x
Observe the target names returned by iSCSI discovery.
6. Log in to the target (iscsiadm -m node -T <target name> -p <target IP> -l). For example:
iscsiadm -m node -T iqn.2123-01.com:yada:yada: -p 192.168.2.124 -l
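The steps above can be collected into one script. This is a sketch only: the RHEL-style yum/chkconfig tooling and the target IP and IQN are carried over from the examples in this guide, and systemd-based distributions would use systemctl enable iscsid instead of chkconfig:

```shell
#!/bin/sh
# Sketch of the Open-iSCSI client setup steps above (RHEL-style tooling assumed)
TARGET_IP=192.168.2.124              # replace with your iSCSI target's IP address

yum -y install iscsi-initiator-utils           # step 1: install the initiator utilities
cat /etc/iscsi/initiatorname.iscsi             # step 3a: note the host initiator name
chkconfig iscsid on                            # step 4: start iSCSI services on boot
chkconfig iscsi on
iscsiadm -m discovery -t st -p "$TARGET_IP"    # step 5: discover targets
# step 6: log in to a discovered target by its IQN
iscsiadm -m node -T iqn.2123-01.com:yada:yada: -p "$TARGET_IP" -l
```

Remember to update your volume manager's access list with the initiator name printed in step 3a before running discovery.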

iSCSI Boot POST Setup
Intel® Ethernet iSCSI Boot features a setup menu which allows two network ports in one system to be
enabled as iSCSI Boot devices. To configure Intel® iSCSI Boot, power-on or reset the system and press the
Ctrl-D key when the message "Press <Ctrl-D> to run setup..." is displayed. After pressing the
Ctrl-D key, you will be taken to the Intel® iSCSI Boot Port Selection Setup Menu.
NOTE: When booting an operating system from a local disk, Intel® Ethernet iSCSI Boot should
be disabled for all network ports.
Intel® Ethernet iSCSI Boot Port Selection Menu
The first screen of the Intel® iSCSI Boot Setup Menu displays a list of Intel® iSCSI Boot-capable adapters.
For each adapter port, the associated PCI device ID, PCI bus/device/function location, and a field indicating
Intel® Ethernet iSCSI Boot status are displayed. Up to 10 iSCSI Boot-capable ports are displayed within the
Port Selection Menu. Any additional Intel® iSCSI Boot-capable adapters are not listed in the setup menu.

The usage of this menu is described below:
- One network port in the system can be selected as the primary boot port by pressing the 'P' key when it is highlighted. The primary boot port will be the first port used by Intel® Ethernet iSCSI Boot to connect to the iSCSI target. Only one port may be selected as a primary boot port.
- One network port in the system can be selected as the secondary boot port by pressing the 'S' key when it is highlighted. The secondary boot port will only be used to connect to the iSCSI target disk if the primary boot port fails to establish a connection. Only one port may be selected as a secondary boot port.
- Pressing the 'D' key with a network port highlighted will disable Intel® Ethernet iSCSI Boot on that port.
- Pressing the 'B' key with a network port highlighted will blink an LED on that port.
- Press the Esc key to leave the screen.

Intel® Ethernet iSCSI Boot Port Specific Setup Menu

The port specific iSCSI setup menu has four options:
- Intel® iSCSI Boot Configuration - Selecting this option will take you to the iSCSI Boot Configuration Setup Menu. The iSCSI Boot Configuration Menu is described in detail in the section below and will allow you to configure the iSCSI parameters for the selected network port.
- CHAP Configuration - Selecting this option will take you to the CHAP configuration screen. The CHAP Configuration Menu is described in detail in the section below.
- Discard Changes and Exit - Selecting this option will discard all changes made in the iSCSI Boot Configuration and CHAP Configuration setup screens, and return to the iSCSI Boot Port Selection Menu.
- Save Changes and Exit - Selecting this option will save all changes made in the iSCSI Boot Configuration and CHAP Configuration setup screens. After selecting this option, you will return to the iSCSI Boot Port Selection Menu.

Intel® iSCSI Boot Configuration Menu
The Intel® iSCSI Boot Configuration Menu allows you to configure the iSCSI Boot and Internet Protocol (IP)
parameters for a specific port. The iSCSI settings can be configured manually or retrieved dynamically from a
DHCP server.

Listed below are the options in the Intel® iSCSI Boot Configuration Menu:
- Use Dynamic IP Configuration (DHCP) - Selecting this checkbox will cause iSCSI Boot to attempt to get the client IP address, subnet mask, and gateway IP address from a DHCP server. If this checkbox is enabled, these fields will not be visible.
- Initiator Name - Enter the iSCSI initiator name to be used by Intel® iSCSI Boot when connecting to an iSCSI target. The value entered in this field is global and used by all iSCSI Boot-enabled ports in the system. This field may be left blank if the "Use DHCP For Target Configuration" checkbox is enabled. For information on how to retrieve the iSCSI initiator name dynamically from a DHCP server, see the section "DHCP Server Configuration".
- Initiator IP - Enter the client IP address to be used for this port as static IP configuration in this field. This IP address will be used by the port during the entire iSCSI session. This option is visible if DHCP is not enabled.
- Subnet Mask - Enter the IP subnet mask in this field. This should be the IP subnet mask used on the network which the selected port will be connecting to for iSCSI. This option is visible if DHCP is not enabled.
- Gateway IP - Enter the IP address of the network gateway in this field. This field is necessary if the iSCSI target is located on a different sub-network than the selected Intel® iSCSI Boot port. This option is visible if DHCP is not enabled.
- Use DHCP for iSCSI Target Information - Selecting this checkbox will cause Intel® iSCSI Boot to attempt to gather the iSCSI target's IP address, IP port number, iSCSI target name, and SCSI LUN ID from a DHCP server on the network. For information on how to configure the iSCSI target parameters using DHCP, see the section "DHCP Server Configuration". When this checkbox is enabled, these fields will not be visible.
- Target Name - Enter the IQN name of the iSCSI target in this field. This option is visible if DHCP for the iSCSI target is not enabled.
- Target IP - Enter the target IP address of the iSCSI target in this field. This option is visible if DHCP for the iSCSI target is not enabled.
- Target Port - Enter the TCP port number of the iSCSI target in this field.
- Boot LUN - Enter the LUN ID of the boot disk on the iSCSI target in this field. This option is visible if DHCP for the iSCSI target is not enabled.
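When "Use DHCP for iSCSI Target Information" is enabled, the target details typically arrive in the DHCP Root Path option, formatted per RFC 4173 as iscsi:<server>:<protocol>:<port>:<LUN>:<target name>. The following Python sketch shows how such a string decomposes into the menu fields above; the defaults for empty fields (TCP, port 3260, LUN 0) are assumptions based on common practice, and IPv6 server addresses are not handled:

```python
def parse_iscsi_root_path(root_path: str) -> dict:
    """Parse an RFC 4173 style iSCSI root path:
        iscsi:<server>:<protocol>:<port>:<LUN>:<target name>
    Empty fields fall back to assumed defaults: protocol 6 (TCP),
    port 3260, LUN 0. The LUN field is hexadecimal.
    """
    prefix, _, rest = root_path.partition(":")
    if prefix != "iscsi":
        raise ValueError("not an iSCSI root path")
    # The target name may itself contain ':', so split only 4 times.
    server, protocol, port, lun, target = rest.split(":", 4)
    return {
        "server": server,
        "protocol": int(protocol) if protocol else 6,
        "port": int(port) if port else 3260,
        "lun": int(lun, 16) if lun else 0,
        "target": target,
    }

# Hypothetical example root path (addresses and IQN are illustrative)
info = parse_iscsi_root_path(
    "iscsi:192.168.2.124::3260:0:iqn.2010-04.com.example:storage.disk1"
)
print(info["server"], info["port"], info["target"])
```

Each parsed field corresponds to one entry in the configuration menu: server to Target IP, port to Target Port, LUN to Boot LUN, and the trailing IQN to Target Name.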

iSCSI CHAP Configuration
Intel® iSCSI Boot supports Mutual CHAP MD5 authentication with an iSCSI target. Intel® iSCSI Boot uses the "MD5 Message Digest Algorithm" developed by RSA Data Security, Inc.

The iSCSI CHAP Configuration menu has the following options to enable CHAP authentication:
- Use CHAP - Selecting this checkbox will enable CHAP authentication for this port. CHAP allows the target to authenticate the initiator. After enabling CHAP authentication, a user name and target password must be entered.
- User Name - Enter the CHAP user name in this field. This must be the same as the CHAP user name configured on the iSCSI target.
- Target Secret - Enter the CHAP password in this field. This must be the same as the CHAP password configured on the iSCSI target and must be between 12 and 16 characters in length. This password cannot be the same as the Initiator Secret.
- Use Mutual CHAP - Selecting this checkbox will enable Mutual CHAP authentication for this port. Mutual CHAP allows the initiator to authenticate the target. After enabling Mutual CHAP authentication, an initiator password must be entered. Mutual CHAP can only be selected if Use CHAP is selected.
- Initiator Secret - Enter the Mutual CHAP password in this field. This password must also be configured on the iSCSI target and must be between 12 and 16 characters in length. This password cannot be the same as the Target Secret.
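As an illustrative sketch (not part of any Intel tool; the function names are invented for this example), the secret rules above can be expressed in a few lines of Python:

```python
# Illustrative check of the CHAP rules described above: each secret
# must be 12-16 characters, and when Mutual CHAP is enabled the
# Initiator Secret must differ from the Target Secret.
# Function names are invented for this sketch, not Intel APIs.

def valid_secret_length(secret):
    return 12 <= len(secret) <= 16

def validate_chap(target_secret, initiator_secret=None):
    """Return a list of problems; an empty list means the secrets are OK.

    Pass initiator_secret only when Mutual CHAP is enabled."""
    problems = []
    if not valid_secret_length(target_secret):
        problems.append("Target Secret must be 12-16 characters")
    if initiator_secret is not None:  # Mutual CHAP enabled
        if not valid_secret_length(initiator_secret):
            problems.append("Initiator Secret must be 12-16 characters")
        if initiator_secret == target_secret:
            problems.append("Initiator Secret must not equal Target Secret")
    return problems
```

For example, `validate_chap("short")` reports the length problem, while a 12-16 character secret passes.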

The CHAP Authentication feature of this product requires the following acknowledgements:
This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). This product
includes software written by Tim Hudson (tjh@cryptsoft.com).
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit.
(http://www.openssl.org/).
Intel® PROSet for Windows* Device Manager
Many of the functions of the Intel® iSCSI Boot Port Selection Setup Menu can also be configured or revised
from Windows Device Manager. Open the adapter's property sheet and select the Data Options tab. You
must install the latest Intel Ethernet Adapter drivers and software to access this.

iSCSI Over DCB (Data Center Bridging)
iSCSI installation includes the installation of the iSCSI DCB Agent (iscsidcb.exe) user mode service.
NOTE: DCB does not install in a VM. iSCSI over DCB is only supported in the base OS. An iSCSI initiator running in a VM will not benefit from DCB Ethernet enhancements.
Configuring iSCSI Over DCB
To enable DCB on the adapter:
1. From Windows Device Manager, expand Network Adapters and highlight the appropriate adapter (such as Intel® Ethernet Server Adapter X520). Right-click the Intel adapter and select Properties.
2. In the Property Page, select the Data Center tab.
Data Center Bridging is most often configured at the switch. If the switch is not DCB capable, the DCB handshake will fail but the iSCSI connection will not be lost. The Data Center tab provides feedback on the DCB state, operational or non-operational, as well as additional details should it be non-operational.
Using iSCSI over DCB with ANS Teaming
The Intel® iSCSI Agent is responsible for maintaining all packet filters for the purpose of priority tagging iSCSI
traffic flowing over DCB-enabled adapters. The iSCSI Agent will create and maintain a traffic filter for an ANS
Team if at least one member of the team has an "Operational" DCB status. However, if any adapter on the
team does not have an "Operational" DCB status, the iSCSI Agent will log an error in the Windows Event Log

for that adapter. These error messages are to notify the administrator of configuration issues that need to be
addressed, but do not affect the tagging or flow of iSCSI traffic for that team, unless it explicitly states that the
TC Filter has been removed.
Go here for more information about DCB.

FCoE Client Setup
Installing and Configuring Intel® Ethernet FCoE Boot on a Microsoft* Windows* Client
WARNINGS:
- Do not update the base driver via the Windows Update method. Doing so may render the system inoperable, generating a blue screen. The FCoE stack and base driver need to be matched. The FCoE stack may get out of sync with the base driver if the base driver is updated via Windows Update. Updating can only be done via the Intel® Network Connections Installer.
- If you are running Microsoft* Windows Server* 2012 R2, you must install KB2883200. Failure to do so may result in an Error 1719 and a blue screen.

New Installation on a Windows Server* system
From the Intel CD: Click the FCoE/DCB checkbox to install the Intel® Ethernet FCoE Protocol Driver and DCB. The MSI installer installs all FCoE and DCB components, including the base driver.
Microsoft Hotfixes
The following Microsoft hotfixes have been found to be needed for specific use cases:
Windows 2008 R2
- KB983554 - High-performance storage devices fix
- KB2708811 - Data corruption occurs under random write stress
Multipath I/O (MPIO)
Windows 2008 R2
- KB979743 - MPIO - write errors
- KB981379 - MS DSM - target issues
Windows 2008 R2 SP1
- KB2406705 - Configuring MPIO Timers; this article contains additional information about these registry settings. Set the PathRecoveryInterval value to 60.

Intel® Ethernet FCoE Configuration Using Intel® PROSet for Windows* Device Manager
Many FCoE functions can also be configured or revised using Intel PROSet for Windows* Device Manager,
accessed from the FCoE Properties button within the Data Center tab. You can use Intel PROSet to
perform the following tasks:

- Configure FCoE initiator specific settings
- Go to the corresponding port driver
- Review FCoE initiator information
- Obtain general information
- Review statistics
- Obtain information about the initiator
- Obtain information about attached devices
- Review FIP discovered VLANs and status

In addition, you can find some FCoE RSS performance settings under the Performance Options of the
Advanced tab of the Network Adapter device properties. For additional information see the Receive Side
Scaling subsection of the Microsoft Windows Advanced Features section of the Intel(R) 10GbE Adapter
Guide.
NOTES:
- PROSetCL.EXE is used for DCB/FCoE configuration on Microsoft* Windows* Server 2008 Core and Microsoft* Windows* Server 2008 R2 Core operating systems.
- From the Boot Options tab, the user will see the Flash Information button. Clicking the Flash Information button will open the Flash Information dialog. From the Flash Information dialog, clicking the Update Flash button allows Intel® iSCSI Remote Boot, Intel® Boot Agent (IBA), Intel® Ethernet FCoE Boot, EFI, and CLP to be written. The update operation writes a new image to the adapter's Flash and modifies the EEPROM, which may temporarily disable the operation of the Windows* network device driver. You might need to reboot the computer following this operation.
- You cannot update the flash image of a LOM; this button will be disabled.

1. Create a disk target (LUN) on an available Fibre Channel target. Configure this LUN to be accessible to
the WWPN address of the initiator of the host being booted.
2. Make sure the client system starts the Intel® Ethernet FCoE Boot firmware. The firmware should be
configured properly, be able to connect to Fibre Channel target, and detect the boot disk.
Intel® PROSet for Windows* Device Manager
Many of the functions of the Intel® Ethernet FCoE Boot Port Selection Setup Menu can also be configured or
revised using Intel® PROSet for Windows Device Manager.
- The Intel® Ethernet FCoE Boot version is displayed on the Boot Options tab if the combo image supports FCoE Boot.
- Intel® Ethernet FCoE Boot is an Active Image option if FCoE Boot is supported by the combo image.
- The Active Image setting enables/disables Intel® Ethernet FCoE Boot in the EEPROM.
- Intel® Ethernet FCoE Boot settings are displayed if FCoE Boot is the active image.

Installing Windows Server from a Remote Disk ("Diskless Install")
After the Option ROM is installed, if you wish to install the Windows Server operating system directly to the
FCoE disk, do the following:

1. Locate the FCoE drivers in \APPS\FCOEBOOT\Winx64\. Extract all zipped files and copy to a
CD/DVD or USB media.
2. Boot the install media.
3. Perform a Custom install and proceed to the "Where do you want to install Windows?" screen.
4. Use Load Driver to load the FCoE drivers. Browse to the location you chose previously and load the
following two drivers in the specified order:
1. Intel(R) Ethernet Setup Driver for FCoE.
2. Intel(R) Ethernet Virtual Storage Miniport Driver for FCoE.
Note: the FCoE drivers will block any other network traffic from the FCoE-supported ports until after
Step 7 in this section. Do not attempt to install an NDIS miniport for any FCoE-supported ports prior to
Step 7 in this section.
5. You should now see the FCoE disk or disks appear in the list of available install targets. All disks
accessible by this initiator through the boot port should appear.
6. Select the FCoE disk configured for boot in the Option ROM and continue the install until Windows is
installed and you are at the desktop.
7. Follow the instructions for installing Windows Server and the FCoE stack. This will install the networking drivers and configure the FCoE drivers to work with the networking drivers. Note that you cannot deselect the FCoE feature. You will be prompted to reboot at the end of the installation process.
8. Windows may prompt you to reboot once again after it returns to the desktop.

Installing Windows Server with Local Disk
After the Option ROM is installed, if you wish to install Windows Server with local disk, do the following:
1. Follow the instructions for installing Windows Server and the FCoE stack.
2. Verify that the FCoE Boot disk is available in the Fabric View tab of Intel® PROSet for Windows
Device Manager, and verify that you are online using Windows Disk Manager.
3. Open a command prompt, run the fcoeprep.bat batch file. To find the batch file, navigate to the
\APPS\FCOEBOOT\Winx64\ directory.
4. Shut Windows down and capture the OS image to a local disk partition.
5. Transfer the image from the local hard drive to the FCoE target. This may be done from within the local
Windows installation.
6. For Windows 2008 R2 SP1 only: Run bcdboot.exe from the local Windows installation to make the
FCoE disk bootable.
- If a System Reserved partition exists on the FCoE disk, type: bcdboot F:\Windows /s E:
  where E: is the FCoE System Reserved partition and F: is the FCoE partition with the Windows directory.
- If a System Reserved partition does not exist, type: bcdboot E:\Windows /s E:
  where E: is the FCoE partition with the Windows directory.

7. Shut down and remove the local disk.
8. Configure the system BIOS to boot from the FCoE disk and boot.
NOTE: See Microsoft's documentation for more detailed instructions.

Upgrading Windows Drivers on an Intel® Ethernet FCoE-Booted System
Upgrading an FCoE-booted system can only be done via the Intel® Network Connections Installer. A reboot
is required to complete the upgrade. You cannot upgrade a port's Windows driver and software package if the
port is in the path to the virtual memory paging file and is also part of a Microsoft Server 2012 NIC Team
(LBFO Team). To complete the upgrade, remove the port from the LBFO team and restart the upgrade.
Validation and Storage Certification
The software for Intel® Ethernet FCoE comprises two major components: the Intel® Ethernet base driver and the Intel® Ethernet FCoE driver. They are developed and validated as an ordered pair. You are strongly encouraged to avoid scenarios, whether through upgrades or Windows Update, in which the Intel® Ethernet driver version is not the version released with the corresponding Intel® Ethernet FCoE driver.
For more information, visit the download center.
NOTES:
- Individually upgrading/downgrading the Intel® Ethernet FCoE driver will not work and may even cause a blue screen; the entire FCoE package must be the same version. Upgrade the entire FCoE package using the Intel® Network Connections installer only.
- If you uninstalled the Intel® Ethernet Virtual Storage Miniport Driver for FCoE component, find the same version that you uninstalled and re-install it, or uninstall and then re-install the entire FCoE package.

Intel and the storage vendors spend considerable effort ensuring that their respective products operate with each other as expected for every version released. However, given the sheer number of releases and each organization's differing schedules, you are strongly encouraged to use your storage vendor's support matrix to ensure that the versions you are deploying of the Intel® Ethernet Protocol Driver, the switch, and the storage vendor's products have been tested as an integrated set.

Setting up Intel® Ethernet FCoE Boot on a Linux* Client
Intel® Ethernet FCoE Boot Option ROM Setup
FCoE Port Selection Menu
To configure Intel® Ethernet FCoE Boot, power on or reset the system and press the Ctrl-D key combination when the message "Press <Ctrl-D> to run setup..." is displayed. After pressing Ctrl-D, you will be taken to the Intel® Ethernet FCoE Boot Port Selection Setup Menu.

The first screen of the Intel® Ethernet FCoE Boot Setup Menu displays a list of Intel® FCoE Boot-capable
adapters. For each adapter port, the associated SAN MAC address, PCI device ID, PCI bus/device/function
location, and a field indicating FCoE Boot status is displayed. Up to 10 FCoE Boot-capable ports can be
displayed within the Port Selection Menu. If there are more Intel® FCoE Boot-capable adapters, these are not
listed in the setup menu.
Highlight the desired port and press Enter.
FCoE Boot Targets Configuration Menu

FCoE Boot Targets Configuration: Discover Targets is highlighted by default. If the Discover VLAN
value displayed is not what you want, enter the correct value. Highlight Discover Targets and then press
Enter to show targets associated with the Discover VLAN value. Under Target WWPN, if you know the
desired WWPN you can manually enter it or press Enter to display a list of previously discovered targets.

FCoE Target Selection Menu

Highlight the desired Target from the list and press Enter.

Manually fill in the LUN and Boot Order values.
Boot Order valid values are 0-4, where 0 means no boot order or ignore the target. A 0 value also
indicates that this port should not be used to connect to the target. Boot order values of 1-4 can only be
assigned once to target(s) across all FCoE boot-enabled ports.
VLAN value is 0 by default. You may do a Discover Targets which will display a VLAN. If the VLAN
displayed is not the one you require, enter the VLAN manually and then perform Discover Targets on
that VLAN.
Hit Save.
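As a sketch, the Boot Order rules above (values 0-4, where 0 means the target is ignored and each of 1-4 may be assigned only once across all FCoE boot-enabled ports) can be checked programmatically. This helper is illustrative only and not part of the Option ROM:

```python
# Illustrative check of the Boot Order rules described above.
# Not part of any Intel tool.

def check_boot_orders(orders):
    """orders: list of Boot Order values, one per configured target
    across all FCoE boot-enabled ports.

    Returns a list of problems (an empty list means the configuration
    is valid)."""
    problems = []
    seen = set()
    for value in orders:
        if not 0 <= value <= 4:
            problems.append(f"invalid Boot Order {value}: must be 0-4")
        elif value != 0:  # 0 may repeat; it means "ignore this target"
            if value in seen:
                problems.append(f"Boot Order {value} assigned more than once")
            seen.add(value)
    return problems
```

For example, `check_boot_orders([1, 2, 0, 0])` is valid, while `[1, 1]` reports a duplicate.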
NOTE: After the Discover Targets function is executed, the Option ROM will attempt to remain
logged into the fabric until the FCoE Boot Targets Configuration Menu is exited.
- Keyboard shortcuts: Up/Down, TAB and SHIFT-TAB to move between the controls. Left/Right/Home/End/Del/Backspace in the edit boxes.
- Press the Esc key to leave the screen.

UEFI Setup for Intel® FCoE Boot
Once you complete the configuration, it will be stored in the system's firmware memory.
Before beginning the configuration, update the adapter's UEFI FCoE Option ROM using the BootUtil tool and
the latest BootIMG.FLB file. Use the following command:
BOOTUTIL64E.EFI -up=efi+efcoe -nic=PORT -quiet
where PORT is the NIC port number (for example, -nic=1).
NOTE: The UEFI FCoE driver must be loaded before you perform the following steps.

Accessing the FCoE Configuration Screen
Boot the system into its BIOS and proceed as follows:
1. Select the Advanced tab, then choose PCI Configuration, then UEFI Option ROM Control, then
FCOE Configuration.
2. The following screen is displayed:

Adding an FCoE Attempt
An FCoE Attempt is a configured instance of a target from which the system will attempt to boot over FCoE.

1. From the FCoE Configuration menu, select Add an Attempt. All supported ports are displayed.
2. Select the desired port. The FCoE Boot Targets Configuration screen is displayed.

3. Select Discover Targets to automatically discover available targets (alternatively, you can manually
enter the fields on the FCoE Boot Targets Configuration screen). The Select from Discovered Targets option displays a list of previously discovered targets.
4. Select Auto-Discovery. Note that the auto-discovery process may take several minutes. When auto-discovery is complete, the Select Target screen is displayed. Discover VLAN is the VLAN associated with a discovered adapter. There can be more than one target on a given VLAN.

5. Select the desired target from the list. The FCoE Boot Targets Configuration screen is displayed with
completed fields for the selected target.
6. Press F10 (Save) to add this FCoE attempt. The FCoE Configuration screen is displayed with the
newly added FCoE attempt listed.

Deleting an Existing FCoE Attempt
1. From the FCoE Configuration menu, select Delete Attempts.
2. Select one or more attempts to delete, as shown below (note that the example now shows three added
attempts).

3. To delete the selected attempts, choose Commit Changes and Exit. To exit this screen without deleting the selected attempts, choose Discard Changes and Exit.
Changing the Order of FCoE Attempts
1. From the FCoE Configuration menu, select Change Attempt Order.
2. Press the Enter key to display the Change Attempt Order dialog, shown below.

3. Use the arrow keys to change the attempt order. When satisfied, press the Enter key to exit the dialog.

The new attempt order is displayed.
4. To save the new attempt order, select Commit Changes and Exit. To exit without saving changes,
select Discard Changes and Exit.

Target/Server Setup
Intel Boot Agent Server System Setup
Overview
For the Intel® Boot Agent software to perform its intended job, there must be a server set up on the same
network as the client computer. That server must recognize and respond to the PXE or BOOTP boot protocols
that are used by the Intel Boot Agent software.
NOTE: When the Intel Boot Agent software is installed as an upgrade for an earlier version boot
ROM, the associated server-side software may not be compatible with the updated Intel Boot
Agent. Contact your system administrator to determine if any server updates are necessary.

Linux* Server Setup
Consult your Linux* vendor for information about setting up the Linux Server.

Windows* Deployment Services
Nothing is needed beyond the standard driver files supplied on the media. Microsoft* owns the process and
associated instructions for Windows Deployment Services. For more information on Windows Deployment
Services perform a search of Microsoft articles at: http://technet.microsoft.com/en-us/library/default.aspx

iSCSI Boot Target Configuration
For specific information on configuring your iSCSI target system and disk volume, refer to instructions
provided by your system or operating system vendor. Listed below are the basic steps necessary to setup
Intel® Ethernet iSCSI Boot to work with most iSCSI target systems. The specific steps will vary from one
vendor to another.
NOTE: To support iSCSI Boot, the target needs to support multiple sessions from the same initiator. Both the iSCSI Boot firmware initiator and the OS High Initiator need to establish an iSCSI
session at the same time. Both these initiators use the same Initiator Name and IP Address to
connect and access the OS disk but these two initiators will establish different iSCSI sessions. In
order for the target to support iSCSI Boot, the target must be capable of supporting multiple sessions and client logins.
1. Configure a disk volume on your iSCSI target system. Note the LUN ID of this volume for use when
configuring in Intel® Ethernet iSCSI Boot firmware setup.
2. Note the iSCSI Qualified Name (IQN) of the iSCSI target, which will likely look like:
iqn.1986-03.com.intel:target1
This value is used as the iSCSI target name when you configure your initiator system's Intel® Ethernet iSCSI Boot firmware.
3. Configure the iSCSI target system to accept the iSCSI connection from the iSCSI initiator. This usually requires listing the initiator's IQN name or MAC address to permit the initiator to access the disk volume. See the "Firmware Setup" section for information on how to set the iSCSI initiator name.

4. A one-way authentication protocol can optionally be enabled for secure communication. Challenge-Handshake Authentication Protocol (CHAP) is enabled by configuring a username and password on the iSCSI target system. For setting up CHAP on the iSCSI initiator, refer to the section "Firmware Setup".
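As a sketch, the IQN format noted in step 2 (iqn.YYYY-MM.reversed-domain, optionally followed by a colon and a target-specific string, per RFC 3720) can be sanity-checked with a simple regular expression. This helper is illustrative and not part of Intel's firmware:

```python
import re

# Rough IQN shape per RFC 3720: "iqn." + the year-month the naming
# authority acquired its domain + "." + the reversed domain name,
# optionally followed by ":" and a target-specific string.
IQN_PATTERN = re.compile(
    r"^iqn\.\d{4}-\d{2}\.[a-z0-9]([a-z0-9.-]*[a-z0-9])?(:.+)?$"
)

def looks_like_iqn(name):
    """True if the string has the general shape of an IQN name."""
    return IQN_PATTERN.match(name) is not None
```

For example, `looks_like_iqn("iqn.1986-03.com.intel:target1")` is true, while an EUI-format name such as `eui.02004567A425678D` is not an IQN.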

Booting from Targets Larger than 2TB
You can connect and boot from a target LUN that is larger than 2 Terabytes with the following restrictions:
- The block size on the target must be 512 bytes
- The following operating systems are supported:
  - VMware* ESX 5.0, or later
  - Red Hat* Enterprise Linux* 6.3, or later
  - SUSE* Linux Enterprise Server 11 SP2, or later
  - Microsoft* Windows Server* 2012, or later

You may be able to access data only within the first 2 TB.
NOTE: The Crash Dump driver does not support target LUNs larger than 2TB.
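For context, the 2 TB boundary is consistent with the ceiling of 32-bit logical block addressing over 512-byte blocks; the following arithmetic sketch shows the numbers (this is an interpretation for illustration, not a statement from the firmware documentation):

```python
# 32-bit LBA over 512-byte blocks addresses at most 2 TiB, which
# matches the 2 TB boundary described above.
SECTOR_SIZE = 512            # bytes per block (required block size)
MAX_SECTORS_32BIT = 2 ** 32  # sectors addressable with a 32-bit LBA

limit_bytes = SECTOR_SIZE * MAX_SECTORS_32BIT
print(limit_bytes)            # 2199023255552
print(limit_bytes / 2 ** 40)  # 2.0 (TiB)
```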

DHCP Server Configuration
If you are using Dynamic Host Configuration Protocol (DHCP), the DHCP server needs to be configured to provide the iSCSI Boot configuration to the iSCSI initiator. You must set up the DHCP server to specify Root Path option 17 and Host Name option 12 to respond with iSCSI target information to the iSCSI initiator. DHCP option 3, Router List, may be necessary, depending on the network configuration.
DHCP Root Path Option 17:
The iSCSI root path option configuration string uses the following format:
iscsi:<server name>:<protocol>:<port>:<LUN>:<target name>
- Server name: DHCP server name or valid IPv4 address literal.
  Example: 192.168.0.20.
- Protocol: Transport protocol used by iSCSI. Default is tcp (6). No other protocols are currently supported.
- Port: Port number of the iSCSI target. A default value of 3260 will be used if this field is left blank.
- LUN: LUN ID configured on the iSCSI target system. Default is zero.
- Target name: iSCSI target name to uniquely identify an iSCSI target in IQN format.
  Example: iqn.1986-03.com.intel:target1
DHCP Host Name Option 12:
- Configure option 12 with the hostname of the iSCSI initiator.
DHCP Option 3, Router List:
- Configure option 3 with the gateway or router IP address, if the iSCSI initiator and iSCSI target are on different subnets.
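The Root Path string described above can be parsed as in the following sketch. The defaults (tcp/6, port 3260, LUN 0) are taken from the field descriptions, the function name is invented for this example, and this is not a real DHCP client implementation:

```python
# Illustrative parser for the DHCP option 17 iSCSI root path format:
#   iscsi:<server name>:<protocol>:<port>:<LUN>:<target name>
# Empty fields fall back to the documented defaults.

def parse_iscsi_root_path(root_path):
    prefix, _, rest = root_path.partition(":")
    if prefix != "iscsi":
        raise ValueError("not an iSCSI root path")
    # The target name may itself contain ':' (IQN format), so split
    # only the first four fields and keep the remainder intact.
    server, protocol, port, lun, target = rest.split(":", 4)
    return {
        "server": server,
        "protocol": int(protocol) if protocol else 6,  # tcp
        "port": int(port) if port else 3260,
        "lun": int(lun) if lun else 0,
        "target": target,
    }
```

For example, `parse_iscsi_root_path("iscsi:192.168.0.20::::iqn.1986-03.com.intel:target1")` fills in the default protocol, port, and LUN.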

Creating a Bootable Image for an iSCSI Target
There are two ways to create a bootable image on an iSCSI target:

- Install directly to a hard drive in an iSCSI storage array (Remote Install).
- Install to a local disk drive and then transfer this disk drive or OS image to an iSCSI target (Local Install).

Microsoft* Windows*
Microsoft* Windows Server* natively supports OS installation to an iSCSI target without a local disk and also
natively supports OS iSCSI boot. See Microsoft's installation instructions and Windows Deployment
Services documentation for details.
SUSE* Linux Enterprise Server
For the easiest experience installing Linux onto an iSCSI target, you should use SLES10 or greater. SLES10
provides native support for iSCSI Booting and installing. This means that there are no additional steps outside
of the installer that are necessary to install to an iSCSI target using an Intel Ethernet Server Adapter. Please
refer to the SLES10 documentation for instructions on how to install to an iSCSI LUN.
Red Hat Enterprise Linux
For the easiest experience installing Linux onto an iSCSI target, you should use RHEL 5.1 or greater. RHEL
5.1 provides native support for iSCSI Booting and installing. This means that there are no additional steps
outside of the installer that are necessary to install to an iSCSI target using an Intel Ethernet Server Adapter.
Please refer to the RHEL 5.1 documentation for instructions on how to install to an iSCSI LUN.

Microsoft Windows Server iSCSI Crash Dump Support
Crash dump file generation is supported for iSCSI-booted Windows Server x64 by the Intel iSCSI Crash Dump Driver. To ensure a full memory dump is created:
1. Set the page file size equal to or greater than the amount of RAM installed on your system. This is necessary for a full memory dump.
2. Ensure that the amount of free space on your hard disk can accommodate the amount of RAM installed on your system.
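The two sizing requirements above reduce to a simple predicate, sketched here for illustration (not an Intel utility):

```python
def can_capture_full_dump(ram_bytes, pagefile_bytes, free_disk_bytes):
    # A full memory dump needs a page file at least as large as RAM,
    # plus enough free disk space to hold a dump of that size.
    return pagefile_bytes >= ram_bytes and free_disk_bytes >= ram_bytes

GIB = 2 ** 30
print(can_capture_full_dump(16 * GIB, 16 * GIB, 32 * GIB))  # True
print(can_capture_full_dump(16 * GIB, 8 * GIB, 32 * GIB))   # False
```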
To set up crash dump support, follow these steps:
1. Set up Windows iSCSI Boot.
2. If you have not already done so, install the latest Intel Ethernet Adapter drivers and Intel PROSet for Windows Device Manager.
3. Open Intel PROSet for Windows Device Manager and select the Boot Options tab.
4. From Settings, select iSCSI Boot Crash Dump, set the value to Enabled, and click OK.

FCoE Boot Target Configuration
For specific information on configuring your FCoE target system and disk volume, refer to instructions
provided by your system or operating system vendor. Listed below are the basic steps necessary to setup
Intel® Ethernet FCoE Boot to work with most FCoE targets. The specific steps will vary from one vendor to
another.
Installing Microsoft Windows Server from a Remote Disk ("Diskless Install")
After the Option ROM is installed, if you wish to install the Windows Server operating system directly to the
FCoE disk, do the following:

1. Locate the FCoE drivers in \APPS\FCOEBOOT\Winx64\. Extract all zipped files and copy to a
CD/DVD or USB media.
2. Boot the install media.
3. Perform a Custom install and proceed to the "Where do you want to install Windows?" screen.
4. Use Load Driver to load the FCoE drivers. Browse to the location you chose previously and load the
following two drivers in the specified order:
1. Intel(R) Ethernet Setup Driver for FCoE.
2. Intel(R) Ethernet Virtual Storage Miniport Driver for FCoE.
Note: the FCoE drivers will block any other network traffic from the FCoE-supported ports until after Step 7 in this section. Do not attempt to install an NDIS miniport for any FCoE-supported ports prior to Step 7 in this section.
5. You should now see the FCoE disk or disks appear in the list of available install targets. All disks
accessible by this initiator through the boot port should appear.
6. Select the FCoE disk configured for boot in the Option ROM and continue the install until Windows is
installed and you are at the desktop.
7. Follow the instructions for installing Windows Server and the FCoE stack. This will install the networking drivers and configure the FCoE drivers to work with the networking drivers. Note that you cannot deselect the FCoE feature. You will be prompted to reboot at the end of the installation process.
8. Windows may prompt you to reboot once again after it returns to the desktop.
Installing Windows Server with Local Disk
After the Option ROM is installed, if you wish to install Windows Server with local disk, do the following:
1. Follow the instructions for installing Windows Server and the FCoE stack.
2. Verify that the FCoE Boot disk is available in the Fabric View tab of Intel® PROSet for Windows
Device Manager, and verify that you are online using Windows Disk Manager.
3. Open a command prompt and run the fcoeprep.bat batch file. To find the batch file, navigate to the \APPS\FCOEBOOT\Winx64\ directory.
4. Shut Windows down and capture the OS image to a local disk partition.
5. Transfer the image from the local hard drive to the FCoE target. This may be done from within the local
Windows installation.
6. For Windows 2008 R2 SP1 only: Run bcdboot.exe from the local Windows installation to make the
FCoE disk bootable.
- If a System Reserved partition exists on the FCoE disk, type: bcdboot F:\Windows /s E:
  where E: is the FCoE System Reserved partition and F: is the FCoE partition with the Windows directory.
- If a System Reserved partition does not exist, type: bcdboot E:\Windows /s E:
  where E: is the FCoE partition with the Windows directory.

7. Shut down and remove the local disk.
8. Configure the system BIOS to boot from the FCoE disk and boot.
NOTE: See Microsoft's documentation for more detailed instructions.

SUSE* Linux Enterprise Server
For the easiest experience installing Linux onto an FCoE target, you should use SLES11 or greater. SLES11 provides native support for FCoE booting and installing. This means that there are no additional steps outside of the installer that are necessary to install to an FCoE target using an Intel Ethernet Server Adapter. Please refer to the SLES11 documentation for instructions on how to install to an FCoE LUN.
Red Hat Enterprise Linux
For the easiest experience installing Linux onto an FCoE target, you should use RHEL 6 or greater. RHEL 6 provides native support for FCoE booting and installing. This means that there are no additional steps outside of the installer that are necessary to install to an FCoE target using an Intel Ethernet Server Adapter. Please refer to the RHEL 6 documentation for instructions on how to install to an FCoE LUN.

Data Center Bridging (DCB) for Intel® Network Connections
Data Center Bridging provides a lossless data center transport layer that enables the convergence of LANs and SANs onto a single unified fabric.
Data Center Bridging includes the following capabilities:
- Priority-based flow control (PFC; IEEE 802.1Qbb)
- Enhanced transmission selection (ETS; IEEE 802.1Qaz)
- Congestion notification (CN)
- Extensions to the Link Layer Discovery Protocol standard (IEEE 802.1AB) that enable Data Center Bridging Capability Exchange Protocol (DCBX)

There are two supported versions of DCBX.
CEE Version: The specification can be found as a link within the following document:
http://www.ieee802.org/1/files/public/docs2008/dcb-baseline-contributions-1108-v1.01.pdf
IEEE Version: The specification can be found as a link within the following document:
https://standards.ieee.org/findstds/standard/802.1Qaz-2011.html
NOTE: The OS DCBX stack defaults to the CEE version of DCBX, and if a peer is transmitting
IEEE TLVs, it will automatically transition to the IEEE version.
For more information on DCB, including the DCB Capability Exchange Protocol Specification, go to
http://www.ieee802.org/1/pages/dcbridges.html

DCB for Windows Configuration:
Intel Ethernet Adapter DCB functions can be configured using Windows Device Manager. Open the adapter's
property sheet and select the Data Center tab.
NOTE: XL710 based devices running Microsoft Windows do not support Data Center Bridging
(DCB).
You can use Intel® PROSet to perform the following tasks:
- Display Status:
  - Enhanced Transmission Selection
  - Priority Flow Control
  - FCoE Priority
  Non-operational status: If the Status indicator shows that DCB is non-operational, there may be a number of possible reasons:
  - DCB is not enabled - select the checkbox to enable DCB.
  - One or more of the DCB features is in a non-operational state. The features which contribute to the non-operational status are PFC and APP:FCoE.
  A non-operational status is most likely to occur when Use Switch Settings is selected or Using Advanced Settings is active. This is generally a result of one or more of the DCB features not being successfully exchanged with the switch. Possible problems include:
  - One of the features is not supported by the switch.
  - The switch is not advertising the feature.
  - The switch or host has disabled the feature (this would be an advanced setting for the host).
- Disable/enable DCB
- Troubleshooting information

Hyper-V (DCB and VMQ)
NOTE: Configuring a device in the VMQ + DCB mode reduces the number of VMQs available for
guest OSes.

DCB for Linux
DCB is supported on RHEL6 or later or SLES11 SP1 or later. See your operating system documentation for
specifics.

Troubleshooting and Known Issues
Intel® Boot Agent Messages
Message: Invalid PMM function number.
Cause: PMM is not installed or is not working correctly. Try updating the BIOS.

Message: PMM allocation error.
Cause: PMM could not or did not allocate the requested amount of memory for driver usage.

Message: PXE-E00: This system does not have enough free conventional memory. The Intel Boot Agent cannot continue.
Cause: The system does not have enough free memory to run the PXE image. The Intel Boot Agent was unable to find enough free base memory (below 640K) to install the PXE client software. The system cannot boot via PXE in its current configuration. The error returns control to the BIOS and the system does not attempt to remote boot. If this error persists, try updating your system's BIOS to the most recent version. Contact your system administrator or your computer vendor's customer support to resolve the problem.

Message: PXE-E01: PCI Vendor and Device IDs do not match!
Cause: Image vendor and device ID do not match those located on the card. Make sure the correct flash image is installed on the adapter.

Message: PXE-E04: Error reading PCI configuration space. The Intel Boot Agent cannot continue.
Cause: PCI configuration space could not be read. The machine is probably not PCI compliant. The Intel Boot Agent was unable to read one or more of the adapter's PCI configuration registers. The adapter may be misconfigured, or the wrong Intel Boot Agent image may be installed on the adapter. The Intel Boot Agent will return control to the BIOS and not attempt to remote boot. Try to update the flash image. If this does not solve the problem, contact your system administrator or Intel Customer Support.

Message: PXE-E05: The LAN adapter's configuration is corrupted or has not been initialized. The Intel Boot Agent cannot continue.
Cause: The adapter's EEPROM is corrupted. The Intel Boot Agent determined that the adapter EEPROM checksum is incorrect. The agent will return control to the BIOS and not attempt to remote boot. Try to update the flash image. If this does not solve the problem, contact your system administrator or Intel Customer Support.

Message: PXE-E06: Option ROM requires DDIM support.
Cause: The system BIOS does not support DDIM. The BIOS does not support the mapping of the PCI expansion ROMs into upper memory as required by the PCI specification. The Intel Boot Agent cannot function in this system. The Intel Boot Agent returns control to the BIOS and does not attempt to remote boot. You may be able to resolve the problem by updating the BIOS on your system. If updating your system's BIOS does not fix the problem, contact your system administrator or your computer vendor's customer support.

Message: PXE-E07: PCI BIOS calls not supported.
Cause: BIOS-level PCI services are not available. The machine is probably not PCI compliant.

Message: PXE-E09: Unexpected UNDI loader error. Status == xx
Cause: The UNDI loader returned an unknown error status. xx is the status returned.

Message: PXE-E20: BIOS extended memory copy error.
Cause: The BIOS could not move the image into extended memory.

Message: PXE-E20: BIOS extended memory copy error. AH == xx
Cause: An error occurred while trying to copy the image into extended memory. xx is the BIOS failure code.

Message: PXE-E51: No DHCP or BOOTP offers received.
Cause: The Intel Boot Agent did not receive any DHCP or BOOTP responses to its initial request. Make sure that your DHCP server (and/or proxyDHCP server, if one is in use) is properly configured and has sufficient IP addresses available for lease. If you are using BOOTP instead, make sure that the BOOTP service is running and is properly configured.

Message: PXE-E53: No boot filename received.
Cause: The Intel Boot Agent received a DHCP or BOOTP offer, but has not received a valid filename to download. If you are using PXE, check your PXE and BINL configuration. If using BOOTP, be sure that the TFTP service is running and that the specific path and filename are correct.

Message: PXE-E61: Media test failure.
Cause: The adapter does not detect link. Make sure that the cable is good and is attached to a working hub or switch. The link light visible from the back of the adapter should be lit.

Message: PXE-EC1: Base-code ROM ID structure was not found.
Cause: No base code could be located. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC3: BC ROM ID structure is invalid.
Cause: The base code could not be installed. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC4: UNDI ROM ID structure was not found.
Cause: The UNDI ROM ID structure signature is incorrect. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC5: UNDI ROM ID structure is invalid.
Cause: The structure length is incorrect. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC6: UNDI driver image is invalid.
Cause: The UNDI driver image signature was invalid. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC8: !PXE structure was not found in UNDI driver code segment.
Cause: The Intel Boot Agent could not locate the needed !PXE structure resource. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC9: PXENV+ structure was not found in UNDI driver code segment.
Cause: The Intel Boot Agent could not locate the needed PXENV+ structure. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-M0F: Exiting Intel Boot Agent.
Cause: Ending execution of the ROM image.

Message: This option has been locked and cannot be changed.
Cause: You attempted to change a configuration setting that has been locked by your system administrator. This message can appear either from within Intel® PROSet's Boot Options tab when operating under Windows* or from the Configuration Setup Menu when operating in a stand-alone environment. If you think you should be able to change the configuration setting, consult your system administrator.

Message: PXE-M0E: Retrying network boot; press ESC to cancel.
Cause: The Intel Boot Agent did not successfully complete a network boot due to a network error (such as not receiving a DHCP offer). The Intel Boot Agent will continue to attempt to boot from the network until successful or until canceled by the user. This feature is disabled by default. For information on how to enable this feature, contact Intel Customer Support.

Intel Boot Agent Troubleshooting Procedures
Common Issues
The following list of problems and associated solutions covers a representative set of problems that you might
encounter while using the Intel Boot Agent.
After booting, my computer experiences problems
After the Intel® Boot Agent product has finished its sole task (remote booting), it no longer has any effect on
the client computer operation. Thus, any issues that arise after the boot process is complete are most likely
not related to the Intel Boot Agent product.
If you are having problems with the local (client) or network operating system, contact the operating system
manufacturer for assistance. If you are having problems with some application program, contact the
application manufacturer for assistance. If you are having problems with any of your computer's hardware or
with the BIOS, contact your computer system manufacturer for assistance.
Cannot change boot order
If you are accustomed to redefining your computer's boot order using the motherboard BIOS setup program,
the default settings of the Intel Boot Agent setup program can override that setup. To change the boot
sequence, you must first override the Intel Boot Agent setup program defaults. A configuration setup menu
appears allowing you to set configuration values for the Intel Boot Agent. To change your computer's boot
order setting, see Configuring the Boot Agent in a Pre-boot PXE Environment.
My computer does not complete POST
If your computer fails to boot with an adapter installed, but does boot when you remove the adapter, try
moving the adapter to another computer and using BootUtil to disable the Flash ROM.
If this does not work, the problem may be occurring before the Intel Boot Agent software even begins
operating. In this case, there may be a BIOS problem with your computer. Contact your computer
manufacturer's customer support group for help in correcting your problem.
There are configuration/operation problems with the boot process
If your PXE client receives a DHCP address, but then fails to boot, you know the PXE client is working
correctly. Check your network or PXE server configuration to troubleshoot the problem. Contact Intel
Customer Support if you need further assistance.
POST hang may occur if two or more ports on Quad Port Server Adapters are configured for PXE
If you have an Intel® Gigabit VT Quad Port Server Adapter, Intel® PRO/1000 PT Quad Port LP Server
Adapter, or an Intel® PRO/1000 PF Quad Port Server Adapter with two or more ports configured for PXE, you
may experience POST hangs on some server systems. If this occurs, the suggested workaround is to move the
adapter to another system and disable PXE on all but one port of the adapter. You may also be able to prevent
this problem by disabling any on-board SCSI or SAS controllers in your system BIOS.

Diagnostics Information for Pre-boot PXE Environments
Anytime the configuration setup menu is displayed (see Configuring the Boot Agent in a Pre-boot PXE
Environment), diagnostics information is also displayed on the lower portion of the screen. The information
displayed appears similar to that shown in the lower half of the screen image below. This information can be
helpful during interaction with Intel Customer Support personnel or your IT team members.
NOTE: Actual diagnostics information may vary, depending upon the adapter(s) installed in your
computer.

Diagnostics information may include the following items:
PWA Number: The Printed Wire Assembly number identifies the adapter's model and version.

MAC Address: The unique Ethernet address assigned to the device.

Memory: The memory address assigned by the BIOS for memory-mapped adapter access.

I/O: The I/O port address assigned by the BIOS for I/O-mapped adapter access.

IRQ: The hardware interrupt assigned by the system BIOS.

UNB: The address in upper memory where the Boot Agent is installed by the BIOS.

PCI ID: The set of PCI identification values from the adapter in the form VendorID/DeviceID/SubvendorID/SubdeviceID/Revision.

Slot: The PCI bus address (slot number) reported by the BIOS.
NOTE: The number displayed is the BIOS version of the PCI slot number. Therefore, actual positions of NICs within physical slots may not be displayed as expected. Slots are not always enumerated in an obvious manner, and this will only report what is indicated by the BIOS.

Flags: A set of miscellaneous data either read from the adapter EEPROM or calculated by the Boot Agent initialization code. This information varies from one adapter to the next and is only intended for use by Intel Customer Support.

iSCSI Troubleshooting
The table below lists problems that can possibly occur when using Intel® Ethernet iSCSI Boot. For each
problem a possible cause and resolution are provided.
Problem: Intel® Ethernet iSCSI Boot does not load on system startup and the sign-on banner is not displayed.
Resolution:
• While the system logon screen may display for a longer time during system startup, Intel Ethernet iSCSI Boot may not be displayed during POST. It may be necessary to disable a system BIOS feature in order to display messages from Intel iSCSI Remote Boot. From the system BIOS menu, disable any quiet boot or quick boot options. Also disable any BIOS splash screens. These options may be suppressing output from Intel iSCSI Remote Boot.
• Intel Ethernet iSCSI Remote Boot has not been installed on the adapter or the adapter's flash ROM is disabled. Update the network adapter using the latest version of BootUtil as described in the "Flash Images" section of this document. If BootUtil reports the flash ROM is disabled, use the "BOOTUTIL -flashenable" command to enable the flash ROM and update the adapter.
• The system BIOS may be suppressing output from Intel Ethernet iSCSI Boot.
• Sufficient system BIOS memory may not be available to load Intel Ethernet iSCSI Boot. Attempt to disable unused disk controllers and devices in the system BIOS setup menu. SCSI controllers, RAID controllers, PXE-enabled network connections, and shadowing of the system BIOS all reduce the memory area available to Intel Ethernet iSCSI Boot. Disable these devices and reboot the system to see if Intel iSCSI Boot is able to initialize. If disabling the devices in the system BIOS menu does not resolve the problem, then attempt to remove unused disk devices or disk controllers from the system. Some system manufacturers allow unused devices to be disabled by jumper settings.

Problem: After installing Intel Ethernet iSCSI Boot, the system will not boot to a local disk or network boot device. The system becomes unresponsive after Intel Ethernet iSCSI Boot displays the sign-on banner or after connecting to the iSCSI target.
Resolution:
• A critical system error has occurred during iSCSI Remote Boot initialization. Power on the system and press the 's' key or 'ESC' key before Intel iSCSI Remote Boot initializes. This will bypass the Intel Ethernet iSCSI Boot initialization process and allow the system to boot to a local drive. Use the BootUtil utility to update to the latest version of Intel Ethernet iSCSI Remote Boot.
• Updating the system BIOS may also resolve the issue.

Problem: "Intel® iSCSI Remote Boot" does not show up as a boot device in the system BIOS boot device menu.
Resolution:
• The system BIOS may not support Intel Ethernet iSCSI Boot. Update the system BIOS with the most recent version available from the system vendor.
• A conflict may exist with another installed device. Attempt to disable unused disk and network controllers. Some SCSI and RAID controllers are known to cause compatibility problems with Intel iSCSI Remote Boot.

Problem: Error message displayed: "Failed to detect link"
Resolution:
• Intel Ethernet iSCSI Boot was unable to detect link on the network port. Check the link detection light on the back of the network connection. The link light should illuminate green when link is established with the link partner. If the link light is illuminated but the error message still displays, attempt to run the Intel link and cable diagnostics tests using DIAGS.EXE for DOS or Intel PROSet for Windows.

Problem: Error message displayed: "DHCP Server not found!"
Resolution: iSCSI was configured to retrieve an IP address from DHCP but no DHCP server responded to the DHCP discovery request. This issue can have multiple causes:
• The DHCP server may have used up all available IP address reservations.
• The client iSCSI system may require static IP address assignment on the connected network.
• There may not be a DHCP server present on the network.
• Spanning Tree Protocol (STP) on the network switch may be preventing the Intel iSCSI Remote Boot port from contacting the DHCP server. Refer to your network switch documentation on how to disable Spanning Tree Protocol.

Problem: Error message displayed: "PnP Check Structure is invalid!"
Resolution:
• Intel Ethernet iSCSI Boot was not able to detect a valid PnP PCI BIOS. If this message displays, Intel Ethernet iSCSI Boot cannot run on the system in question. A fully PnP compliant PCI BIOS is required to run Intel iSCSI Remote Boot.

Problem: Error message displayed: "Invalid iSCSI connection information"
Resolution:
• The iSCSI configuration information received from DHCP or statically configured in the setup menu is incomplete and an attempt to log in to the iSCSI target system could not be made. Verify that the iSCSI initiator name, iSCSI target name, target IP address, and target port number are configured properly in the iSCSI setup menu (for static configuration) or on the DHCP server (for dynamic BOOTP configuration).

Problem: Error message displayed: "Unsupported SCSI disk block size!"
Resolution:
• The iSCSI target system is configured to use a disk block size that is not supported by Intel Ethernet iSCSI Boot. Configure the iSCSI target system to use a disk block size of 512 bytes.

Problem: Error message displayed: "ERROR: Could not establish TCP/IP connection with iSCSI target system."
Resolution:
• Intel Ethernet iSCSI Boot was unable to establish a TCP/IP connection with the iSCSI target system. Verify that the initiator and target IP address, subnet mask, port, and gateway settings are configured properly. Verify the settings on the DHCP server if applicable. Check that the iSCSI target system is connected to a network accessible to the Intel iSCSI Remote Boot initiator. Verify that the connection is not being blocked by a firewall.

Problem: Error message displayed: "ERROR: CHAP authentication with target failed."
Resolution:
• The CHAP user name or secret does not match the CHAP configuration on the iSCSI target system. Verify that the CHAP configuration on the Intel iSCSI Remote Boot port matches the iSCSI target system CHAP configuration. Disable CHAP in the iSCSI Remote Boot setup menu if it is not enabled on the target.

Problem: Error message displayed: "ERROR: Login request rejected by iSCSI target system."
Resolution:
• A login request was sent to the iSCSI target system but the login request was rejected. Verify that the iSCSI initiator name, target name, LUN number, and CHAP authentication settings match the settings on the iSCSI target system. Verify that the target is configured to allow the Intel iSCSI Remote Boot initiator access to a LUN.

Problem: When installing Linux to a NetApp Filer, after a successful target disk discovery, error messages similar to the following may be seen:
Iscsi-sfnet:hostx: Connect failed with rc 113: No route to host
Iscsi-sfnet:hostx: establish_session failed. Could not connect to target
Resolution:
• If these error messages are seen, unused iSCSI interfaces on the NetApp Filer should be disabled.
• Continuous=no should be added to the iscsi.conf file.

Problem: Error message displayed: "ERROR: iSCSI target not found."
Resolution:
• A TCP/IP connection was successfully made to the target IP address; however, an iSCSI target with the specified iSCSI target name could not be found on the target system. Verify that the configured iSCSI target name and initiator name match the settings on the iSCSI target.

Problem: Error message displayed: "ERROR: iSCSI target can not accept any more connections."
Resolution:
• The iSCSI target cannot accept any new connections. This error could be caused by a configured limit on the iSCSI target or a limitation of resources (no disks available).

Problem: Error message displayed: "ERROR: iSCSI target has reported an error."
Resolution:
• An error has occurred on the iSCSI target. Inspect the iSCSI target to determine the source of the error and ensure it is configured properly.

Problem: Error message displayed: "ERROR: There is an IP address conflict with another system on the network."
Resolution:
• A system on the network was found using the same IP address as the iSCSI Option ROM client.
• If using a static IP address assignment, attempt to change the IP address to something which is not being used by another client on the network.
• If using an IP address assigned by a DHCP server, make sure there are no clients on the network which are using an IP address which conflicts with the IP address range used by the DHCP server.

iSCSI Known Issues
Microsoft Windows iSCSI Boot Issues
Microsoft Initiator does not boot without link on boot port:
After setting up the system for Intel® Ethernet iSCSI Boot with two ports connected to a target and
successfully booting the system, if you later try to boot the system with only the secondary boot port
connected to the target, Microsoft Initiator will continuously reboot the system.
To work around this limitation follow these steps:
1. Using Registry Editor, expand the following registry key:
\System\CurrentControlSet\Services\Tcpip\Parameters
2. Create a DWORD value called DisableDHCPMediaSense and set the value to 0.
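The two steps above can also be performed from an elevated command prompt with reg.exe instead of Registry Editor. This is a sketch only; it assumes the key path shown in step 1 lives under the HKLM root.

```shell
REM Create (or overwrite) the DisableDHCPMediaSense DWORD value and set it to 0
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
    /v DisableDHCPMediaSense /t REG_DWORD /d 0 /f
```

A reboot is typically required before TCP/IP parameter changes take effect.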
Support for Platforms Booted by UEFI iSCSI Native Initiator
Starting with version 2.2.0.0, the iSCSI crash dump driver gained the ability to support platforms booted using
the native UEFI iSCSI initiator over supported Intel Network Adapters. This support is available on Windows
Server or newer and only on x64 architecture. Any hotfixes listed above must also be applied.
Since network adapters on UEFI platforms may not provide legacy iSCSI option ROM, the boot options tab in
DMIX may not provide the setting to enable the iSCSI crash dump driver. If this is the case, the following
registry entry has to be created:
Key:   HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\\Parameters
Value: DumpMiniport, type REG_SZ, data iscsdump.sys
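As a sketch, the entry can be created with reg.exe from an elevated command prompt. The instance subkey (0000 below) is an assumption for illustration; substitute the subkey under this device class that corresponds to your iSCSI boot adapter.

```shell
REM Create the DumpMiniport value under the adapter's Parameters subkey.
REM The 0000 instance subkey is a placeholder and varies per system.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" ^
    /v DumpMiniport /t REG_SZ /d iscsdump.sys /f
```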

Moving iSCSI adapter to a different slot:
In a Windows* installation, if you move the iSCSI adapter to a PCI slot other than the one that it occupied
when the drivers and MS iSCSI Remote Boot Initiator were installed, then a System Error may occur during
the middle of the Windows Splash Screen. This issue goes away if you return the adapter to its original PCI
slot. We recommend not moving the adapter used for iSCSI boot installation. This is a known OS issue.
If you have to move the adapter to another slot, then perform the following:
1. Boot the operating system and remove the old adapter
2. Install a new adapter into another slot
3. Set up the new adapter for iSCSI Boot
4. Perform iSCSI boot to the OS via the original adapter
5. Make the new adapter iSCSI-bootable to the OS
6. Reboot
7. Move the old adapter into another slot
8. Repeat steps 2 - 5 for the old adapter you have just moved
Uninstalling Driver can cause blue screen
If the driver for the device in use for iSCSI Boot is uninstalled via Device Manager, Windows will blue screen
on reboot and the OS will have to be re-installed. This is a known Windows issue.
Adapters flashed with iSCSI image are not removed from the Device Manager during uninstall
During uninstallation, all other Intel Network Connections software is removed, but drivers for iSCSI Boot
adapters that have boot priority are not removed.
I/OAT Offload may stop with Intel® Ethernet iSCSI Boot or with Microsoft Initiator installed
A workaround for this issue is to change the following registry value to "0":
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IOATDMA\Start
Only change the registry value if iSCSI Boot is enabled and if you want I/OAT offloading. A blue screen will
occur if this setting is changed to "0" when iSCSI Boot is not enabled. It must be set back to "3" if iSCSI Boot
is disabled or a blue screen will occur on reboot.
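The registry change described above can be sketched as a reg.exe command from an elevated prompt. The warnings in the text apply: use 0 only while iSCSI Boot is enabled, and restore 3 before disabling iSCSI Boot.

```shell
REM Set the I/OAT DMA service start value to 0 (only with iSCSI Boot enabled)
reg add HKLM\SYSTEM\CurrentControlSet\Services\IOATDMA ^
    /v Start /t REG_DWORD /d 0 /f

REM Restore the default of 3 before disabling iSCSI Boot
reg add HKLM\SYSTEM\CurrentControlSet\Services\IOATDMA ^
    /v Start /t REG_DWORD /d 3 /f
```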
NDIS Driver May Not Load During iSCSI Boot F6 Install With Intel® PRO/1000 PT Server Adapter
If you are using two Intel® PRO/1000 PT Server Adapters in two PCI Express x8 slots of a rack mounted
Xeon system, Windows installation can be done only via a local HDD procedure.
Invalid CHAP Settings May Cause Windows® Server 2008 to Blue Screen
If an iSCSI Boot port CHAP user name and secret do not match the target CHAP user name and secret,
Windows Server 2008 may blue screen or reboot during installation or boot. Ensure that all CHAP settings
match those set on the target(s).
F6 Driver Does Not Support Standby Mode.
If you are performing an F6 "Windows without a Local Disk" installation, do not use Standby Mode.
Microsoft* Windows Server* 2008 Installation When Performing a WDS Installation
If you perform a WDS installation and attempt to manually update drivers during the installation, the drivers
load but the iSCSI Target LUN does not display in the installation location list. This is a known WDS limitation
with no current fix. You must therefore either perform the installation from a DVD or USB media or inject the
drivers on the WDS WinPE image.

Microsoft has published a knowledge base case explaining the limitation in loading drivers when installing with
iSCSI Boot via a WDS server.
http://support.microsoft.com/kb/960924
iSCSI Boot and Teaming in Windows
Teaming is not supported with iSCSI Boot. Creating a team using the primary and secondary iSCSI adapters
and selecting that team during the Microsoft initiator installation may fail with constant reboots. Do not select
a team for iSCSI Boot, even if it is available for selection during initiator installation.
For load balancing and failover support, you can use MSFT MPIO instead. Check the Microsoft Initiator User
Guide on how to setup MPIO.
Setting LAA (Locally Administered Address) on an iSCSI Boot-Enabled Port Will Cause System
Failure on Next Reboot
Do not set LAA on ports with iSCSI Boot enabled.
F6 installation may fail with some EMC targets
An F6 installation may fail during the reboot in step 10 of “Installing Windows 2003 without a Local Disk”
because of a conflict between the Intel F6 driver, the Microsoft iSCSI Initiator and the following EMC target
model firmware versions:
l

AX4-5 arrays: 02.23.050.5.705 or higher.

l

CX300, CX500, CX700, and CX-3 Series arrays: 03.26.020.5.021 or higher.

l

CX-4 Series arrays: 04.28.000.5.701 or higher, including all 04.29.000.5.xxx revisions.

To avoid the failure, ensure that the secondary iSCSI port cannot reach the target during the reboot in step 10.
With high iSCSI traffic on Microsoft* Windows 2003 Server* R2, link flaps can occur with 82598-based silicon
This issue is caused by the limited support for Large Send Offload (LSO) in this operating system. Please
note that if iSCSI traffic is required for Windows 2003 Server R2, LSO will be disabled.
Intel® Ethernet iSCSI Boot version does not match between displayed versions on DMIX and the
scrolling text during boot
If a device is not set to primary but is enumerated first, the BIOS will still use that device's version of iSCSI
Boot. Therefore the user may end up using an earlier version of Intel® Ethernet iSCSI Boot than expected.
The solution is that all devices in the system must have the same version of iSCSI Boot. To do this the user
should go to the Boot Options Tab and update the devices' flash to the latest version.
IPv6 iSCSI login to Dell EqualLogic arrays using jumbo frames
To establish an iSCSI session using IPv6 and jumbo frames with Dell EqualLogic arrays, TCP/UDP
checksum offloads on the Intel iSCSI adapter should be disabled.

Microsoft Windows iSCSI/DCB Known Issues
iSCSI over DCB using Microsoft* Windows Server* 2012
iSCSI over DCB (priority tagging) is not possible on the port on which VMSwitch is created. This is by design
in Microsoft* Windows Server* 2012.
Automatic creation of iSCSI traffic filters for DCB is only supported on networks which make use
of IPv4 addressing
The iSCSI for Data Center Bridging (DCB) feature uses Quality of Service (QOS) traffic filters to tag outgoing
packets with a priority. The Intel iSCSI Agent dynamically creates these traffic filters as needed on networks
using IPv4 addressing.

Automatic creation of iSCSI traffic filters for DCB, using Virtual Adapters created by Hyper-V, is
only supported on Microsoft* Windows Server* 2008 releases R2 and later.
The iSCSI for Data Center Bridging (DCB) feature uses Quality of Service (QOS) traffic filters to tag outgoing
packets with a priority. The Intel iSCSI Agent dynamically creates these traffic filters as needed for Windows
Server 2008 R2 and later.

Linux Known Issues
Authentications errors on EqualLogic target may show up in dmesg when running Red Hat* Enterprise Linux 4
These error messages do not indicate a block in login or booting and may safely be ignored.
Channel Bonding
Linux Channel Bonding has basic compatibility issues with iSCSI Boot and should not be used.
iBFT System using RHEL 5.2
In an iBFT system using RHEL 5.2, Anaconda does not automatically start networking upon installation. The
user has to manually bring up networking through a console. Refer to the Red Hat documentation for
details on how to manually bring up the network.
CHAP Support with RHEL 5.2
RHEL 5.2 does not support CHAP during installation time. If you use CHAP authentication on the target,
please disable CHAP during installation and enable it after the installation is complete.
RHEL 5.1
On RHEL5.1 systems, the wrong network interface is brought up on the first iSCSI Boot after installation.
This causes the system to hang and requires a reinstallation at the very least. The workaround for this issue is
to edit the init script soon after installation and change the interface you wish to bring up. We strongly
encourage our users to use RHEL5.2 to avoid this issue.
LRO and iSCSI Incompatibility
LRO (Large Receive Offload) is incompatible with iSCSI target or initiator traffic. A panic may occur when
iSCSI traffic is received through the ixgbe driver with LRO enabled. To work around this, the driver should be
built and installed with:
# make CFLAGS_EXTRA=-DIXGBE_NO_LRO install
RHEL 5.X
From a remote LUN, iSCSI boot only works on the same port that was used to install to the remote LUN. You
cannot boot from an alternate LAN port after iSCSI is installed.

FCoE Known Issues
Intel® Ethernet FCoE Windows Issues
Intel® Ethernet Virtual Storage Miniport Driver for FCoE may disappear from Device Manager
The Intel® Ethernet Virtual Storage Miniport Driver for FCoE may disappear from the Device Manager after
either:

• A virtual network is removed.
• The underlying Intel NIC adapter settings are modified.

This can occur when the corresponding Intel adapter is virtualized to create a new virtual network, or when an
existing virtual network is deleted or modified. It can also happen when the underlying Intel NIC adapter settings are
modified, including disabling or re-enabling the adapter.
As a workaround, the user should remove all resource dependencies on the Intel® Ethernet Virtual Storage
Miniport Driver for FCoE that are currently in use by the system before making any changes to the Intel
adapter for virtualization. For example, in one use case scenario, the user may have assigned the FCoE disk(s)
from the FCoE storage driver to run one of its Virtual Machines, and at the same time the user wants to
alter the configuration of the same Intel adapter for virtualization. In this scenario the user must remove the
FCoE disk(s) from the Virtual Machine before altering the Intel adapter configuration.
Virtual Port may disappear from Virtual Machine
When the Virtual Machine starts, it asks the Intel® Ethernet Virtual Storage Miniport Driver for FCoE ("the
driver") to create a Virtual Port. If the driver is subsequently disabled, the Virtual Port may disappear. The only
way to get the Virtual Port back is to enable the driver and reboot the Virtual Machine.
When installing FCoE after installing ANS and creating AFT Team, Storports are not installed
If the user installs ANS and creates an AFT team and then installs FCoE/DCB, the result is that DCB is off by
default. If the user then enables DCB on one port, the OS detects Storports and the user must manually click
on the new hardware wizard prompts for each of them to install. If the user does not do that, DCB status is
non-operational and the reason given is no peer.
Intel® PROSet for Windows Device Manager (DMiX) is not synched with FCoE CTRL-D Utility
When the user disables FCoE via the Control-D menu, the Intel PROSet for Windows Device Manager User
Interface states that the flash contains an FCoE image, but that the flash needs to be updated. Updating the
flash with the FCoE image again, re-enables FCoE and returns the user to the state where all the FCoE
settings are available.
If the user uses the control-D menu to disable FCoE, then they should use the control-D menu to enable it
because Intel PROSet for Windows Device Manager does not support enabling or disabling FCoE.
82599 and X540-based adapters don't display as SPC-3 compliant in Windows MPIO configuration
Because the FCoE initiator is a virtualized device it does not have its own unique hardware ID and thus is not
displayed as a SPC-3 compliant device in Windows MPIO configuration.
When removing ALB teaming, all FCoE functions fail, all DMIX tabs are grayed out, and both
adapter ports fail
For ANS teaming to work with Microsoft Network Load Balancer (NLB) in unicast mode, the team's LAA must
be set to cluster node IP. For ALB mode, Receive Load Balancing must be disabled. For further configuration
details, refer to http://support.microsoft.com/?id=278431
ANS teaming will work when NLB is in multicast mode, as well. For proper configuration of the adapter in this
mode, refer to http://technet.microsoft.com/en-ca/library/cc726473(WS.10).aspx
FCoE and TCP/IP traffic on the same VLAN may not work on some switches
This is a known switch design and configuration issue.

Intel® Ethernet FCoE Boot Issues
Option ROM Known Issues
Discovery problems with multiple FCoE VLANs
The FCoE Option ROM may not discover the desired VLAN when performing VLAN discovery from the
Discover Targets function. If the Discover VLAN box is populated with the wrong VLAN, enter the
desired VLAN before executing Discover Targets.
Windows Known Issues
Brocade switch support in Release 16.4
Intel® Ethernet FCoE Boot does not support Brocade switches in Release 16.4. If necessary, please use
Release 16.2.
Windows uses a paging file on the local disk
After imaging, if the local disk is not removed before booting from the FCoE disk then Windows may use the
paging file from the local disk.
Crash dump to FCoE disks is only supported to the FCoE Boot LUN
The following scenarios are not supported:
• Crash dump to an FCoE disk if the Windows directory is not on the FCoE Boot LUN.
• Use of the DedicatedDumpFile registry value to direct the crash dump to another FCoE LUN.
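For reference, the DedicatedDumpFile value mentioned above lives under the system's CrashControl registry key. A minimal sketch follows (the key path and value name are per Microsoft's crash-dump documentation; the file path shown is a hypothetical example, and pointing it at any LUN other than the FCoE Boot LUN is exactly the unsupported case described above):

```reg
Windows Registry Editor Version 5.00

; Crash dump control settings (the file path below is a hypothetical example).
; Directing the dump to a LUN other than the FCoE Boot LUN is not supported.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl]
"DedicatedDumpFile"="C:\\DedicatedDumpFile.sys"
```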
FCoE uninstall from a local disk may be blocked because installer inaccurately reports system is
booted from FCoE

When the FCoE Option ROM connects to an FCoE disk during boot, the Windows installer may be unable to
determine whether the system was booted from FCoE and will block the FCoE uninstall. To uninstall,
configure the Option ROM so that it does not connect to an FCoE disk.
Unable to create VLAN interfaces with Intel® Ethernet FCoE Boot enabled
When booted with FCoE, a user cannot create VLANs and/or Teams for other traffic types. This prevents
converged functionality for non-FCoE traffic.
Server adapter configured for FCoE Boot available as External-Shared vnic via Hyper-V
If a port is set as a boot port and the user installs the Hyper-V role and then opens the Hyper-V Virtual
Network Manager to select which port to externally virtualize, the boot port displays, which it should not.
When setting the port to a boot port in Intel PROSet for Windows Device Manager, a message states that the
user should restart the system for the changes to take effect, but it does not force a restart. As a result, the
user-level applications are in boot mode (i.e., the Data Center tab is grayed out), but the kernel-level drivers
have not been restarted to indicate to the OS that the port is a boot port. When the user then adds the Hyper-V
role, the OS takes a snapshot of the available ports, and this is the snapshot it uses after the role is added,
the system is restarted, and the user opens the Hyper-V Virtual Network Manager to virtualize the ports. As a
result, the boot port also shows up.
Solutions:
• Restart the system after setting a port to a boot port and before adding the Hyper-V role. The port does
not appear in the list of virtualizable ports in the Hyper-V Virtual Network Manager.
• Disable and re-enable the port in Device Manager after setting it to a boot port and before adding the
Hyper-V role. The port does not appear in the list of virtualizable ports in the Hyper-V Virtual Network
Manager.

FCoE Linkdown Timeout fails prematurely when Remote Booted
If an FCoE-booted port loses link for longer than the time specified in the Linkdown Timeout advanced
setting in the Intel® Ethernet Virtual Storage Miniport Driver for FCoE, the system will crash. Linkdown
Timeout values greater than 30 seconds may not provide extra time before a system crash.
Windows fails to boot properly after using the image install method
The following situation may arise when installing Windows for FCoE Boot using the imaging method:
Windows boots successfully from the FCoE LUN when the local drive is installed, but when the local drive is
removed, Windows seems to boot, but fails before reaching the desktop.
In this case it is likely that the Windows installation resides on both the FCoE LUN and local drive. This can
be verified by booting from the FCoE LUN with the local drive installed, then comparing the drive letter in the
path of files on the desktop with the drive letter for the boot partition in Windows' Disk Management tool. If the
drive letters are different, then the Windows installation is split between the two disks.
If this situation occurs, ensure that fcoeprep is run prior to capturing the image and that the system is not
allowed to boot locally between running fcoeprep and capturing the image. In addition, the local drive can
be removed from the system prior to the first boot from the FCoE LUN.

UEFI Known Issues
Long Initialization Times
Long initialization times observed with Intel's UEFI driver occur when the UNDI.Initialize command is
called with the PXE_OPFLAGS_INITIALIZE_CABLE_DETECT flag set. In this case, UNDI.Initialize tries
to detect the link state.
If the port is connected and link is up, initialization generally finishes in about 3.5 seconds (the time needed
to establish link, depending on link conditions, link speed, and controller type) and returns
PXE_STATFLAGS_COMMAND_COMPLETE. If the port is disconnected (link is down), initialization
completes in about 5 seconds and returns PXE_STATFLAGS_INITIALIZED_NO_MEDIA (the driver
initializes the hardware, then waits for link and times out when link is not established within 5 seconds).
When UNDI.Initialize is called with PXE_OPFLAGS_INITIALIZE_DO_NOT_DETECT_CABLE, the function
does not try to detect link status and takes less than 1 second to complete.
The behavior of UNDI.Initialize is described in the UEFI Specification 2.3.1: initializing the network device
will take up to four seconds for most network devices and, in some extreme cases (usually poor cables), up
to twenty seconds. Control will not be returned to the caller and the COMMAND_COMPLETE status flag will
not be set until the adapter is ready to transmit.
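The flag-dependent timing described above can be summarized in a short model (a simulation for illustration only; the flag and status names follow the UEFI/PXE conventions quoted in the text, and the timing values are the approximate figures given above, not constants read from any driver):

```python
# Model of the UNDI.Initialize timing behavior described above.
# Flag/status names mirror the text; timings are the approximate
# figures quoted there, not values taken from the actual driver.

CABLE_DETECT = "PXE_OPFLAGS_INITIALIZE_CABLE_DETECT"
DO_NOT_DETECT_CABLE = "PXE_OPFLAGS_INITIALIZE_DO_NOT_DETECT_CABLE"

def undi_initialize(opflags, link_up):
    """Return (status_flag, approx_seconds) for the given flag and link state."""
    if opflags == DO_NOT_DETECT_CABLE:
        # No link detection: completes in under a second.
        return ("PXE_STATFLAGS_COMMAND_COMPLETE", 0.5)
    if opflags == CABLE_DETECT:
        if link_up:
            # Link established: about 3.5 s, depending on speed and controller.
            return ("PXE_STATFLAGS_COMMAND_COMPLETE", 3.5)
        # Driver initializes hardware, waits for link, times out at ~5 s.
        return ("PXE_STATFLAGS_INITIALIZED_NO_MEDIA", 5.0)
    raise ValueError("unknown opflags: %r" % opflags)

status, seconds = undi_initialize(CABLE_DETECT, link_up=False)
print(status, seconds)  # PXE_STATFLAGS_INITIALIZED_NO_MEDIA 5.0
```

This makes the practical point explicit: a pre-boot environment that does not need link state at initialization time can pass DO_NOT_DETECT_CABLE and avoid the multi-second wait.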

Glossary
TERM

DEFINITIONS

ACPI

Advanced Configuration and Power Interface

ARP

Address Resolution Protocol

BBS

BIOS Boot Specification

BC

Base Code. The PXE Base Code is comprised of a simple network stack (UDP/IP) and
a few common network protocols (DHCP, ARP, TFTP) that are useful for remote booting machines.

BINL

Binary Image Negotiation Layer

BIOS

Basic Input/Output System. The program a personal computer's microprocessor uses
to get the computer system started after you turn it on. It also manages data flow
between the computer's operating system and attached devices.

Boot Targets

The server-side system in an FCoE SAN configuration. The FCoE Boot Target system
hosts the FCoE target drives which are accessed by an FCoE Boot initiator.

BOOTP

Bootstrap Protocol. A legacy remote booting protocol developed originally for use with
UNIX. It is used as the server-side PXE host software on Linux and runs as a daemon once installed.

BootUtil

Intel® Ethernet Flash Firmware Utility (BootUtil).

CEE

Converged Enhanced Ethernet

CHAP

Challenge Handshake Authentication Protocol, CHAP is the standard authentication
protocol used on iSCSI SAN networks.

CLP

Command Line Protocol.

Data Link Interface

Interface to the chip at the MAC layer.

DCB

Data Center Bridging

DCBX

DCB Exchange Protocol

DDIM

Device Driver Initialization Model

DDP

Direct Data Placement

Descriptor Queues

Descriptor queues are used by software to submit work requests, such as send and
receive, and to get completion status.

DHCP

Dynamic Host Configuration Protocol. An industry standard Internet protocol defined by
the IETF. DHCP was defined to dynamically provide communications-related configuration values such as network addresses to network client computers at boot time.
DHCP is specified by IETF RFCs 1534, 2131, and 2132.

EEPROM

Electrically Erasable Programmable Read-Only Memory

ETS

Enhanced Transmission Selection

FC

Fibre Channel

FCF

Fibre Channel Forwarder

FCoE

Fibre Channel over Ethernet

Flash

A high-density, truly non-volatile, high-performance, read-write memory solution, also
characterized by low power consumption, extreme ruggedness, and high reliability.

Flash ROM

The non-volatile memory embedded in Intel® Network Connections. Flash ROM is
used to store Intel® iSCSI Boot.

HBA

Host Bus Adapter

IRP

IO Request Packet

iSCSI

Internet SCSI

iSCSI initiator

The client side system in an iSCSI SAN configuration. The iSCSI initiator logs into the
iSCSI target system to access iSCSI target drives.

iSCSI target

The server side system in an iSCSI SAN configuration. The iSCSI target system hosts
the iSCSI target drives which are accessed by an iSCSI initiator.

LLDP

Link Layer Discovery Protocol, IEEE802.1AB

LOM

LAN On Motherboard. This is a network device that is built onto the motherboard (or
baseboard) of the machine.

LUN

Logical Unit Number (LUN) is the identifier of a device which is being addressed by protocols such as Fibre Channel and iSCSI

Native TCP/IP Stack

TCP/IP stack implemented in software and provided as part of the operating system.

NFS

Network File System

NIC

Network Interface Card. Also referred to as an adapter or device. Technically, this is a
network device that is inserted into a bus on the motherboard or into an expansion
board. For the purposes of this document, the term NIC will be used in a generic sense,
meaning any device that enables a network connection (including LOMs and network
devices on external buses, such as USB, 1394, etc.).

Packet Buffers

Packet buffers are hardware FIFOs that either receive or transmit packets. Each packet
buffer can be associated with one or more traffic classes.

PCI

Peripheral Component Interconnect

PFC

Priority Flow Control

PMM

POST Memory Manager. A mechanism used by option ROMs to allocate RAM memory
for use during system startup.

PnP

Plug and Play. PnP refers to a set of industry standard specifications that allows
installed devices to self-configure.

POST

Power On Self-Test

proxyDHCP

Used to ease the transition of PXE clients and servers into an existing network infrastructure. proxyDHCP provides additional DHCP information that is needed by PXE clients and boot servers without making changes to existing DHCP servers.

PXE

Preboot Execution Environment. PXE provides a way for a system to initiate a network
connection to various servers prior to loading an operating system. This network connection supports a number of standard IP protocols such as DHCP and TFTP, and can
be used for purposes such as software installation and system inventory maintenance.

RDMA

Remote Direct Memory Access

RIS

Remote Installation Services. A Microsoft* service that uses PXE to deliver the Network Bootstrap Program to start the boot process.

ROM

Read-Only Memory. When used in this guide, ROM refers to a non-volatile memory storage device on a NIC.

RSS

Receive Side Scaling is a mechanism for hardware to distribute receive packets to
queues that are associated with a specific processor core and thereby distributing the
processing load.

RX

Receive

SAN

Storage Area Network

SCSI

Small Computer System Interface

SNMP

Simple Network Management Protocol

TFTP

Trivial File Transfer Protocol. An industry standard Internet protocol defined by the IETF
to enable the transmission of files across the Internet. Trivial File Transfer Protocol
(TFTP, Revision 2) to support NBP download is specified by IETF RFC 1350.

TLV

Type Length Value

Transport Interface

Interface to the chip at the transport layer.

TX

Transmit

UNDI

Universal Network Driver Interface. UNDI provides a hardware-independent mechanism for the PXE base code to use an adapter for network access without controlling
the adapter hardware directly.

USB

Universal Serial Bus. A Plug and Play (PnP) interface between a computer and add-on
devices.

VBD

Virtual Bus Driver. A driver that exposes two virtual devices on a single physical device, enabling sharing of LAN and SAN traffic on a common Ethernet port.

VFT

Virtual Fabric Tagging is a Fibre Channel defined extended frame header.

VLAN

Virtual LAN (VLAN) is a group of hosts with a common set of requirements that communicate as if they were attached to the same broadcast domain, regardless of their
physical location.

VMDq

Virtual Machine Device Queues

WOL

Wake on LAN*

VN2VN Key Terms
TERMS

DEFINITIONS

N_Port

A device port that generates/terminates FC-4 channel traffic

ENode (FCoE Node)

A Fibre Channel node (FC-FS-3) that is able to transmit FCoE frames using one or
more ENode MACs

FCoE_LEP (FCoE Link End-Point)

The data forwarding component of an FCoE Entity that handles FC frame encapsulation/decapsulation, and transmission/reception of encapsulated frames through a
single Virtual Link

Lossless Ethernet network

An Ethernet network composed only of full duplex links, Lossless Ethernet MACs, and
Lossless Ethernet bridging elements

Virtual Link

The logical link connecting two FCoE_LEPs

VN_Port (Virtual N_Port)

An instance of the FC-2V sublevel of Fibre Channel that operates as an N_Port (see
FC-FS-3) and is dynamically instantiated on successful completion of a FIP FLOGI or
FIP NPIV FDISC Exchange

VN_Port MAC address

The MAC address used by an ENode for a particular VN_Port

Legal Disclaimers
INTEL SOFTWARE LICENSE AGREEMENT
IMPORTANT - READ BEFORE COPYING, INSTALLING OR USING.
Do not copy, install, or use this software and any associated materials (collectively, the "Software")
provided under this license agreement ("Agreement") until you have carefully read the following
terms and conditions.
By copying, installing, or otherwise using the Software, you agree to be bound by the terms of this
Agreement. If you do not agree to the terms of this Agreement, do not copy, install, or use the
Software.

LICENSES
Please Note:
• If you are a network or system administrator, the "Site License" below shall apply to you.
• If you are an end user, the "Single User License" shall apply to you.
• If you are an original equipment manufacturer (OEM), the "OEM License" shall apply to you.

SITE LICENSE: You may copy the Software onto your organization's computers for your organization's use,
and you may make a reasonable number of back-up copies of the Software, subject to these conditions:
1. This Software is licensed for use only in conjunction with (a) physical Intel component
products, and (b) virtual ("emulated") devices designed to appear as Intel component
products to a Guest operating system running within the context of a virtual machine. Any
other use of the Software, including but not limited to use with non-Intel component
products, is not licensed hereunder.
2. Subject to all of the terms and conditions of this Agreement, Intel Corporation ("Intel") grants to you a
non-exclusive, non-assignable, copyright license to use the Software.
3. You may not copy, modify, rent, sell, distribute, or transfer any part of the Software except as provided
in this Agreement, and you agree to prevent unauthorized copying of the Software.
4. You may not reverse engineer, decompile, or disassemble the Software.
5. The Software may include portions offered on terms differing from those set out here, as set out in a
license accompanying those portions.
SINGLE USER LICENSE: You may copy the Software onto a single computer for your personal use, and
you may make one back-up copy of the Software, subject to these conditions:
1. This Software is licensed for use only in conjunction with (a) physical Intel component
products, and (b) virtual ("emulated") devices designed to appear as Intel component
products to a Guest operating system running within the context of a virtual machine. Any
other use of the Software, including but not limited to use with non-Intel component
products, is not licensed hereunder.
2. Subject to all of the terms and conditions of this Agreement, Intel Corporation ("Intel") grants to you a
non-exclusive, non-assignable, copyright license to use the Software.
3. You may not copy, modify, rent, sell, distribute, or transfer any part of the Software except as provided
in this Agreement, and you agree to prevent unauthorized copying of the Software.

4. You may not reverse engineer, decompile, or disassemble the Software.
5. The Software may include portions offered on terms differing from those set out here, as set out in a
license accompanying those portions.
OEM LICENSE: You may reproduce and distribute the Software only as an integral part of or incorporated in
your product, as a standalone Software maintenance update for existing end users of your products, excluding
any other standalone products, or as a component of a larger Software distribution, including but not limited to
the distribution of an installation image or a Guest Virtual Machine image, subject to these conditions:
1. This Software is licensed for use only in conjunction with (a) physical Intel component
products, and (b) virtual ("emulated") devices designed to appear as Intel component
products to a Guest operating system running within the context of a virtual machine. Any
other use of the Software, including but not limited to use with non-Intel component
products, is not licensed hereunder.
2. Subject to all of the terms and conditions of this Agreement, Intel Corporation ("Intel") grants to you a
non-exclusive, non-assignable, copyright license to use the Software.
3. You may not copy, modify, rent, sell, distribute or transfer any part of the Software except as provided
in this Agreement, and you agree to prevent unauthorized copying of the Software.
4. You may not reverse engineer, decompile, or disassemble the Software.
5. You may only distribute the Software to your customers pursuant to a written license agreement. Such
license agreement may be a "break-the-seal" license agreement. At a minimum such license shall safeguard Intel's ownership rights to the Software.
6. You may not distribute, sublicense or transfer the Source Code form of any components of the Software and derivatives thereof to any third party without the express written consent of Intel.
7. The Software may include portions offered on terms differing from those set out here, as set out in a
license accompanying those portions.
LICENSE RESTRICTIONS. You may NOT: (i) use or copy the Software except as provided in this
Agreement; (ii) rent or lease the Software to any third party; (iii) assign this Agreement or transfer the Software
without the express written consent of Intel; (iv) modify, adapt, or translate the Software in whole or in part
except as provided in this Agreement; (v) reverse engineer, decompile, or disassemble the Software; (vi)
attempt to modify or tamper with the normal function of a license manager that regulates usage of the
Software; (vii) distribute, sublicense or transfer the Source Code form of any components of the Software and
derivatives thereof to any third party without the express written consent of Intel; (viii) permit, authorize,
license or sublicense any third party to view or use the Source Code; (ix) modify or distribute the Source Code
or Software so that any part of it becomes subject to an Excluded License. (An "Excluded License" is one that
requires, as a condition of use, modification, or distribution, that (a) the code be disclosed or distributed in
source code form; or (b) others have the right to modify it.); (x) use or include the Source Code or Software in
deceptive, malicious or unlawful programs.
NO OTHER RIGHTS. No rights or licenses are granted by Intel to you, expressly or by implication, with
respect to any proprietary information or patent, copyright, mask work, trademark, trade secret, or other
intellectual property right owned or controlled by Intel, except as expressly provided in this Agreement. Except
as expressly provided herein, no license or right is granted to you directly or by implication, inducement,
estoppel, or otherwise. Specifically, Intel grants no express or implied right to you under Intel patents,
copyrights, trademarks, or other intellectual property rights.
OWNERSHIP OF SOFTWARE AND COPYRIGHTS. The Software is licensed, not sold. Title to all copies
of the Software remains with Intel. The Software is copyrighted and protected by the laws of the United States
and other countries and international treaty provisions. You may not remove any copyright notices from the

Software. You agree to prevent any unauthorized copying of the Software. Intel may make changes to the
Software, or to items referenced therein, at any time without notice, but is not obligated to support or update
the Software.
ADDITIONAL TERMS FOR PRE-RELEASE SOFTWARE. If the Software you are installing or using under
this Agreement is pre-commercial release or is labeled or otherwise represented as "alpha-" or "beta-"
versions of the Software ("pre-release Software"), then the following terms apply. To the extent that any
provision in this Section conflicts with any other term(s) or condition(s) in this Agreement with respect to pre-release Software, this Section shall supersede the other term(s) or condition(s), but only to the extent
necessary to resolve the conflict. You understand and acknowledge that the Software is pre-release
Software, does not represent the final Software from Intel, and may contain errors and other problems that
could cause data loss, system failures, or other errors. The pre-release Software is provided to you "as-is" and
Intel disclaims any warranty or liability to you for any damages that arise out of the use of the pre-release
Software. You acknowledge that Intel has not promised that pre-release Software will be released in the
future, that Intel has no express or implied obligation to you to release the pre-release Software and that Intel
may not introduce Software that is compatible with the pre-release Software. You acknowledge that the
entirety of any research or development you perform that is related to the pre-release Software or to any
product making use of or associated with the pre-release Software is done at your own risk. If Intel has
provided you with pre-release Software pursuant to a separate written agreement, your use of the pre-release
Software is also governed by such agreement.
LIMITED MEDIA WARRANTY. If the Software has been delivered by Intel on physical media, Intel
warrants the media to be free from material physical defects for a period of ninety days after delivery by Intel.
If such a defect is found, return the media to Intel for replacement or alternate delivery of the Software as Intel
may select.
EXCLUSION OF OTHER WARRANTIES. EXCEPT AS PROVIDED ABOVE, THE SOFTWARE IS
PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND
INCLUDING WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR A
PARTICULAR PURPOSE. Intel does not warrant or assume responsibility for the accuracy or completeness
of any information, text, graphics, links, or other items contained within the Software.
LIMITATION OF LIABILITY. IN NO EVENT SHALL INTEL OR ITS SUPPLIERS BE LIABLE FOR
ANY DAMAGES WHATSOEVER (INCLUDING, WITHOUT LIMITATION, LOST PROFITS, BUSINESS
INTERRUPTION, OR LOST INFORMATION) ARISING OUT OF THE USE OF OR INABILITY TO USE
THE SOFTWARE, EVEN IF INTEL HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES. SOME JURISDICTIONS PROHIBIT EXCLUSION OR LIMITATION OF LIABILITY FOR
IMPLIED WARRANTIES OR CONSEQUENTIAL OR INCIDENTAL DAMAGES, SO THE ABOVE
LIMITATION MAY NOT APPLY TO YOU. YOU MAY ALSO HAVE OTHER LEGAL RIGHTS THAT
VARY FROM JURISDICTION TO JURISDICTION. In the event that you use the Software in conjunction
with a virtual ("emulated") device designed to appear as an Intel component product, you acknowledge that
Intel is neither the author nor the creator of the virtual ("emulated") device. You understand and acknowledge
that Intel makes no representations about the correct operation of the Software when used with a virtual
("emulated") device, that Intel did not design the Software to operate in conjunction with the virtual
("emulated") device, and that the Software may not be capable of correct operation in conjunction with the
virtual ("emulated") device. You agree to assume the risk that the Software may not operate properly in
conjunction with the virtual ("emulated") device. You agree to indemnify and hold Intel and its officers,
subsidiaries and affiliates harmless against all claims, costs, damages, and expenses, and reasonable
attorney fees arising out of, directly or indirectly, any claim of product liability, personal injury or death
associated with the use of the Software in conjunction with the virtual ("emulated") device, even if such claim
alleges that Intel was negligent regarding the design or manufacture of the Software.

UNAUTHORIZED USE. THE SOFTWARE IS NOT DESIGNED, INTENDED, OR AUTHORIZED FOR
USE IN ANY TYPE OF SYSTEM OR APPLICATION IN WHICH THE FAILURE OF THE SOFTWARE
COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR (E.G.,
MEDICAL SYSTEMS, LIFE SUSTAINING OR LIFE SAVING SYSTEMS). If you use the Software for any
such unintended or unauthorized use, you shall indemnify and hold Intel and its officers, subsidiaries and
affiliates harmless against all claims, costs, damages, and expenses, and reasonable attorney fees arising
out of, directly or indirectly, any claim of product liability, personal injury or death associated with such
unintended or unauthorized use, even if such claim alleges that Intel was negligent regarding the design or
manufacture of the part.
TERMINATION OF THIS AGREEMENT. Intel may terminate this Agreement at any time if you violate its
terms. Upon termination, you will immediately destroy the Software or return all copies of the Software to
Intel.
APPLICABLE LAWS. Claims arising under this Agreement shall be governed by the laws of the State of
California, without regard to principles of conflict of laws. You agree that the terms of the United Nations
Convention on Contracts for the Sale of Goods do not apply to this Agreement. You may not export the
Software in violation of applicable export laws and regulations. Intel is not obligated under any other
agreements unless they are in writing and signed by an authorized representative of Intel.
GOVERNMENT RESTRICTED RIGHTS. The enclosed Software and documentation were developed at
private expense, and are provided with "RESTRICTED RIGHTS." Use, duplication, or disclosure by the
Government is subject to restrictions as set forth in FAR 52.227-14 and DFARS 252.227-7013 et seq. or its
successor. The use of this product by the Government constitutes acknowledgement of Intel’s proprietary
rights in the Software. Contractor or Manufacturer is Intel.
LANGUAGE; TRANSLATIONS. In the event that the English language version of this Agreement is
accompanied by any other version translated into any other language, such translated version is provided for
convenience purposes only and the English language version shall control.

Limited Lifetime Hardware Warranty
Intel warrants to the original owner that the adapter product delivered in this package will be free from defects
in material and workmanship. This warranty does not cover the adapter product if it is damaged in the process
of being installed or improperly used.
THE ABOVE WARRANTY IS IN LIEU OF ANY OTHER WARRANTY, WHETHER EXPRESS, IMPLIED
OR STATUTORY, INCLUDING BUT NOT LIMITED TO ANY WARRANTY OF NONINFRINGEMENT OF
INTELLECTUAL PROPERTY, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE
ARISING OUT OF ANY PROPOSAL, SPECIFICATION, OR SAMPLE.
This warranty does not cover replacement of adapter products damaged by abuse, accident, misuse, neglect,
alteration, repair, disaster, improper installation, or improper testing. If the adapter product is found to be
defective, Intel, at its option, will replace or repair the hardware product at no charge except as set forth below,
or refund your purchase price provided that you deliver the adapter product along with a Return Material
Authorization (RMA) number (see below), along with proof of purchase (if not registered), either to the dealer
from whom you purchased it or to Intel with an explanation of any deficiency. If you ship the adapter product,
you must assume the risk of damage or loss in transit. You must use the original container (or the equivalent)
and pay the shipping charge.
Intel may replace or repair the adapter product with either new or reconditioned parts, and any adapter product,
or part thereof replaced by Intel becomes Intel's property. Repaired or replaced adapter products will be
returned to you at the same revision level as received or higher, at Intel's option. Intel reserves the right to
replace discontinued adapter products with an equivalent current generation adapter product.

Returning a defective product
From North America:
Before returning any adapter product, contact Intel Customer Support and obtain a Return Material
Authorization (RMA) number by calling +1 916-377-7000.
If the Customer Support Group verifies that the adapter product is defective, they will have the RMA
department issue you an RMA number to place on the outer package of the adapter product. Intel cannot
accept any product without an RMA number on the package.
All other locations:
Return the adapter product to the place of purchase for a refund or replacement.

Intel Adapter Money-back Guarantee (North America Only)
Intel wants you to be completely satisfied with the Intel adapter product that you have purchased. Any time
within ninety (90) days of purchase, you may return your Intel adapter to the original place of purchase for a full
refund of the purchase price from your dealer. Resellers and distributors, respectively, accepting returns and
refunding money back to their customers may return Intel adapters to their original place of purchase. Intel
guarantees that it will accept returns under this policy and refund the original purchase price to customers
purchasing directly from Intel.

Limitation of Liability and Remedies
INTEL'S SOLE LIABILITY HEREUNDER SHALL BE LIMITED TO DIRECT, OBJECTIVELY
MEASURABLE DAMAGES. IN NO EVENT SHALL INTEL HAVE ANY LIABILITY FOR ANY INDIRECT
OR SPECULATIVE DAMAGES (INCLUDING, WITHOUT LIMITING THE FOREGOING,
CONSEQUENTIAL, INCIDENTAL, AND SPECIAL DAMAGES) INCLUDING, BUT NOT LIMITED TO,
INFRINGEMENT OF INTELLECTUAL PROPERTY, REPROCUREMENT COSTS, LOSS OF USE,
BUSINESS INTERRUPTIONS, LOSS OF GOODWILL, AND LOSS OF PROFITS, WHETHER ANY
SUCH DAMAGES ARISE OUT OF CONTRACT NEGLIGENCE, TORT, OR UNDER ANY WARRANTY,
IRRESPECTIVE OF WHETHER INTEL HAS ADVANCE NOTICE OF THE POSSIBILITY OF ANY SUCH
DAMAGES. NOTWITHSTANDING THE FOREGOING, INTEL'S TOTAL LIABILITY FOR ALL CLAIMS
UNDER THIS AGREEMENT SHALL NOT EXCEED THE PRICE PAID FOR THE PRODUCT. THESE
LIMITATIONS ON POTENTIAL LIABILITIES WERE AN ESSENTIAL ELEMENT IN SETTING THE
PRODUCT PRICE. INTEL NEITHER ASSUMES NOR AUTHORIZES ANYONE TO ASSUME FOR IT
ANY OTHER LIABILITIES.
Some states do not allow the exclusion or limitation of incidental or consequential damages, so the above
limitations may not apply to you.
Critical Control Applications: Intel specifically disclaims liability for use of the adapter product in critical
control applications (including, for example only, safety or health care control systems, nuclear energy control
systems, or air or ground traffic control systems) by Licensee or Sublicensees, and such use is entirely at the
user's risk. Licensee agrees to defend, indemnify, and hold Intel harmless from and against any and all claims
arising out of use of the adapter product in such applications by Licensee or Sublicensees.
Software: Software provided with the adapter product is not covered under the hardware warranty described
above. See the applicable software license agreement which shipped with the adapter product for details on
any software warranty.

Note: If you are a consumer under the Australian Consumer Law, this warranty does not apply to you. Please
visit Australian Limited Lifetime Hardware Warranty to view the limited warranty which is applicable to
Australian consumers.

Copyright and Legal Disclaimers
Copyright © 2008-2015 Intel Corporation. All rights reserved.
Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 USA
Intel Corporation assumes no responsibility for errors or omissions in this document. Nor does Intel make any
commitment to update the information contained herein.
Intel, Itanium, and Pentium are trademarks of Intel Corporation in the U.S. and other countries.
*Other names and brands may be claimed as the property of others.

Customer Support
Intel support is available on the web or by phone. Support offers the most up-to-date information about Intel
products, including installation instructions, troubleshooting tips, and general product information.

Web and Internet Sites
Support: http://www.intel.com/support
Corporate Site for Network Products: http://www.intel.com/products/ethernet/overview.htm


