Cisco HX Data Platform Security Hardening Guide
Version 4.5.2a rev 3
September 2021


Document Information

Document Summary:   Cisco HX v.4.5.2a rev 3
Prepared for:       Field
Prepared by:        Aaron Kapacinskas
Last Modified:      20 September 2021
Previous Version:   4.5.2a rev 2

Changes in this version:

· TEMPEST information added to the certification section
· FedRAMP update for Intersight added to the certification section
· SOC 2 Type 2 information added to the certification section
· Information on SSH timeout for the admin shell added
· Specific audit logs added to the auditing and logging section
· Minimum infrastructure and ports required for local installation added to the installation section
· Additional port requirements for Intersight added to the Intersight section and to Appendix A
· Port update for NRDR added to the replication port requirements section and Appendix A
· Certificate verification syntax added to the certificate import section in Appendix F

Intended Use and Audience
This document contains confidential material that is proprietary to Cisco. The materials, ideas, and concepts contained herein are to be used exclusively to assist in the configuration of Cisco's software solutions.
Legal Notices
All information in this document is provided in confidence and shall not be published or disclosed, wholly or in part to any other party without Cisco's written permission.


Contents

Document Information
    Intended Use and Audience
    Legal Notices
Prerequisites
Introduction
Secure Product and Development Components
    Development Milestones
        CSDL Philosophy
        CSDL Product Adherence Methodologies
    Vulnerability Handling
        Tenable IO Scanning
        CERT Advisory
        VMware ESX Patching
        HXDP Patching
        Additional Vulnerability Testing Measures
    Secure Platform "Modules"
        Control Plane
        Data Security
        Management Security
    Certification Process
        ACVP
    Current Certifications
        FIPS
        Common Criteria
    Other Certifications and Procedural Guidelines
        ISO 27001
        FISMA
        FedRAMP
        SOC 2 Type 2
        TEMPEST
        IAVA
        HIPAA
        NERC CIP
        CNSA
        DISA APL
        Targeted Certifications
HX Components and Environment
    Solution Components
        Cisco UCS
        Cisco UCS Fabric Interconnects (FIs)
        HX Nodes
        Management Interfaces: HX Connect and the VMware vCenter Plug-in
        VMware vCenter
        VMware ESX
        VMs
        Client Machines
HX Secure Network Environment and Component Requirements
    Port Requirements for Communication
    Scans Showing Undocumented Ports
    Port Requirements and Logical Traffic Flow for Replication
    Intersight Connectivity Requirements
    Unicast and Multicast Requirements
        Datastore Access
    Auto Support and Smart Call Home (SCH)
    Installation and ESX Best Practices and Security Considerations
        Cisco HX Installer (HX Installer)
        Minimum Infrastructure and Port Requirements for Local Installation
        Default Passwords
        VLANs and vSwitches
    FI Traffic and Architecture
        UCSM Requirements
        VNICs
        East-West Traffic
        North-South Traffic
        Upstream Switch
        VLANs
        Disjoint L2 Networks
        Cisco HyperFlex Edge (HX Edge)
HX Data Security
    Encryption Services
        SEDs
        Key Management
        Certificate Signing Requests (CSRs)
        Networking Considerations
        Encryption Partners
        VM Level Encryption
        Secure Communications
        Usage of NFS in HXDP
HX Management
    Management Interfaces
        HX Connect
        vCenter Plug-in
        STCLI and HXCLI
        Secure Admin Shell Access (HXDP 4.5.1(a) and above)
        REST APIs
    AAA Domains
        vCenter
        AD Integration
    User Management
    Cisco HyperFlex User Overview
        Local Users
        UI Users
        CLI Users
        Auditing, Logging, Support Bundles
        Setting Up Remote Logging for HX Prior to HXDP 4.0.1.a
        Setting Up Remote Logging for HXDP 4.0.1.a and Later
        Password Requirements
        Password Guidelines
        Session Timeouts
HX Platform Hardening
    US Federal STIG (Security Technical Implementation Guide) Settings
    SSL Certificate Replacement
    Secure Boot
    SSL Certificate Thumbprint (Hash) and Signatures
    Dynamic Self-Signed Certificates in HX
    UCSM Certificate Management
    HX and Perfect Forward Secrecy (PFS)
    TLS Weak Protocol Disable
    TLS Weak Cipher Disable
    SSH (ESX) Lockdown Mode and Root Logins
    Tech Support Mode
    Third Party Software Execution on FIs and HXDP
    Whitelisting and other STCLI Security Commands
    HX Data Platform Firewalling: IP Tables
    Replication
    Specific ESX Environment Hardening Settings Relevant to HXDP
    Specific UCS Environment Hardening Settings Relevant to HXDP
    Control VM (SCVM) Customization
References
    ESX Hardening Guide
    UCS Hardening Guide
    Cisco CSDL
    Syslog-ng Configuration
    Secure Boot Assurance Via Attestation
Appendix A: Networking Ports
    Intersight
Appendix B: URLs Needed for Smart Call Home, Post Install Scripts, Intersight
Appendix C: ESX Hardening Settings
Appendix D: Acronym Glossary
Appendix E: Sample Syslog-ng Configuration File
Appendix F: Certificate Management and Use Cases
    SCVM: How to generate and replace External CA Certificate in HX 4.0+
    HX Certificate Management
    vCenter: How to generate and replace External CA Certificate
    ESX: How to generate and replace External CA Certificate
    ESX: To re-generate self-signed certificate
    HX Cluster: Re-registration
    HyperFlex Use Cases
    Observations and Notes
        vCenter: self-signed cert with certMgmt mode = vmsa (Default Mode)
        vCenter: CA signed cert with certMgmt mode = vmsa (Default Mode)
        vCenter: CA signed cert with certMgmt mode = custom
        HyperFlex Use Cases: HX 4.5.1a+ and NGINX Self-Signed Without CA Signed
Appendix G: SCH Configuration and Proxy
    Configuring Smart Call Home for Data Collection

Prerequisites
We recommend reviewing the release notes, installation guide, and user guide before proceeding with any configuration. The Cisco HyperFlex Data Platform (HXDP) should be installed and functioning per the installation guide. Please contact your Cisco Representative or Cisco Support if assistance is needed.
Introduction
The Cisco HyperFlex Data Platform Security Hardening Guide provides guidance for HyperFlex (HX) users in ensuring that their product is deployed in a more robust and secure manner. It is necessary to understand the architecture and components of the solution in order to complete this properly. This document provides recommended configuration settings and deployment architectures for HXDP-based solutions. It is intended to be used in conjunction with product documentation for deployments where extra consideration for platform security is required. For product documentation, please contact your Cisco representative.
Secure Product and Development Components
Cisco HyperFlex product components are developed, integrated, and tested using the Cisco Secure Development Lifecycle (CSDL). Secure product development and deployment has several components, ranging from inherent design and development practices, through testing of the implementation, to a set of recommendations for deployments that maximize the security of the system.
Development Milestones
Each iteration of the product's development addresses needs for ongoing security fixes and general feature enhancements that include security components (new deployment models, changes in management, partner on-boarding, etc.). At every stage of development, the Hardening Guide undergoes potential enhancements relative to findings and new features.
· The HX Hardening Guide has the following components:
   1. VMware ESXi settings
   2. Cisco UCS settings
   3. HX hardening settings
· The system is configured in QA to accommodate the relevant settings identified above and run through a typical deployment test.
· The result is a validated set of best practices for security and is communicated through the CSDL process and exposed in the Hardening Guide.
CSDL Philosophy
A poor product design can open the way to vulnerabilities. The CSDL is designed to mitigate these potential issues.
At Cisco, our "secure design" approach requires two types of considerations:
· Design with security in mind
· Use threat modeling to validate the design's security


Designing with security in mind is an ongoing commitment to personal and professional improvement through:

· Training
· Applying the Product Security Baseline (PSB) design principles
· Considering other industry-standard secure design principles
· Being aware of common attack methods and designing safeguards against them
· Taking full advantage of designs and libraries that are known to be highly secure
· Considering all entry points

We also reduce design-based vulnerabilities by considering known threats and attacks. With threat modeling, we:

· Follow the flow of data through the system.
· Identify trust boundaries where data may be compromised.
· Based on the data flow diagram, generate a list of threats and mitigations from a database of known threats, tailored by product type.
· Prioritize and implement mitigations to the identified threats.

The goal of this effort is to ensure a security mindset at every stage of development:

· Secure Design
· Secure Coding
· Secure Analysis
· Vulnerability Testing
· Secure Deployments

HX product development focuses on two areas to satisfy the CSDL model:

· Internal requirements
   o Adhere to the secure development process
· Market-based requirements
   o Complete and validate against certifications (Federal)
   o Document and educate (HX Hardening Guide)
CSDL Product Adherence Methodologies
Cisco CSDL adheres to the Cisco Product Development Methodology (PDM) and to ISO 27034 and ISO 9000 compliance requirements. The ISO 27034 standard provides an internationally recognized standard for application security. Details for ISO 27034 can be found here. The ISO 9000 family of quality management systems standards is designed to help organizations ensure that they meet the needs of customers and other stakeholders while meeting statutory and regulatory requirements related to a product or service. ISO 9000 details are here.

The CSDL process is not a one-time approach to product development. It is recursive, with vulnerability testing, penetration testing, and threat modeling feeding into subsequent development, which in turn feeds back into the process. This process follows the ISO 9000 and ISO 27034 standards as part of an internationally recognized set of guidelines. The approaches involved often take a solution-wide view; for example, we use our continually updated CiscoSSL crypto module to ensure that HX (along with other elements in the Cisco offering) remains secure and meets FIPS certification requirements.

Cisco HX Platform Hardening Guide

Page 9

Vulnerability Handling
Tenable IO Scanning
Common Vulnerabilities and Exposures (CVE) scanning is a critical part of most deployments. Many industries and Federal organizations standardize on Tenable IO (formerly Nessus Scanner) to implement various DISA or CIS audits.

· CIS is the Center for Internet Security
· DISA is the Defense Information Systems Agency

In our CSDL efforts, we use Tenable IO, produced by Tenable, in our development process:

· Tenable IO Scanner - https://www.tenable.com/products/tenable-io

The vulnerability scanning workflow is as follows:

· Choose a build to test against (based on dot release development timing)
· Update the scanner signatures and plug-ins for our test date
· Freeze the scanner - a line in the sand
· Test → report → fix as needed → rescan → check in the safe build
· Fixes are immediately scheduled for Critical and High findings
· CSDL may identify others in Medium, Low, and Info that need remediation

A typical Nessus scan configuration summary might look like this:

· HX 3.0(1b)
· Compliance checks:
   o DISA RHEL 5
   o CIS L2 Ubuntu 16.04 LTS
   o CIS Apache 2.2
· Plug-ins:
   o All plug-ins enabled, same-day update
· Sample report:
   o Output is color coded.
   o Five alert levels: Critical, High, Medium, Low, Info.
   o Notes: system is clean, one low warning, the rest are info only.


CERT Advisory
Computer Emergency Response Team (CERT) advisories come up as new vulnerabilities are identified. Cisco's internal CERT team monitors and alerts product groups to potential issues that might affect their respective components. When these items are identified by CERT, or are otherwise indicated by vendor partners (VMware, etc.), patches are either developed or acquired from the respective vendors.
VMware ESX Patching
Patches for VMware are immediately supported if they are within the regularly supported VMware dot release. There are no hard commitments for when support for new VMware dot releases will be available, but there are continuous release onboarding processes that occur within QA for each new VMware release.
We do not support cluster-level remediation through VUM or vLCM: neither is aware of the HX cluster running on top.

You can try to remediate one host at a time in those tools, but this has diminishing value. The HX ESXi upgrade process is a simple one-click operation as long as it is launched through HX Connect or Intersight. If you use Intersight, it is even simpler than VUM or vLCM: you select the HX customized image from a drop-down and proceed. No downloading of files into repositories or customizing of builds is required in that scenario.


HXDP Patching
Cisco scans development builds weekly for CVEs using Tenable's Nessus scanner. Based on these results, we begin developing patches for critical CVEs related directly to HX as soon as they are discovered. The fixes are rolled into an immediate release or a regularly scheduled incremental release, with turnaround within 90 days depending on severity.
The HyperFlex release model identifies Long Term Support releases and Feature Releases. Long Term Support releases are supported, maintained, and patched for 30 months from initial release. You can identify LTS releases because they have the X.Y(2x) designation. For example, 4.5(2a) is an LTS release.
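As an informal illustration of this naming convention (a hypothetical Python sketch, not a Cisco tool), the snippet below classifies a release string as LTS when its parenthesized dot-release number is 2:

    import re

    def is_lts(release: str) -> bool:
        # Per the release model above, an X.Y(2x) designation (e.g. "4.5(2a)")
        # marks a Long Term Support release; anything else is a feature release.
        match = re.match(r"^\d+\.\d+\((\d+)[a-z]\)$", release)
        return bool(match) and match.group(1) == "2"

    print(is_lts("4.5(2a)"))  # True  -> LTS, patched for 30 months
    print(is_lts("4.0(1b)"))  # False -> feature release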


Additional Vulnerability Testing Measures
Cisco also utilizes an internal tool for threat modeling called ThreatBuilder (v2.1). This tool is used to explicitly map out application components and services, to identify potential attack surfaces, and to develop line items for direct evaluation. This information, along with industry tools, is used for vulnerability and exploit testing by Cisco's ASIG (Advanced Security Initiatives Group). ASIG also uses fuzzing and manual testing as part of their suite of tools.
Secure Platform "Modules"
At a high level, HX system security can be broken down into three broad categories: the control plane, data security, and management security.
Control Plane
The control plane deals with system communication. This is the subsystem that implements the FIPS-compliant encrypted communication protocol engine for communication that may originate outside of the system, for example, from an administrator. It also handles inter-component communication between nodes, which happens on a trusted, internal, non-routed 10 Gb network.

Data Security
Securing data in the system is the job of the self-encrypting drive (SED) subsystem. The HX nodes are SED capable, meaning that they can incorporate and function using encrypted disks. Key management for this can be handled locally or via remote KMIP servers in HA configurations.
Management Security
Managing the system through the UI or through the command line requires secure communication mechanisms. This is handled via HTTPS for the vCenter plug-in or for HX Connect (the native HTML 5 UI). SSH for encrypted command line access is also handled. Management security also entails role-based access control as well as auditing and logging of system activities and user input.
Certification Process
Federal compliance and audit-based certifications are a critical component of a standardized and predictable security posture. They are essential in most Federal deployments, especially those in the financial and defense arenas. The Cisco Global Certification Team (GCT) works to complete the various certifications.
ACVP
The Automated Cryptographic Validation Protocol (ACVP) is part of a NIST program to automate FIPS and Common Criteria testing, superseding the processes used in the Cryptographic Algorithm Validation Program (CAVP) and the Cryptographic Module Validation Program (CMVP). Details can be found here:
https://csrc.nist.gov/Projects/Automated-Cryptographic-Validation-Testing
Beginning in CY2020, Cisco began using the ACVP capability built into the CiscoSSL module for HyperFlex in order to process Federally accepted cryptographic certifications. The process uses a series of ACVP/NIST proxy infrastructure servers to complete the certifications, communicating directly with NIST validation servers. The figure below shows the general product architecture used for ACVP.


Current Certifications
FIPS - The Federal Information Processing Standard (FIPS) Publication 140-2 is a U.S. government computer security standard used to approve cryptographic modules.

HyperFlex is compliant with FIPS 140-2 Level 1 via direct implementation of the FIPS-compliant CiscoSSL crypto module. Once implemented, the module is vetted by a federally certified third party to ascertain compliance status.

· Utilizes the CiscoSSL module
   o Already FIPS compliant
   o SSH approved cipher list
   o SSL/TLS implementation
   o Eliminates weak or compromised components (regularly updated)
· A lab validates that the module is incorporated correctly
   o Build logs
   o Source access identifying calls to the module
· All admin access points to the cluster are covered here
   o SSH for CLI
   o HTTPS for UI
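One informal way to spot-check these access points is to observe the TLS parameters the cluster actually negotiates. The sketch below is an illustrative Python example, not an official HX utility, and the hostname is a placeholder; it connects to the HX Connect HTTPS port and prints the negotiated protocol and cipher:

    import socket
    import ssl

    HX_CONNECT = "hx-cluster.example.com"  # placeholder management FQDN/IP

    context = ssl.create_default_context()
    context.check_hostname = False   # clusters often run a self-signed cert
    context.verify_mode = ssl.CERT_NONE

    # Connect to the HX Connect UI and report what the server negotiated.
    with socket.create_connection((HX_CONNECT, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HX_CONNECT) as tls:
            print("Protocol:", tls.version())    # expect TLSv1.2 or newer
            print("Cipher:  ", tls.cipher()[0])  # expect a strong AES-GCM suite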
A comprehensive list of Cisco FIPS compliant products is available here, along with the corresponding NIST references:

· Cisco FIPS Certified Products: http://www.cisco.com/c/en/us/solutions/industries/government/global-government-certifications/fips-140.html
· Cryptographic Module Validation Program (CMVP) vendor list: http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/1401vend.htm


Common Criteria for Information Technology Security Evaluation (Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification, currently at v3.1 rev 4.

System users specify their security functional and assurance requirements through the use of protection profiles; vendors can then make claims about the security attributes of their products; and testing laboratories can evaluate the products to determine whether they actually meet those claims. Informally, the exchange looks like this:

· Customers have security needs defined in a set of CC guidelines
· Vendor: "This is my system, and this is what I say it can do to meet those needs"
· Vendor and lab agree on a test and its procedure
· Vendor presents its results
· The lab runs the tests independently and verifies the results
· The certification is delivered
The second certification (the first was in CY Q4 2017) was completed in CY Q4 2019 for EAL2 on 3.5.2a and is currently available here: https://www.commoncriteriaportal.org/files/epfiles/Certification%20Report%20NSCIB-CC-215885-CR.pdf

The 3.5 code line is a "long term support" release in the HX release model and will be available, maintained, and patched for 30 months from initial release. Cisco is currently in the process of certifying 4.5.2a for Common Criteria; acceptance is expected in late CY 2021. The follow-on release will be 5.0.2a.


Other Certifications and Procedural Guidelines
ISO 27001 isn't a certification for specific pieces of hardware so much as a dozen or so "best practices" in the form of checklists/guidelines for how organizations manage their security controls internally. It covers things like building access, password management, badging into a copier to make copies, etc. Frequent training is part of the standard.

Cisco is ISO 27001 certified. This is a link to our ISO 27001 certificate: https://www.cisco.com/c/en/us/about/approach-quality/iso-27001.html
FISMA (Federal Information Security Management Act) - Cisco HyperFlex has not participated in a FISMA audit to date. For FISMA, Federal information systems must meet the minimum security requirements. These requirements are defined in the second mandatory security standard required by the FISMA legislation, FIPS 200, "Minimum Security Requirements for Federal Information and Information Systems". Organizations must meet the minimum security requirements by selecting the appropriate security controls and assurance requirements described in NIST Special Publication 800-53, "Recommended Security Controls for Federal Information Systems".
FedRAMP (Federal Risk and Authorization Management Program) - Cisco HyperFlex is not FedRAMP authorized for cloud-based security because Cisco is not a public cloud provider. For FedRAMP in particular, the onus falls on the cloud provider (Google/Azure/AWS, etc.) unless you mean to include a private cloud in a FedRAMP assessment or authorization. If that is the case, then the private cloud will need to meet the FedRAMP standards, and a POC is recommended.

As Intersight adoption increases, requests to host Intersight in different regions/clouds with the required certifications are desirable. Cisco has initiated a FedRAMP gap analysis effort to pursue this certification.
SOC 2 Type 2 (Service Organization Control 2) - The Intersight team has completed the requirements for a Service Organization Control (SOC) 2, Type 2 external audit covering Cisco's SaaS platform. Developed by the American Institute of CPAs (AICPA), SOC 2 is a common compliance framework specifically designed for service providers that store customer data in the cloud. It requires companies to establish long-term, ongoing internal practices regarding the security of customer data.
A SOC 2 Type 2 report is an industry-recognized report that provides reasonable assurance that platform controls are suitably designed, operating effectively as necessary, and meet the following criteria:
· Security: The service is protected against unauthorized access.
· Availability: The service is available for operation and use as committed or agreed upon.
· Confidentiality: The service adheres to its privacy commitments.

A detailed report can be viewed, with the right privileges, from the Cisco Trust Portal. There were no exceptions reported during the certification process.

TEMPEST (Telecommunications Electronics Material Protected from Emanating Spurious Transmissions) - TEMPEST is a certification that tests for electromagnetic pulse emanations, i.e., Emissions Security (EMSEC). While the actual standard remains classified, NSA's program information can be found here: https://apps.nsa.gov/iad/programs/iad-initiatives/tempest.cfm

TEMPEST certification covers electromagnetic emissions that can be monitored by outsiders (from monitors, keyboards, server enclosures, etc.) to reconstitute the images or data rendered on those devices. UCS hardware (and by extension, HX) is not TEMPEST compliant. This compliance is typically only relevant to desktop-class equipment in open workspaces; if needed in certain environments, agencies will place servers and other appliances in TEMPEST-approved cabinets or rooms that shield everything contained inside.
Third party companies, however, often take Cisco gear, shield it, and get it TEMPEST certified. This is a presentation from a company that has TEMPEST certified a number of Cisco products: https://www.fbcinc.com/source/virtualhall_images/DOS_February/API/API_Product_Presentation.pdf
IAVA (Information Assurance Vulnerability Alert) - IAVA patches are routine alerts. They are part of the IAVM (Information Assurance Vulnerability Management) Program and detail vulnerability fixes that the DoD deems critical for all systems in an environment, drawn from the DoD CERT list. If a vulnerability is on IAVA's list, it is sent to the admins that an organization has signed up to receive the alerts, and it must be fixed to remain in compliance. Tenable IO scans (see the Tenable IO Scanning section) will pick these up and, based on severity, the issues are remediated in patch releases.
HIPAA (Health Insurance Portability and Accountability Act) - HIPAA requires that healthcare organizations use data encryption technology to protect sensitive patient information. However, the law does not specify which types of encryption to use to accomplish this task, and key management mechanisms are not specifically called out either. In these respects, HXDP satisfies the HIPAA requirements. HXDP, however, is not officially certified for HIPAA because a fully compliant solution includes all elements of the ecosystem; HXDP would qualify as a compliant component.
NERC CIP (North American Electric Reliability Corporation Critical Infrastructure Protection) - NERC CIP is centered on the physical security and cybersecurity of assets deemed critical to the electricity infrastructure. There are currently 11 CIP standards subject to enforcement, governing topics from system security management to recovery plans. NERC CIP compliance is more about policy and procedure than technology, and the responsibility for compliance rests with the utility company, not the technology provider. So there isn't a "FERC/NERC compliant HCI," per se. The idea is to identify capabilities that help the customer facilitate compliance, and there are multiple HX security features, system configurations, and hardening options, as well as continuous security monitoring and advisories, that are pertinent toward that goal.


A couple of examples (note: not exhaustive):

· CIP-007-6 R1 - Ports and Services
   o Requirement: CIP-007-6 Part 1.1 requires enabling only logical network accessible ports that have been determined to be needed by the Responsible Entity.
   o Mitigations: The HX Data Platform Hardening Guide provides guidance on port requirements, STCLI security commands for whitelisting, setting up IP tables on HX nodes to secure network traffic, etc.
· CIP-007-6 R2 - Security Patch Management
   o Requirement: CIP-007-6 Part 2.1 requires a patch management process for tracking, evaluating, and installing cybersecurity patches for applicable Cyber Assets.
   o Requirement: CIP-007-6 Part 2.2 requires, at least once every 35 calendar days, evaluating security patches for applicability that have been released since the last evaluation from the source or sources identified in Part 2.1.
   o Mitigations: Proactive PSIRT advisories publish guidance on current security vulnerabilities and mitigations that impact HX.
· CIP-007-6 R4 - Security Event Monitoring
   o Requirement: CIP-007-6 Part 4.1 requires logging events for identification and investigation of cybersecurity incidents, minimally including: detected successful logins, detected failed access and login attempts, and detected malicious code.
   o Mitigations: Centralized audit logging in HXDP; position a next-generation firewall for threat protection.
The idea here is a comprehensive audit record and well-defined RBAC roles with a division of user duties. Continuous monitoring would be a solution-level responsibility handled with ecosystem components like Tetration and Splunk (analysis of syslog). Here is an overview of NERC information system compliance:

Energy producers and distributors that make up the bulk electric system for North America have multiple IT security and compliance challenges, which range from protecting consumers' payment card data and complying with the Payment Card Industry Data Security Standard, to adhering to the general internal audit control and disclosure requirements under Sarbanes-Oxley. In addition, utilities and firms that fall under the authority of the Federal Energy Regulatory Commission (FERC) must meet the cyber security standards of the FERC's certified Electric Reliability Operator (ERO), the North American Electric Reliability Corporation (NERC).

Just as physical surveillance tools such as video cameras are a critical part of physical security controls under NERC, the core technical requirements for cyber security as outlined in NERC CIP Standards 002-009 and other associated guidance from NERC require accountability throughout the authentication, access control, delegation, separation of duties, continuous monitoring and reporting of electronic access to critical infrastructure. And specific requirements from NERC CIP 005, 004, 007 and 008 taken together establish a clear obligation that all electronic access be audited, monitored and archived in such a way that an organization can reproduce detailed privileged user sessions 24 hours per day, 7 days per week. This continuous monitoring requirement would be difficult to achieve with a combination of manual processes and system-level logs, which often do not tie actions to a unique identity.

Additional specific details are available from NERC itself:

https://www.nerc.com/pa/comp/Pages/default.aspx

https://www.nerc.com/pa/comp/guidance/Pages/default.aspx

CNSA (Commercial National Security Algorithm) is a suite of algorithms called out by the NSA via this IETF memo:

https://tools.ietf.org/id/draft-jenkins-cnsa-cmc-profile-00.html

It describes which algorithms should be in use and what their profiles should look like. It is intended to give guidance for secure and interoperable communications for national security reasons:
"This document specifies a profile of the Certificate Management over CMS (CMC) protocol for managing X.509 public key certificates in applications using the CNSA Suite."
Cisco supports both elliptic curve cryptography (ECC) certificates and RSA certificates, so this requirement is met:
"Elliptic Curve Digital Signature Algorithm (ECDSA) and Elliptic Curve Diffie-Hellman (ECDH) key pairs are on the curve P-384. FIPS 186-4 [DSS], Appendix B.4, provides useful guidance for elliptic curve key pair generation that SHOULD be followed by systems that conform to this document. RSA key pairs (public, private) are identified by the modulus size expressed in bits; RSA-3072 and RSA-4096 are computed using moduli of 3072 bits and 4096 bits, respectively."
HyperFlex's FIPS certification via CiscoSSL implements Federally approved crypto modules to satisfy the complexity requirements as well. The fact sheet here lists the approved algorithms:
https://apps.nsa.gov/iaarchive/customcf/openAttachment.cfm?FilePath=/iad/library/ia-guidance/ia-solutions-for-classified/algorithm-guidance/assets/public/upload/Commercial-National-Security-Algorithm-CNSA-Suite-Factsheet.pdf&WpKes=aF6woL7fQp3dJipHPErrFKTuHeUCZyCdxdcF3A
CNSA compliance is then a matter of implementing a cryptographic ecosystem according to the CNSA requirements, since HX supports all of the documented methods.
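As an illustration only (using the third-party Python cryptography library; this is not an HX or CiscoSSL interface), the following sketch generates key material matching the CNSA profile quoted above and asserts the curve and modulus requirements:

    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    # CNSA profile: ECDSA/ECDH keys on curve P-384, or RSA keys with a
    # modulus of at least 3072 bits (RSA-3072 / RSA-4096).
    ec_key = ec.generate_private_key(ec.SECP384R1())
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    assert ec_key.curve.name == "secp384r1"   # NIST P-384
    assert rsa_key.key_size >= 3072
    print("Generated key material meets the CNSA curve/modulus requirements.")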
DISA APL (Defense Information Systems Agency Approved Products List) - This certification is a multifaceted US Federal approval for products to operate in secure environments. It is currently under way with the Cisco Global Certification Team and the HX Business Unit, and it is targeted for the HX 5.0.1a release.
Targeted Certifications
Future targeted certifications are always under evaluation with the Global Certification Team. We are in the process of recertifying HX for Common Criteria on the HX 4.5.2a release. DISA APL work is expected to commence in the Fall of CY 2021.
HX Components and Environment
This section details the different components in a typical HX deployment. It is critical to the secure environment that the various parts are hardened as needed.
Solution Components
An HX deployment consists of HX nodes on UCS connected to each other and to the upstream switch via a pair of Fabric Interconnects (FIs). There may be one or more clusters, and clusters can share the same FIs or be connected to their own independent set. Clusters can be paired and use HXNR (Native Replication) for protection of VMs. Intervening optimization appliances may also be deployed to aid with (or monitor or shape) cluster-to-cluster traffic. The following illustration shows a typical physical layout for this kind of deployment.


Cisco UCS
The physical HX node is deployed on a Cisco UCS 220 or 240 platform in either a hybrid or all-flash configuration. A service profile is a software definition of a server and its LAN and SAN network connectivity; in other words, a service profile defines a single server and its storage and networking characteristics. Service profiles are stored in the Cisco 6248/6296 and 6332/6332-16UP Series Fabric Interconnects and are managed via specific versions of UCSM (the web interface for the FI) or via purpose-written software using the API. When a service profile is deployed to a server, UCS Manager automatically configures the server, adapters, fabric extenders, and fabric interconnects to match the configuration specified in the service profile. This automation of device configuration reduces the number of manual steps required to configure servers, network interface cards (NICs), host bus adapters (HBAs), and LAN and SAN switches.
The service profile for the HX nodes is created during cluster build at install time and is applied to the appropriate devices attached to the FI (identified by PID and associated hardware). These profiles should have their own, easily identifiable name and should not be edited after creation. They are preconfigured by the HX Installer with the settings required for HX to operate securely and efficiently (VLANs, MAC pools, management IPs, QoS profiles, etc.).
It is also worth noting that some larger UCS customers use custom MAC pool and UUID schemas for all UCS domain deployments in the data center. Cisco does not support custom naming schemes for HX. HX is an appliance, and to ensure consistent quality, user experience, and full TAC supportability, these mundane details have been automated. For UUID, HXDP leverages the hardware-derived UUID. For MAC, HXDP has a specific enumeration that cannot be changed.
Cisco UCS Fabric Interconnects (FIs)

Cisco UCS FIs are the networking switches, or head units, to which the UCS chassis connects. Fabric Interconnects are a core part of Cisco's Unified Computing System, which is designed to improve scalability and reduce the total cost of ownership of data centers by integrating all components into a single platform that acts as a single unit. Access to networks and storage is then provided through the UCS fabric interconnect. Each HX node is dual connected, with one SFP port to each FI, for HA. This ensures that all vNICs on the UCS are dual connected as well, guaranteeing node availability. vNIC configuration is automated during HX installation and should not be altered.
HX Nodes

The HX node itself is composed of the software components required to create the storage infrastructure for the system's hypervisor. This is done via the HX Data Platform (HXDP) that is deployed at installation on the node. The HX Data Platform utilizes PCI pass-through, which removes storage (hardware) operations from the hypervisor, making the system highly performant. The HX nodes use special plug-ins for VMware called VIBs that are used for redirection of NFS datastore traffic to the correct distributed resource and for hardware offload of complex operations like snapshots and cloning.
The following illustration shows a typical HX node architecture.


These nodes are incorporated into a distributed cluster as shown below. Each node contains the following VMNIC and vSwitch architecture for versions prior to HXDP 3.5.x:


For HXDP versions 3.5.x and above, the VMNIC ordering has been changed to the following:

Management Interfaces: HX Connect and the VMware vCenter Plug-in
HX Connect is the native HTML 5 UI for the cluster. The HX vCenter plug-in is another management interface, available in vCenter once the cluster is deployed. These are separate interfaces. Both are accessed via HTTPS in a web browser and are subject to the same user management (including RBAC) that is available for the CLI or the API.

VMware vCenter
The Cisco HX Data Platform requires VMware vCenter to be deployed to manage certain aspects of cluster creation such as ESX clustering for HA and DRS, VM deployment, user authentication and various datastore operations. The HX vCenter plug-in is a management utility that integrates seamlessly within vCenter and allows comprehensive administration, management, and reporting of the cluster.
It is important to note that all compute and converged nodes must share a single vCenter cluster object for a given cluster. This 1:1 mapping is a requirement today.
Administrator users created in vCenter can log in to the Storage Controller VM CLI using the full name in the following format: <user>@vsphere.local, along with the password. However, read-only users created in vCenter cannot log in to the Storage Controller VM CLI.

VMware ESX
ESX is the hypervisor component in the solution. It abstracts node compute and memory hardware for the guest VMs.
HXDP integrates closely with ESX to facilitate network and storage virtualization.

VMs
The HX environment provides storage for the guest VMs deployed in ESX using VLAN-segmented networking. The VMs are available to external resources, as is typical of any elastic infrastructure deployment.


Client Machines
Client machines are defined here as external hosts that need to access resources deployed in HX. These can be anything from end-user machines to other servers in a distributed application architecture. These clients connect from external networks and are always isolated from any HX internal traffic by network segmentation, firewalling, and whitelisting rules.
HX Secure Network Environment and Component Requirements
The HX networking environment is segmented and isolated to provide out-of-the-box traffic security. This section identifies the networking communication (port) requirements and offers best practices for the Installer along with information regarding FI traffic and ESX networking (vSwitches).


Port Requirements for Communication
The diagram and table below indicate the various components, networking ports, and communication direction for HX.

Note that ICMP is required between CVM IPs, and between CVM IPs and vCenter, in order to run a cluster re-register command. See Appendix A for a comprehensive table on the port requirements.

Scans Showing Undocumented Ports

There are a few cases where users may scan an HX system and see undocumented or transient ports that appear to be open. This can happen when scanning externally on the management network, and it can also happen if users place scanners on the closed data or replication networks. The ports you may see will be well outside of the normal well-known port range (0-1023), often with values between 30000 and 50000, and are ephemeral ports. An ephemeral port is a short-lived port number used by a transport protocol (TCP/UDP). Ephemeral ports are allocated automatically from a predefined range by the IP stack software. These ports are not tied to any specific service and will be automatically filtered, deleted, and potentially re-used in the future. Filtered means that a firewall, filter, or other network obstacle is blocking the port so that Nmap, Nessus, Zenmap, or similar scanners cannot tell whether the port is open or closed. Closed ports have no application listening on them, though they could open up at any time. The default firewall rule in HX is to "block all" by default and only "allow selected". An example of ports that may transiently appear in scans is listed below:
· 43387, 45913, 49775
These ports pose no security risk.
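If you want to confirm this behavior yourself, a quick scan comparison can be run from a management-network host. This is a sketch only; the target IP is a placeholder for a controller VM management address:

# Scan the well-known range on an HX management IP (address is a placeholder)
nmap -p 1-1023 10.a.b.c
# Re-scan previously observed transient high ports; expect filtered or closed results
nmap -p 30000-50000 10.a.b.c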
Port Requirements and Logical Traffic Flow for Replication
The following ports are opened for inter-cluster communications during cluster pairing: 9338, 3049, 9098, 4049, 4059, 8889.
These are the ports that are used in HX Replication:
· ICMP
· datasvcmgr_peer = 9338
· datasvcmgr_peer = 9339
· NRDR = 9350
· scvm (Storage Controller VM) = 3049
· cmap = 4049
· nrnfs = 4059
· replsvc = 9098
· nr (master for coordination) = 8889
Firewall entries are made on the source and destination machine during pairing to allow HX Data Platform access to the system(s) bi-directionally. This traffic needs to be allowed on WAN routers for each HXDP node IP address and cluster CIP-M address.
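As an illustration, an upstream ACL permitting this traffic between two paired cluster subnets might look like the following IOS-style sketch. The subnets and ACL name are hypothetical, the ports are shown as TCP (confirm the protocol for each port in Appendix A), and a mirror-image rule set is needed for the reverse direction:

ip access-list extended HX-REPL-ALLOW
 ! Replication pairing and data ports between two cluster subnets (addresses hypothetical)
 permit icmp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255
 permit tcp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255 eq 9338
 permit tcp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255 eq 9339
 permit tcp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255 eq 9350
 permit tcp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255 eq 3049
 permit tcp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255 eq 4049
 permit tcp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255 eq 4059
 permit tcp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255 eq 9098
 permit tcp 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255 eq 8889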
The following illustration shows the logical traffic flow for replication:


Intersight Connectivity Requirements
Reference the HX Edge preinstall checklist for Intersight specifics.

Before installing the HX cluster on a set of HX servers, make sure that the device connector on the corresponding Cisco IMC instance is properly configured to connect to Cisco Intersight and be claimed.

· All device connectors must properly resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. The current version of the HX Installer supports the use of an HTTP proxy.
· All controller VM management interfaces must properly resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. The current version of HX Installer supports the use of an HTTP proxy if direct Internet connectivity is unavailable.


· IP connectivity (L2 or L3) is required from the CIMC management IP on each server to all of the following: ESXi management interfaces, HyperFlex controller VM management interfaces, and vCenter server. Any firewalls in this path should be configured to allow the necessary ports (see appendix A).

When redeploying HyperFlex on the same servers, new controller VMs must be downloaded from Intersight into all ESXi hosts. This requires each ESXi host to be able to resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. Use of a proxy server for controller VM downloads is supported and can be configured in the HyperFlex Cluster Profile if desired.

Post-cluster deployment, the new HX cluster is automatically claimed in Intersight for ongoing management.

Intersight installs require the UCSM or CIMC IP/network to have SSH access to the ESX and SCVM IPs/network. Any firewalls in this path should be configured to allow the necessary ports.

In summary:
Network Communication Requirements for CIMC:
· Communication between CIMC and vCenter via ports 80, 443, and 8089 during the installation phase.
· IP connectivity (L2 or L3) is required from the CIMC management IP on each server to all of the following: ESXi management interfaces, HyperFlex controller VM management interfaces, and vCenter server. Any firewalls in this path should be configured to allow the necessary ports as outlined in this hardening guide.
· This communication needs to be persistent. It is required for any and all upgrades (including firmware), monitoring, and UI cross-launch.
· CIMC to Intersight should only require port 443. Per the preinstall guide, all device connectors must properly resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443; the current HX Installer supports the use of an HTTP proxy.
· The IP addresses of ESXi management must be reachable from Cisco UCS Manager over all the ports that are listed as being needed from installer to ESXi management, to ensure deployment of ESXi management from Cisco Intersight.
· Allow port 22 between the UCSM (or CIMC) VLAN and the ESXi/SCVM management VLAN.
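Before claiming, these prerequisites can be spot-checked from a controller VM shell. This is a sketch under the assumption that nslookup and curl are available there; the proxy address is a placeholder:

# Confirm DNS resolution of the Intersight service endpoint
nslookup svc.intersight.com
# Confirm outbound HTTPS (443) connectivity (direct, then via an HTTP proxy if one is required)
curl -v --connect-timeout 10 https://svc.intersight.com
curl -v -x http://proxy.example.com:8080 https://svc.intersight.com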

Intersight Connectivity
Consider the following prerequisites pertaining to Intersight connectivity:
· Before installing the HX cluster on a set of HX servers, make sure that the device connector on the corresponding Cisco IMC instance is properly configured to connect to Cisco Intersight and claimed.
· Communication between CIMC and vCenter via ports 80, 443, and 8089 during the installation phase.
· All device connectors must properly resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. The current version of the HX Installer supports the use of an HTTP proxy.
· All controller VM management interfaces must properly resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. The current version of HX Installer supports the use of an HTTP proxy if direct Internet connectivity is unavailable.
· IP connectivity (L2 or L3) is required from the CIMC management IP on each server to all of the following: ESXi management interfaces, HyperFlex controller VM management interfaces, and vCenter server. Any firewalls in this path should be configured to allow the necessary ports as outlined in this hardening guide.
· Starting with HXDP release 3.5(2a), the Intersight installer does not require a factory-installed controller VM to be present on the HyperFlex servers. When redeploying HyperFlex on the same servers, new controller VMs must be downloaded from Intersight onto all ESXi hosts. This requires each ESXi host to be able to resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. Use of a proxy server for controller VM downloads is supported and can be configured in the HyperFlex Cluster Profile if desired.
· Post-cluster deployment, the new HX cluster is automatically claimed in Intersight for ongoing management.


Unicast and Multicast Requirements
Starting with version 3.0(1a), HXDP no longer uses the UCARP protocol and is 100% unicast traffic moving forward.
For previous versions that did use UCARP, since the well-known multicast address of 224.0.0.18 was used, there is no configuration needed on the switches to be able to support HX. The UCARP protocol falls under the IPv4 multicast link-local scope of 224.0.0.0/24. Link-local scoped multicast packets are flooded throughout the VLAN, and IGMP snooping does not take effect on these multicast groups. Hence there is a very small amount of "management multicast" in use, but nothing that requires any network changes or specific infrastructure to support it.

Datastore Access
Access to the HX datastores by client machines is restricted to mounting by HX nodes only. This access is automatically granted during cluster install when the component nodes are identified. Access is also granted or revoked during expansion or removal respectively, when nodes are added or removed from the system. Access to the datastores for migration or backup purposes may be granted via the command line using the STCLI whitelist command. HX nodes are not listed in the whitelist list because this is a manual, administrative setting for external machine access only. It should only be used during VM ingress/egress from the system as required and the list should be immediately purged once operations are complete.
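For example, temporarily admitting an external backup or migration host from the controller VM CLI might look like the sketch below. The IP is a placeholder and the exact argument syntax varies by release, so verify with stcli security whitelist -h first:

# Add the external host to the datastore access whitelist (IP is hypothetical)
stcli security whitelist add -i 10.a.b.d
# Confirm the entry (subcommand name is an assumption; check the built-in help)
stcli security whitelist list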
The mount syntax needs to look like the following in order to work (mount ip:ip:/<datastore> <local dir>/), where the IP is the CIP (not the CIP-M). Here it is in action once the mounting host has been added to the whitelist:

kaptain@kaptain-vm:~/temp$ sudo mount 10.a.b.c:10.a.b.c:/ds01 mountpoint/
kaptain@kaptain-vm:~/temp$ su
Password:
root@kaptain-vm:/home/kaptain/temp# cd mountpoint/
root@kaptain-vm:/home/kaptain/temp/mountpoint# ls
auth.log     rhttpproxy.4.gz  vmkernel.4.gz  vprobed.log
clomd.log    rhttpproxy.5.gz  vmkernel.5.gz  vprobe.log
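Once the copy or migration is complete, unmount and purge the whitelist as recommended above. A sketch, with the whitelist removal syntax an assumption to be verified against stcli security whitelist -h:

# On the client: unmount the datastore when finished
root@kaptain-vm:/home/kaptain/temp/mountpoint# cd .. && umount mountpoint/
# On the controller VM: purge the whitelist once operations are complete (syntax is an assumption)
stcli security whitelist clear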

Auto Support and Smart Call Home (SCH)
You can configure the HX storage cluster to send automated email notifications regarding documented events. The data collected in the notifications can be used to help troubleshoot issues in your HX storage cluster.
Auto Support is the alert notification service provided through HX Data Platform. If you enable Auto Support, notifications are sent from HX Data Platform to the designated email addresses or email aliases that you want to receive the notifications. Typically, Auto Support is configured during HX storage cluster creation by configuring the SMTP mail server and adding email recipients. Only unauthenticated SMTP is supported for ASUP.

If the Enable Auto Support check box was not selected during configuration, Auto Support can be enabled post-cluster creation using the following methods:


Post-Cluster ASUP Configuration Method    Associated Topic
HX Connect user interface                 Configuring Auto Support Using HX Connect
Command Line Interface (CLI)              Configuring Notification Settings Using CLI
REST APIs                                 Cisco HyperFlex Support REST APIs on Cisco DevNet

Auto Support can also be used to connect your HX storage cluster to monitoring tools.

Smart Call Home is an automated support capability that monitors your HX storage clusters and then flags issues and initiates resolution before your business operations are affected. This results in higher network availability and increased operational efficiency.

Call Home is a product feature embedded in the operating system of Cisco devices that detects and notifies the user of a variety of fault conditions and critical system events. Smart Call Home adds automation and convenience features to enhance basic Call Home functionality. SCH supports a secure proxy for message transfer (see Appendix G). After Smart Call Home is enabled, Call Home messages and alerts are sent when triggered. This includes:

· Automated, around-the-clock device monitoring, proactive diagnostics, real-time email alerts, service ticket notifications, and remediation recommendations.
· Proactive messaging sent to your designated contacts by capturing and processing Call Home diagnostics and inventory alarms. These email messages contain links to the Smart Call Home portal and the TAC case if one was automatically created.
· Expedited support from the Cisco Technical Assistance Center (TAC). With Smart Call Home, if an alert is critical enough, a TAC case is automatically generated and routed to the appropriate support team through https, with debug and other CLI output attached.
See Appendix B and G for additional SCH information.

Installation and ESX Best Practices and Security Considerations
Before conducting any installation, review and complete the pre-installation checklist maintained here:
http://www.cisco.com/c/dam/en/us/td/docs/hyperconverged_systems/HyperFlex_HX_DataPlatformSoftware/HyperFlex_preinstall_checklist/Cisco_HX_Data_Platform_Preinstallation_Checklist_form.pdf

Cisco HX Installer (HX Installer)
During initial configuration, the cluster is installed on site using the HX installer. This installer can safely be removed from the environment immediately after cluster creation. It is typical for secure environments to isolate the deployment network during installation. In this scenario, the installer is never externally available during configuration. Since it is removed post deployment, installer threat exposure is minimized.

The following services in vSphere must be enabled after you create the HX Storage Cluster in vCenter:
· DRS
· vMotion
· High Availability
The installer verifies that the cluster components are correct and available as needed. This ensures that the deployment has no gaps that could jeopardize security.
· Ensures firmware and BOM compliance
· Deploys and creates the cluster (requires UCSM credentials for SED)
  o All nodes should be SED capable; no mixing of SED and non-SED drives
· Server Selection shows SED-capable nodes and validates non-SED node configurations
· Creates Service Profiles
  o VLANs
  o IP addressing
  o vNIC ordering
  o QoS configuration
  o MAC pools
· Creates ESX vSwitches with appropriate VLANs and address spaces
· Deploys HX Data Platform
· Deploys ESX plug-ins
· Configures and starts the storage cluster
· Sets default passwords and generates secure certificates for node-to-node communication
Hard passwords are enforced on HX UI interfaces and HX Data Platform settings during install.
Minimum Infrastructure and Port Requirements for Local Installation
The following diagram shows the minimum required infrastructure needed to conduct a local installation of HX using the on-premises installer.


NTP, HX Installer and a DNS server address are the minimum required components. You will need to follow the nested vCenter procedure if you are leaving that component out at build time.
There are some things that can be eliminated from this diagram at the expense of some function:

· Remove SMTP if you are not using auto support phone home functionality
· Remove SNMP if you are not monitoring anything
· Remove the SSO server if you are only using vCenter credentials

· Remove DNS if you are only using IP addresses. DNS is a required field during install, but if you are not using it and do not intend to use it in the future, you can use a dummy address during installation.

Some things in this diagram can be removed after the deployment is complete to reduce attack surfaces:
· Remove DNS if not in use because you used only IP addresses
· Remove the HX Installer
· Migrate vCenter to HX for a nested deployment (this can be problematic if the cluster is experiencing problems and you need to access vCenter). If you used an external vCenter, you will need to deploy a new vCenter on HX and use the stcli cluster reregister commands to move the instance to the new VC.

DHCP is only required if you intend to use it for the VM Network segment.

For a Hyper-V deployment, AD with AD-integrated DNS is required. vCenter is not required at any point, nor is SCVMM. The other mandatory components remain in place.

Default Passwords
Once the deployment using the installer is complete, make sure that any default passwords are changed or updated. The ESX hypervisor default password is Cisco123. There is no default set for the HXDP nodes since a hard password is enforced at install. Log in to each ESX node via CLI and update the root password as needed using passwd root.

VLANs and vSwitches
VLANs are created for each type of traffic and for each vSwitch. There are typically 4 vSwitches created during the install with associated VLANs for each. The vSwitches are for ESX management, HX management, ESX Data (vMotion), and HX Data (storage traffic between nodes for the datastores). HX Data Platform Installer creates the vSwitches automatically. The zones that these switches handle are described below:

· Management Zone: This zone comprises the connections needed to manage the physical hardware, the hypervisor hosts, and the storage platform controller virtual machines (HXDP). These interfaces and IP addresses need to be available to the staff responsible for administering the HX system, throughout the LAN/WAN. This zone must provide access to Domain Name System (DNS) and Network Time Protocol (NTP) services, and allow Secure Shell (SSH) communication. The VLAN used for management traffic must be able to traverse the network uplinks from the Cisco UCS domain, reaching both the Primary Fabric Interconnect (FI-A) and Subordinate Fabric Interconnect (FI-B). In this zone are multiple physical and virtual components:
  o Fabric Interconnect management ports
  o Cisco UCS external management interfaces used by the servers and blades, which answer via the FI management ports
  o ESXi host management interfaces
  o Storage Controller VM management interfaces
  o A roaming HX cluster management interface
· VM Zone: This zone comprises the connections needed to service network IO to the guest VMs that will run inside the HyperFlex hyperconverged system. This zone typically contains multiple VLANs that are trunked to the Cisco UCS Fabric Interconnects via the network uplinks and tagged with 802.1Q VLAN IDs. These interfaces and IP addresses need to be available to all staff and other computer endpoints that need to communicate with the guest VMs in the HX system, throughout the LAN/WAN.
· Storage Zone: This zone comprises the connections used by the Cisco HX Data Platform software, ESXi hosts, and the storage controller VMs to service the HX Distributed File System. These interfaces and IP addresses need to be able to communicate with each other at all times for proper operation. During normal operation this traffic all occurs within the Cisco UCS domain; however, there are hardware failure scenarios where this traffic would need to traverse the network northbound of the Cisco UCS domain. For that reason, the VLAN used for HX storage traffic must be able to traverse the network uplinks from the Cisco UCS domain, reaching FI-A from FI-B, and vice versa. This zone is primarily jumbo frame traffic; therefore, jumbo frames must be enabled on the Cisco UCS uplinks. In this zone are multiple components:
  o A vmkernel interface on each ESXi host in the HX cluster, used for storage traffic
  o Storage Controller VM storage interfaces
  o A roaming HX cluster storage interface
· vMotion Zone: This zone comprises the connections used by the ESXi hosts to enable vMotion of the guest VMs from host to host. During normal operation this traffic all occurs within the Cisco UCS domain; however, there are hardware failure scenarios where this traffic would need to traverse the network northbound of the Cisco UCS domain. For that reason, the VLAN used for vMotion traffic must be able to traverse the network uplinks from the Cisco UCS domain, reaching FI-A from FI-B, and vice versa.
These vSwitches and their associated port groups are tied to a pair of vNICs on each node in an active/standby mode for HA. The typical networking configuration is shown below:


For an in-depth discussion of Virtual Distributed Switches (VDS) with HX, see the following resource:
http://www.cisco.com/c/dam/en/us/products/collateral/hyperconverged-infrastructure/whitepaper-c11-737724.pdf
The question often arises, "In a multi-cluster setup, does each HX cluster need to have separate VLANs/subnets for the storage management and storage data interfaces?" In other words, would there be issues if the same VLAN/subnet is used for each cluster's storage management and storage data interfaces on the controller VM? It is recommended that a unique data VLAN per cluster be used as a best practice. This ensures data is secured within the cluster and there isn't contention or broadcast traffic from other clusters on the same network.
However, this isn't a hard requirement and it is possible to put multiple clusters on the same storage VLAN, but you risk performance issues in heavily loaded environments. It is worth noting that for deployments using releases prior to HXDP 3.0, clusters require that the cluster management IP (CIP) have a unique IP in the last octet. For example, if you have a /16 subnet, don't use 172.16.100.10 and 172.16.101.10 as two cluster management IPs within the same VLAN. The installer has a check to detect this, but you should avoid this situation altogether.

FI Traffic and Architecture
Traffic through the FIs comes in two general flavors: intra-cluster traffic (between nodes) and extra-cluster traffic (client machine or replication related). All of the FI configurations are managed, accessed, and modified through Cisco UCS Manager (UCSM).

UCSM Requirements
UCSM is the interface used to set up the FIs for Cisco UCS Service Profiles and for general hardware management. During installation, the HX Installer verifies that the appropriate UCSM build is in place for HX and that the hardware is running a supported firmware version. You are given the option to upgrade these at installation if needed.


Cisco recommends disabling Serial over LAN (SoL) once the deployment is complete since it is no longer needed for ESX configuration. It is also recommended to change any default or simple passwords that were used. Be aware that if you disable SoL, cluster expansion will fail during the Hypervisor Configuration step; you will need to re-enable it before continuing.
VNICs
For an in-depth discussion of vNIC see the following: https://supportforums.cisco.com/document/29931/what-concept-behind-vnic-and-vhba-ucs
The VNICs for each vSwitch are in a predefined order and should not be altered in UCSM or ESX. Any changes to these (including active/standby status) could affect HX functionality.

East-West Traffic
East-West traffic on the FI is networking traffic that goes between HX nodes. This traffic is local to the system and does not travel out of the FI to the upstream switch. This has the advantage of being extremely fast by virtue of its low latency, low hop count, and high bandwidth. It also means that this traffic is not subject to external inspection since it never leaves the local system.
North-South Traffic
North-South traffic on the FI is networking traffic that goes outside the FI to an upstream switch and/or router. North-South traffic occurs during external client machine access to HX-hosted VMs or during HX access to external services (NTP, vCenter, SNMP, etc.). This traffic may be subject to VLAN settings upstream.

Upstream Switch
Configure the upstream switches to accommodate non-native VLANs. HX Installer sets the VLANs as non-native by default.

VLANs
Use a separate subnet and VLANs for each of the networks.
Do not use VLAN 1, the default VLAN, because it can cause networking issues, especially if Disjoint Layer 2 configuration is used. Use a different VLAN.

Disjoint L2 Networks
Please make sure to read and understand the following disjoint layer 2 document if this is a requirement in your environment:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/unified-computing/white_paper_c11-692008.html

You can simply add new vNICs for your use case. We support the manual addition of vNICs and vHBAs to the configuration. Please see the HX VSI CVD for step-by-step instructions on how to do this safely:

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/HX171_VSI_ESXi6U2.html

Follow the same procedures outlined in the CVD. Please do not use pin groups, as they may not properly prune the traffic and can cause connectivity issues because the designated receiver may not be set correctly.


Cisco HyperFlex Edge (HX Edge)
Typical HX Edge deployments use a trunk port configuration on the top of rack switch(es). VLAN trunking should limit the allowed VLANs to those required for the HyperFlex services and user VMs. By default, the switches will allow all VLANs to pass and could pose a security risk of allowing unfettered network access. See the Cisco HyperFlex Edge Deployment Guide for sample configurations that use "switchport trunk allowed VLAN" commands.
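As an illustration of that guidance, a top-of-rack trunk port might be restricted as follows. The interface and VLAN IDs are hypothetical, and the exact commands depend on the switch platform:

interface GigabitEthernet1/0/1
 description HX-Edge-node-1
 switchport mode trunk
 ! Allow only the HX management, storage data, vMotion, and guest VM VLANs
 switchport trunk allowed vlan 10,20,30,40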
For HX Edge configurations with the add-in PCIe Quad port NIC, ensure any unused Ethernet ports remain disconnected from any virtual switches in ESXi. This will prevent unauthorized access to the virtual switching environment.
SED deployments are currently not supported with HyperFlex Edge. VM encryption via 3rd-party encryption clients will work to encrypt the VMs deployed on Edge; Vormetric and Gemalto Safenet provide such clients. VMware vSphere 6.5 also incorporates a VM encryption capability that should work but has not yet been officially qualified.
HX Data Security
HX data-at-rest security is accomplished via self-encrypting drives (SEDs) and is managed by Cisco HX Connect in conjunction with UCSM and local or remote key stores using the Key Management Interoperability Protocol (KMIP) v1.1. The Encryption FAQ contains a comprehensive treatment of the subject with respect to HX:
https://www.cisco.com/c/dam/en/us/support/docs/hyperconverged-infrastructure/hyperflex-hx-data-platform/HX_Encryption_FAQ.pdf
Encryption Services
An encrypted cluster is created at build time. It cannot be converted to a non-SED cluster after the fact. A non-SED cluster cannot be converted to a SED cluster once the cluster is created. A SED based cluster can have encryption enabled and disabled at will. You are free to move between the two states whenever you want. The components for an encrypted cluster consist of the SED capable HX nodes with UCSM and the Key Management Infrastructure.

· Data is only as secure as the encryption keys.
· Key management comprises the tasks involved with protecting, storing, backing up, and organizing keys.
· Specialized vendors provide enterprise key management offerings.
SEDs
SEDs provide native data-at-rest encryption, typically using AES-256. All qualified disks are FIPS 140-2 Level 2 validated components for data-at-rest encryption. The hardware encryption is built in, thereby incurring no deployment overhead. The performance is comparable to a non-SED system and is transparent to data optimization functions (dedupe, compression).
Several encryption keys are associated with a SED implementation:
· Media Encryption Key: data is always stored in encrypted form
· Key Encryption Key: secures the media encryption key
SEDs provide a mechanism for secure erase ensuring security during decommission:
Secure cluster expansion:
· Only SED-capable nodes can be added to an HX cluster with SEDs
· Local key: seamless secure expansion
· Remote key: secure expansion requires lockstep with certificates/key management
  o Certificates are required to add a new node securely
  o The deployment will show a warning and include steps to proceed, with a link to the UI for certificate download
  o The user follows the steps to upload the certificate(s) and continue the deployment
SED on HX Edge is not currently supported (see the HX Edge section above).
The HXDP 3.0 release introduced support for Microsoft Hyper-V as a cluster hypervisor. SEDs are currently not supported with Hyper-V.
Can access to the SEDs via the CVM be an attack vector if the CVM is compromised? In other words, since the control VM has direct ownership over the HX node disks (through VMDirectPath IO), and since the drives are self-encrypting, does this mean you effectively have unencrypted raw access to the disks through that login? Technically you can access data directly from the root shell, however, you would not be able to do much with this access. Since data is striped per disk, per node, and across file tree vnodes even, you would not be able to reconstruct the data into anything meaningful. You would only have small bits of information in various disks on various nodes. If this is still a concern, you can certainly encrypt via software (see Encryption Partners below) at the VM level, thereby mitigating any fractional data reads from even a compromised CVM.
The ESXi boot volume is not encrypted and there is not an option to encrypt this drive. There is no user data stored on this device. Theft of or raw access to this drive would only expose the ESXi operating system. The CVM root drive is separate from the data drives. The HX system/log drive is not encrypted and there is not an option to encrypt this drive. There is no user data stored on this drive. Theft of or raw access to this drive would only expose time stamps and block locations of the cache and capacity drives, both of which are encrypted.
Connectivity between a SED-enabled cluster and the KMS has a few requirements in environments that are firewall segmented. Access between FI-A/FI-B/VIP and the KMS is not required. Only the CIMC IPs of each node need to access the KMS. Allowing the CIMC IPs along with the KMS IP and port 5696 is sufficient.


Key Management
Encryption services support both local and remote key configurations. If you are not using local keys, then you need to configure a KMIP server. KMIP server key handling is performed via encryption partners (Thales Vormetric and Gemalto Safenet). The server specifics are entered using the Encryption workflow in HX Connect.


Key management best practices:
· Always deploy at least two KMIP servers, clustered for high availability
· No agents or software to deploy for key management
· Configure key backup and recovery
· Self-signed and CA-signed certificates can be used
Workflows supported:
· Disable/Enable
· Re-key
· Secure Erase
Certificate Signing Requests (CSRs)
A component of the remote encryption workflow generates CSRs. The CSRs need to be downloaded and signed. Signing can be "self," which refers to signing the CSR with a key you have generated yourself and installed on your KMIP infrastructure. If you are using a Certificate Authority (CA), then you will need to get the CSRs signed with your validated key from the CA.
The diagram below shows the CA/CSR/signing relationship.

Source: https://rusvpn.com/en/blog/what-is-a-ca-certificate-and-how-does-it-work/
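For reference, signing a downloaded CSR with a locally generated CA can be done with standard OpenSSL tooling. This is a generic sketch with placeholder file names; your KMIP vendor's documented signing procedure takes precedence:

# Sign the HX-generated CSR with a local CA (file names are hypothetical)
openssl x509 -req -in hx-node1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out hx-node1.crt -days 365
# Inspect the resulting certificate before uploading it
openssl x509 -in hx-node1.crt -noout -text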
HyperFlex supports RSA certificates and, by virtue of the CiscoSSL module, also supports ECC (Elliptic Curve Cryptography) certificates. RSA is currently the industry standard for public-key cryptography and is used by most SSL/TLS certificates.


A popular alternative, first proposed in 1985, is Elliptic Curve Cryptography (ECC), which uses a different mathematical approach to encryption. While RSA is based on the difficulty of factoring large integers, ECC relies on the difficulty of computing discrete logarithms on a random elliptic curve.
Networking Considerations
When using a KMS (Key Management Server) for remote key management, some additional networking ports may need to be opened. Port 443 is required for policy configuration between the control VMs and UCSM. Additionally, port 5696 is required for TLS communication between the CIMC of each node and the KMS server itself for secure information exchange. See Appendix A.
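Reachability of the KMS listener can be spot-checked from a host permitted through the firewall; the KMS address below is a placeholder:

# Verify that a TLS handshake completes against the KMIP port (address is hypothetical)
openssl s_client -connect kms.example.com:5696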
Encryption Partners
Cisco HX partners with two industry-leading encryption and KMIP service providers.
Gemalto Safenet:
· Enterprise Key Management (EKM) solution
· Single, centralized platform for managing cryptographic keys and applications
· Simultaneously manage multiple, disparate encryption appliances and associated keys through a single, centralized key management platform
· Also provides a high-performance encrypt/decrypt engine when combined with SafeNet's Data Protection portfolio
Thales Vormetric:
· Data Security Manager solution
· Single, centralized platform for managing cryptographic keys and applications
· Simultaneously manage multiple, disparate encryption appliances and associated keys through a single, centralized key management platform
· Also provides a transparent encryption client for guest VMs
Note: KMIP 1.1 compliant key managers not explicitly listed as supported require qualification.

VM Level Encryption
VM software encryption works above the HXDP storage layer. Encryption at a VM level of granularity is available with our partner solutions. Note that you can expect there will no longer be any deduplication space savings, since encryption at this level necessarily "makes unique" all data sent to the storage subsystem.

Vormetric Transparent Client https://www.thalesesecurity.com/products/data-encryption/vormetric-transparent-encryption

Gemalto: https://safenet.gemalto.com/data-encryption/data-center-security/protect-file-encryption-software/

ESX 6.5 VM encryption: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.security.doc/GUID-B3DA9865-A28F-4EFD-ACF4-CBC8813ED110.html
Secure Communications
All communication occurring with the HX platform management interfaces is FIPS compliant using SSH or HTTPS. See the section on Management Security above.
Note that when accessing the CLI using SSH and checking versions, the following will show up (depending on HX version):
ssh -V
CiscoSSH 1.6.20, OpenSSH_7.6p1, CiscoSSL 1.0.2u.6.2.374-fips
OpenSSH appears here because CiscoSSH is based on OpenSSH and this versioning information is put in place for reference.
Usage of NFS in HXDP
The HX Data Platform uses a proprietary variant of NFSv3 to present an HX-controlled file system to the hypervisor using a plug-in called the IOvisor. Each node runs an IOvisor instance in order to properly allocate the correct read and write resources in the distributed architecture. This communication path is completely internal and is available only between the hypervisor and the HX CVM present in each node, which manages access to the underlying distributed storage. All other components in the system are disallowed from mounting the resource presented by the IOvisor.
Each CVM physically controls the underlying storage hardware and participates in allocating this resource to the distributed file system within which the datastores are created. The diagram below describes the logical placement of the HX IOvisor within the node architecture.


VM IO is destined for the configured datastore, which is an abstracted container for the storage subsystem. This container is an NFS datastore created on the HX platform. The VMware storage stack utilizes an NFSv3 client to mount the datastore, presented up from the IOvisor.
The IOvisor, sometimes referred to as the SCVM client, lives as a process in user space inside ESXi and can be thought of as a simple NFS proxy. It behaves as a server for the VMware NFS client, while looking like a client to the controller VMs.
The IOvisor is very thin, on the order of 2,000 lines of code. It is designed to be a stateless router for IO and has a very small footprint. It is installed into ESXi as a VMware installation bundle (VIB) that is auto-deployed during cluster installation. As such, it is set up to only allow mounts from cluster nodes. This allowed mapping is only updated during node failure or expansion events.
The IOvisor looks at the incoming NFS request and determines which distributed resource it belongs to. The IO routing process can be visualized in the figure below. Notice that NFS is maintained internal to the ESXi-HX CVM communication path only, and is never exposed to the guest VMs or any other outside resource.


[Figure: IO routing through the IOvisor in a four-node cluster, showing ESXi hosts, controller VMs (stCtlVM), and guest VMs]

1. VM writes to a particular VMDK file at a given offset.
2a. IOvisor determines from the file handle and offset the correct caching vNode responsible for that data.
2b. IOvisor consults the cluster map to determine the current physical node responsible for the cache vNode (arrows are drawn to all control VMs to indicate the data will be striped across the cluster).
2c. IOvisor forwards the IO to pNode 3.
3. Processing begins at the top of the stack on pNode 3, eventually making its way onto persistent spinning disk.

THE IOVISOR SERVES AS AN "IO ROUTER" IN THE DISTRIBUTED CLUSTER. THE CODE IS A THIN LAYER THAT SITS IN ESXI USER SPACE.
The IO must be directed via the IOvisor to the correct physical node (pNode). Each controller VM queries the Cluster Resource Manager (CRM) via the cluster IP to retrieve the pNode mappings. The communication occurs via a special NFS procedure that has been added as an extension to the base NFS protocol. The mapping table is then cached locally so there is minimal traffic to the cluster IP. The deterministic process to route to the correct node leaves the IOvisor as a stateless IO proxy.

HX Management
There are four relevant management interfaces to consider with HX: the two UIs (native HX Connect and the vCenter plug-in), the CLI, and the REST API.
Management Interfaces
HX Connect
HX Connect is a native UI for managing the HX cluster. This includes configuring replication and encryption along with some VM management functions.


HX Connect has a security warning banner that can be disabled on a global basis by the administrator(s).
The session to the interface is encrypted via FIPS compliant SSL communications. The mechanics of the session are described below and caution should be taken by the administrator(s) when logging out of sessions to ensure that all tokens are revoked and sessions are terminated.
Session architecture:
1. When a user logs in, the server provides an access token to the user. This access token is used to validate the user and perform all subsequent actions.
2. Idle timeout is set to 30 minutes by default. You can change or view this idle timeout in the UI under user settings (click the user icon at the top right).
3. Idle timeout is global and can be changed or viewed by a user with the admin role.
4. If an admin user changes the idle timeout, it is reflected for all users.
5. From the HX Connect perspective, if a user is idle and performs no activity in the GUI, then after 30 minutes (the default idle timeout) the user is logged out.
6. Once the user logs out, the access token is revoked.
Details of the transaction: Session Management happens at the HX Connect browser end, and token management happens in AAA [backend]. Once a user logs in, a session starts. The "start of session" implies that HX Connect creates a cookie and installs it in the browser.
This cookie is removed under the following circumstances:
1. When the user logs out explicitly.
2. When idle timeout occurs.
3. When the user closes the browser completely.


If you log in using HX Connect (session starts):
1. You share the same session if you open another tab in the browser window.
2. You share the same session if you open another window from the browser you logged in with.
In addition, this means that if you login using HX Connect and open another window or tab and navigate to the CIP-M, and then logout, you log out from all tabs and windows.
Please note that the cookie is not removed when:
1. The user closes a tab.
2. The user closes a window in the browser (however, the browser process is still running, i.e. another window of this browser is still alive).
This means that the login session is still active in the above two cases.
Associated with the session is a token. This is managed by AAA. This token will be invalidated when the user logs out.
If you close the browser completely without logging out, you will no longer have a session, but the token will be alive. Therefore, it is recommended that you logout before closing the browser.
Multiple Sessions for same user are supported if user logs in:
1. From different machines.
2. From different kinds of browsers (such as Chrome and Safari) on the same machine.
HX Connect also provides a support bundle collection interface that allows the user to collect and download all system component logs, including audit files. These can then be examined or uploaded to support.
vCenter Plug-in
The vCenter plug-in is an HTTPS-accessible UI available after logging in to vCenter. The portlet to access the plug-in is on the summary page for the cluster, or it is accessible in the VC inventory list. Besides providing an admin interface for datastore creation and cluster consumption overviews, the plug-in has a monitor tab that permits event and task browsing along with hardware status.


The plug-in's right click context menu also allows the administrator to create VAAI offloaded snapshots of VMs, perform cloning operations, and generate system wide support bundles for log collection. These can then be examined or uploaded to support.
Plug-in session mechanics operate in the same manner as vCenter sessions and are managed by editing the appropriate vCenter configuration files.

STCLI and HXCLI
A session via FIPS compliant SSH cipher suites is used to access the CLI (STCLI and HXCLI). STCLI is being deprecated for general use and users are encouraged to use HXCLI for their command line administrative needs. All administrative functions along with some extra options are available via the CLI. See the HX STCLI reference for an exhaustive list.

http://www.cisco.com/c/en/us/support/hyperconverged-systems/hyperflex-hx-data-platform-software/tsd-products-supportseries-home.html

A warning banner can be configured on the control VM (HXDP) for display on access using the MOTD functionality available in the base OS. This can also be done for ESX.
· At the control VM CLI, add a file called /etc/update-motd.d/00-springpath-motd (a sketch follows this list).
· At the ESX CLI, use the web or C# client to set config.etc.issue for the DCUI and config.etc.motd for SSH, both under advanced options. Alternatively, use /etc/issue for the DCUI and /etc/motd for SSH.
· There is no customization on the vSphere Web Client login to vCenter.
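A minimal sketch of the controller VM banner file follows. Whether the file is consumed as a static banner or an executable script depends on the MOTD implementation in the controller VM's base OS; the executable form common to Ubuntu update-motd is assumed here, and the banner wording is an example only:

# On each controller VM: create the MOTD script and make it executable (assumes update-motd semantics)
cat <<'EOF' > /etc/update-motd.d/00-springpath-motd
#!/bin/sh
echo "WARNING: Authorized use only. Activity on this system is monitored and logged."
EOF
chmod +x /etc/update-motd.d/00-springpath-motd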

The STCLI and HXCLI security subset of commands enables the administrator to configure external machines to access the datastore, configure the root password, synchronize SSH keys across the nodes or enable, set, and disable encryption. Access should not be granted unless the external system is trusted. Access should be revoked when move/copy/migration operations are complete.


STCLI/HXCLI security usage: stcli/hxcli security [-h] {password,whitelist,ssh,encryption} ...
NOTE: There are significant changes to the CLI starting with HX 4.5.1(a), described below.
Secure Admin Shell Access (HXDP 4.5.1(a) and above)
With the release of HXDP 4.5.1(a) a significant change in security posture has been introduced with respect to the CLI and root users. Users are no longer able to access the root account on a controller VM via the normal routes. Users are also placed into a restricted access shell upon SSH to the system as admin. This restricted admin shell has the following characteristics:
· Removes root access over SSH to the command line of controller VMs via management interfaces
· Command line access to the controller VM must authenticate as the admin user
· Reduces attack surface by restricting the admin user to executing only allowed commands that cannot manipulate the system
· Restricts changes to the controller VMs to HXDP upgrades/updates only
· A full list of commands in the restricted admin shell can be listed by typing "?" or "help" and pressing Enter from the admin shell command line
· Base commands are limited and do not allow executables, downloads, or modification of system files
· Some scripts are allowlisted and available via "priv"
  o E.g., hx_post_install
  o A full list of commands in priv can be listed by typing "priv" and pressing Enter from the admin shell
· root access is available only for troubleshooting, by su to root from within the secure shell after a customer-initiated Consent Token challenge-response with TAC
Access to the root account is only available once a specific challenge-response workflow has been completed. Only your TAC representative can generate the proper response token. Only this token can grant access, and only within the timeframe specified by the user during the initiation of the consent token workflow.
Before contacting TAC, be sure you understand the consent token workflow:
· The consent token challenge-response takes place from the CLI on the CIP-M node by typing "su" once you are logged in via SSH as admin.
· Be sure to have access to TAC.
· A successful challenge/response performs a background sync of the token on all nodes.
· Verify with HX Connect: System Information --> Nodes tab --> columns showing admin/root status.
· If a node cannot sync, that node will not have the token.
· If a node does not have the token, commands will not take effect on that node from the CIP-M root shell.
· If a node doesn't have the consent token, you need to reproduce the consent token workflow on that node.


You will have root access on the system for the duration entered in the time-period question answered during the workflow. In the screen above, that time is 10 minutes. During this time interval you can log out and log back in to the CLI as admin and su to root without a challenge, using the hard root password set at installation. Once this time interval expires, you will have to re-run the challenge/response with a new set of tokens to regain access.
REST APIs
Cisco HXDP comes with a comprehensive REST based API for use in developing custom software that can access the system. The built-in REST API:
· Contains well-documented syntax and examples with the REST API explorer
· Secure token-based access with RBAC and auditing
· Accessed via: http://<Cluster-IP>/apiexplorer


REST APIs can be used to authenticate users and grant or validate access tokens. AAA provides role-based access control to REST APIs which allows users to perform various operations on resources in a cluster. These APIs are supported in version v1 of the AAA REST API and are subject to change and deprecation in future versions.
A rate limit is enforced on the /auth API in a 15-minute window: /auth can be invoked (successfully) a maximum of 5 times. A user is allowed to create a maximum of 8 unrevoked tokens; a subsequent call to /auth will automatically revoke the oldest issued token to make room for the new token. A maximum of 16 unrevoked tokens can be present in the system. In order to prevent brute-force attacks, after 10 consecutive failed authentication attempts, a user account is locked for a period of 120 seconds. Access tokens issued are valid for 18 days (1,555,200 seconds).
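As an illustration, a token can be requested from the /auth endpoint (the same endpoint that appears in the audit log example later in this guide) using curl. The cluster IP, credentials, and request body below are placeholders; consult the API explorer for the authoritative request schema:

# Request an access token (values and body schema are assumptions; see the API explorer)
curl -k -X POST "https://10.a.b.c/aaa/v1/auth?grant_type=password" \
  -H "Content-Type: application/json" \
  -d '{"username": "<vc-or-local-user>", "password": "<password>"}'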
Starting with HXDP 4.0.1a, a subset of the REST API entries are reserved for STIG specific functions. The following is a list of the current set:
· configure_stig_parameters
· configure_stig_parameters_host
· configure_stig_parameters_vm
· configure_stig_parameters_vCenter
· remove_stig_parameters
· check_stig_parameters
Examine the API explorer discussed above for more detailed explanations of the values passed and returned by these STIG API calls.

AAA Domains
Authentication, authorization and accounting (AAA) is managed by HX depending on the access method. HX Data Platform supports Role-Based Access Control (RBAC). AAA is implemented with Open Authorization (OAuth), Security Assertion Markup Language (SAML), or Lightweight Directory Access Protocol (LDAP). It is integrated with the ESX cluster authentication mechanism. HX Connect and the STCLI primarily use this database for user authentication. Access to HX Connect or the STCLI is also available using a local admin account in the event that vCenter is unavailable. Beginning in HX 3.5(1a) the local root user is no longer available for HX Connect logins.
vCenter
vCenter maintains a set of user accounts and roles in a database. vCenter itself can be integrated with an external AD or LDAP user management system. HX RBAC integrates directly to this mechanism. See the HX RBAC documentation for configuration steps.
AD Integration
You can join a Platform Services Controller appliance or a vCenter Server Appliance with an embedded Platform Services Controller to an Active Directory domain and attach the users and groups from this Active Directory domain to your vCenter Single Sign-On domain.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vcsa.doc/GUID-08EA2F92-78A7-4EFF-880E-2B63ACC962F3.html

User Management
RBAC settings configure users with one or more roles. Roles are assigned privileges to act on a resource. For example, one role has a privilege to perform virtual machine power-on, while another role has a privilege only to monitor a virtual machine. Users are created through vCenter. vCenter supports Active Directory (AD) users and groups. Two roles are supported with HX. Privileges associated with these roles cannot be modified.
· Administrator:
  o Most tasks that can be performed on an HX Storage Cluster require administrator privileges.
  o Administrative users grant privileges to the roles.
  o Administrator users have access to the HX Data Platform interfaces: HX Connect, HX Data Platform Plug-in, the Storage Controller VM command line for running STCLI commands, and HX Data Platform REST APIs.
· Read-only:
  o This role allows users to monitor status and summary information through HX Connect and the HX Data Platform Plug-in.
  o Read-only users have access to the HX Data Platform interfaces: HX Connect and the HX Data Platform Plug-in.
Cisco HyperFlex User Overview
The user types (updated for HX 3.5(1a)) allowed to perform actions on or view content in the HX Data Platform include:
· admin: A predefined user included with HX Data Platform. The password is set during HX cluster creation. The same password is applied to root. This user has read and modify permissions.
· root: A predefined user included with HX Data Platform. The password is set during HX cluster creation. The same password is applied to admin. This user has read and modify permissions.
· administrator: A created HX Data Platform user. This user is created through vCenter and assigned the RBAC role administrator. This user has read and modify permissions. The password is set during user creation.
· read-only: A created HX Data Platform user. This user is created through vCenter and assigned the RBAC role read-only. This user only has read permissions. The password is set during user creation.
Read-only users can use the REST APIs but can perform only GET operations; PUT, POST, or DELETE operations return an access error. This behavior is present from 2.6 and 3.0 onwards for VMware clusters. The non-read-only users are called admin users; these are CVM users and users belonging to the administrator group in vCenter.
Local Users
The main cluster local user is "admin". The cluster maintains a separate administrative account called root that is created at install time. This root user has full privileges to the system and hard passwords are enforced during creation.
Creating other users is not supported on the system. Only the admin user can SSH to the system. Vulnerability scanning is not a sanctioned use of the root shell, and creation of new user accounts for the sake of remote access for scans is not supported. Root is a protected account and should never be used. Use of root starting in 4.5.1a is strictly forbidden except when initiated by a TAC support case using a tokenized workflow.
UI Users
Create new users for HX using vCenter with roles. This applies to the HX vCenter Plug-in and HX Connect UIs.
1. Log into the GUI plug-in for the cluster and select Administration under the Home icon.


2. Under Single Sign-On, add the user.


CLI Users
The "admin" user is the only supported CLI user under normal operations.
To set or change the password for the local node admin user:
· Change the admin password in pre-4.5.1a releases using passwd admin from root
· Change the admin password in 4.5.1a or greater using hxcli security password set
· Change it on all nodes for consistency
To set or change the vCenter-maintained administrative user password for UI users:
· Log in to vCenter
· Select Administration > Users and Groups
· Edit the password for the user

Auditing, Logging, Support Bundles
An audit trail, maintained in a set of audit logs, is a security-relevant chronological set of records that provides documentary evidence of the sequence of activities that have affected the system. The logs contain records of system changes whenever a specific operation, procedure, or event occurs. A full set of logs for the entire system can be gathered with a support bundle. However, STCLI and REST commands are recorded continuously and can be examined by looking at just a few files instead of generating a comprehensive log dump. STCLI commands use the REST architecture to execute their commands, so they are also captured in the REST log. These audit records are maintained on each node of the system and are contained primarily in the following files in the /var/log/springpath directory on each node:
· stMgr.log · audit-rest.log
Additional information relevant to an audit may be found here as well: · admin.log · hxcli.log

Auditing is required for compliance purposes and for forensic examination of system activity. A typical audit-rest.log entry will look like this:
2017-06-29-23:26:38.096 - Audit - 127.0.0.1 -> 127.0.0.1 ­ create /aaa/v1/auth?grant_type=password; 201; null 3341ms
· Timestamp - source IP - dest IP - HTTP API method - URL retrieved - response code (200 success, 4xx error) - user issued - response time
What sources are captured:
· GUI / REST API auditing: any calls to REST
  o A method to audit UI usage as well as 3rd-party integrated software
  o /var/log/springpath/audit-rest.log
· STCLI (RBAC) auditing
  o STCLI calls utilize the API
· Audit trail records will have the keyword "Audit"
· Collect all such audit trail records and save them (see the example after this list)
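Gathering those records with standard tools is straightforward, for example:

# Extract all audit trail records from the REST audit log on a node
grep "Audit" /var/log/springpath/audit-rest.log > /tmp/audit-records.txt
# Quick sanity check on the number of records collected
wc -l /tmp/audit-records.txt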


The cluster root user or a node root user can manipulate the audit logs. Read-only users or any other RBAC user account cannot alter the log files.
Replication log files that can be used for auditing traffic or general troubleshooting are listed below:
/var/log/springpath/nrcli.log
/var/log/springpath/debug-repl-cipmonitor.log
/var/log/springpath/nr-stat-history.log
/var/log/springpath/user-replsvc.log
/var/log/springpath/error-replsvc.log
/var/log/springpath/replicationNetworkConfig.log
Support Bundles for the HX system can be generated in two ways. There are menu interfaces to generate them in both HX Connect and the vCenter Plug-in UI.
1. Generating the Support Bundle using vCenter:


2. Similarly, the Support Bundle can be generated in HX Connect:

If Auto Support has not been configured during install, be sure to configure it now. This can be done via HX Connect (see the Support menu in the illustration above) or via STCLI/HXCLI using the stcli services asup commands. Auto Support enables:
· HTTP-based auto-support data collection for proactive case creation
  o Continuous monitoring through auto-support for 30+ events to detect problems early
  o Critical events integrated with Smart Call Home
  o Auto-generated SRs (tickets)
· Email notifications for critical events
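A hedged sketch of checking Auto Support from the CLI (verify the exact subcommands with -h on your release):

# Show the current Auto Support configuration:
stcli services asup show
# List the available Auto Support subcommands:
stcli services asup -h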
The log bundle in vCenter includes the plug-in log file and is located in the regular vCenter log location. For Windows the default is C:\ProgramData\VMware\vCenterServer\logs. For VCSA (vCenter Server Appliance) the default log location is /var/log/vmware.
Setting Up Remote Logging for HX Prior to HXDP 4.0.1a
The various audit logs are available for each node and specific to that node. The CIP-M maintains logs for the local node and any command access to this node via the CIP-M. If you need to consolidate the logs for all components for simplified auditing, this can be done with syslog. You will need to build a syslog infrastructure with a syslog server at (preferably) an external location that each node can access. You will then configure syslog-ng on each HXDP node, rsyslog.d on each ESXi node, and finally the syslog destination on UCSM. These will each act as syslog clients with the remote syslog server as the destination.
For syslog-ng configuration see this documentation and check the references section at the end of this paper: https://syslog-ng.com/open-source-log-management
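Once the remote server is built, a quick hedged connectivity test from a client node (the server name and port are placeholders; the -n/-P/-T options are from the util-linux logger and assumed available):

logger -n syslog.example.com -P 514 -T "HX remote syslog connectivity test"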

Setting Up Remote Logging for HXDP 4.0.1a and Later
Beginning with HXDP 4.0.1a, HyperFlex includes a built-in syslog mechanism. Using the HX Connect Remote Logging wizard, you are able to enter the information required for each HX Cluster node to send its audit log records to a centralized remote server. You are required to have a remote log collection server built and accessible by the management interfaces on each cluster node. See Appendix E for a sample syslog-ng configuration file used for an Ubuntu-based log collector.

By clicking the gear symbol in the top right of the HX Connect UI, you can select Remote Logging. You will be presented with the following:


This is the default configuration: unencrypted transport over TCP using port 6514. The port is configurable; however, you must use either plain-text or encrypted transport. If you select the drop-down, you are given the option to change the connection type to TLS (encrypted).


The wizard then prompts you to upload a client certificate and key pair. This certificate and key are used by each node to communicate securely with the remote log collection server. If the certificate is CA-signed, it does not need to be uploaded.
If using self-signed certificates on the remote server, place them under /etc/syslog-ng/CA. Regardless of connection type selected, the system will attempt to connect to the remote server immediately. The server certificate will automatically be placed into the trusted certificate store on the syslog-ng client nodes.
The following syntax can be used with openssl to generate a self-signed certificate and key for both the client system and the remote server in the absence of a CA certificate:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
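You can then sanity-check the generated certificate before distributing it, for example:

openssl x509 -in cert.pem -noout -subject -issuer -dates -fingerprint -sha256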
Password Requirements
Hard passwords are required for the cluster root user during installation. This password can be updated later using the CLI from any node: SSH to the node and issue stcli security password set.
Passwords for users maintained in the vCenter authentication database can have password difficulty set based on vCenter configuration. See your vCenter documentation for this.


Local node users created using "useradd" are subject to warnings based on the password settings in /etc/pam.d/common-password; however, since only root users can be created locally and root can bypass these warnings, it is not recommended to create local node root users.

Password Guidelines
The storage controller VM passwords for the predefined users admin and root are specified during HX Installer deployment. After installation, you can change passwords through the stcli command line.

| Component | Permission Level | Username | Password | Notes |
|-----------|------------------|----------|----------|-------|
| HX Data Platform OVA | root | root | Cisco123 | |
| HX Data Platform Installer VM | root | root | Cisco123 | |
| HX Connect | administrator or read-only | User defined through vCenter. | User defined through vCenter. Strong password required. | |
| HX Connect | Predefined admin or root users. | admin or root | As specified during HX installation. | Requires leading local/ for login: local/admin or local/root. |
| HX Storage Controller VM | admin | admin (user defined during HX installation) | As specified during HX installation. Strong password required. | Should match across all nodes in the storage cluster. In 4.5.1a+ use the hxcli security password set command. |
| HX Storage Controller VM | root | root (user defined during HX installation) | As specified during HX installation. Strong password required. | Must match across all nodes in the storage cluster. Use the stcli command when changing the password after installation. |
| vCenter | admin | administrator@vsphere.local by default; as configured, MYDOMAIN\name or name@mydomain.com | As configured. SSO enabled. | Ensure the vCenter credentials meet the vSphere 5.5 requirements if the ESX servers are at version 5.5. Read-only users do not have access to the HX Data Platform plug-in. |
| ESX Server Hypervisor | root | root | As specified during HX installation. | Must match across all ESX servers in the storage cluster. Use vCenter or the esxcli command when changing the password after HX installation. |
| UCS Manager | admin | admin | As configured. | |
| FI | admin | admin | As configured. | |

Session Timeouts
vCenter session timeouts are managed by vCenter configuration settings. Idle timeouts for TLS connections when using the HyperFlex plug-in are set as follows:

· vSphere Web Client sessions terminate after 120 minutes by default. You can change this default in the webclient.properties file, as discussed in the vCenter Server and Host Management documentation.
· Log in to the vCenter host system and navigate to this properties file. The location of this file depends on the base operating system on which vCenter is installed.
· Edit the file to include the line session.timeout = value, where value is the timeout in minutes. For example, to set the timeout to 60 minutes, include the line session.timeout = 60.
· Restart the service.

Alternatively:
· In the vSphere Web Client, navigate to the vCenter Server instance.
· Select the Manage tab.
· Under Settings, select General.
· Click Edit.
· Select Timeout settings.
· In Normal operations, type the timeout interval in seconds for normal operations.
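For a VCSA deployment, a hedged sketch of the file-based method (the path and service name are typical for vSphere 6.x and should be verified against your release):

# Set a 60-minute idle timeout in the web client properties file:
echo "session.timeout = 60" >> /etc/vmware/vsphere-client/webclient.properties
# Restart the vSphere Web Client service:
service-control --stop vsphere-client && service-control --start vsphere-client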

HX Connect
Idle session timeouts for HX Connect sessions can be set in the dashboard view under the administrative icon.

CLI
Prior to 4.5.1a, the idle timeout for an STCLI session can be set on each HXDP CVM. SSH to each node and edit /etc/ssh/sshd_config. Uncomment the ClientAliveInterval entry and set a time, then restart the sshd service once editing is complete. For example, ClientAliveInterval 60 drops the connection after 60 seconds of inactivity.
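A minimal sketch of that edit on a CVM (the sed invocation is illustrative; you can also edit the file by hand):

# Set a 60-second idle timeout in /etc/ssh/sshd_config:
sed -i 's/^#\?ClientAliveInterval.*/ClientAliveInterval 60/' /etc/ssh/sshd_config
# Restart sshd so the change takes effect:
service ssh restart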

For deployments of HX that are 4.5.1a or later, setting the SSH timeout for the admin shell requires editing the following file: /etc/lshell.conf

This file is accessible via the root shell only. The admin user will need to complete the token challenge-response workflow in order to edit the file. Once you can edit the file, change the last line by specifying the number of seconds before session termination:

# 6 hours of inactivity will kill the admin session
timer : 21600

Manually Clearing Sessions
To clear session data associated with a given user immediately, run this command from any CVM while logged in as root:
> python /opt/springpath/clearsession.py root
Users can run into this situation if they close the browser session without logging out.


HX Platform Hardening
This section provides information on setting specific configurations for HX, ESX and UCS to further enhance system security.
US Federal STIG (Security Technical Implementation Guide) Settings
Cisco implements relevant Security Technical Implementation Guide (STIG) settings as defined by the Defense Information Systems Agency (DISA) for several aspects of the HXDP ecosystem. STIG adherence is accomplished by implementing the settings explicitly called out in the following DISA STIGs:
· U_General_Purpose_Operating_System_V1R4_SRG
· U_VMware_vSphere_6-0_ESXi_V1R4_STIG
· U_VMware_vSphere_6-0_vCenter_Server_for_Windows_V1R4_STIG
· U_VMware_vSphere_6-0_Virtual_Machine_V1R1_STIG
These can be found and downloaded from the Federal DISA site. The STIG settings are automated via script and are generally available starting in HXDP v3.5. Note that DISA STIGs are dynamic and updated frequently, so you can expect this list to change; the corresponding automation in HXDP will follow suit. Some of these settings, while desirable for secure daily operation, have potential repercussions for cluster upgrade and expansion, so some settings may need to be temporarily disabled to accommodate changes of this nature. See the administration guide for your version of HXDP for instructions on running the STIG automation and caveats for certain cluster operations.
Some settings derived from the DISA STIG set have become the default. For example, to improve our security posture, starting with HXDP 3.0, it is now the default to set promiscuous mode, forged transmits, and MAC change to REJECT in ESXi. Any new cluster install on v3.0 or greater and any upgrades to v3.0 or greater should have these settings automatically applied. There are some versioning caveats, however.
Do not manually set these to REJECT before HXDP 3.0, as that is not compatible with management clustering; upgrade to HXDP 3.0 or later if these settings are required. The lesson here is to verify your version with respect to the STIG settings being applied: settings that ship with a specific version have been thoroughly tested with that version.
A technote on the STIG settings can be found here:
https://www.cisco.com/c/en/us/td/docs/hyperconverged_systems/HyperFlex_HX_DataPlatformSoftware/TechNotes/b_How_to_Configure_vCenter_Security_Hardening_Settings.html
Note that the STIG scripts must be run from each CVM node. SSH to each CVM management IP and change to the following path:
/usr/share/springpath/storfs-misc/hx-scripts
From this location, run the stig_security_settings.py script. Note that this script reads default configuration values from the stig_config.ini file located in the same directory. These may be edited as needed, but edited values will no longer match the vetted settings. Every setting applied by the STIG script is idempotent, so multiple executions will not adversely affect the system, and you can reset your compliance baseline at any time by re-running the script if things have changed in the interim.
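For example (invoking the script through the Python interpreter is an assumption; the script may also be directly executable):

cd /usr/share/springpath/storfs-misc/hx-scripts
python stig_security_settings.py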


Beginning with HXDP 4.0.1a, a subset of the REST API contains STIG related entries. See the section on REST APIs above.
Starting with HXDP version 4.5.1a, there is no access to the root user without the tokenized workflow initiated by TAC, so you are unable to simply run the STIG script. You can, however, still activate and reset the STIG settings from the command line (Secure Admin Shell) using a curl call to the HX API.
Here's the general format: https://developer.cisco.com/docs/ucs-dev-center-hyperflex/#!api-documentation-format/try-it-out
Location of STIG APIs: https://developer.cisco.com/docs/ucs-dev-center-hyperflex/#!support-service
To apply the STIG settings, invoke this API using curl: configure_stig_parameters
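A hedged sketch of such a call (the URI paths, payload, and token handling are placeholders; consult the API documentation linked above for the exact request format):

# Authenticate first; the /aaa/v1/auth endpoint appears in the audit log example earlier in this guide:
curl -k -X POST "https://<cluster-mgmt-ip>/aaa/v1/auth?grant_type=password" -d 'username=<user>&password=<password>'
# Then invoke the STIG API with the returned token (the path shown is hypothetical):
curl -k -X POST "https://<cluster-mgmt-ip>/<stig-api-path>/configure_stig_parameters" -H "Authorization: Bearer <token>"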
STIG settings for DISA STIG ESXi v6.5 and 6.7 are currently under evaluation and will be added to the script in a future release.
SSL Certificate Replacement
During Cisco HyperFlex deployment, a set of local certificates is generated between the components to allow for trusted communication. Many organizations have their own certificate authority already in place, and it is recommended that you replace the default SSL certificates with your own. The following link describes how to replace the certificates in a pre-HXDP 4.0.1a system: https://www.cisco.com/c/en/us/support/docs/hyperconverged-infrastructure/hyperflex-hx-data-platform/213847-replace-hyperflex-self-signed-ssl-certif.html. Starting with HXDP 4.0.1a, this process has been automated using scripts; see Appendix F for a full treatment of certificate management.
Secure Boot
Beginning with HXDP 4.5.1(a), HX supports an option for secure boot through a setting in the UCS UEFI BIOS that is enabled in the HX Connect UI under the Upgrade tab. It is recommended that you use this operation to automate enabling UEFI secure boot on all nodes in the HyperFlex cluster. Once UEFI secure boot is enabled, the operation cannot be reversed from within HX Connect. Changing boot settings manually on UCS servers managed by HyperFlex is not recommended. Currently, the secure boot process, when enabled, covers boot up to and including the hypervisor, but not the control VM; CVM secure boot will be released in a future version. The end-to-end security model this enables, when combined with the secure admin shell, extends from the hardware trust anchor, to secure hypervisor boot, to (eventual) CVM secure boot, with access protected by a secure appliance shell. This is all externally verifiable using attestation with vCenter.


The current implementation covers the following:
· Hypervisor secure boot, secured by public keys stored in the write-protected hardware root of trust
· Ensures only a trusted HX ESXi image, including drivers, is booted by verifying signatures
· Supports attestation of secure boot of ESXi by vCenter (requires min. ESXi 6.7 and TPM 2.0)
The detailed process flow for secure boot of the hypervisor and attestation capability is shown below. Note that the certificate-based hardware root of trust validates the UCS firmware which ensures a clean BIOS that is set for key validation of the hypervisor bootloader and so on. This guarantees that the hardware and hypervisor in the HX system have not been tampered with. External validation of this can be made through attestation with vCenter using the TPM 2.0 module in UCS.


The Cisco "HW Root of Trust" ensures secure boot by enabling a trusted hardware module. Cisco IMC secure boot is handled via the hardware (HW) Root of Trust: immutable keys are embedded in write-protected devices on every UCS server. Additionally, system BIOS secure boot is also encoded at manufacturing, and Cisco validates both firmware and BIOS via HW Root of Trust measures. Cisco also employs anti-counterfeit measures to ensure the physical hardware is authentic and signed by Cisco.

The HW Root of Trust is a Cisco ACT2 Trust Anchor Module (TAM). This module has the following characteristics:


· Immutable Identity with IEEE 802.1AR (Secure UDI - X.509 cert)
· Anti-Theft & Anti-Counterfeiting
· Built-In Cryptographic Functions
· Secure Storage for Certificates and Objects
· Certifiable NIST SP800-92 Random Number Generation
Once a system is securely booted, it is often important to get external verification that this is indeed the case. This is done through attestation. "Attestation" is evidence of a result, i.e., "The host was booted with secure boot enabled and signed code". Host Secure Boot Assurance via Attestation:
· Requires minimum ESXi 6.7 and Trusted Platform Module (TPM) 2.0
· The TPM stores platform measurements of a known good boot; vCenter compares the current boot against the values stored in the TPM
· For more information, see the link in the references at the end of this paper.
Secure boot is enabled in the HX Connect UI under the Upgrade tab.

Select the Secure Boot Mode checkbox to change the boot mode of the ESXi hosts from Legacy to UEFI Secure Boot, anchored to the Cisco hardware root of trust on the Cisco Integrated Management Controller (CIMC). After Secure Boot is enabled, it cannot be disabled. Changing the boot mode causes a rolling reboot of each ESXi host in the HX cluster, so a maintenance window is recommended for this operation. Enabling Secure Boot is a one-time operation and cannot be combined with other upgrade workflows.

SSL Certificate Thumbprint (Hash) and Signatures
Prior to HXDP 3.5.2g, the SSL thumbprint used on all HX certificates was a SHA1 hash. SHA1 has a rare chance (first demonstrated only a few years ago) of a hash collision that could potentially allow two certificates to have the same thumbprint. The thumbprint is used for reference and does not represent a security threat when used with SHA1 (see Windows certificates, for example). However, beginning with HX 3.5.2g, HX uses a SHA256 hash to generate the thumbprint for enhanced uniqueness. All signatures are SHA256 and have been for some time.

Dynamic Self-Signed Certificates in HX
Prior to 4.0.2a, the self-signed certificates generated during install were the same (same private key) for all nodes (CVMs) and all clusters. These could, of course, be changed after the install on a cluster-wide basis, but often were not. These certificates had an expiration of 12/2019. From 4.0.2a onward, all certificates are dynamically generated and unique per cluster, but the same on each CVM within that cluster. Keeping the certificate the same on each CVM within a cluster ensures that whichever node holds the management CIP (which can change during a failure event) will not break secure access to HX Connect.
During a fresh install of HX, a set of self-signed certificates is generated for secure intra-cluster communication. This certificate is created when you run the post-install script. The post-install script must be run on fresh installations.
The script creates a new self-signed certificate that is unique to the cluster. This certificate is pushed to each node. Automatic re-registration of the cluster with vCenter using the new certificate is performed. The post-install script can be re-run later to generate a new certificate if desired. This script is located in each CVM at the following path:
/usr/share/springpath/storfs-misc/hx-scripts/post_install.sh
Dynamic self-signed certificates created using the post-install script use the "admin" account and password when prompted.
Upon upgrade of HXDP from pre-4.0.2a to 4.0.2a, and if self-signed certificates are used, the upgrade regenerates a unique certificate per cluster dynamically, and installs it on each CVM.
It is recommended to replace your self-signed certificates with CA signed certificates to improve your security posture.

UCSM Certificate Management
Setting UCSM certificates is covered in the UCSM management guide here:

https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-infrastructure-ucs-manager-software/213523-creating-and-using-3rd-party-certificate.html

This can be automated using the UCSM API references for larger deployments.

Setting the CIMC/KVM certificate is a separate procedure. This can be conducted through the UCSM UI as described here:

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ucs-manager/GUI-User-Guides/Admin-Management/4-0/b_Cisco_UCS_Admin_Mgmt_Guide_4-0/b_Cisco_UCS_Admin_Mgmt_Guide_4-0_chapter_0110.html#task_gxr_3qq_ncb
It is also possible to automate this using the XML API for IMC. See the reference here: https://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-c-series-integrated-management-controller/products-programming-reference-guides-list.html. When using these guides for the HX CIMC/KVM, be sure to use the C-series references.
HX and Perfect Forward Secrecy (PFS)
Perfect Forward Secrecy (PFS) is a mechanism that prevents the compromise of a private key from exposing a larger set of historical or future communications. To understand the problem, first review how the TLS handshake works.

Source: https://blogs.msdn.microsoft.com/kaushal/2013/08/02/ssl-handshake-and-https-bindings-on-iis/

As the diagram shows, the server's private key is used to encrypt the pre-master secret. The pre-master secret is used to derive the symmetric key that encrypts and decrypts messages in the TLS session. This dependency on the server private key means that if an attacker is somehow able to get hold of the private key on the server, they will be able to derive the session key. If the attacker has also been eavesdropping and recording the HTTPS traffic, they can then decrypt and read any past message, and even future messages if the private key isn't changed.

The solution is to disable RSA-based key exchange (see the next two sections) in favor of newer ephemeral (elliptic-curve) Diffie-Hellman key exchange methods. These methods provide Perfect Forward Secrecy.


Source: https://security.stackexchange.com/questions/45963/diffie-hellman-key-exchange-in-plain-english
In this architecture, new key-exchange parameters are used for each session, and private keys are not used to derive the shared (session) key.
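To check whether an endpoint negotiates an ephemeral (PFS) key exchange, a quick hedged test from any host with openssl (the address is a placeholder):

# If the handshake succeeds with only ECDHE ciphers offered, PFS is in use:
openssl s_client -connect <cvm-mgmt-ip>:443 -cipher 'ECDHE' </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'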

TLS Weak Protocol Disable
Some environments require a subset of the default TLS ciphers to be disabled. The procedure to do this must be performed on each controller VM.

On each controller VM, edit /etc/nginx/conf.d/springpath.conf and change the line starting with ssl_protocols:

Comment out the existing one, and replace with the new one as shown below:

#ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_protocols TLSv1.2;

Save the file and exit your editor (vi). Restart nginx using service nginx restart from the CVM CLI.
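A quick hedged verification that the old protocol versions are now refused (run from any host whose openssl build still supports TLS 1.1):

# Should now fail with a handshake error:
openssl s_client -connect <cvm-mgmt-ip>:443 -tls1_1 </dev/null
# Should still succeed:
openssl s_client -connect <cvm-mgmt-ip>:443 -tls1_2 </dev/null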

Strict TLS 1.2 support cannot be enabled prior to HXDP 3.0; ecosystem interop issues between UCSM, HXDP, vCenter, and HXDP DR functionality prevent it. Starting with HXDP 3.0, all components across the board are strict TLS 1.2 implementations. All of 3.0 and above is strictly TLS 1.2, including new provisioning and upgrades from 2.x to 3.x. From 3.0 onward, HXDP does not support anything less than TLS 1.2.

TLS Weak Cipher Disable
To remove weak ciphers, edit the /etc/nginx/conf.d/springpath.conf file and change the following section (the ssl_protocols and ssl_ciphers lines):

server
{
    ### server port and name ###
    listen 443;
    ...
    ...
    ...
    ### Add SSL specific settings here ###
    ### Disable SSLv3 & RC4 cipher, to suppress POODLE & BEAST attacks
    ssl_protocols TLSv1.2;
    ssl_ciphers !AES256-SHA:!ECDHE-RSA-AES256-SHA:!ECDHE-RSA-DES-CBC3-SHA:!DES-CBC3-SHA:!ECDHE-RSA-AES128-SHA:!AES128-SHA:!aNULL:!eNULL:FIPS@STRENGTH:!RSA;

Note: append !RSA at the end.
Save the file and exit your editor (vi). Restart nginx using service nginx restart from the CVM CLI.
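You can preview which ciphers that string leaves enabled with openssl (output depends on your openssl build):

openssl ciphers -v '!AES256-SHA:!ECDHE-RSA-AES256-SHA:!ECDHE-RSA-DES-CBC3-SHA:!DES-CBC3-SHA:!ECDHE-RSA-AES128-SHA:!AES128-SHA:!aNULL:!eNULL:FIPS@STRENGTH:!RSA'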
SSH (ESX) Lockdown Mode and Root Logins
ESX SSH lockdown mode can be enabled on each ESX node of the HX cluster. This applies only to a post-install system. SSH traffic must not be blocked during install. Lockdown of SSH for ESXi is supported in HXDP 2.5 and above. The following constraints apply to the deactivation of remote SSH access to the system for versions prior to 3.5(1a):
1. HX snapshots for VMs are disabled (redo-log based snapshots still function).
2. The source VM for a ReadyClone operation must remain powered off during the cloning operation. Once the operation is complete, the source VM can be powered back on. Clones themselves are unaffected.
3. System upgrades are disabled until SSH is re-enabled.
SSH needs to be enabled before cluster expansion can take place. It can be disabled again afterwards.
In HXDP 3.0 and above, snapshots and native replication do not use SSH to interface with ESXi; i.e., neither "root" nor "hxuser" based SSH logins are performed. With respect to logins to hostd (ESXi), for vSphere API access, only "hxuser" is used. The hxuser password is randomly generated on cluster creation during setup. All nodes within the same cluster have the same hxuser password. This password is used in several workflows and upgrades. Changing this password isn't supported without cluster re-creation. Root login is only used during cluster creation, node expansion, and initial installation.


Lockdown Mode in HX 3.5(1a) is either Disabled, Normal, or Strict. When Lockdown is enabled, the ESXi host can only be accessed through the vCenter server or the Direct Console User Interface (DCUI). Enabling Lockdown mode affects which users are authorized to access host services. Once Lockdown mode is enabled, if root, administrator@vsphere.local, or any other user is not part of the Exception user list, SSH to that ESXi host is not allowed. Similarly, if the host has been removed from vCenter for some reason, adding the host back to vCenter is not allowed. Here is an overview of the features:
· Lockdown Mode exists in three states:
  o Disabled: can SSH to the host
  o Normal: can connect through the DCUI or VC
  o Strict: can connect only using VC
· Upgrade checks whether Lockdown Mode is enabled
  o If enabled, the user is prompted to disable it for the upgrade to proceed
· Upgrade will not proceed even in Normal Lockdown mode
Normal and Strict modes have additional differing behaviors and exceptions. For a comprehensive examination of system behavior in each mode and for Lockdown troubleshooting guidelines, see the HyperFlex Installation Guide.

Tech Support Mode
Available starting in HX 3.5(1a), Tech Support Mode, also called "Controller Access Over SSH", is specifically designed to allow for CVM troubleshooting.
· Tech Support Mode is enabled by default
  o Allows SSH access to the CVM management interface
· Tech Support Mode can be disabled
  o SSH to the CVM management IP is disallowed
· The status of Tech Support Mode is listed in the status banner at the top of System Information in HX Connect
· If Tech Support Mode is disabled, the user will be prompted to enable it for upgrades to proceed

Third Party Software Execution on FIs and HXDP
Cisco does not support the installation of 3rd-party software on either Fabric Interconnects (FIs) or HXDP nodes (ESXi or the HX CVM). For FIs, external software is not supported by virtue of the UCSM kernel-space management shell; it is not possible to load or run any applications. For HXDP, Cisco does not recommend or support the installation and/or execution of 3rd-party applications. In the current release (4.0.x), it is recommended that you use HXDP's Tech Support Mode along with ESXi's Lockdown Mode at the same time in order to safeguard against accidental or malicious attempts to run external applications on the HX CVM or the node hypervisor. In a future release, HXDP will have a kernel-space shell, making this precaution redundant.

Whitelisting and other STCLI Security Commands
The HX datastores are a protected resource only mountable by HX nodes participating in the cluster (either by installation or by expansion). These protected datastore(s) cannot be mounted by other systems unless they are whitelisted. To whitelist a system for the cluster, ssh to a node and use the stcli security whitelist commands:
Remove systems from the list when not in immediate use.


root@SpringpathControllerEWA35H09RF:~# stcli security
usage: stcli security [-h] {password,whitelist,ssh,encryption} ...
root@SpringpathControllerEWA35H09RF:~# stcli security password
usage: stcli security password [-h] {set} ...
root@SpringpathControllerEWA35H09RF:~# stcli security whitelist
usage: stcli security whitelist [-h] {list,add,remove,clear} ...
root@SpringpathControllerEWA35H09RF:~# stcli security ssh
usage: stcli security ssh [-h] {resync} ...
root@SpringpathControllerEWA35H09RF:~# stcli security encryption
usage: stcli security encryption [-h] {ucsm-ro-user} ...
root@SpringpathControllerEWA35H09RF:~#
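A hedged usage sketch (the argument form for add/remove is an assumption; confirm it with the -h help shown above):

# List current whitelist entries:
stcli security whitelist list
# Add a host, and remove it again when it is no longer needed:
stcli security whitelist add --ip 192.0.2.10
stcli security whitelist remove --ip 192.0.2.10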

HX Data Platform Firewalling: IP Tables
Each HXDP node maintains a set of IP Tables firewall entries. These explicitly define the traffic that is allowed in and out of the HXDP node. The table is maintained automatically and should not need to be edited. The entries are listed for reference below. They are also automatically updated when HX Native Replication is enabled so that cluster-to-cluster traffic is permitted.

root@ucs-stctlvm-137-1:~# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -d 10.a.b.c/32 -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -d 10.a.b.d/32 -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -d 10.a.b.e/32 -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -d 10.a.b.c/32 -i eth0 -p tcp -m tcp --dport 8888 -j ACCEPT
-A INPUT -d 10.a.b.d/32 -i eth0 -p tcp -m tcp --dport 8888 -j ACCEPT
-A INPUT -d 10.a.b.e/32 -i eth1 -p tcp -m tcp --dport 8888 -j ACCEPT
-A INPUT -d 10.a.b.c/32 -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -d 10.a.b.d/32 -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -d 10.a.b.e/32 -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -d 10.a.b.c/32 -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -d 10.a.b.d/32 -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -d 10.a.b.e/32 -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -d 10.a.b.c/32 -i eth0 -p tcp -m tcp --dport 123 -j ACCEPT
-A INPUT -d 10.a.b.d/32 -i eth0 -p tcp -m tcp --dport 123 -j ACCEPT
-A INPUT -d 10.a.b.e/32 -i eth1 -p tcp -m tcp --dport 123 -j ACCEPT
-A INPUT -d 10.a.b.c/32 -i eth0 -p udp -m udp --dport 427 -j ACCEPT
-A INPUT -d 10.a.b.d/32 -i eth0 -p udp -m udp --dport 427 -j ACCEPT
-A INPUT -d 10.a.b.e/32 -i eth1 -p udp -m udp --dport 427 -j ACCEPT
-A INPUT -d 10.a.b.c/32 -i eth0 -p udp -m udp --dport 8125 -j ACCEPT
-A INPUT -d 10.a.b.d/32 -i eth0 -p udp -m udp --dport 8125 -j ACCEPT
-A INPUT -d 10.a.b.e/32 -i eth1 -p udp -m udp --dport 8125 -j ACCEPT
-A INPUT -s 10.a.b.g/32 -d 10.a.b.l/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.g/32 -d 10.a.b.e/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.f/32 -d 10.a.b.l/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.f/32 -d 10.a.b.e/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.h/32 -d 10.a.b.l/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.h/32 -d 10.a.b.e/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.i/32 -d 10.a.b.l/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.i/32 -d 10.a.b.e/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.j/32 -d 10.a.b.l/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.j/32 -d 10.a.b.e/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.k/32 -d 10.a.b.l/32 -i eth1 -j ACCEPT
-A INPUT -s 10.a.b.k/32 -d 10.a.b.e/32 -i eth1 -j ACCEPT
-A INPUT -p udp -m udp --dport 32768:65535 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -j DROP
root@ucs-stctlvm-137-1:~#
root@ucs-stctlvm-137-1:~# iptables -L
Chain INPUT (policy ACCEPT)
target   prot opt source                                 destination
ACCEPT   all  --  anywhere                               anywhere
ACCEPT   all  --  anywhere                               anywhere                               ctstate RELATED,ESTABLISHED
ACCEPT   tcp  --  anywhere                               ucs139-cip-m.eng.test-domain.com       tcp dpt:https
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137-1.eng.test-domain.com  tcp dpt:https
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137.eng.test-domain.com    tcp dpt:https
ACCEPT   tcp  --  anywhere                               ucs139-cip-m.eng.test-domain.com       tcp dpt:8888
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137-1.eng.test-domain.com  tcp dpt:8888
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137.eng.test-domain.com    tcp dpt:8888
ACCEPT   tcp  --  anywhere                               ucs139-cip-m.eng.test-domain.com       tcp dpt:ssh
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137-1.eng.test-domain.com  tcp dpt:ssh
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137.eng.test-domain.com    tcp dpt:ssh
ACCEPT   tcp  --  anywhere                               ucs139-cip-m.eng.test-domain.com       tcp dpt:http
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137-1.eng.test-domain.com  tcp dpt:http
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137.eng.test-domain.com    tcp dpt:http
ACCEPT   tcp  --  anywhere                               ucs139-cip-m.eng.test-domain.com       tcp dpt:ntp
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137-1.eng.test-domain.com  tcp dpt:ntp
ACCEPT   tcp  --  anywhere                               ucs-stctlvm-137.eng.test-domain.com    tcp dpt:ntp
ACCEPT   udp  --  anywhere                               ucs139-cip-m.eng.test-domain.com       udp dpt:svrloc
ACCEPT   udp  --  anywhere                               ucs-stctlvm-137-1.eng.test-domain.com  udp dpt:svrloc
ACCEPT   udp  --  anywhere                               ucs-stctlvm-137.eng.test-domain.com    udp dpt:svrloc
ACCEPT   udp  --  anywhere                               ucs139-cip-m.eng.test-domain.com       udp dpt:8125
ACCEPT   udp  --  anywhere                               ucs-stctlvm-137-1.eng.test-domain.com  udp dpt:8125
ACCEPT   udp  --  anywhere                               ucs-stctlvm-137.eng.test-domain.com    udp dpt:8125
ACCEPT   all  --  ucs-stctlvm-139.eng.test-domain.com    ucs139-cip.eng.test-domain.com
ACCEPT   all  --  ucs-stctlvm-139.eng.test-domain.com    ucs-stctlvm-137.eng.test-domain.com
ACCEPT   all  --  ucs139-v.eng.test-domain.com           ucs139-cip.eng.test-domain.com
ACCEPT   all  --  ucs139-v.eng.test-domain.com           ucs-stctlvm-137.eng.test-domain.com
ACCEPT   all  --  ucs136-v.eng.test-domain.com           ucs139-cip.eng.test-domain.com
ACCEPT   all  --  ucs136-v.eng.test-domain.com           ucs-stctlvm-137.eng.test-domain.com
ACCEPT   all  --  ucs137-v.eng.test-domain.com           ucs139-cip.eng.test-domain.com
ACCEPT   all  --  ucs137-v.eng.test-domain.com           ucs-stctlvm-137.eng.test-domain.com
ACCEPT   all  --  ucs-stctlvm-138.eng.test-domain.com    ucs139-cip.eng.test-domain.com
ACCEPT   all  --  ucs-stctlvm-138.eng.test-domain.com    ucs-stctlvm-137.eng.test-domain.com
ACCEPT   all  --  ucs138-v.eng.test-domain.com           ucs139-cip.eng.test-domain.com
ACCEPT   all  --  ucs138-v.eng.test-domain.com           ucs-stctlvm-137.eng.test-domain.com
ACCEPT   udp  --  anywhere                               anywhere                               udp dpts:32768:65535
ACCEPT   icmp --  anywhere                               anywhere
DROP     all  --  anywhere                               anywhere

Chain FORWARD (policy ACCEPT)
target   prot opt source                                 destination

Chain OUTPUT (policy ACCEPT)
target   prot opt source                                 destination

Replication
Replication setting changes are maintained globally once replication is enabled on the cluster. Firewall entries are updated for ports needed for replication (see Networking Requirements above).
When replication is enabled, a new NIC is non-disruptively added to HXDP. This NIC is assigned an IP address in a new replication VLAN. The HX Service Profile on the FI (via UCSM) is automatically updated.
Replication traffic is not encrypted on the wire by the cluster. Secure replication requires an IPsec-capable WAN connection or a trusted network. Data on the wire is always compressed, so its general appearance is not plain text.

Specific ESX Environment Hardening Settings Relevant to HXDP
See Appendix C for a set of ESX hardening configuration settings. These items are general recommendations from the UCS-verified ESX hardening guide.

Specific UCS Environment Hardening Settings Relevant to HXDP
The UCSM build used for the system must match the supported UCSM version in the preinstall checklist.
http://www.cisco.com/c/dam/en/us/td/docs/hyperconverged_systems/HyperFlex_HX_DataPlatformSoftware/HyperFlex_pr einstall_checklist/Cisco_HX_Data_Platform_Preinstallation_Checklist_form.pdf
Refer to the UCS Hardening guide specifically for settings relevant to the build you are running.

Control VM (SCVM) Customization
Control VM customization is not supported and can be problematic. You should not modify the CVM hardware settings in vCenter. For example, the USB0 NIC interface is present in the CVM and some environments will try to remove any hardware devices from the CVM that do not seem to be in use. This should not be done.

USB0 is used for SED communications but should still be left alone in non-SED deployments. The IP address assigned to usb0 is a 169.254/16 IPv4 APIPA private address and is not routable. Iptables rules are also preconfigured so that all inbound packets to USB0 from any network will be dropped. In other words, an attacker would be required to first break into the Controller VM itself and gain local access in order to exploit the usb0 interface.

It is important to reiterate that changes to the CVM configuration, like disabling this interface, should not be made. It is configured on all clusters starting with the M5 platform and, although this communication channel may not be used today except for SED, future capabilities such as software encryption may start to use it and expect it to be present. Any other devices present in the factory CVM should, similarly, not be altered.
References
ESX Hardening Guide
· ESX https://www.vmware.com/security/hardening-guides.html
UCS Hardening Guide
· UCS http://www.cisco.com/c/en/us/about/security-center/ucs-hardening.html
Cisco CSDL
· CSDL http://www.cisco.com/c/en/us/about/security-center/security-programs/secure-development-lifecycle/sdl-process.html
Syslog-ng Configuration
· Syslog-ng configuration: https://www.techrepublic.com/article/how-to-use-syslog-ng-to-collect-logs-from-remote-linux-machines/
Secure Boot Assurance Via Attestation
· VMWare Blog: https://www.yelof.com/2018/04/30/vsphere-6-7-esxi-and-tpm-2-0/


Appendix A: Networking Ports
The following table lists the ports required for component communication for the HyperFlex solution.

| Component | Service | Port | Protocol | Source | Destination | Notes |
|-----------|---------|------|----------|--------|-------------|-------|
| Time Server | NTP | 123 | UDP | Each ESX node, each SCVM node, UCSM | Time Server | Mgmt addresses |
| HX Installer | SSH | 22 | TCP | HX Installer | Each ESX node, each SCVM node, CIP-M, UCSM | Mgmt addresses, cluster mgmt, UCSM mgmt addresses |
| HX Installer | HTTP | 80 | TCP | HX Installer | Each ESX node, each SCVM node, CIP-M, UCSM | Mgmt addresses (port not required, can be disabled) |
| HX Installer | HTTPS | 443 | TCP | HX Installer | Each ESX node, each SCVM node, CIP-M, UCSM | Mgmt addresses, cluster mgmt, UCSM mgmt addresses |
| HX Installer | vSphere SDK | 8089 | TCP | HX Installer | Each ESX node | Mgmt addresses |
| HX Installer | (unlabeled in source) | 9333 | TCP | HX Installer | Each ESX node | Cluster data network |
| HX Installer | Heartbeat | 902 | TCP/UDP | HX Installer | vCenter, each ESX node | Bidirectional, mgmt addresses; also required for cluster re-registration with vCenter |
| HX Installer | CIMC SoL | 2400 | TCP | HX Installer | CIMC OOB (mgmt network) | |
| HX Installer | ICMP | | ICMP | HX Installer, CVM IPs | ESX/CVM IPs, vCenter | |
| Mail Server | SMTP | 25 | TCP | Each SCVM node, CIP-M, UCSM | Mail Server | |
| Monitoring | SNMP Poll | 161 | UDP | Monitoring Server | UCSM | |
| Monitoring | SNMP Trap | 162 | UDP | UCSM | Monitoring Server | |
| Name Server | DNS | 53 | TCP/UDP | Each SCVM node, CIP-M, UCSM | Name Server | Mgmt addresses, cluster mgmt |
| vCenter | HTTP | 80 | TCP | vCenter | Each ESX node, each SCVM node, CIP-M | Bidirectional |
| vCenter | HTTPS (plug-in) | 443 | TCP | vCenter | Each ESX node, each SCVM node, CIP-M | Bidirectional |
| vCenter | HTTPS (VC SSO) | 7444 | TCP | vCenter | Each ESX node, each SCVM node, CIP-M | Bidirectional |
| vCenter | HTTPS (plug-in) | 9443 | TCP | vCenter | Each ESX node, each SCVM node, CIP-M | Bidirectional |
| vCenter | CIM Server | 5989 | TCP | vCenter | Each ESX node | Bidirectional |
| vCenter | CIM Server | 9080 | TCP | vCenter | Each ESX node | Introduced in ESXi 6.5 |
| vCenter | Heartbeat | 902 | TCP/UDP | vCenter | Each ESX node | Bidirectional |
| User | SSH | 22 | TCP | User | Each ESX node, each SCVM node, CIP-M, HX Installer, UCSM | Mgmt addresses, cluster mgmt, UCSM mgmt addresses |
| User | HTTP | 80 | TCP | User | Each SCVM node, CIP-M, UCSM, HX Installer, vCenter | Mgmt addresses, cluster mgmt |
| User | HTTPS | 443 | TCP | User | vCenter, SSO Server, each SCVM node, CIP-M, UCSM, HX Installer | Mgmt addresses, cluster mgmt, UCSM mgmt addresses |
| User | HTTPS (SSO) | 7444 | TCP | User | vCenter, SSO Server | |
| User | HTTPS (plug-in) | 9443 | TCP | User | vCenter | Bidirectional |
| User | KVM | 2068 | TCP | User | UCSM | UCSM mgmt addresses |
| SSO Server | HTTPS (SSO) | 7444 | TCP | SSO Server | vCenter | Bidirectional |
| Stretch Witness | Zookeeper | 2181 | TCP | Witness | Each CVM node | Bidirectional, mgmt addresses |
| Stretch Witness | Zookeeper | 2888 | TCP | Witness | Each CVM node | Bidirectional, mgmt addresses |
| Stretch Witness | Zookeeper | 3888 | TCP | Witness | Each CVM node | Bidirectional, mgmt addresses |
| Stretch Witness | Exhibitor (Zookeeper lifecycle) | 8180 | TCP | Witness | Each CVM node | Bidirectional, mgmt addresses |
| Stretch Witness | HTTP | 80 | TCP | Witness | Each CVM node | Potential future requirement |
| Stretch Witness | HTTPS | 443 | TCP | Witness | Each CVM node | Potential future requirement |
| Replication | ICMP | | ICMP | Each CVM node | Each CVM node | Include cluster mgmt IP |
| Replication | Data Services Manager peer | 9338 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| Replication | Data Services Manager peer | 9339 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| Replication | Replication for CVM | 3049 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| Replication | NRDR | 9350 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| Replication | Cluster map | 4049 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| Replication | NR NFS | 4059 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| Replication | Replication service | 9098 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| Replication | NR master for coordination | 8889 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| UCSM | Encryption etc. | 443 | TCP | Each CVM node | UCSM | Bidirectional for each UCS node |
| UCSM | KVM | 81 | HTTP | User | UCSM | OOB KVM |
| UCSM | KVM | 743 | HTTPS | User | UCSM | OOB KVM, encrypted |
| Misc | Hypervisor service | 9350 | TCP | Each CVM node | Each CVM node | Bidirectional, include cluster mgmt IP |
| Misc | CIP-M failover | 9097 | TCP | Each CVM node | Each CVM node | Bidirectional for each CVM to the other CVMs |
| Misc | RPC bind | 111 | TCP | Each CVM node | Each CVM node | |
| Misc | Installer | 8002 | TCP | Each CVM node | Installer | CVM outbound to installer; stDeploy makes the connection (any request with URI /stdeploy) |
| Misc | Apache Tomcat | 8080 | TCP | Each CVM node | Each CVM node | |
| Misc | Auth service | 8082 | TCP | Each CVM node | Each CVM node | Any request with URI /auth/ |
| Misc | hxRoboControl | 9335 | TCP | Each CVM node | Each CVM node | ROBO deployments |
| Misc | syslog-ng | 6514 | TCP | Each CVM node | Remote syslog-ng collector | Log aggregation |
| Misc | TLS | 5696 | TCP | CIMC from each node | KMS server | Bidirectional, key exchange |
| Misc | Thrift RPC | 10207 | TCP | Management IP from each node | Each CVM node | Platform thrift server port (closed to the outside world) |
| SED Cluster | HTTPS | 443 | TCP | Each CVM mgmt IP, including CIP-M | UCSM A/B and VIP | Policy configuration; responds to requests from various internal management agents |
| SED Cluster | TLS | 5696 | TCP | CIMC from each node | KMS server | Key exchange |
| iSCSI | iSCSI | 10152 | TCP | Initiators | iSCSI CIP | SAN protocol |

The following links are relevant specifically to ESXi and vCenter:
ESX 6.0 port requirements: https://kb.vmware.com/s/article/2106283
vCenter 6.0 port requirements: https://kb.vmware.com/s/article/2106283
Please note that the following ports are shown as open but not needed for installation or general operation:
TCP port 81: HTTP KVM direct to CIMC (UCSM credentials required)
TCP port 743: HTTPS KVM direct to CIMC (UCSM credentials required)
TCP port 8888: storage data network port for file system rebuilds
TCP port 843: UCS Central port on the FI for application integration

Note that the UDP ports 427 (Service Location Protocol) and 8125 (Graphite) are open on the SCVM, and ports 32k-65k are open for SCVM outbound communication. These UDP ports can be seen in the IP Tables ACCEPT syntax above.
Please note that the following ports are required for successful operation of the HX Profiler in order to gather input for the HX Sizer:
· vCenter to profiler: 443 (HTTPS)
· Hyper-V/LBM/WBM to profiler: 5986 (HTTPS)/5985 (HTTP) for remote PowerShell and WMIC query execution
Connectivity between a SED-enabled cluster and the KMS has a few requirements in environments that are firewall-segmented. Access between FI-A/FI-B/VIP and the KMS is not required; only the CIMC IPs of each node need to reach the KMS. Allowing the CIMC IPs along with the KMS IP and port 5696 is sufficient.
NRDR Port Requirement summary: ICMP, 3049, 4049, 4059, 8889, 9098, 9338, 9339, and 9350 for replication. See the table above for specifics.

Intersight
Network Communication Requirements for CIMC:
· Communication between CIMC and vCenter via ports 80, 443, and 8089 during the installation phase.

· IP connectivity (L2 or L3) is required from the CIMC management IP on each server to all of the following: ESXi management interfaces, HyperFlex controller VM management interfaces, and vCenter server. Any firewalls in this path should be configured to allow the necessary ports as outlined in the Hyperflex Hardening Guide.
· This communication needs to be persistent. It is required for any and all upgrades (including firmware), monitoring, and UI cross-launch.
· CIMC to Intersight should only require 443.
Per the preinstall guide:
· All device connectors must properly resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443. The current HX Installer supports the use of an HTTP proxy.
· The IP addresses of ESXi management must be reachable from Cisco UCS Manager over all the ports listed as needed from the installer to ESXi management, to ensure deployment of ESXi management from Cisco Intersight.
· Allow port 22 between the UCSM (or CIMC) VLAN and the ESXi/SCVM management VLAN.
Intersight Connectivity
Consider the following prerequisites pertaining to Intersight connectivity:
· Before installing the HX cluster on a set of HX servers, make sure that the device connector on the corresponding Cisco IMC instance is properly configured to connect to Cisco Intersight and claimed.
· Communication between CIMC and vCenter via ports 80, 443, and 8089 during the installation phase.
· All device connectors must properly resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443. The current version of the HX Installer supports the use of an HTTP proxy.
· All controller VM management interfaces must properly resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443. The current version of the HX Installer supports the use of an HTTP proxy if direct Internet connectivity is unavailable.
· IP connectivity (L2 or L3) is required from the CIMC management IP on each server to all of the following: ESXi management interfaces, HyperFlex controller VM management interfaces, and the vCenter server. Any firewalls in this path should be configured to allow the necessary ports as outlined in the HyperFlex Hardening Guide.
· Starting with HXDP release 3.5(2a), the Intersight installer does not require a factory-installed controller VM to be present on the HyperFlex servers. When redeploying HyperFlex on the same servers, new controller VMs must be downloaded from Intersight onto all ESXi hosts. This requires each ESXi host to be able to resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443. Use of a proxy server for controller VM downloads is supported and can be configured in the HyperFlex Cluster Profile if desired.
· Post-cluster deployment, the new HX cluster is automatically claimed in Intersight for ongoing management.
Appendix B: URLs Needed for Smart Call Home, Post Install Scripts, Intersight

Quick List
· diag.hyperflex.io
· svc.intersight.com
· svc.ucs-connect.com
· tools.cisco.com
· upload.hyperflex.io
· cs.co/hx-scripts
· ftp.springpath.com

These are for autosupport, Smart Call Home, post install for script updates, and the device connector to Intersight for management, trending (diags), install, upgrade. It is recommended to add the corresponding IP addresses for these FQDNs as well in the event DNS is unavailable.

For example, the following external URLs exist in some scripts.

/usr/share/springpath/storfs-misc/hx-scripts/support.py:

def verifyConnectvity():
    try:
        try:
            url = "https://upload.hyperflex.io/admin/api2/ping"
            r = requests.get(url, verify=False)
            return True
        except requests.exceptions.ConnectionError:
            url = "https://38.140.50.205/admin/api2/ping"
            r = requests.get(url, verify=False)
            ...
            outputfile = open('/etc/hosts', 'a')
            entry = "\n38.140.50.205" + "\t upload.hyperflex.io \n"

and /usr/share/springpath/storfs-misc/hx-scripts/check_vswitch.py:

def uploadSupportBundle(fileName, folderTag, bundle):
    try:
        folder = '/upload/' + folderTag + '/'
        data = {'command': 'makedir', 'path': folder}
        auth = ('upload', 'upload')
        r = requests.post("https://54.88.201.239", verify=False, auth=auth, data=data)
        fileobj = open(bundle, 'rb')
        files = {'uploadPath': (None, folder),
                 'the_action': (None, 'STOR'),
                 'file': (fileName, fileobj, 'application/x-gzip')}
        requests.post("https://54.88.201.239", verify=False, auth=auth, files=files)

54.88.201.239 reverse resolves to ftp.springpathinc.com

Smart Call Home (SCH):

root@hx-6-scvm-01:~# stcli services sch show
proxyPort: 8080
enableProxy: True
enabled: True
proxyPassword:
proxyUser:
cloudEnvironment: production
proxyUrl: proxy.esl.cisco.com
emailAddress: dummy_address@cisco.com
portalUrl:
cloudAsupEndpoint: https://diag.hyperflex.io/
root@hx-6-scvm-01:~#
Post Install:
root@Cisco-HX-Installer-Appliance:~# vi /usr/share/springpath/storfs-misc/hx-scripts/update.sh

#!/bin/sh
FILENAME="hx-tools.zip"
URL="http://cs.co/hx-scripts"

cd /usr/share/springpath/storfs-misc/hx-scripts
wget --no-check-certificate -q -T1 -t1 ${URL} -O ${FILENAME} > /dev/null 2>&1
if [ $? -gt 0 ]; then
    echo "Could not download latest tools. Please verify internet connection"
    rm -f ${FILENAME} > /dev/null 2>&1
    exit 1
fi
unzip -oj ${FILENAME} > /dev/null 2>&1
rm -f ${FILENAME} > /dev/null 2>&1
echo "Scripts succesfully updated"
Intersight Device connector:
· svc.intersight.com (Preferred)
· svc.ucs-connect.com (Will be deprecated in the future)

Appendix C: ESX Hardening Settings

ESX hardening settings: ESXi.apply-patches
Cisco HX Platform Hardening Guide

Keep ESXi system properly patched

By staying up to date on ESXi patches, vulnerabilities in the hypervisor can be mitigated. An educated attacker can exploit known
Page 85

ESXi.audit-exception-users ESXi.config-ntp ESXi.config-persistent-logs ESXi.config-snmp

vulnerabilities when attempting to attain access or elevate privileges on an ESXi host.

Audit the list of users who are on the Exception Users List and whether they have administrator privileges
Configure NTP time synchronization
Configure persistent logging for all ESXi host
Ensure proper SNMP configuration

In vSphere 6.0 and later, you can add users to the Exception Users list from the vSphere Web Client. These users do not lose their permissions when the host enters lockdown mode. Usually you may want to add service accounts such as a backup agent to the Exception Users list. Verify that the list of users who are exempted from losing permissions is legitimate and as needed per your environment. Users who do not require special permissions should not be exempted from lockdown mode. By ensuring that all systems use the same relative time source (including the relevant localization offset), and that the relative time source can be correlated to an agreed-upon time standard (such as Coordinated Universal Time--UTC), you can make it simpler to track and correlate an intruder's actions when reviewing the relevant log files. Incorrect time settings can make it difficult to inspect and correlate log files to detect attacks, and can make auditing inaccurate. ESXi can be configured to store log files on an inmemory file system. This occurs when the host's "/scratch" directory is linked to "/tmp/scratch". When this is done only a single day's worth of logs are stored at any time. In addition, log files will be reinitialized upon each reboot. This presents a security risk as user activity logged on the host is only stored temporarily and will not persistent across reboots. This can also complicate auditing and make it harder to monitor events and diagnose issues. ESXi host logging should always be configured to a persistent datastore. If SNMP is not being used, it should remain disabled. If it is being used, the proper trap destination should be configured. If SNMP is not properly configured, monitoring information can be sent to a malicious host that can then use this information to plan an attack. Note: ESXi 5.1 and later supports SNMPv3 which provides stronger security than SNMPv1 or SNMPv2, including key authentication and encryption.

Cisco HX Platform Hardening Guide

Page 86

ESXi.disable-mob ESXi.firewall-enabled ESXi.set-account-auto-unlock-time
ESXi.set-account-lockout ESXi.set-dcui-access
ESXi.set-dcui-timeout Cisco HX Platform Hardening Guide

Disable Managed Object Browser (MOB)
Configure the ESXi host firewall to restrict access to services running on the host
Set the time after which a locked account is automatically unlocked

The managed object browser (MOB) provides a way to explore the object model used by the VMkernel to manage the host; it enables configurations to be changed as well. This interface is meant to be used primarily for debugging the vSphere SDK. In Sphere 6.0 this is disabled by default Unrestricted access to services running on an ESXi host can expose a host to outside attacks and unauthorized access. Reduce the risk by configuring the ESXi firewall to only allow access from authorized networks. Multiple account login failures for the same account could possibly be a threat vector trying to brute force the system or cause denial of service. Such attempts to brute force the system should be limited by locking out the account after reaching a threshold.

Set the count of maximum failed login attempts before the account is locked out
Set DCUI.Access to allow trusted users to override lockdown mode
Audit DCUI timeout value

In case, you would want to auto unlock the account, i.e. unlock the account without administrative action, set the time for which the account remains locked. Setting a high duration for which account remains locked would deter and severely slow down the brute force method of logging in. Multiple account login failures for the same account could possibly be a threat vector trying to brute force the system or cause denial of service. Such attempts to brute force the system should be limited by locking out the account after reaching a threshold. Lockdown mode disables direct host access requiring that admins manage hosts from vCenter Server. However, if a host becomes isolated from vCenter Server, the admin is locked out and can no longer manage the host. If you are using normal lockdown mode, you can avoid becoming locked out of an ESXi host that is running in lockdown mode, by setting DCUI.Access to a list of highly trusted users who can override lockdown mode and access the DCUI. The DCUI is not running in strict lockdown mode. DCUI is used for directly logging into ESXi host and carrying out host management tasks. The idle connections to DCUI must be terminated to avoid

Page 87

any unintended usage of the DCUI originating from a left-over login session.

ESXi.set-password-policies: Establish a password policy for password complexity.
ESXi uses the pam_passwdqc.so plug-in to set password strength and complexity. It is important to use passwords that are not easily guessed and that are difficult for password generators to determine. Password strength and complexity rules apply to all ESXi users, including root. They do not apply to Active Directory users when the ESXi host is joined to a domain; those password policies are enforced by AD.

ESXi.set-shell-interactive-timeout: Set a timeout to automatically terminate idle ESXi Shell and SSH sessions.
If a user forgets to log out of their SSH session, the idle connection remains open indefinitely, increasing the potential for someone to gain privileged access to the host. The ESXiShellInteractiveTimeOut allows you to automatically terminate idle shell sessions.

ESXi.set-shell-timeout: Set a timeout to limit how long the ESXi Shell and SSH services are allowed to run.
When the ESXi Shell or SSH services are enabled on a host, they run indefinitely. To avoid having these services left running, set the ESXiShellTimeOut, which defines a window of time after which the ESXi Shell and SSH services are automatically terminated.

ESXi.TransparentPageSharing-intra-enabled: Ensure the default setting for intra-VM TPS is correct.
This acknowledges academic research that leverages Transparent Page Sharing (TPS) to gain unauthorized access to data under certain highly controlled conditions, and documents VMware's precautionary measure of restricting TPS to individual virtual machines by default in recent ESXi releases. At this time, VMware believes that the published information disclosure due to TPS between virtual machines is impractical in a real-world deployment. VMs that do not have the sched.mem.pshare.salt option set cannot share memory with any other VMs.
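A minimal sketch of the two shell timeouts via esxcli (values are in seconds; the numbers shown are examples, not mandated values):

# esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900
# esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 3600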


vCenter.verify-nfc-ssl: Enable SSL for Network File Copy (NFC).
NFC (Network File Copy) is the name of the mechanism used to migrate or clone a VM between two ESXi hosts over the network. By default, NFC over SSL is enabled (i.e., "True") within a vSphere cluster, but the value of the setting is null. Clients check the value of the setting and, for performance reasons, default to not using SSL if the value is null. This behavior can be changed by ensuring the setting has been explicitly created and set to "True", which forces clients to use SSL.

VM.disable-console-copy, VM.disable-console-drag-n-drop, VM.disable-console-gui-options, VM.disable-console-paste: Explicitly disable console copy/paste and drag-and-drop operations.
Copy and paste operations are disabled by default. However, by explicitly disabling these features you enable audit controls to check that the settings are correct. For VM.disable-console-gui-options the default value is null; setting it explicitly is purely for audit purposes.

VM.disable-disk-shrinking-shrink: Disable virtual disk shrinking.
Shrinking a virtual disk reclaims unused space in it. The shrinking process itself, which takes place on the host, reduces the size of the disk's files by the amount of disk space reclaimed in the wipe process. If there is empty space in the disk, this process reduces the amount of space the virtual disk occupies on the host drive. Normal users and processes, that is, users and processes without root or administrator privileges, within virtual machines have the capability to invoke this procedure. A non-root user cannot wipe the parts of the virtual disk that require root-level permissions. However, if this is done repeatedly, the virtual disk can become unavailable while the shrinking is being performed, effectively causing a denial of service. In most datacenter environments disk shrinking is not done, so you should disable this feature.
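A minimal sketch of the corresponding per-VM advanced parameters as they would appear in a .vmx file (these are standard VMware isolation settings; apply them via the vSphere client's advanced configuration or with the VM powered off):

isolation.tools.copy.disable = "TRUE"
isolation.tools.paste.disable = "TRUE"
isolation.tools.dnd.disable = "TRUE"
isolation.tools.setGUIOptions.enable = "FALSE"
isolation.tools.diskShrink.disable = "TRUE"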

VM.disable-disk-shrinking-wiper: Disable virtual disk shrinking (wiper).
Shrinking a virtual disk reclaims unused space in it. VMware Tools reclaims all unused portions of disk partitions (such as deleted files) and prepares them for shrinking; wiping takes place in the guest operating system. If there is empty space in the disk, this process reduces the amount of space the virtual disk occupies on the host drive. Normal users and processes, that is, users and processes without root or administrator privileges, within virtual machines have the capability to invoke this procedure. A non-root user cannot wipe the parts of the virtual disk that require root-level permissions. However, if this is done repeatedly, the virtual disk can become unavailable while the shrinking is being performed, effectively causing a denial of service. In most datacenter environments disk shrinking is not done, so you should disable this feature.

VM.disable-hgfs: Disable HGFS file transfers.
Certain automated operations, such as automated VMware Tools upgrades, use a component in the hypervisor called the "Host Guest File System" (HGFS); an attacker could potentially use this component to transfer files into the guest OS.
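The matching .vmx entries for these two guidelines, as a sketch (both are standard VMware hardening parameters):

isolation.tools.diskWiper.disable = "TRUE"
isolation.tools.hgfsServerSet.disable = "TRUE"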


VM.disconnect-devices-floppy, VM.disconnect-devices-parallel, VM.disconnect-devices-serial: Disconnect unauthorized devices.
Ensure that no device is connected to a virtual machine if it is not required. For example, serial and parallel ports are rarely used for virtual machines in a datacenter environment, and CD/DVD drives are usually connected only temporarily during software installation. For less commonly used devices that are not required, either the parameter should not be present or its value must be FALSE. NOTE: The parameters listed are not sufficient to ensure that a device is usable; other required parameters specify how each device is instantiated. Any enabled or connected device represents a potential attack channel. When the setting is set to FALSE, functionality is disabled; however, the device may still show up within the guest operating system.
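An illustrative .vmx sketch for these device parameters (the device numbers are examples; as noted above, other parameters also control how each device is instantiated):

floppy0.present = "FALSE"
serial0.present = "FALSE"
parallel0.present = "FALSE"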

VM.limit-setinfo-size: Limit informational messages from the VM to the VMX file.
The configuration file containing these name-value pairs is limited to a size of 1 MB. This capacity should be sufficient for most cases, but you can change the value if necessary, for example if large amounts of custom information are being stored in the configuration file. The default limit is 1 MB; this limit is applied even when the sizeLimit parameter is not listed in the .vmx file. An uncontrolled VMX file size can lead to denial of service if the datastore is filled.

VM.prevent-device-interaction-connect: Prevent unauthorized removal, connection, and modification of devices.
In a virtual machine, users and processes without root or administrator privileges can connect or disconnect devices, such as network adaptors and CD-ROM drives, and can modify device settings. Use the virtual machine settings editor or configuration editor to remove unneeded or unused hardware devices. If you want to keep a device available, you can still prevent a user or running process in the virtual machine from connecting, disconnecting, or modifying the device from within the guest operating system. By default, a rogue user with non-administrator privileges in a virtual machine can: 1. connect a disconnected CD-ROM drive and access sensitive information on the media left in the drive; 2. disconnect a network adaptor to isolate the virtual machine from its network, which is a denial of service; 3. modify settings on a device.
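A minimal .vmx sketch for these two guidelines (1048576 bytes corresponds to the documented 1 MB default; both are standard VMware parameters):

tools.setInfo.sizeLimit = "1048576"
isolation.device.connectable.disable = "TRUE"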


VM.prevent-device-interaction-edit: Prevent unauthorized removal, connection, and modification of devices.
The rationale is the same as for VM.prevent-device-interaction-connect above: users and processes without root or administrator privileges can connect or disconnect devices and modify device settings from within the guest. This guideline prevents the editing of device settings from within the guest operating system.

VM.restrict-host-info: Do not send host information to guests.
If set to "True", a VM can obtain detailed information about the physical host, and an adversary could potentially use this information to inform further attacks on the host. The default value for the parameter is False but is displayed as Null; setting it to False is purely for audit purposes. This setting should not be TRUE unless a particular VM requires the information for performance monitoring.

VM.verify-network-filter: Control access to VMs through the dvfilter network APIs.
An attacker might compromise a VM by making use of the dvFilter API. Configure only those VMs that need this access to use the API. This is considered an "Audit Only" guideline: if a value is present, the admin should check it to ensure it is correct.


VM.verify-PCI-Passthrough: Audit all uses of PCI or PCIe passthrough functionality.
Using the VMware DirectPath I/O feature to pass through a PCI or PCIe device to a virtual machine results in a potential security vulnerability. The vulnerability can be triggered by buggy or malicious code running in privileged mode in the guest OS, such as a device driver. Industry-standard hardware and firmware does not currently have sufficient error containment support to make it possible for ESXi to close the vulnerability fully. There can be a valid business reason for a VM to have this configured; this is an audit-only guideline. You should be aware of which virtual machines are configured with direct passthrough of PCI and PCIe devices and ensure that their guest OS is monitored carefully for malicious or buggy drivers that could crash the host.

vNetwork.limit-network-healthcheck: Enable VDS network healthcheck only if you need it.
Network healthcheck is disabled by default. Once enabled, the healthcheck packets contain information on host#, vds#, and port#, which an attacker would find useful. It is recommended that network healthcheck be used for troubleshooting and turned off when troubleshooting is finished.

vNetwork.restrict-netflow-usage: Ensure that VDS Netflow traffic is only being sent to authorized collector IPs.
The vSphere VDS can export Netflow information about traffic crossing the VDS. Netflow exports are not encrypted and can contain information about the virtual network, making it easier for a man-in-the-middle attack to be executed successfully. If Netflow export is required, verify that all VDS Netflow target IPs are correct.

vNetwork.restrict-port-level-overrides: Restrict port-level configuration overrides on VDS.
Port-level configuration overrides are disabled by default. Once enabled, they allow different security settings to be set from what is established at the port-group level. There are cases where particular VMs require unique configurations, but this should be monitored so it is only used when authorized. If overrides are not monitored, anyone who gains access to a VM with a less secure VDS configuration could surreptitiously exploit that broader access.


Appendix D: Acronym Glossary

AAA - Authentication, Authorization and Accounting
AD - Active Directory
API - Application Programming Interface
CC - Common Criteria
CERT - Computer Emergency Response Team, or Certificate
CIMC - Cisco Integrated Management Console
CIP - Cluster IP (data)
CIP-M - Cluster IP Management
CIS - Center for Internet Security
CLI - Command Line Interface
CSDL - Cisco Secure Development Lifecycle
CVM - Control Virtual Machine
CMVP - Cryptographic Module Validation Program
CSR - Certificate Signing Request
DISA - Defense Information Systems Agency
DNS - Domain Name Service
DSM - Vormetric Data Security Manager
DRS - Distributed Resource Scheduler
EAL - Evaluation Assurance Level
ESX - ESXi replaces Service Console (a rudimentary operating system) with a more closely integrated OS. ESX/ESXi is the primary component in the VMware Infrastructure software suite. The name ESX originated as an abbreviation of Elastic Sky X.
FedRAMP - Federal Risk and Authorization Management Program
FI - Fabric Interconnect
FIPS - Federal Information Processing Standard
FISMA - Federal Information Security Management Act
FQDN - Fully Qualified Domain Name
HX/HXDP - HyperFlex/HyperFlex Data Platform
IMC - Integrated Management Console
ISO - International Standards Organization
KMIP - Key Management Interoperability Protocol
KMS - Key Management Server
KVM - Keyboard Video Mouse
LAN - Local Area Network
MAC - Media Access Control (unique identifier)
MM - Maintenance Mode
NERC CIP - North American Electric Reliability Corporation Critical Infrastructure Protection
NTP - Network Time Protocol
OOB - Out of Band
POC - Proof of Concept
QoS - Quality of Service
REST - Representational State Transfer
RHEL - Red Hat Enterprise Linux
SCH - Smart Call Home


SCVM - Storage Control Virtual Machine
SED - Self Encrypting Drive
SL - Smart Licensing
SLP - Service Location Protocol
SNMP - Simple Network Management Protocol
SSH - Secure Shell
SSL - Secure Sockets Layer
SSO - Single Sign On
STCLI - Storage Command Line Interface
TLS - Transport Layer Security
TPM - Trusted Platform Module
UCARP - Userland Common Address Redundancy Protocol
UCS - Unified Computing System
UCSM - UCS Manager
UI - User Interface
VLAN - Virtual Local Area Network
VM - Virtual Machine
vNIC - Virtual Network Interface Card
vWAAS - Virtual Wide Area Application Services (WAN acceleration device)
WAN - Wide Area Network


Appendix E: Sample Syslog-ng Configuration File

Sample syslog-ng collection server configuration file. Note that HX defaults to port 6514 for syslog-ng traffic; the config files below use port 6515 for encrypted TLS transport. The default location for this file on Ubuntu is /etc/syslog-ng/syslog-ng.conf.

It is recommended to back up the original configuration file first:
#> sudo cp /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.BAK

Here is a sample syslog-ng.conf that works for TLS secure shipping. It imports configuration files from /etc/syslog-ng/conf.d.

kaptain@kaptain-syslog:/etc/syslog-ng$ cat syslog-ng.conf
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"

options {
    time-reap(30);
    mark-freq(10);
    keep-hostname(yes);
};

source s_local { system(); internal(); };

# source s_network {
#     syslog(transport(tcp) port(6514));
# };

# source tls_source {
#     network(ip(0.0.0.0) port(6515)
#         transport("tls")
#         tls( key-file("/etc/syslog-ng/cert.d/serverkey.pem")
#              cert-file("/etc/syslog-ng/cert.d/servercert.pem")
#              ca-dir("/etc/syslog-ng/ca.d"))
#     );
# };

destination d_local {
    file("/var/log/syslog-ng/messages_${HOST}");
};

destination d_logs {
    file(
        "/var/log/syslog-ng/logs-enc.txt"
        owner("root")
        group("root")
        perm(0777)
    );
};

log { source(s_local); destination(d_logs); };

### Include all config files in /etc/syslog-ng/conf.d/ ###
@include "/etc/syslog-ng/conf.d/"
kaptain@kaptain-syslog:/etc/syslog-ng/conf.d$ cat audit.conf
### Audit Logging Configuration ###
source demo_tls_src {
    tcp(ip(0.0.0.0) port(6515)
        tls( key-file("/etc/syslog-ng/cert.d/serverkey.pem")
             cert-file("/etc/syslog-ng/cert.d/servercert.pem")
             peer-verify(optional-untrusted) )
    );
};

filter f_audit_rest { match("hx-audit-rest" value("MSGHDR")); };
filter f_device_conn { match("hx-device-connector" value("MSGHDR")); };
filter f_stssomgr { match("hx-stSSOMgr" value("MSGHDR")); };
filter f_ssl_access { match("hx-ssl-access" value("MSGHDR")); };
filter f_hxmanager { match("hx-manager" value("MSGHDR")); };
filter f_hx_shell { match("hx-shell" value("MSGHDR")); };
filter f_stcli { match("hx-stcli" value("MSGHDR")); };
filter f_hxcli { match("hx-cli" value("MSGHDR")); };

destination d_audit_rest { file("/var/log/syslog-ng/audit_rest.log"); };
destination d_device_conn { file("/var/log/syslog-ng/hx_device_connector.log"); };
destination d_stssomgr { file("/var/log/syslog-ng/stSSOMgr.log"); };
destination d_ssl_access { file("/var/log/syslog-ng/ssl_access.log"); };
destination d_hxmanager { file("/var/log/syslog-ng/hxmanager.log"); };
destination d_hx_shell { file("/var/log/syslog-ng/shell.log"); };
destination d_stcli { file("/var/log/syslog-ng/stcli.log"); };
destination d_hxcli { file("/var/log/syslog-ng/hxcli.log"); };

log { source(demo_tls_src); filter(f_audit_rest); destination(d_audit_rest); flags(final); };
log { source(demo_tls_src); filter(f_device_conn); destination(d_device_conn); flags(final); };
log { source(demo_tls_src); filter(f_stssomgr); destination(d_stssomgr); flags(final); };
log { source(demo_tls_src); filter(f_ssl_access); destination(d_ssl_access); flags(final); };
log { source(demo_tls_src); filter(f_hxmanager); destination(d_hxmanager); flags(final); };
log { source(demo_tls_src); filter(f_hx_shell); destination(d_hx_shell); flags(final); };
log { source(demo_tls_src); filter(f_stcli); destination(d_stcli); flags(final); };
log { source(demo_tls_src); filter(f_hxcli); destination(d_hxcli); flags(final); };
########################
It is possible to use the same system for both TCP and TLS log transport (for example, from two different source systems). The files above have the TCP part commented out; to configure it, create one more file like audit.conf in /etc/syslog-ng/conf.d and name it something like audit_tcp.conf, with the configuration as described in the documentation. Note that syslog-ng will not allow the same identifier name, such as 'demo_tls_src' (which would collide in both the TCP and TLS configurations above if the file were simply copied), so the source in the TCP file must be renamed (e.g., 'demo_tcp_src'), as in the sketch below.
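A minimal sketch of such an audit_tcp.conf (the contents are illustrative; it assumes the filters and destinations defined in audit.conf above remain visible, which holds because syslog-ng merges everything included from conf.d into a single configuration):

### Audit Logging over plain TCP (illustrative) ###
source demo_tcp_src {
    tcp(ip(0.0.0.0) port(6514));
};

log { source(demo_tcp_src); filter(f_audit_rest); destination(d_audit_rest); flags(final); };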


Appendix F: Certificate Management and Use Cases

SCVM: How to generate and replace External CA Certificate in HX 4.0+
Some key points for CSR generation for CA certificates:
· Only one CSR is required for the cluster, as each SCVM must be installed with the same certificate.
· When generating the CSR, enter the hostname assigned to the management CIP as the Common Name of the Subject's Distinguished Name.
· If you want to specify the management CIP address as the Common Name (e.g., you did not assign a hostname for the mgmt CIP, or you want to use the IP for login and authentication, removing a dependency on DNS), then you must also include the management CIP in the Subject Alternative Name. See https://stackoverflow.com/questions/5136198/what-strings-are-allowed-in-the-common-name-attribute-in-an-x509-certificate

In 4.0.1a, importing the CA certificate is automated through shell scripts. Generate the CSR from any SCVM, preferably from the CIP node. Once you receive the CA certificate, import it using the automated script. The script updates the certificate in ZK and pushes it to the other SCVMs.
It may be useful to verify certificates with the following general syntax, using your own certificate paths: /usr/bin/openssl x509 -in /etc/nginx/server.crt -text -noout
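As a further optional check (standard openssl usage, not an HX-specific tool), you can confirm that a private key and certificate belong together by comparing their modulus digests; the paths shown are examples:

# openssl x509 -noout -modulus -in /etc/nginx/server.crt | openssl md5
# openssl rsa -noout -modulus -in /etc/ssl/private/<Host Name of the CVM>.key | openssl md5

If the two digests match, the key corresponds to the certificate.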
Script Location in SCVM: /usr/share/springpath/storfs-misc/hx-scripts/

· certificate_import_input.sh
· certificate_import_main.sh
In the Controller VM (CIP), execute these commands to generate the CSR request:

· openssl req -nodes -newkey rsa:2048 -keyout /etc/ssl/private/<Host Name of the CVM>.key -out /etc/ssl/certs/<Host Name of the CVM>.csr
· cat /etc/ssl/certs/<Host Name of the CVM>.csr (copy the request into a notepad)
· Send the request to the CA to generate the certificate.
· Once you receive the certificate from the CA (.crt files), copy the certificate to the respective CVM.
· Then use this script to import the certificate: ./certificate_import_input.sh

root@SpringpathControllerVUFSTDS58L:/usr/share/springpath/storfs-misc/hx-scripts# ./certificate_import_input.sh
Enter the path for the key: /etc/ssl/private/<Host Name of the CVM>.key
Enter the path for the certificate in crt format: <Path to the CA .crt file>
Have any bundles(y/n)? <In case of a bundle, enter y, else n>

· After providing all the inputs, allow some time for the import process to finish.
· The script asks to reregister with vCenter. It is mandatory to reregister the cluster once the certificate is imported.

There is a new process for 4.0.2a and onward; the procedures for 4.0.1x and 3.5.2h are slightly different.

· In 4.0.2a, importing the CA certificate is automated through a shell script. Generate the CSR from any SCVM, preferably from the CIP node. Once you receive the CA certificate, import it using the automated script. The script updates the certificate in ZK and pushes it to the other SCVMs.
· Script location in the SCVM: /usr/share/springpath/storfs-misc/hx-scripts/


1. certificate_import_input.sh
2. updateNginxCertificate.py (not exposed to the user)

· In the Controller VM (pointing to the CIP), execute these commands to generate the CSR request:
1. openssl req -nodes -newkey rsa:2048 -keyout /etc/ssl/private/<Host Name of the CVM>.key -out /etc/ssl/certs/<Host Name of the CVM>.csr
2. cat /etc/ssl/certs/<Host Name of the CVM>.csr (copy the request into a notepad)
· Send the request to the CA to generate the certificate.
· Once you receive the certificate from the CA (.crt files), copy the certificate to the respective CVM.
· Then use this script to import the certificate: ./certificate_import_input.sh
1. root@SpringpathControllerVUFSTDS58L:/usr/share/springpath/storfs-misc/hx-scripts# ./certificate_import_input.sh
· Enter the path for the key: /etc/ssl/private/<Host Name of the CVM>.key
· Enter the path for the certificate in crt format: <Path to the CA .crt file>
· After providing all the inputs, it takes some time to finish the import process.
· The script prompts to reregister the cluster with vCenter. It is mandatory to reregister the cluster once the certificate is imported.
The CSR process is updated for 4.5.1a and greater due to the restricted shell. In the command used above:

openssl req -nodes -newkey rsa:2048 -keyout /etc/ssl/private/<Host Name of the CVM>.key -out /etc/ssl/certs/<Host Name of the CVM>.csr

while openssl is part of the allowed admin commands, the command will fail because the admin user lacks write privileges on the /etc/ folder. Modify the command to write to the /tmp/ folder, for example, so that the write succeeds, as shown below.
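A minimal sketch of the modified command for the restricted shell (the /tmp/ destination is an example; the resulting .key and .csr files are then used in the import workflow described above):

openssl req -nodes -newkey rsa:2048 -keyout /tmp/<Host Name of the CVM>.key -out /tmp/<Host Name of the CVM>.csr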
NOTE: The content of the X.509 CSR is entered by the user; there are no backend checks on the contents of the entry. If the user specifies multiple hostnames or IPs of the nodes as Subject Alternative Names, or uses the wildcard character to specify the hostname for the Common Name, a single certificate can be used for all nodes.
HX Certificate Management
vCenter: How to generate and replace External CA Certificate
If ESXi is using a 3rd-party CA certificate, the certMgmt mode in vCenter should be set to Custom. The default mode is VMCA. Once the mode is set to Custom, hosts with a 3rd-party CA can be added.
Follow these steps to update the mode:
· Select the vCenter Server that manages the hosts and click Settings.
· Click Advanced Settings, and click Edit.
· In the Filter box, enter certmgmt to display only certificate management keys.
· Change the value of vpxd.certmgmt.mode to custom and click OK.
· Restart the vCenter Server service. To restart services: https://<VC URL>:5480/ui/services


Reference:
· https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-122A4236-9696-4E1F-B9E8-738855946A93.html#GUID-122A4236-9696-4E1F-B9E8-738855946A93
· http://engineering.pivotal.io/post/vcenter_6.7_tls/
ESX: How to generate and replace External CA Certificate
Do not replace the CA certificates on all the hosts at the same time. Replace one host, wait for the cluster to become healthy, and then replace the certs on the other nodes.
Follow these steps:
· Generate a CSR for each ESX node. Provide the proper hostname/FQDN of the ESX host while generating the .key and .csr files.
· You should have proper .csr and .key files from the key generation procedure, i.e., rui.csr and rui.key.
· Send rui.csr to the 3rd-party CA to sign and return the certificate.
· Once you receive the rui.crt from the CA, put the node into MM.
1. Note: Don't put all the ESX nodes into MM at the same time; enter MM in a rolling fashion.
· ssh to the node. Take a backup of the current rui.key and rui.crt in /etc/vmware/ssl.
· Upload the new rui.key and rui.crt to the same directory.
· Restart the hostd and vpxa services and check that they are running:
1. /etc/init.d/hostd restart
2. /etc/init.d/vpxa restart
3. /etc/init.d/hostd status
4. /etc/init.d/vpxa status
· Reconnect the host in vCenter.
· Exit MM on the host.
Follow the same procedure for all other nodes. You can verify the certificate of each node by accessing it through the web.
Reference:
· https://kb.vmware.com/s/article/2113926 · http://buildvirtual.net/replacing-the-esxi-host-default-certificate-with-a-ca-signed-certificate/ · http://buildvirtual.net/how-to-generate-new-esxi-host-certificates/
ESX: To re-generate a self-signed certificate
· ssh to the ESX host and delete rui.key and rui.crt
· Run /sbin/generate-certificates
· Restart hostd and vpxa
Reference:
https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc_50%2FGUID-EA0587C7-5151-40B4-88F0-C341E6B1F8D0.html


HX Cluster: Re-registration
Once all the hosts are added to vCenter, reregister the HX cluster with vCenter using the stcli cluster reregister command.

Hyperflex Use Cases
Case: A - Create and Expand HX Cluster with vCenter having CA certificate and certMgmt mode = vmca (Default Mode)

· Both Create and Expand cluster will work as usual, and ESX nodes will be deployed with self-signed certificates.
· If the user would like to replace the ESX self-signed certificates with CA-signed ones, follow these steps:
o Update the certMgmt mode to custom in vCenter
o Replace the self-signed certs on the ESX hosts with the CA certs
o Reconnect/add the hosts to the cluster
o Reregister the HX cluster with vCenter
Case: B - Create and Expand HX Cluster with vCenter having CA certificate and certMgmt mode = custom

· Both Create and Expand cluster will fail in the deploy stage with the error: Not able to add host to vCenter.
· The recommendation is:
o Perform Custom UCSM+Hypervisor configuration as part of the Installer workflow
o After the hypervisor configuration is completed, replace the self-signed certificates of the ESX nodes with the CA-signed certificates
o Then perform Deploy+Create Cluster or Deploy+Expand Cluster
o With this approach, adding hosts to vCenter will succeed and the other steps will work as usual.
Case: C - Replace CA certificate in ESX Nodes in running HX Cluster

· Replace the self-signed certificate in vCenter with the CA certificate and update the certMgmt mode to custom.
· In a rolling fashion, replace the self-signed certificate on the ESX nodes with the CA certificate.
· Once replaced on all the nodes, re-register the cluster with vCenter.
· In the future, when expanding the cluster:
o Perform UCSM+Hypervisor configuration for the expansion node
o Replace the self-signed cert on the new node with the CA cert
o Perform Deploy+Expand Cluster
Case: D - Support Work Flow - Mgmt IP Change in case ESX Nodes has CA cert

· When changing the hostname/IP address of ESX as part of the support workflow, the CA certs are replaced with self-signed certs: VMware regenerates self-signed certs with the new hostname.
· The recommendation is to obtain CA certs for the new hostname and replace them on each ESX host.
· Steps to follow to change the hostname/IP address of ESX:
o Run the support workflow script to change the Mgmt IP
o It will fail at the step "add/reconnect host to vCenter"
o Generate a CA cert with the new hostname and replace the self-signed certificate
o Re-run the checkpoint of the support workflow to finish the remaining tasks


Observations and Notes
vCenter: self-signed cert with certMgmt mode = vmca (Default Mode)
· ESX hosts with self-signed certificates can be added.
· Adding ESX hosts with 3rd-party CA certs is not allowed.
· If a cluster with self-signed ESX hosts already exists in vCenter, then after replacing the CA certs on ESX:
1. When connecting to vCenter (if certMgmt mode = vmca), it asks to replace the CA cert with a self-signed one and then allows the host to be added; otherwise it will not allow the add.
2. If certMgmt mode = custom in vCenter, it does not ask to replace, but gives an SSL thumbprint mismatch error and adding the host fails.
· Note:
1. Put the node into MM and replace rui.crt and rui.key on the ESX host.
2. Restart the vpxa and hostd services; the CA cert takes effect on the node.
3. Right-click and connect to vCenter: the host loses the CA certs, which get replaced with self-signed certs by VMware.
vCenter: CA-signed cert with certMgmt mode = vmca (Default Mode)
· ESX hosts with self-signed certificates can be added.
· Adding ESX hosts with 3rd-party CA certs is not allowed.
vCenter: CA signed cert with certMgmt mode = custom
· ESX hosts with self-signed certs cannot be added to the vCenter.
· The ESX certs need to be replaced with the same CA's certificate (the CA for the vCenter and the CA for the hosts should be the same).
Hyperflex Use Cases: HX 4.5.1a+ and NGINX Self-Signed Without CA Signed
· Generate a self-signed certificate if you don't have a CA-signed certificate.
· Use the REST API below to set the custom certificate. The REST API can be invoked from any REST client; below we use the "Advanced REST Client (ARC)" (https://install.advancedrestclient.com/install).
· REST call construct:
o URL: https://<cip_ip>/securityservice/v1/certificate?option=custom
o Method type: "PUT" request
o Authorization: Basic (add your username and password)


o Body: {"sslKey":"-----BEGIN PRIVATE KEY-----\n<YOUR KEY CONTENTS WITHOUT ANY NEW LINE>\n-----END PRIVATE KEY-----", "sslCertificate":"-----BEGIN CERTIFICATE-----\n<YOUR CERT CONTENTS WITHOUT ANY NEW LINE>\n-----END CERTIFICATE-----"}

· Steps to copy the key and certificate. For example, say the key file is mycompany.key and the certificate file is mycompany.crt:
1. Remove the newlines from each file:
a. cat mycompany.key | tr -d '\n'
b. Copy the content between -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----
c. cat mycompany.crt | tr -d '\n'
d. Copy the content between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----
e. Paste the contents into the following JSON body:
i. {"sslKey":"-----BEGIN PRIVATE KEY-----\n<YOUR KEY CONTENTS WITHOUT ANY NEW LINE>\n-----END PRIVATE KEY-----", "sslCertificate":"-----BEGIN CERTIFICATE-----\n<YOUR CERT CONTENTS WITHOUT ANY NEW LINE>\n-----END CERTIFICATE-----"}
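If you prefer the command line to a GUI REST client, a minimal sketch of the same PUT call using curl (assuming the JSON body above has been saved as cert.json; -k skips TLS verification, which may be necessary while the cluster still presents its old certificate, and -u prompts for the password):

# curl -k -u admin -X PUT -H "Content-Type: application/json" -d @cert.json "https://<cip_ip>/securityservice/v1/certificate?option=custom"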


Example content is:
{"sslKey":"-----BEGIN PRIVATE KEY----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCfnj0n341hu2Bfg8fYojTEEvfBlHGiDP0hvvCBwrmWTuzDvjX1gj5pdnW1bjiohzZu4BLxrPNy5Yay+ivCKgso8YYcYrYoOijG8fiT+dQBMBHVPTsis 8/YpW74ZJjazNoikeDDigTC6nokpBEgKZS/gl0ZutOQK6NCdQJDTAdgqu8dzac64G+wxl5afV5UBo9llgRAnM9KtrktYluqciebx6+XbSa03UCJFOjC6XEhcCJAQatrKhhiCD90j0r1iNzrSwMMy+BEIhm0gf9+jSIAgy1o njsHL2vPgueef+F5YNk+itG8mqysUhBZp7kpTFeUYg0JZcwOy3siEGVjDUjtAgMBAAECggEABb/E0yFVrc36cCZGdfKNtPw76UCIAT63hVYjwoC5f4TzOS+qMOASkGjgX3sLVmKcXsz6UbMZh6tluR+SoOkzwrNEUdRqXDO QEW5Ytje635oUIlqUvTC9zT9UKmUxLjxPpQwdDN31QvIAGT7BkSd+QJGY+drFUP2JYVTmknb0ExK2csqwECu1uynReN6mO9BkDBZf/7+AzIvtMYTGhut5X0Crv8+IXgB2uLKlM1EOvOrSd1Vvqod439dRMiTKEzF1UXYGoh ue3GjOAs39aWJcZwPLSrYKdgGkvSbVfcmAKugYzG2ecdTN7e61euBcsm1KwdOf8EeckViPRv2N5uuRcQKBgQC6Qbr/+N76nW1WQtwMA5jfCJ7bXNvPfBy6czhr952qIkp7Ol1JdnDZSZOhqLXSkLlARlLDs0JzZix/7yhAk xeXQZ79JdQGg+3kyrINku/70BqvSP8+e46qQge3STPmAWEqq8MrMSZzWEhkFWOCK3Fhw0HcOnsYxisx16IDDScEdQKBgQDbYvjNqkCWOKZXyKoKaiwXO1qZuF9/A+9tyZ8dApiX26IT8EEi3bP1B/AgVcSPHRk07MRWgWSX VtQZKRyeeiKynHUwx8LEVA9JnZ/8Zkwa0K2mytR9HBL7tHYvPEyxMIAe300rE/exd4pqGhiOGeQonRTv6QxWPKDekaXeaC1DmQKBgGqB09E0Gy3sf+1n5jToia5gW5bNDtUi/7qO0KDMw9faLAUzZszvcbCPJmC2/OIf6A8 dJ47JHyKmNqQhuj7S3haca7IOw6PGJW9DiXXBpIG2isvZTjwIo5gwkgD5Vzgbadjgx4YXYQlsXlj88h4pgXiKE0tAFcwg5epmiDp+duVRAoGAM5CzwkN+ItD16DQ2I3SJIHzG8tKvP3+BS2DUkVEG5Mqu8djKtpM9tR5Ehp UiOjEwt4vfKiYHqrqx56gOHgG/HhRAR1LsqJDJdxghfoXc5YCfEFEkWLO8koT8MmYN8KfhtV/vF2z+Dyx10DKKCvxy8Ejbnvg7+hkOBsJdJkV+PiECgYBQfPRUruuC5xDdmVFXwTiidBNw8SAKfd/Sbfe8G9HbTYAOZK0sZ 8zRoqf8EDN1MWNPH5zTavJUstAJBDVylaSZeA4i4ZTMFsW6eA51ahs2JSEjvIUI8nMXQRF36DWFNVvGDLhNyWtG4Hw0dOR5hRU7pwqNL5IGJBZ0ONsbB1tEfg==\n-----END PRIVATE KEY-----", "sslCertificate":"-----BEGIN CERTIFICATE----\nMIIEATCCAumgAwIBAgIJAK4G4aZ1K7ItMA0GCSqGSIb3DQEBCwUAMIGWMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEUMBIGA1UECgwLQ2lzY28sIEluYy4xEDAOBgNVBAcMB1Nhbkpvc2UxEjAQBgNVB AMMCUh5cGVyZmxleDEUMBIGA1UECwwLRW5naW5lZXJpbmcxIDAeBgkqhkiG9w0BCQEWEXN1cHBvcnRAY2lzY28uY29tMB4XDTE5MTIxMzA2MzkxOFoXDTI0MTIxMTA2MzkxOFowgZYxCzAJBgNVBAYTAlVTMRMwEQYDVQQI DApDYWxpZm9ybmlhMRQwEgYDVQQKDAtDaXNjbywgSW5jLjEQMA4GA1UEBwwHU2FuSm9zZTESMBAGA1UEAwwJSHlwZXJmbGV4MRQwEgYDVQQLDAtFbmdpbmVlcmluZzEgMB4GCSqGSIb3DQEJARYRc3VwcG9ydEBjaXNjby5 jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfnj0n341hu2Bfg8fYojTEEvfBlHGiDP0hvvCBwrmWTuzDvjX1gj5pdnW1bjiohzZu4BLxrPNy5Yay+ivCKgso8YYcYrYoOijG8fiT+dQBMBHVPTsis8/YpW 74ZJjazNoikeDDigTC6nokpBEgKZS/gl0ZutOQK6NCdQJDTAdgqu8dzac64G+wxl5afV5UBo9llgRAnM9KtrktYluqciebx6+XbSa03UCJFOjC6XEhcCJAQatrKhhiCD90j0r1iNzrSwMMy+BEIhm0gf9+jSIAgy1onjsHL 2vPgueef+F5YNk+itG8mqysUhBZp7kpTFeUYg0JZcwOy3siEGVjDUjtAgMBAAGjUDBOMB0GA1UdDgQWBBQwb4uIgSeFtt0YkEaIaTRd6AD7zjAfBgNVHSMEGDAWgBQwb4uIgSeFtt0YkEaIaTRd6AD7zjAMBgNVHRMEBTAD AQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBMhZVY2pVOKJAp2odrXDd4KTW3eCrkWocW7C0JLCtFo7tLNlfgHs8FyxLOXaHLXgqzYtlclalIxjoFQKuSV7NLurgByOo4FK44ZcyeByocIeEzLc6DNMdXptSI9Mdko2DDhwgwizW 8BoguXP94DgZjmwbUtP99G90pni8u6g9mr3bgDhU5JYbnkv9/1mbRS4GfeTFRgUwInua++RSxa6AFUomTtt5Y7OL3mHG4xgIig3EpIi0MhgiuVdESRwq1kml/fL/UupbFXipq90Z+8DrUlW5H/teww6XPUQ568VJMbcz0ZY autbfNe3K/HdcXxq+Lt4O6w3iq/y+wndywxwLU\n-----END CERTIFICATE-----"}
· When you execute the REST call, a response is returned indicating the result.
· After the successful invocation of the above REST API, verify that the nginx certificate is placed at the "/etc/nginx" path on the controller VM by checking the timestamps of the .key and .crt files:
root@localhost:~# ls -l /etc/nginx/server.key /etc/nginx/server.crt -rw------- 1 root root 1.7K May 10 21:39 /etc/nginx/server.key -rw-r--r-- 1 root root 1.2K May 10 21:39 /etc/nginx/server.crt
· Reregister the cluster again using the "stcli cluster reregister" command. For the parameters, fill in the existing vCenter details.
stcli cluster reregister --vcenter-url x.x.x.x --vcenter-datacenter test_datacenter --vcenter-cluster test_cluster --vcenter-user administrator@vsphere.local


Appendix G: SCH Configuration and Proxy
Configuring Smart Call Home for Data Collection
Data collection is enabled by default, but you can opt out (disable it) during installation, or enable data collection after cluster creation. During an upgrade, Smart Call Home is set up based on your legacy configuration; for example, if stcli services asup show reports it as enabled, Smart Call Home is enabled on upgrade. Data collection about your HX storage cluster is forwarded to Cisco TAC through HTTPS. If you have a firewall installed, configure a proxy server for Smart Call Home after cluster creation.
Note In HyperFlex Data Platform release 2.5(1.a), Smart Call Home Service Request (SR) generation does not use a proxy server.
Using Smart Call Home requires the following:
· A Cisco.com ID associated with a corresponding Cisco Unified Computing Support Service or Cisco Unified Computing Mission Critical Support Service contract for your company.
· Cisco Unified Computing Support Service or Cisco Unified Computing Mission Critical Support Service for the device to be registered.
Procedure
Step 1 Log in to a storage controller VM in your HX storage cluster.
Step 2 Register your HX storage cluster with Support.
Registering your HX storage cluster adds identification to the collected data and automatically enables Smart Call Home. To register your HX storage cluster, you need to specify an email address. After registration, this email address receives support notifications whenever there is an issue and a TAC service request is generated. Note: Upon configuring Smart Call Home in HyperFlex, an email is sent to the configured address containing a link to complete registration. If this step is not completed, the device remains in an inactive state and an automatic Service Request is not opened.
Syntax: stcli services sch set [-h] --email EMAILADDRESS
Example:
# stcli services sch set --email name@company.com
Step 3 Verify data flow from your HX storage cluster to Support is operational.


Operational data flow ensures that pertinent information is readily available to help Support troubleshoot any issues that might arise. The --all option runs the command on all the nodes in the HX cluster.
# asupcli [--all] ping
If you upgraded your HX storage cluster from HyperFlex 1.7.1 to 2.1.1b, also run the following command:
# asupcli [--all] post --type alert

Contact Support if you receive the following error:
root@ucs-stctlvm-554-1:/tmp# asupcli post --type alert
/bin/sh: 1: ansible: not found
Failed to post - not enough arguments for format string
root@ucs-stctlvm-554-1:/tmp#
Step 4 (Optional) Configure a proxy server to enable Smart Call Home access through port 443. If your HX storage cluster is behind a firewall, you must configure the Smart Call Home proxy server after cluster creation. Support collects data at the endpoint https://diag.hyperflex.io:443.
a. Clear any existing registration email and proxy settings.
# stcli services sch clear
b. Set the proxy and registration email.
Syntax: stcli services sch set [-h] --email EMAILADDRESS [--proxy-url PROXYURL] [--proxy-port PROXYPORT] [--proxy-user PROXYUSER] [--portal-url PORTALURL] [--enable-proxy ENABLEPROXY]

Syntax Description

--email EMAILADDRESS (Required) - Add an email address for someone to receive email from Cisco support. The recommendation is to use a distribution list or alias.
--enable-proxy ENABLEPROXY (Optional) - Explicitly enable or disable use of the proxy.
--portal-url PORTALURL (Optional) - Specify an alternative Smart Call Home portal URL, if applicable.
--proxy-url PROXYURL (Optional) - Specify the HTTP proxy URL, if applicable.
--proxy-port PROXYPORT (Optional) - Specify the HTTP proxy port, if applicable.
--proxy-user PROXYUSER (Optional) - Specify the HTTP proxy user, if applicable. Specify the HTTP proxy password when prompted.

Example:

# stcli services sch set --email name@company.com --proxy-url www.company.com --proxy-port 443 --proxy-user admin --proxy-password adminpassword
c. Ping to verify the proxy server is working and data can flow from your HX storage cluster to the Support location.
# asupcli [--all] ping
The --all option runs the command on all the nodes in the HX cluster.

Step 5 Verify Smart Call Home is enabled.
When the Smart Call Home configuration is set, it is automatically enabled.
# stcli services sch show
If Smart Call Home is disabled, enable it manually:
# stcli services sch enable
Step 6 Enable Auto Support (ASUP) notifications.
Typically, Auto Support (ASUP) is configured during HX storage cluster creation. If it was not, you can enable it post cluster creation using HX Connect or CLI. For more information, see Auto Support and Smart Call Home for HyperFlex.


