MCP Deployment Guide
Mirantis Cloud Platform Deployment Guide, version Q3`18

Copyright notice

2019 Mirantis, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. No part of this publication may be reproduced in any written, electronic, recording, or photocopying form without written permission of Mirantis, Inc. Mirantis, Inc. reserves the right to modify the content of this document at any time without prior notice. Functionality described in the document may not be available at the moment. The document contains the latest information at the time of publication. Mirantis, Inc. and the Mirantis Logo are trademarks of Mirantis, Inc. and/or its affiliates in the United States and other countries. Third party trademarks, service marks, and names mentioned in this document are the properties of their respective owners.

Preface

This documentation provides information on how to use Mirantis products to deploy cloud environments. The information is for reference purposes and is subject to change.

Intended audience

This documentation is intended for deployment engineers, system administrators, and developers; it assumes that the reader is already familiar with network and cloud concepts.

Documentation history

The following table lists the released revisions of this documentation.

Revision date: November 26, 2018
Description: Q3`18 GA

Introduction

MCP enables you to deploy and manage cloud platforms and their dependencies, including OpenStack-based and Kubernetes-based clusters. The deployment can be performed automatically through MCP DriveTrain or by using the manual deployment procedures.

The MCP DriveTrain deployment approach is based on the bootstrap automation of the Salt Master node, which contains the MAAS hardware node provisioner, as well as on the automation of an MCP cluster deployment using the Jenkins pipelines. This approach significantly reduces deployment time and eliminates possible human errors. The manual deployment approach provides the ability to deploy all the components of the cloud solution in a very granular fashion.

The guide also covers the deployment procedures for additional MCP components, including OpenContrail, Ceph, StackLight, and NFV features.

See also

• Minimum hardware requirements

Plan the deployment

The configuration of your MCP installation depends on the individual requirements that must be met by the cloud environments. The detailed plan of any MCP deployment is determined on a per-cloud basis.

See also

• Plan an OpenStack environment
• Plan a Kubernetes cluster

Prepare for the deployment

Create a project repository

An MCP cluster deployment configuration is stored in a Git repository created on a per-customer basis. This section instructs you on how to manually create and prepare your project repository for an MCP deployment.

Before you start this procedure, create a Git repository in your version control system, such as GitHub.

To create a project repository manually:

1. Log in to any computer.
2. Create an empty directory and change to that directory.
3. Initialize your project repository:

   git init

   Example of system response:

   Initialized empty Git repository in /Users/crh/Dev/mcpdoc/.git/

4. Add your repository as the remote origin of the directory you have created:

   git remote add origin <YOUR_PROJECT_REPOSITORY_URL>

5. Create the following directories for your deployment metadata model:

   mkdir -p classes/cluster
   mkdir nodes

6. Add the Reclass variable to your bash profile:

   vim ~/.bash_profile

   Example:

   export RECLASS_REPO=<PATH_TO_YOUR_LOCAL_REPOSITORY>

7. Log out and log back in.
8. Verify that your ~/.bash_profile is sourced:

   echo $RECLASS_REPO

   The command returns the value of the RECLASS_REPO variable defined in the ~/.bash_profile file.

9. Add the Mirantis Reclass module to your repository as a submodule:

   git submodule add https://github.com/Mirantis/reclass-system-salt-model ./classes/system/

   Example of system response:

   Cloning into '<your_repository>/classes/system'...
   remote: Counting objects: 8923, done.
   remote: Compressing objects: 100% (214/214), done.
   remote: Total 8923 (delta 126), reused 229 (delta 82), pack-reused 8613
   Receiving objects: 100% (8923/8923), 1.15 MiB | 826.00 KiB/s, done.
   Resolving deltas: 100% (4482/4482), done.
   Checking connectivity... done.

10. Update the submodule:

    git submodule sync
    git submodule update --init --recursive --remote

11. Add your changes to a new commit:

    git add -A

12. Commit your changes:

    git commit

13. Add your commit message. Example of system response:

    [master (root-commit) 9466ada] Initial Commit
     2 files changed, 4 insertions(+)
     create mode 100644 .gitmodules
     create mode 160000 classes/system

14. Push your changes:

    git push

15. Proceed to Create a deployment metadata model.
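For convenience, the steps above can be collected into a single shell session. The following is a minimal sketch, not part of the original procedure: the repository URL is a placeholder and the local path is hypothetical, so replace both with your own values before running.

   # Minimal sketch of the repository bootstrap described above.
   REPO_URL=<YOUR_PROJECT_REPOSITORY_URL>   # placeholder, replace with your repository URL
   REPO_DIR=~/mcp/my-cluster-model          # hypothetical local path

   mkdir -p "$REPO_DIR" && cd "$REPO_DIR"
   git init
   git remote add origin "$REPO_URL"

   # Directory layout expected by the Reclass metadata model
   mkdir -p classes/cluster nodes

   # Mirantis system-level Reclass model as a submodule
   git submodule add https://github.com/Mirantis/reclass-system-salt-model ./classes/system/
   git submodule sync
   git submodule update --init --recursive --remote

   git add -A
   git commit -m "Initial commit"
   git push --set-upstream origin master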
Create local mirrors

During an MCP deployment or MCP cluster update, you can make use of local mirrors. By default, MCP deploys local mirrors with packages in a Docker container on the DriveTrain nodes with GlusterFS volumes. MCP creates and manages the mirrors with the help of Aptly, which runs in the container named aptly in the Docker Swarm mode cluster on the DriveTrain nodes, or the cid0x nodes in terms of the Reclass model.

MCP provides a prebuilt mirror image that you can customize depending on the needs of your MCP deployment, as well as the flexibility to manually create local mirrors. Specifically, the usage of the prebuilt mirror image is essential in the case of an offline MCP deployment scenario.

Get the prebuilt mirror image

The prebuilt mirror image contains the Debian package mirror (Aptly), the Docker images mirror (Registry), the Python packages mirror (PyPI), the Git repositories mirror, and the mirror of Mirantis Ubuntu VM cloud images.

To get the prebuilt mirror image:

1. On http://images.mirantis.com, download the latest version of the prebuilt mirror VM in the mcp-offline-image-<version>.qcow2 format.
2. If required, customize the VM contents as described in Customize the prebuilt mirror image.
3. Proceed to Deploy MCP DriveTrain.

See also

• MCP Release Notes: Release artifacts section in the related MCP release documentation

Customize the prebuilt mirror image

You can customize the mirrored Aptly, Docker, and Git repositories by configuring the contents of the mirror VM defined in the cicd/aptly.yml file of the Reclass model. After you perform the customization, apply the changes to the Reclass model as described in Update mirror image.
To customize the Aptly repositories mirrors:

You can either customize the content of the already existing mirrors or specify any custom mirror required by your MCP deployment:

• To customize existing mirror sources:

  The sources for existing mirrors can be configured to use a different upstream. Each Aptly mirror specification includes parameters that define its source on the system level of the Reclass model, as well as the distribution, components, key URL, and GPG keys. To customize the content of a mirror, redefine these parameters as required.

  An example of the apt.mirantis.com mirror specification:

  _param:
    apt_mk_version: stable
    mirror_mirantis_openstack_xenial_extra_source: http://apt.mirantis.com/xenial/
    mirror_mirantis_openstack_xenial_extra_distribution: ${_param:apt_mk_version}
    mirror_mirantis_openstack_xenial_extra_components: extra
    mirror_mirantis_openstack_xenial_extra_key_url: "http://apt.mirantis.com/public.gpg"
    mirror_mirantis_openstack_xenial_extra_gpgkeys:
      - A76882D3
  aptly:
    server:
      mirror:
        mirantis_openstack_xenial_extra:
          source: ${_param:mirror_mirantis_openstack_xenial_extra_source}
          distribution: ${_param:mirror_mirantis_openstack_xenial_extra_distribution}
          components: ${_param:mirror_mirantis_openstack_xenial_extra_components}
          architectures: amd64
          key_url: ${_param:mirror_mirantis_openstack_xenial_extra_key_url}
          gpgkeys: ${_param:mirror_mirantis_openstack_xenial_extra_gpgkeys}
          publisher:
            component: extra
            distributions:
              - ubuntu-xenial/${_param:apt_mk_version}

  Note
  You can find all mirrors and their parameters that can be overridden in the aptly/server/mirror section of the Reclass System Model.

• To add new mirrors, extend the aptly:server:mirror part of the model using the structure shown in the example above.

  Note
  The aptly:server:mirror:<repo_name>:publisher parameter specifies how the custom repository will be published.

  The example of a custom mirror specification:

  aptly:
    server:
      mirror:
        my_custom_repo_main:
          source: http://my-custom-repo.com
          distribution: custom-dist
          components: main
          architectures: amd64
          key_url: http://my-custom-repo.com/public.gpg
          gpgkeys:
            - AAAA0000
          publisher:
            component: custom-component
            distributions:
              - custom-dist/stable

To customize the Docker images mirrors:

The Docker repositories are defined as an image list that includes a registry and name for each Docker image. Customize the list depending on the needs of your MCP deployment:

• Specify a different Docker registry for an existing image to be pulled from
• Add a new Docker image

Example of customization:

  docker:
    client:
      registry:
        target_registry: apt:5000
        image:
          - registry: ""
            name: registry:2
          - registry: osixia
            name: openldap:1.1.8
          - registry: tcpcloud
            name: aptly-public:latest

Note
The target_registry parameter specifies which registry the images will be pushed into.

To customize the Git repositories mirrors:

The Git repositories are defined as a repository list that includes a name and URL for each Git repository. Customize the list depending on the needs of your MCP deployment.

Example of customization:

  git:
    server:
      directory: /srv/git/
      repos:
        - name: gerritlib
          url: https://github.com/openstack-infra/gerritlib.git
        - name: jeepyb
          url: https://github.com/openstack-infra/jeepyb.git

See also

• Update mirror image
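After you redefine mirror parameters in cicd/aptly.yml, it can help to confirm that the overrides are actually rendered into the pillar of the nodes that run the aptly service. The following is a minimal sketch under the assumption that these nodes are managed by your Salt Master and carry the standard aptly:server role; the mirror name my_custom_repo_main is the hypothetical example from above, so adjust the target expression and pillar path to your model.

   # Hypothetical check from the Salt Master: show the rendered mirror definition.
   salt -C 'I@aptly:server' pillar.get aptly:server:mirror:my_custom_repo_main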
Create local mirrors manually

If you prefer to manually create local mirrors for your MCP deployment, check the MCP Release Notes: Release artifacts section in the related MCP release documentation for the list of mirrors required for the MCP deployment.

To manually create a local mirror:

1. Log in to the Salt Master node.
2. Identify where the container with the aptly service is running in the Docker Swarm cluster:

   salt -C 'I@docker:swarm:role:master' cmd.run 'docker service ps aptly|head -n3'

3. Log in to the node where the container with the aptly service is running.
4. Open the console in the container with the aptly service:

   docker exec -it <container_id> bash

5. In the console, import the public key that will be used to fetch the repository:

   gpg --no-default-keyring --keyring trustedkeys.gpg --keyserver keys.gnupg.net \
       --recv-keys <key_id>

   Note
   The public keys are typically available in the root directory of the repository and are called Release.key or Release.gpg. Also, you can download the public key from the key server keys.gnupg.net.

   For example, for the apt.mirantis.com repository:

   gpg --no-default-keyring --keyring trustedkeys.gpg --keyserver keys.gnupg.net \
       --recv-keys 24008509A76882D3

6. Create a local mirror for the specified repository:

   aptly mirror create <local_mirror_name> <repository_url> <distribution>

   Note
   You can find the list of repositories in the Repository planning section of the MCP Reference Architecture guide.

   For example, for the http://apt.mirantis.com/xenial repository:

   aptly mirror create local.apt.mirantis.xenial http://apt.mirantis.com/xenial stable

7. Update the local mirror:

   aptly mirror update <local_mirror_name>

   For example, for the local.apt.mirantis.xenial local mirror:

   aptly mirror update local.apt.mirantis.xenial

8. Verify that the local mirror has been created:

   aptly mirror show <local_mirror_name>

   For example, for the local.apt.mirantis.xenial local mirror:

   aptly mirror show local.apt.mirantis.xenial

   Example of system response:

   Name: local.apt.mirantis.xenial
   Status: In Update (PID 9167)
   Archive Root URL: http://apt.mirantis.com/xenial/
   Distribution: stable
   Components: extra, mitaka, newton, oc31, oc311, oc32, oc323, oc40, oc666, ocata, salt, salt-latest
   Architectures: amd64
   Download Sources: no
   Download .udebs: no
   Last update: never

   Information from release file:
   Architectures: amd64
   Codename: stable
   Components: extra mitaka newton oc31 oc311 oc32 oc323 oc40 oc666 ocata salt salt-latest
   Date: Mon, 28 Aug 2017 14:12:39 UTC
   Description: Generated by aptly
   Label: xenial stable
   Origin: xenial stable
   Suite: stable

9. In the Model Designer web UI, set the local_repositories parameter to True to enable the use of local mirrors.
10. Add the local_repo_url parameter manually to classes/cluster/<cluster_name>/init.yml after model generation.
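The commands from steps 5 through 8 can also be run as one session inside the aptly container. The following is a minimal sketch that reuses the example values from this section; the snapshot and publish commands at the end are part of a generic Aptly workflow rather than of this guide and are shown only as a commented hint.

   # Consolidated sketch of steps 5-8, run inside the aptly container.
   KEY_ID=24008509A76882D3            # example key from this section
   MIRROR=local.apt.mirantis.xenial   # example mirror name from this section

   gpg --no-default-keyring --keyring trustedkeys.gpg \
       --keyserver keys.gnupg.net --recv-keys "$KEY_ID"

   aptly mirror create "$MIRROR" http://apt.mirantis.com/xenial stable
   aptly mirror update "$MIRROR"
   aptly mirror show "$MIRROR"

   # In a generic Aptly workflow (not covered by this guide), the mirror is then
   # snapshotted and published before clients can consume it, for example:
   # aptly snapshot create xenial-snap from mirror "$MIRROR"
   # aptly publish snapshot xenial-snap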
See also

• Repository planning
• GitLab Repository Mirroring
• The aptly mirror

Create a deployment metadata model

In a Reclass metadata infrastructural model, the data is stored as a set of several layers of objects, where objects of a higher layer are combined with objects of a lower layer, which allows for configuration that is as flexible as required. The MCP metadata model has the following levels:

• Service level includes metadata fragments for individual services that are stored in Salt formulas and can be reused in multiple contexts.
• System level includes sets of services combined in such a way that the installation of these services results in a ready-to-use system.
• Cluster level is a set of models that combine already created system objects into different solutions. The cluster-level settings override any settings of the service and system levels and are specific to each deployment.

The model layers are firmly isolated from each other. They can be aggregated in a south-north direction using service interface agreements for objects on the same level. This approach allows reusing already created objects on both the service and system levels.

Mirantis provides the following methods to create a deployment metadata model:

Create a deployment metadata model using the Model Designer UI

This section describes how to generate the cluster level metadata model for your MCP cluster deployment using the Model Designer UI. The tool used to generate the model is Cookiecutter, a command-line utility that creates projects from templates.

Note
The Model Designer web UI is only available within Mirantis. The Mirantis deployment engineers can access the Model Designer web UI using their Mirantis corporate username and password. Alternatively, you can generate the deployment model manually as described in Create a deployment metadata model manually.

The workflow of a model creation includes the following stages:

1. Defining the model through the Model Designer web UI.
2. Tracking the execution of the model creation pipeline in the Jenkins web UI, if required.
3. Receiving the generated model at your email address or having it published directly to the project repository.

Note
If you prefer publishing to the project repository, verify that the dedicated repository is configured correctly and Jenkins can access it. See Create a project repository for details.

As a result, you get a generated deployment model that you can customize to fit specific use cases. Otherwise, you can proceed with the base infrastructure deployment.

Define the deployment model

This section instructs you on how to define the cluster level metadata model through the web UI using Cookiecutter. Eventually, you will obtain a generic deployment configuration that can be overridden afterwards.

Note
The Model Designer web UI is only available within Mirantis. The Mirantis deployment engineers can access the Model Designer web UI using their Mirantis corporate username and password. Alternatively, you can generate the deployment model manually as described in Create a deployment metadata model manually.

Note
Currently, Cookiecutter can generate models with basic configurations. You may need to manually customize your model after generation to meet specific requirements of your deployment, for example, bonding of four interfaces.

To define the deployment model:

1. Log in to the web UI.
2. Go to Integration dashboard > Models > Model Designer.
3. Click Create Model. The Create Model page opens.
4. Configure your model by selecting the corresponding tabs and editing as required:
   1. Configure General deployment parameters. Click Next.
   2. Configure Infrastructure related parameters. Click Next.
   3. Configure Product related parameters. Click Next.
5. Verify the model on the Output summary tab. Edit if required.
6. Click Confirm to trigger the Generate reclass cluster separated-products-auto Jenkins pipeline.
If required, you can track the success of the pipeline execution in the Jenkins web UI. If you selected the Send to e-mail address publication option on the General parameters tab, you will receive the generated model to the e-mail address you specified in the Publication options > Email address field on the Infrastructure parameters tab. Otherwise, the model will automatically be pushed to your project repository. ©2019, Mirantis Inc. Page 18 Mirantis Cloud Platform Deployment Guide Seealso • Create a project repository • Publish the deployment model to a project repository ©2019, Mirantis Inc. Page 19 Mirantis Cloud Platform Deployment Guide General deployment parameters The tables in this section outline the general configuration parameters that you can define for your deployment model through the Model Designer web UI. Consult the Define the deployment model section for the complete procedure. The General deployment parameters wizard includes the following sections: • Basic deployment parameters cover basic deployment parameters • Services deployment parameters define the platform you need to generate the model for • Networking deployment parameters cover the generic networking setup for a dedicated management interface and two interfaces for the workload. The two interfaces for the workload are in bond and have tagged sub-interfaces for the Control plane (Control network/VLAN) and Data plane (Tenant network/VLAN) traffic. The PXE interface is not managed and is leaved to default DHCP from installation. Setups for the NFV scenarios are not covered and require manual configuration. Basic deployment parameters Parameter Default JSON output Description Cluster name cluster_name: deployment_name The name of the cluster that will be used as cluster/ / in the project directory structure Cluster domain cluster_domain: deploy-name.local The name of the domain that will be used as part of the cluster FQDN Public host public_host: ${_param:openstack_proxy_address} The name or IP address of the public endpoint for the deployment Reclass repository reclass_repository: https://github.com/Mirantis/mk-lab-salt-model.git The URL to your project Git repository containing your models Cookiecutter template URL cookiecutter_template_url: git@github.com:Mirantis/mk2x-cookiecutter-reclass-model.git The URL to the Cookiecutter template repository Cookiecutter template branch cookiecutter_template_branch: masterThe branch of the Cookiecutter template repository to use, master by default. Use refs/tags/ to generate the model that corresponds to a specific MCP release version. For example, 2017.12. Other possible values include stable and testing. Shared Reclass URL shared_reclass_url: ssh://mcp-jenkins@gerrit.mcp.mirantis.net:29418/salt-models/reclass The URL to the shared system model to be used as a Git submodule for the MCP cluster ©2019, Mirantis Inc. Page 20 Mirantis Cloud Platform Deployment Guide MCP version mcp_version: stable Version of MCP to use, stable by default. Enter the release version number, for example, 2017.12. Other possible values are: nightly, testing. For nightly, use cookiecutter_template_branch: master. Cookiecutter template credentials cookiecutter_template_credentials: gerrit Credentials to Gerrit to fetch the Cookiecutter templates repository. 
The parameter is used by Jenkins Deployment type deployment_type: physical The supported deployment types include: • Physical for platform Publication method publication_method: email the OpenStack • Physical and Heat for the Kubernetes platform The method to obtain the template. Available options include: • Send to the e-mail address • Commit to repository Services deployment parameters Parameter Platform Default JSON output Description • platform: openstack_enabled The platform to generate the model for: • platform: kubernetes_enabled • The OpenStack platform supports OpenContrail, StackLight LMA, Ceph, CI/CD, and OSS sub-clusters enablement. If the OpenContrail is not enabled, the model will define OVS as a network engine. • The Kubernetes platform supports StackLight LMA and CI/CD sub-clusters enablement, OpenContrail networking, and presupposes Calico networking. To use the default Calico plugin, uncheck the OpenContrail enabled check box. ©2019, Mirantis Inc. Page 21 Mirantis Cloud Platform Deployment Guide StackLight enabled stacklight_enabled: 'True' Enables a StackLight LMA sub-cluster. Gainsight service enabled gainsight_service_enabled: 'False' Enables support for the Salesforce/Gainsight service Ceph enabled ceph_enabled: 'True' Enables a Ceph sub-cluster. CI/CD enabled cicd_enabled: 'True' Enables a CI/CD sub-cluster. OSS enabled oss_enabled: 'True' Enables an OSS sub-cluster. Benchmark node enabled bmk_enabled: 'False' Enables a benchmark node. False, by default. Barbican enabled barbican_enabled: 'False' Enables the Barbican service Back end for Barbican barbican_backend: dogtag The back end for Barbican Networking deployment parameters Parameter Default JSON output Description DNS Server 01 dns_server01: 8.8.8.8 The IP address of the dns01 server DNS Server 02 dns_server02: 1.1.1.1 The IP address of the dns02 server Deploy network subnet deploy_network_subnet: 10.0.0.0/24 The IP address of the deploy network with the network mask Deploy network gateway deploy_network_gateway: 10.0.0.1 The IP gateway address of the deploy network Control network subnet control_network_subnet: 10.0.1.0/24 The IP address of the control network with the network mask Tenant network subnet tenant_network_subnet: 10.0.2.0/24 The IP address of the tenant network with the network mask Tenant network gateway tenant_network_gateway: 10.0.2.1 The IP gateway address of the tenant network Control VLAN control_vlan: '10' The Control plane VLAN ID Tenant VLAN tenant_vlan: '20' The Data plane VLAN ID ©2019, Mirantis Inc. Page 22 Mirantis Cloud Platform Deployment Guide Infrastructure related parameters The tables in this section outline the infrastructure configuration parameters you can define for your deployment model through the Model Designer web UI. Consult the Define the deployment model section for the complete procedure. 
The Infrastructure deployment parameters wizard includes the following sections: • Salt Master • Ubuntu MAAS • Publication options • Kubernetes Storage • Kubernetes Networking • OpenStack cluster sizes • OpenStack or Kuberbetes networking • Ceph • CI/CD • Alertmanager email notifications • OSS • Repositories • Nova Salt Master Parameter Default JSON output Description Salt Master address salt_master_address: 10.0.1.15 The IP address of the Salt Master node on the control network Salt Master management address salt_master_management_address: 10.0.1.15 The IP address of the Salt Master node on the management network Salt Master hostname salt_master_hostname: cfg01 The hostname of the Salt Master node Ubuntu MAAS Parameter Default JSON output Description MAAS hostname maas_hostname: cfg01 The hostname of the MAAS virtual server MAAS deploy address maas_deploy_address: 10.0.0.15 The IP address of the MAAS control on the deploy network ©2019, Mirantis Inc. Page 23 Mirantis Cloud Platform Deployment Guide MAAS fabric name deploy_fabric The MAAS fabric name for the deploy network MAAS deploy network name deploy_network The MAAS deploy network name MAAS deploy range start 10.0.0.20 The first IP address of the deploy network range MAAS deploy range end 10.0.0.230 The last IP address of the deploy network range Publication options Parameter Default JSON output Email address email_address: Description The email address where the generated Reclass model will be sent to Kubernetes Storage Parameter Kubernetes rbd enabled Default JSON output False Description Enables a connection to an existing external Ceph RADOS Block Device (RBD) storage. Requires additional parameters to be configured in the Product parameters section. For details, see: Product related parameters. Kubernetes Networking Parameter Default JSON output Description Kubernetes metallb enabled False Enables the MetalLB add-on that provides a network load balancer for bare metal Kubernetes clusters using standard routing protocols. For the deployment details, see: Enable the MetalLB support. Kubernetes ingressnginx enabled False Enables the NGINX Ingress controller for Kubernetes. For the deployment details, see: Enable the NGINX Ingress controller. OpenStack cluster sizes Parameter ©2019, Mirantis Inc. Default JSON output Description Page 24 Mirantis Cloud Platform Deployment Guide OpenStack cluster sizes openstack_cluster_size: compact A predefined number of compute nodes for an OpenStack cluster. Available options include: few for a minimal cloud, up to 50 for a compact cloud, up to 100 for a small cloud, up to 200 for a medium cloud, up to 500 for a large cloud. OpenStack or Kuberbetes networking Parameter Default JSON output Description OpenStack network engine openstack_network_engine: opencontrail Available options include opencontrail and ovs. NFV feature generation is experimental. The OpenStack Nova compute NFV req enabled parameter is for enabling Hugepages and CPU pinning without DPDK. Kubernetes network engine kubernetes_network_engine: opencontrail Available options include calico and opencontrail. This parameter is set automatically. If you uncheck the OpenContrail enabled field in the General parameters section, the default Calico plugin is set as the Kubernetes networking. 
Ceph Parameter Default JSON output Description Ceph version luminous The Ceph version Ceph OSD back end bluestore The OSD back-end type CI/CD Parameter Default JSON output Description OpenLDAP enabled openldap_enabled: 'True' Enables OpenLDAP authentication Keycloak service enabled keycloak_enabled: 'False' Enables the Keycloak service Alertmanager email notifications ©2019, Mirantis Inc. Page 25 Mirantis Cloud Platform Deployment Guide Parameter Default JSON output Description Alertmanager email notifications enabled alertmanager_notification_email_enabled: Enables 'False' email notifications using the Alertmanager service Alertmanager notification email from alertmanager_notification_email_from: Alertmanager john.doe@example.org email notifications sender Alertmanager notification email to alertmanager_notification_email_to: jane.doe@example.org Alertmanager email notifications receiver Alertmanager email notifications SMTP host alertmanager_notification_email_hostname: The address 127.0.0.1 of the SMTP host for alerts notifications Alertmanager email notifications SMTP port alertmanager_notification_email_port: 587 The address of the SMTP port for alerts notifications Alertmanager email notifications with TLS alertmanager_notification_email_require_tls: Enable'True' using of the SMTP server under TLS (for alerts notifications) Alertmanager notification email password alertmanager_notification_email_password: The sender-mail password password for alerts notifications OSS Parameter Default JSON output Description OSS CIS enabled cis_enabled: 'True' Enables the Cloud Intelligence Service OSS Security Audit enabled oss_security_audit_enabled: 'True' Enables the Security Audit service OSS Cleanup Service enabled oss_cleanup_service_enabled: 'True' Enables the Cleanup Service OSS SFDC support enabled oss_sfdc_support_enabled: 'True'` Enables synchronization of your SalesForce account with OSS Repositories Parameter ©2019, Mirantis Inc. Default JSON output Description Page 26 Mirantis Cloud Platform Deployment Guide Local repositories local_repositories: 'False' If true, changes repositories URLs to local mirrors. The local_repo_url parameter should be added manually after model generation. Nova Parameter Default JSON output Nova VNC TLS enabled nova_vnc_tls_enabled: 'False' ©2019, Mirantis Inc. Description If True, enables the TLS encryption for communications between the OpenStack compute nodes and VNC clients. Page 27 Mirantis Cloud Platform Deployment Guide Product related parameters The tables in this section outline the product configuration parameters including infrastructure, CI/CD, OpenContrail, OpenStack, Kubernetes, Stacklight LMA, and Ceph hosts details. You can configure your product infrastructure for the deployment model through the Model Designer web UI. Consult the Define the deployment model section for the complete procedure. 
The Product deployment parameters wizard includes the following sections: • Infrastructure product parameters • CI/CD product parameters • OSS parameters • OpenContrail service parameters • OpenStack product parameters • Kubernetes product parameters • StackLight LMA product parameters • Ceph product parameters Infrastructure product parameters Section Default JSON output Description Infra kvm01 hostname infra_kvm01_hostname: kvm01 Infra kvm01 control address infra_kvm01_control_address: 10.0.1.241 The IP address of the first KVM node on the control network Infra kvm01 deploy address infra_kvm01_deploy_address: 10.0.0.241 The IP address of the first KVM node on the management network Infra kvm02 hostname infra_kvm02_hostname: kvm02 The hostname of the second KVM node Infra kvm02 control address infra_kvm02_control_address: 10.0.1.242 The IP address of the second KVM node on the control network Infra kvm02 deploy address infra_kvm02_deploy_address: 10.0.0.242 The IP address of the second KVM node on the management network Infra kvm03 hostname infra_kvm03_hostname: kvm03 The hostname of the third KVM node Infra kvm03 control address infra_kvm03_control_address: 10.0.1.243 The IP address of the third KVM node on the control network ©2019, Mirantis Inc. The hostname of the first KVM node Page 28 Mirantis Cloud Platform Deployment Guide Infra kvm03 deploy address infra_kvm03_deploy_address: 10.0.0.243 The IP address of the third KVM node on the management network Infra KVM VIP address infra_kvm_vip_address: 10.0.1.240 The virtual IP address of the KVM cluster Infra deploy NIC infra_deploy_nic: eth0 The NIC used for PXE of the KVM hosts Infra primary first NIC infra_primary_first_nic: eth1 The first NIC in the KVM bond Infra primary second NIC infra_primary_second_nic: eth2 The second NIC in the KVM bond Infra bond mode infra_bond_mode: active-backup The bonding mode for the KVM nodes. Available options include: • active-backup • balance-xor • broadcast • 802.3ad • balance-ltb • balance-alb To decide which bonding mode best suits the needs of your deployment, you can consult the official Linux bonding documentation. OpenStack compute count openstack_compute_count: '100' The number of compute nodes to be generated. The naming convention for compute nodes is cmp000 cmp${openstack_compute_count}. If the value is 100, for example, the host names for the compute nodes expected by Salt include cmp000, cmp001, ..., cmp100. CI/CD product parameters Section CI/CD control node01 address ©2019, Mirantis Inc. 
Default JSON output cicd_control_node01_address: 10.0.1.91 Description The IP address of the first CI/CD control node Page 29 Mirantis Cloud Platform Deployment Guide CI/CD control node01 hostname cicd_control_node01_hostname: cid01 The hostname of the first CI/CD control node CI/CD control node02 address cicd_control_node02_address: 10.0.1.92 The IP address of the second CI/CD control nod CI/CD control node02 hostname cicd_control_node02_hostname: cid02 The hostname of the second CI/CD control node CI/CD control node03 address cicd_control_node03_address: 10.0.1.93 The IP address of the third CI/CD control node CI/CD control node03 hostname cicd_control_node03_hostname: cid03 The hostname of the third CI/CD control node CI/CD control VIP address cicd_control_vip_address: 10.0.1.90 The virtual IP address of the CI/CD control cluster CI/CD control VIP hostname cicd_control_vip_hostname: cid The hostname of the CI/CD control cluster OSS parameters Section Default JSON output Description OSS address oss_address: ${_param:stacklight_monitor_address} VIP address of the OSS cluster OSS node01 address oss_node01_addres: ${_param:stacklight_monitor01_address} The IP address of the first OSS node OSS node02 address oss_node02_addres: ${_param:stacklight_monitor02_address} The IP address of the second OSS node OSS node03 address oss_node03_addres: ${_param:stacklight_monitor03_address} The IP address of the third OSS node OSS OpenStack auth URL oss_openstack_auth_url: http://172.17.16.190:5000/v3 OpenStack auth URL for OSS tools OSS OpenStack username oss_openstack_username: admin Username for access to OpenStack OSS OpenStack password oss_openstack_password: nova Password for access to OpenStack OSS OpenStack project oss_openstack_project: admin OpenStack project name OSS OpenStack domain ID oss_openstack_domain_id: default OpenStack domain ID OSS OpenStack SSL verify oss_openstack_ssl_verify: 'False' OpenStack SSL verification mechanism OSS OpenStack certificate oss_openstack_cert: '' OpenStack plain CA certificate ©2019, Mirantis Inc. Page 30 Mirantis Cloud Platform Deployment Guide OSS OpenStack credentials path oss_openstack_credentials_path: /srv/volumes/rundeck/storage OpenStack credentials path OSS OpenStack endpoint type oss_openstack_endpoint_type: public OSS Rundeck external datasource enabled oss_rundeck_external_datasource_enabled:Enabled False external datasource (PostgreSQL) for Rundeck OSS Rundeck forward iframe rundeck_forward_iframe: False OSS Rundeck iframe host rundeck_iframe_host: ${_param:openstack_proxy_address} IP address for Rundeck configuration for proxy OSS Rundeck iframe port rundeck_iframe_port: ${_param:haproxy_rundeck_exposed_port} Port for Rundeck through proxy OSS Rundeck iframe ssl rundeck_iframe_ssl: False Secure Rundeck iframe with SSL OSS webhook from oss_webhook_from: TEXT Required. Notification email sender. OSS webhook recipients oss_webhook_recipients: TEXT Required. Notification email recipients. OSS Pushkin SMTP host oss_pushkin_smtp_host: 127.0.0.1 The address of SMTP host for alerts notifications OSS Pushkin SMTP port oss_pushkin_smtp_port: 587 The address of SMTP port for alerts notifications OSS notification SMTP with TLS oss_pushkin_smtp_use_tls: 'True' Enable using of the SMTP server under TLS (for alert notifications) OSS Pushkin email sender password oss_pushkin_email_sender_password: password The sender-mail password for alerts notifications SFDC auth URL N/A Authentication URL for the Salesforce service. 
For example, sfdc_auth_url: https://login.salesforce.com/s SFDC username N/A Username for logging in to the Salesforce service. For example, sfdc_username: user@example.net SFDC password N/A Password for logging in to the Salesforce service. For example, sfdc_password: secret ©2019, Mirantis Inc. Interface type of OpenStack endpoint for service connections Forward iframe of Rundeck through proxy Page 31 Mirantis Cloud Platform Deployment Guide SFDC consumer key N/A Consumer Key in Salesforce required for Open Authorization (OAuth). For example, sfdc_consumer_key: example_consumer_ke SFDC consumer secret N/A Consumer Secret from Salesforce required for OAuth. For example, sfdc_consumer_secret: example_consumer_ SFDC organization ID N/A Salesforce Organization ID in Salesforce required for OAuth. For example, sfdc_organization_id: example_organization SFDC environment ID sfdc_environment_id: 0 The cloud ID in Salesforce SFDC Sandbox enabled sfdc_sandbox_enabled: True Sandbox environments are isolated from production Salesforce clouds. Enable sandbox to use it for testing and evaluation purposes. Verify that you specify the correct sandbox-url value in the sfdc_auth_url parameter. Otherwise, set the parameter to False. OSS CIS username oss_cis_username: ${_param:oss_openstack_username} CIS username OSS CIS password oss_cis_password: ${_param:oss_openstack_password} CIS password OSS CIS OpenStack auth URL oss_cis_os_auth_url: ${_param:oss_openstack_auth_url} CIS OpenStack authentication URL OSS CIS OpenStack endpoint type oss_cis_endpoint_type: ${_param:oss_openstack_endpoint_type} CIS OpenStack endpoint type OSS CIS project oss_cis_project: ${_param:oss_openstack_project} CIS OpenStack project OSS CIS domain ID oss_cis_domain_id: ${_param:oss_openstack_domain_id} CIS OpenStack domain ID OSS CIS certificate oss_cis_cacert: ${_param:oss_openstack_cert} OSS CIS certificate OSS CIS jobs repository oss_cis_jobs_repository: https://github.com/Mirantis/rundeck-cis-jobs.git CIS jobs repository OSS CIS jobs repository branch oss_cis_jobs_repository_branch: master OSS Security Audit username oss_security_audit_username: ${_param:oss_openstack_username} Security audit service username ©2019, Mirantis Inc. 
CIS jobs repository branch Page 32 Mirantis Cloud Platform Deployment Guide OSS Security Audit password oss_security_audit_password: ${_param:oss_openstack_password} Security Audit service password OSS Security Audit auth URL name: oss_security_audit_os_auth_url: ${_param:oss_openstack_auth_url} Security Audit service authentication URL OSS Security Audit project oss_security_audit_project: ${_param:oss_openstack_project} Security Audit project name OSS Security Audit user domain ID oss_security_audit_user_domain_id: ${_param:oss_openstack_domain_id} Security Audit user domain ID OSS Security Audit project domain ID oss_security_audit_project_domain_id: ${_param:oss_openstack_domain_id} Security Audit project domain ID OSS Security Audit OpenStack credentials path oss_security_audit_os_credentials_path: ${_param:oss_openstack_credentials_path} Path to credentials for OpenStack cloud for the Security Audit service OSS Cleanup service Openstack credentials path oss_cleanup_service_os_credentials_path: ${_param:oss_openstack_credentials_path} Path to credentials for OpenStack cloud for the Cleanup service OSS Cleanup service username oss_cleanup_username: ${_param:oss_openstack_username} Cleanup service username OSS Cleanup service password oss_cleanup_password: ${_param:oss_openstack_password} Cleanup service password OSS Cleanup service auth URL oss_cleanup_service_os_auth_url: ${_param:oss_openstack_auth_url} Cleanup service authentication URL OSS Cleanup service project oss_cleanup_project: ${_param:oss_openstack_project} Cleanup service project name OSS Cleanup service project domain ID oss_cleanup_project_domain_id: ${_param:oss_openstack_domain_id} Cleanup service project domain ID OpenContrail service parameters Section Default JSON output Description OpenContrail analytics address opencontrail_analytics_address: 10.0.1.30 The virtual IP address of the OpenContrail analytics cluster OpenContrail analytics hostname opencontrail_analytics_hostname: nal OpenContrail analytics node01 address opencontrail_analytics_node01_address: 10.0.1.31 The virtual IP address of the first OpenContrail analytics node on the control network ©2019, Mirantis Inc. 
The hostname of the OpenContrail analytics cluster Page 33 Mirantis Cloud Platform Deployment Guide OpenContrail analytics node01 hostname opencontrail_analytics_node01_hostname: The nal01 hostname of the first OpenContrail analytics node on the control network OpenContrail analytics node02 address opencontrail_analytics_node02_address: 10.0.1.32 The virtual IP address of the second OpenContrail analytics node on the control network OpenContrail analytics node02 hostname opencontrail_analytics_node02_hostname: The nal02 hostname of the second OpenContrail analytics node on the control network OpenContrail analytics node03 address opencontrail_analytics_node03_address: 10.0.1.33 The virtual IP address of the third OpenContrail analytics node on the control network OpenContrail analytics node03 hostname opencontrail_analytics_node03_hostname: The nal03 hostname of the second OpenContrail analytics node on the control network OpenContrail control address opencontrail_control_address: 10.0.1.20 The virtual IP address of the OpenContrail control cluster OpenContrail control hostname opencontrail_control_hostname: ntw The hostname of the OpenContrail control cluster OpenContrail control node01 address opencontrail_control_node01_address: 10.0.1.21 The virtual IP address of the first OpenContrail control node on the control network OpenContrail control node01 hostname opencontrail_control_node01_hostname: ntw01 The hostname of the first OpenContrail control node on the control network OpenContrail control node02 address opencontrail_control_node02_address: 10.0.1.22 The virtual IP address of the second OpenContrail control node on the control network OpenContrail control node02 hostname opencontrail_control_node02_hostname: ntw02 The hostname of the second OpenContrail control node on the control network OpenContrail control node03 address opencontrail_control_node03_address: 10.0.1.23 The virtual IP address of the third OpenContrail control node on the control network OpenContrail control node03 hostname opencontrail_control_node03_hostname: ntw03 The hostname of the third OpenContrail control node on the control network OpenContrail router01 address opencontrail_router01_address: 10.0.1.100The IP address of the first OpenContrail gateway router for BGP ©2019, Mirantis Inc. 
Page 34 Mirantis Cloud Platform Deployment Guide OpenContrail router01 hostname opencontrail_router01_hostname: rtr01 The hostname of the first OpenContrail gateway router for BGP OpenContrail router02 address opencontrail_router02_address: 10.0.1.101The IP address of the second OpenContrail gateway router for BGP OpenContrail router02 hostname opencontrail_router02_hostname: rtr02 The hostname of the second OpenContrail gateway router for BGP OpenStack product parameters Section Default JSON output Description Compute primary first NIC compute_primary_first_nic: eth1 The first NIC in the OpenStack compute bond Compute primary second NIC compute_primary_second_nic: eth2 The second NIC in the OpenStack compute bond Compute bond mode compute_bond_mode: active-backup The bond mode for the compute nodes OpenStack compute rack01 hostname openstack_compute_rack01_hostname: cmp The compute hostname prefix OpenStack compute rack01 single subnet openstack_compute_rack01_single_subnet:The 10.0.0.1 Control plane network prefix for compute nodes OpenStack compute rack01 tenant subnet openstack_compute_rack01_tenant_subnet:The 10.0.2 data plane netwrok prefix for compute nodes OpenStack control address openstack_control_address: 10.0.1.10 The virtual IP address of the control cluster on the control network OpenStack control hostname openstack_control_hostname: ctl The hostname of the VIP control cluster OpenStack control node01 address openstack_control_node01_address: 10.0.1.11 The IP address of the first control node on the control network OpenStack control node01 hostname openstack_control_node01_hostname: ctl01The hostname of the first control node OpenStack control node02 address openstack_control_node02_address: 10.0.1.12 The IP address of the second control node on the control network ©2019, Mirantis Inc. 
Page 35 Mirantis Cloud Platform Deployment Guide OpenStack control node02 hostname openstack_control_node02_hostname: ctl02The hostname of the second control node OpenStack control node03 address openstack_control_node03_address: 10.0.1.13 The IP address of the third control node on the control network OpenStack control node03 hostname openstack_control_node03_hostname: ctl03The hostname of the third control node OpenStack database address openstack_database_address: 10.0.1.50 The virtual IP address of the database cluster on the control network OpenStack database hostname openstack_database_hostname: dbs The hostname of the VIP database cluster OpenStack database node01 address openstack_database_node01_address: 10.0.1.51 The IP address of the first database node on the control network OpenStack database node01 hostname openstack_database_node01_hostname: dbs01 The hostname of the first database node OpenStack database node02 address openstack_database_node02_address: 10.0.1.52 The IP address of the second database node on the control network OpenStack database node02 hostname openstack_database_node02_hostname: dbs02 The hostname of the second database node OpenStack database node03 address openstack_database_node03_address: 10.0.1.53 The IP address of the third database node on the control network OpenStack database node03 hostname openstack_database_node03_hostname: dbs03 The hostname of the third database node OpenStack message queue address openstack_message_queue_address: 10.0.1.40 The vitrual IP address of the message queue cluster on the control network OpenStack message queue hostname openstack_message_queue_hostname: msgThe hostname of the VIP message queue cluster OpenStack message queue node01 address openstack_message_queue_node01_address: The10.0.1.41 IP address of the first message queue node on the control network ©2019, Mirantis Inc. Page 36 Mirantis Cloud Platform Deployment Guide OpenStack message queue node01 hostname openstack_message_queue_node01_hostname: The hostname msg01 of the first message queue node OpenStack message queue node02 address openstack_message_queue_node02_address: The10.0.1.42 IP address of the second message queue node on the control network OpenStack message queue node02 hostname openstack_message_queue_node02_hostname: The hostname msg02 of the second message queue node OpenStack message queue node03 address openstack_message_queue_node03_address: The10.0.1.43 IP address of the third message wueue node on the control network OpenStack message queue node03 hostname openstack_message_queue_node03_hostname: The hostname msg03 of the third message queue node OpenStack benchmark node01 address openstack_benchmark_node01_address: 10.0.1.95 The IP address of a benchmark node on the control network OpenStack benchmark node01 hostname openstack_benchmark_node01_hostname: The bmk01 hostname of a becnhmark node Openstack octavia enabled False Enable the Octavia Load Balancing-as-a-Service for OpenStack. Requires OVS OpenStack to be enabled as a networking engine in Infrastructure related parameters. 
OpenStack proxy address openstack_proxy_address: 10.0.1.80 The virtual IP address of a proxy cluster on the control network OpenStack proxy hostname openstack_proxy_hostname: prx The hostname of the VIP proxy cluster OpenStack proxy node01 address openstack_proxy_node01_address: 10.0.1.81 The IP address of the first proxy node on the control network OpenStack proxy node01 hostname openstack_proxy_node01_hostname: prx01The hostname of the first proxy node OpenStack proxy node02 address openstack_proxy_node02_address: 10.0.1.82 The IP address of the second proxy node on the control network OpenStack proxy node02 hostname openstack_proxy_node02_hostname: prx02The hostname of the second proxy node ©2019, Mirantis Inc. Page 37 Mirantis Cloud Platform Deployment Guide OpenStack version openstack_version: pike The version of OpenStack to be deployed Manila enabled False Enable the Manila OpenStack Shared File Systems service Manila share backend LVM Enable the LVM Manila share back end Manila lvm volume name manila-volume The Manila LVM volume name Manila lvm devices /dev/sdb,/dev/sdc The comma-separated paths to the Manila LVM devices Tenant Telemetry enabled false Enable Tenant Telemetry based on Ceilometer, Aodh, Panko, and Gnocchi. Disabled by default. If enabled, you can choose the Gnocchi aggregation storage type for metrics: ceph, file, or redis storage drivers. Tenant Telemetry does not support integration with StackLight LMA. Gnocchi aggregation storage gnocchi_aggregation_storage: file Storage for aggregated metrics Designate enabled designate_enabled: 'False' Enables OpenStack DNSaaS based on Designate Designate back end designate_backend: powerdns The DNS back end for Designate OpenStack internal protocol openstack_internal_protocol: http The protocol on internal OpenStack endpoints Kubernetes product parameters Section Default JSON output Description Calico cni image artifactory.mirantis.com/docker-prod-local/mirantis/projectcalico/calico/cni:latest The Calico image with CNI binaries Calico enable nat calico_enable_nat: 'True' Calico image artifactory.mirantis.com/docker-prod-local/mirantis/projectcalico/calico/node:latest The Calico image Calico netmask 16 ©2019, Mirantis Inc. If selected, NAT will be enabled for Calico The netmask of the Calico network Page 38 Mirantis Cloud Platform Deployment Guide Calico network 192.168.0.0 Calicoctl image artifactory.mirantis.com/docker-prod-local/mirantis/projectcalico/calico/ctl:latest The image with the calicoctl command etcd SSL etcd_ssl: 'True' Hyperkube image artifactory.mirantis.com/docker-prod-local/mirantis/kubernetes/hyperkube-amd64:v1.4 The Kubernetes image Kubernetes virtlet enabled False Optional. Virtlet enables Kubernetes to run virtual machines. For the enablement details, see Enable Virtlet. Virtlet with OpenContrail is available as technical preview. Use such configuration for testing and evaluation purposes only. Kubernetes containerd enabled False Optional. Enables the containerd runtime to execute containers and manage container images on a node instead of Docker. Available as technical preview only. Kubernetes externaldns enabled False If selected, ExternalDNS will be deployed. For details, see: Deploy ExternalDNS for Kubernetes. Kubernetes rbd monitors 10.0.1.66:6789,10.0.1.67:6789,10.0.1.68:6789 A comma-separated list of the Ceph RADOS Block Device (RBD) monitors in a Ceph cluster that will be connected to Kubernetes. 
This parameter becomes available if you select the Kubernetes rbd enabled option in the Infrastructure parameters section. Kubernetes rbd pool kubernetes ©2019, Mirantis Inc. The network that is used for the Kubernetes containers If selected, the SSL for etcd will be enabled A pool in a Ceph cluster that will be connected to Kubernetes. This parameter becomes available if you select the Kubernetes rbd enabled option in the Infrastructure parameters section. Page 39 Mirantis Cloud Platform Deployment Guide Kubernetes rbd user id kubernetes A Ceph RBD user ID of a Ceph cluster that will be connected to Kubernetes. This parameter becomes available if you select the Kubernetes rbd enabled option in the Infrastructure parameters section. Kubernetes rbd user key kubernetes_key A Ceph RBD user key of a Ceph cluster that will be connected to Kubernetes. This parameter becomes available if you select the Kubernetes rbd enabled option in the Infrastructure parameters section. Kubernetes compute node01 hostname cmp01 The hostname of the first Kubernetes compute node Kubernetes compute node01 deploy address 10.0.0.101 The IP address of the first Kubernetes compute node Kubernetes compute node01 single address 10.0.1.101 The IP address of the first Kubernetes compute node on the Control plane Kubernetes compute node01 tenant address 10.0.2.101 The tenant IP address of the first Kubernetes compute node Kubernetes compute node02 hostname cmp02 The hostname of the second Kubernetes compute node Kubernetes compute node02 deploy address 10.0.0.102 The IP address of the second Kubernetes compute node on the deploy network Kubernetes compute node02 single address 10.0.1.102 The IP address of the second Kubernetes compute node on the control plane Kubernetes control address 10.0.1.10 The Keepalived VIP of the Kubernetes control nodes Kubernetes control node01 address 10.0.1.11 The IP address of the first Kubernetes controller node Kubernetes control node01 deploy address 10.0.0.11 The IP address of the first Kubernetes control node on the deploy network ©2019, Mirantis Inc. 
Page 40 Mirantis Cloud Platform Deployment Guide Kubernetes control node01 hostname ctl01 The hostname of the first Kubernetes controller node Kubernetes control node01 tenant address 10.0.2.11 The tenant IP address of the first Kubernetes controller node Kubernetes control node02 address 10.0.1.12 The IP address of the second Kubernetes controller node Kubernetes control node02 deploy address 10.0.0.12 The IP address of the second Kubernetes control node on the deploy network Kubernetes control node02 hostname ctl02 The hostname of the second Kubernetes controller node Kubernetes control node02 tenant address 10.0.2.12 The tenant IP address of the second Kubernetes controller node Kubernetes control node03 address 10.0.1.13 The IP address of the third Kubernetes controller node Kubernetes control node03 tenant address 10.0.2.13 The tenant IP address of the third Kubernetes controller node Kubernetes control node03 deploy address 10.0.0.13 The IP address of the third Kubernetes control node on the deploy network Kubernetes control node03 hostname ctl03 The hostname of the third Kubernetes controller node OpenContrail public ip range 10.151.0.0/16 The public floating IP pool for OpenContrail Opencontrail private ip range 10.150.0.0/16 The range of private OpenContrail IPs used for pods Kubernetes keepalived vip interface ens4 The Kubernetes interface used for the Keepalived VIP StackLight LMA product parameters Section StackLight LMA log address ©2019, Mirantis Inc. Default JSON output stacklight_log_address: 10.167.4.60 Description The virtual IP address of the StackLight LMA logging cluster Page 41 Mirantis Cloud Platform Deployment Guide StackLight LMA log hostname stacklight_log_hostname: log StackLight LMA log node01 address stacklight_log_node01_address: 10.167.4.61 The IP address of the first StackLight LMA logging node StackLight LMA log node01 hostname stacklight_log_node01_hostname: log01 StackLight LMA log node02 address stacklight_log_node02_address: 10.167.4.62 The IP address of the second StackLight LMA logging node StackLight LMA log node02 hostname stacklight_log_node02_hostname: log02 StackLight LMA log node03 address stacklight_log_node03_address: 10.167.4.63 The IP address of the third StackLight LMA logging node StackLight LMA log node03 hostname stacklight_log_node03_hostname: log03 The hostname of the third StackLight LMA logging node StackLight LMA monitor address stacklight_monitor_address: 10.167.4.70 The virtual IP address of the StackLight LMA monitoring cluster StackLight LMA monitor hostname stacklight_monitor_hostname: mon The hostname of the StackLight LMA monitoring cluster StackLight LMA monitor node01 address stacklight_monitor_node01_address: 10.167.4.71 The IP address of the first StackLight LMA monitoring node StackLight LMA monitor node01 hostname stacklight_monitor_node01_hostname: mon01 The hostname of the first StackLight LMA monitoring node StackLight LMA monitor node02 address stacklight_monitor_node02_address: 10.167.4.72 The IP address of the second StackLight LMA monitoring node StackLight LMA monitor node02 hostname stacklight_monitor_node02_hostname: mon02 The hostname of the second StackLight LMA monitoring node StackLight LMA monitor node03 address stacklight_monitor_node03_address: 10.167.4.73 The IP address of the third StackLight LMA monitoring node StackLight LMA monitor node03 hostname stacklight_monitor_node03_hostname: mon03 The hostname of the third StackLight LMA monitoring node StackLight LMA telemetry address 
stacklight_telemetry_address: 10.167.4.85 The virtual IP address of a StackLight LMA telemetry cluster ©2019, Mirantis Inc. The hostname of the StackLight LMA logging cluster The hostname of the first StackLight LMA logging node The hostname of the second StackLight LMA logging node Page 42 Mirantis Cloud Platform Deployment Guide StackLight LMA telemetry hostname stacklight_telemetry_hostname: mtr StackLight LMA telemetry node01 address stacklight_telemetry_node01_address: 10.167.4.86 The IP address of the first StackLight LMA telemetry node StackLight LMA telemetry node01 hostname stacklight_telemetry_node01_hostname: mtr01 The hostname of the first StackLight LMA telemetry node StackLight LMA telemetry node02 address stacklight_telemetry_node02_address: 10.167.4.87 The IP address of the second StackLight LMA telemetry node StackLight LMA telemetry node02 hostname stacklight_telemetry_node02_hostname: mtr02 The hostname of the second StackLight LMA telemetry node StackLight LMA telemetry node03 address stacklight_telemetry_node03_address: 10.167.4.88 The IP address of the third StackLight LMA telemetry node StackLight LMA telemetry node03 hostname stacklight_telemetry_node03_hostname: mtr03 The hostname of the third StackLight LMA telemetry node Long-term storage type stacklight_long_term_storage_type: prometheus The type of the long-term storage OSS webhook login ID oss_webhook_login_id: 13 The webhook login ID for alerts notifications OSS webhook app ID oss_webhook_app_id: 24 The webhook application ID for alerts notifications Gainsight account ID N/A The customer account ID in Salesforce Gainsight application organization ID N/A Mirantis organization ID in Salesforce Gainsight access key N/A The access key for the Salesforce Gainsight service Gainsight CSV upload URL N/A The URL to Gainsight API Gainsight environment ID N/A The customer environment ID in Salesforce Gainsight job ID N/A The job ID for the Salesforce Gainsight service ©2019, Mirantis Inc. 
The hostname of a StackLight LMA telemetry cluster Page 43 Mirantis Cloud Platform Deployment Guide Gainsight login N/A The login for the Salesforce Gainsight service Ceph product parameters Section Default JSON output Description Ceph RGW address ceph_rgw_address: 172.16.47.75 The IP address of the Ceph RGW storage cluster Ceph RGW hostname ceph_rgw_hostname: rgw The hostname of the Ceph RGW storage cluster Ceph MON node01 address ceph_mon_node01_address: 172.16.47.66 The IP address of the first Ceph MON storage node Ceph MON node01 hostname ceph_mon_node01_hostname: cmn01 Ceph MON node02 address ceph_mon_node02_address: 172.16.47.67 The IP address of the second Ceph MON storage node Ceph MON node02 hostname ceph_mon_node02_hostname: cmn02 Ceph MON node03 address ceph_mon_node03_address: 172.16.47.68 The IP address of the third Ceph MON storage node Ceph MON node03 hostname ceph_mon_node03_hostname: cmn03 Ceph RGW node01 address ceph_rgw_node01_address: 172.16.47.76 The IP address of the first Ceph RGW node Ceph RGW node01 hostname ceph_rgw_node01_hostname: rgw01 Ceph RGW node02 address ceph_rgw_node02_address: 172.16.47.77 The IP address of the second Ceph RGW storage node Ceph RGW node02 hostname ceph_rgw_node02_hostname: rgw02 Ceph RGW node03 address ceph_rgw_node03_address: 172.16.47.78 The IP address of the third Ceph RGW storage node Ceph RGW node03 hostname ceph_rgw_node03_hostname: rgw03 The hostname of the third Ceph RGW storage node Ceph OSD count ceph_osd_count: 10 The number of OSDs Ceph OSD rack01 hostname ceph_osd_rack01_hostname: osd The OSD rack01 hostname Ceph OSD rack01 single subnet ceph_osd_rack01_single_subnet: 172.16.47The control plane network prefix for Ceph OSDs ©2019, Mirantis Inc. The hostname of the first Ceph MON storage node The hostname of the second Ceph MON storage node The hostname of the third Ceph MON storage node The hostname of the first Ceph RGW storage node The hostname of the second Ceph RGW storage node Page 44 Mirantis Cloud Platform Deployment Guide Ceph OSD rack01 back-end subnet ceph_osd_rack01_backend_subnet: 172.16.48 The deploy network prefix for Ceph OSDs Ceph public network ceph_public_network: 172.16.47.0/24 The IP address of Ceph public network with the network mask Ceph cluster network ceph_cluster_network: 172.16.48.70/24 The IP address of Ceph cluster network with the network mask Ceph OSD block DB size ceph_osd_block_db_size: 20 The Ceph OSD block DB size in GB Ceph OSD data disks ceph_osd_data_disks: /dev/vdd,/dev/vde The list of OSD data disks Ceph OSD journal or block DB disks ceph_osd_journal_or_block_db_disks: /dev/vdb,/dev/vdc The list of journal or block disks ©2019, Mirantis Inc. Page 45 Mirantis Cloud Platform Deployment Guide Publish the deployment model to a project repository If you selected the option to receive the generated deployment model to your email address and customized it as required, you need to apply the model to the project repository. To publish the metadata model, push the changes to the project Git repository: git add * git commit –m "Initial commit" git pull -r git push --set-upstream origin master Seealso Deployment automation ©2019, Mirantis Inc. Page 46 Mirantis Cloud Platform Deployment Guide Create a deployment metadata model manually You can create a deployment metadata model manually by populating the Cookiecutter template with the required information and generating the model. 
For simplicity, perform all the procedures described in this section on the same machine and in the same directory where you have configured your Git repository.

Before performing this task, you need to have a networking design prepared for your environment, as well as an understanding of the traffic flow in OpenStack. For more information, see MCP Reference Architecture.

For the purpose of example, the following network configuration is used:

Example of network design with OpenContrail

Network              IP range           Gateway            VLAN
Management network   172.17.17.192/26   172.17.17.193      130
Control network      172.17.18.0/26     N/A                131
Data network         172.17.18.128/26   172.17.18.129      133
Proxy network        172.17.18.64/26    172.17.18.65       132
Tenant network       172.17.18.192/26   172.17.18.193      134
Salt Master          172.17.18.5/26     172.17.17.197/26   N/A

This Cookiecutter template is used as an example throughout this section.

Define the Salt Master node

When you deploy your first MCP cluster, you need to define your Salt Master node. For the purpose of this example, the following bash profile variables are used:

export RECLASS_REPO="/Users/crh/MCP-DEV/mcpdoc"
export ENV_NAME="mcpdoc"
export ENV_DOMAIN="mirantis.local"
export SALT_MASTER_NAME="cfg01"

Note
Mirantis highly recommends populating ~/.bash_profile with the parameters of your environment to protect your configuration in the event of reboots.

Define the Salt Master node:

1. Log in to the computer on which you configured the Git repository.
2. Using the variables from your bash profile, create a $SALT_MASTER_NAME.$ENV_DOMAIN.yml file in the nodes/ directory with the Salt Master node definition:

   classes:
   - cluster.$ENV_NAME.infra.config
   parameters:
     _param:
       linux_system_codename: xenial
       reclass_data_revision: master
     linux:
       system:
         name: $SALT_MASTER_NAME
         domain: $ENV_DOMAIN

3. Add the changes to a new commit:

   git add -A

4. Commit your changes:

   git commit -m "your_message"

5. Push your changes:

   git push

Download the Cookiecutter templates

Use the Cookiecutter templates to generate infrastructure models for your future MCP cluster deployments. Cookiecutter is a command-line utility that creates projects from cookiecutters, which are project templates. The MCP template repository contains a number of infrastructure models for CI/CD, infrastructure nodes, Kubernetes, OpenContrail, StackLight LMA, and OpenStack.

Note
To access the template repository, you need to have the corresponding privileges. Contact Mirantis Support for further details.

To download the Cookiecutter templates:

1. Install the latest Cookiecutter:

   pip install cookiecutter

2. Clone the template repository to your working directory:

   git clone https://github.com/Mirantis/mk2x-cookiecutter-reclass-model.git

3. Create a symbolic link:

   mkdir $RECLASS_REPO/.cookiecutters
   ln -sv $RECLASS_REPO/mk2x-cookiecutter-reclass-model/cluster_product/* $RECLASS_REPO/.cookiecutters/

Now, you can generate the required metadata model for your MCP cluster deployment.

Seealso
Generate an OpenStack environment metadata model

Generate an OpenStack environment metadata model

This section describes how to generate the OpenStack environment model using the cluster_product Cookiecutter template.
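Before editing anything, it can help to confirm that the product templates linked in the previous section are in place. The following check is illustrative only and assumes the symbolic links were created under $RECLASS_REPO as shown above; the exact set of directories depends on the template repository revision:

   ls $RECLASS_REPO/.cookiecutters
   # Expected to list the product template directories, for example:
   # cicd  infra  kubernetes  opencontrail  openstack  stacklight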
You need to modify the cookiecutter.json files in the following directories under the .cookiecutters directory:

• cicd - cluster name, IP addresses for the CI/CD control nodes.
• infra - cluster name, cluster domain name, URL to the Git repository for the cluster, and networking information, such as netmasks, gateway, and so on, for the infrastructure nodes.
• opencontrail - cluster name, IP addresses and host names for the OpenContrail nodes, as well as router information. An important parameter that you need to set is the interface mask opencontrail_compute_iface_mask.
• openstack - cluster name, IP addresses, host names, and interface names for different OpenStack nodes, as well as the bonding type according to your network design. You must also update the cluster name parameter to be identical in all files. For gateway_primary_first_nic, gateway_primary_second_nic, compute_primary_first_nic, and compute_primary_second_nic, specify virtual interface addresses.
• stacklight - cluster name, IP addresses and host names for the StackLight LMA nodes.

To generate a metadata model for your OpenStack environment:

1. Log in to the computer on which you configured your Cookiecutter templates.
2. Generate the metadata model:

   1. Create symbolic links for all cookiecutter directories:

      for i in `ls .cookiecutters`; do ln -sf \
      .cookiecutters/$i/cookiecutter.json cookiecutter.$i.json; done

   2. Configure infrastructure specifications in all cookiecutter.json files. See: Deployment parameters.
   3. Generate or regenerate the environment metadata model:

      for i in cicd infra openstack opencontrail stacklight; \
      do cookiecutter .cookiecutters/$i --output-dir ./classes/cluster \
      --no-input -f; done

      The command creates the model directories and files under classes/cluster/ on your machine.

3. Add your changes to a new commit.
4. Commit and push.

Seealso

• Cookiecutter documentation
• Deployment parameters

Deployment parameters

This section lists all parameters that can be modified for generated environments.
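Most of these values are edited directly in the cookiecutter.json files described in the previous section before regenerating the model. As an illustrative alternative only, Cookiecutter also accepts individual overrides as extra context on the command line; the parameter names here are taken from the table below, and the invocation mirrors the regeneration loop shown earlier:

   cookiecutter .cookiecutters/infra --output-dir ./classes/cluster \
   --no-input -f cluster_name=deployment_name salt_master_ip=10.167.4.90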
Example deployment parameters Parameter Default value Description cluster_name deployment_name Name of the cluster, used as cluster/ / in a directory structure cluster_domain deploy-name.local Domain name part of FQDN of cluster in the cluster public_host public-name Name or IP of public endpoint of the deployment reclass_repository https://github.com/Mirantis/mk-lab-salt-model.git URL to reclass metadata repository control_network_netmask 255.255.255.0 IP mask of control network control_network_gateway 10.167.4.1 IP gateway address of control network dns_server01 8.8.8.8 IP address of dns01 server dns_server02 1.1.1.1 IP address of dns02 server salt_master_ip 10.167.4.90 IP address of Salt Master on control network salt_master_management_ip 10.167.5.90 IP address of Salt Master on management network salt_master_hostname cfg01 Hostname of Salt Master kvm_vip_ip 10.167.4.240 VIP address of KVM cluster kvm01_control_ip 10.167.4.241 IP address of a KVM node01 on control network kvm02_control_ip 10.167.4.242 IP address of a KVM node02 on control network kvm03_control_ip 10.167.4.243 IP address of a KVM node03 on control network kvm01_deploy_ip 10.167.5.241 IP address of KVM node01 on management network kvm02_deploy_ip 10.167.5.242 IP address of KVM node02 on management network kvm03_deploy_ip 10.167.5.243 IP address of KVM node03 on management network kvm01_name kvm01 Hostname of a KVM node01 kvm02_name kvm02 Hostname of a KVM node02 kvm03_name kvm03 Hostname of a KVM node03 ©2019, Mirantis Inc. Page 52 Mirantis Cloud Platform Deployment Guide openstack_proxy_address 10.167.4.80 VIP address of proxy cluster on control network openstack_proxy_node01_address 10.167.4.81 IP address of a proxy node01 on control network openstack_proxy_node02_address 10.167.4.82 IP address of a proxy node02 on control network openstack_proxy_hostname prx Hostname of VIP proxy cluster openstack_proxy_node01_hostname prx01 Hostname of a proxy node01 openstack_proxy_node02_hostname prx02 Hostname of a proxy node02 openstack_control_address 10.167.4.10 VIP address of control cluster on control network openstack_control_node01_address 10.167.4.11 IP address of a control node01 on control network openstack_control_node02_address 10.167.4.12 IP address of a control node02 on control network openstack_control_node03_address 10.167.4.13 IP address of a control node03 on control network openstack_control_hostname ctl Hostname of VIP control cluster openstack_control_node01_hostname ctl01 Hostname of a control node01 openstack_control_node02_hostname ctl02 Hostname of a control node02 openstack_control_node03_hostname ctl03 Hostname of a control node03 openstack_database_address 10.167.4.50 VIP address of database cluster on control network openstack_database_node01_address 10.167.4.51 IP address of a database node01 on control network openstack_database_node02_address 10.167.4.52 IP address of a database node02 on control network openstack_database_node03_address 10.167.4.53 IP address of a database node03 on control network openstack_database_hostname dbs Hostname of VIP database cluster openstack_database_node01_hostname dbs01 Hostname of a database node01 openstack_database_node02_hostname dbs02 Hostname of a database node02 openstack_database_node03_hostname dbs03 Hostname of a database node03 openstack_message_queue_address 10.167.4.40 VIP address of message queue cluster on control network openstack_message_queue_node01_address 10.167.4.41 IP address of a message queue node01 on control network ©2019, Mirantis Inc. 
Page 53 Mirantis Cloud Platform Deployment Guide openstack_message_queue_node02_address 10.167.4.42 IP address of a message queue node02 on control network openstack_message_queue_node03_address 10.167.4.43 IP address of a message queue node03 on control network openstack_message_queue_hostname msg Hostname of VIP message queue cluster openstack_message_queue_node01_hostname msg01 Hostname of a message queue node01 openstack_message_queue_node02_hostname msg02 Hostname of a message queue node02 openstack_message_queue_node03_hostname msg03 Hostname of a message queue node03 openstack_gateway_node01_address 10.167.4.224 IP address of gateway node01 openstack_gateway_node02_address 10.167.4.225 IP address of gateway node02 openstack_gateway_node01_tenant_address 192.168.50.6 IP tenant address of gateway node01 openstack_gateway_node02_tenant_address 192.168.50.7 IP tenant address of gateway node02 openstack_gateway_node01_hostname gtw01 Hostname of gateway node01 openstack_gateway_node02_hostname gtw02 Hostname of gateway node02 stacklight_log_address 10.167.4.60 VIP address of StackLight LMA logging cluster stacklight_log_node01_address 10.167.4.61 IP address of StackLight LMA logging node01 stacklight_log_node02_address 10.167.4.62 IP address of StackLight LMA logging node02 stacklight_log_node03_address 10.167.4.63 IP address of StackLight LMA logging node03 stacklight_log_hostnamelog Hostname of StackLight LMA logging cluster stacklight_log_node01_hostname log01 Hostname of StackLight LMA logging node01 stacklight_log_node02_hostname log02 Hostname of StackLight LMA logging node02 stacklight_log_node03_hostname log03 Hostname of StackLight LMA logging node03 stacklight_monitor_address 10.167.4.70 VIP address of StackLight LMA monitoring cluster stacklight_monitor_node01_address 10.167.4.71 IP address of StackLight LMA monitoring node01 stacklight_monitor_node02_address 10.167.4.72 IP address of StackLight LMA monitoring node02 stacklight_monitor_node03_address 10.167.4.73 IP address of StackLight LMA monitoring node03 stacklight_monitor_hostname mon Hostname of StackLight LMA monitoring cluster stacklight_monitor_node01_hostname mon01 Hostname of StackLight LMA monitoring node01 stacklight_monitor_node02_hostname mon02 Hostname of StackLight LMA monitoring node02 ©2019, Mirantis Inc. 
Page 54 Mirantis Cloud Platform Deployment Guide stacklight_monitor_node03_hostname mon03 Hostname of StackLight LMA monitoring node03 stacklight_telemetry_address 10.167.4.85 VIP address of StackLight LMA telemetry cluster stacklight_telemetry_node01_address 10.167.4.86 IP address of StackLight LMA telemetry node01 stacklight_telemetry_node02_address 10.167.4.87 IP address of StackLight LMA telemetry node02 stacklight_telemetry_node03_address 10.167.4.88 IP address of StackLight LMA telemetry node03 stacklight_telemetry_hostname mtr hostname of StackLight LMA telemetry cluster stacklight_telemetry_node01_hostname mtr01 Hostname of StackLight LMA telemetry node01 stacklight_telemetry_node02_hostname mtr02 Hostname of StackLight LMA telemetry node02 stacklight_telemetry_node03_hostname mtr03 Hostname of StackLight LMA telemetry node03 openstack_compute_node01_single_address 10.167.2.101 IP address of a compute node01 on a dataplane network openstack_compute_node02_single_address 10.167.2.102 IP address of a compute node02 on a dataplane network openstack_compute_node03_single_address 10.167.2.103 IP address of a compute node03 on a dataplane network openstack_compute_node01_control_address 10.167.4.101 IP address of a compute node01 on a control network openstack_compute_node02_control_address 10.167.4.102 IP address of a compute node02 on a control network openstack_compute_node03_control_address 10.167.4.103 IP address of a compute node03 on a control network openstack_compute_node01_tenant_address 10.167.6.101 IP tenant address of a compute node01 openstack_compute_node02_tenant_address 10.167.6.102 IP tenant address of a compute node02 openstack_compute_node03_tenant_address 10.167.6.103 IP tenant address of a compute node03 openstack_compute_node01_hostname cmp001 Hostname of a compute node01 openstack_compute_node02_hostname cmp002 Hostname of a compute node02 openstack_compute_node03_hostname cmp003 Hostname of a compute node03 openstack_compute_node04_hostname cmp004 Hostname of a compute node04 openstack_compute_node05_hostname cmp005 Hostname of a compute node05 ©2019, Mirantis Inc. 
Page 55 Mirantis Cloud Platform Deployment Guide ceph_rgw_address 172.16.47.75 The IP address of the Ceph RGW storage cluster ceph_rgw_hostname rgw The hostname of the Ceph RGW storage cluster ceph_mon_node01_address 172.16.47.66 The IP address of the first Ceph MON storage node ceph_mon_node02_address 172.16.47.67 The IP address of the second Ceph MON storage node ceph_mon_node03_address 172.16.47.68 The IP address of the third Ceph MON storage node ceph_mon_node01_hostname cmn01 The hostname of the first Ceph MON storage node ceph_mon_node02_hostname cmn02 The hostname of the second Ceph MON storage node ceph_mon_node03_hostname cmn03 The hostname of the third Ceph MON storage node ceph_rgw_node01_address 172.16.47.76 The IP address of the first Ceph RGW storage node ceph_rgw_node02_address 172.16.47.77 The IP address of the second Ceph RGW storage node ceph_rgw_node03_address 172.16.47.78 The IP address of the third Ceph RGW storage node ceph_rgw_node01_hostname rgw01 The hostname of the first Ceph RGW storage node ceph_rgw_node02_hostname rgw02 The hostname of the second Ceph RGW storage node ceph_rgw_node03_hostname rgw03 The hostname of the third Ceph RGW storage node ceph_osd_count The number of OSDs 10 ceph_osd_rack01_hostname osd The OSD rack01 hostname ceph_osd_rack01_single_subnet 172.16.47 The control plane network prefix for Ceph OSDs ceph_osd_rack01_backend_subnet 172.16.48 The deploy network prefix for Ceph OSDs ceph_public_network 172.16.47.0/24 The IP address of Ceph public network with the network mask ceph_cluster_network 172.16.48.70/24 The IP address of Ceph cluster network with the network mask ceph_osd_block_db_size 20 The Ceph OSD block DB size in GB ceph_osd_data_disks The list of OSD data disks ©2019, Mirantis Inc. /dev/vdd,/dev/vde Page 56 Mirantis Cloud Platform Deployment Guide ceph_osd_journal_or_block_db_disks /dev/vdb,/dev/vdc ©2019, Mirantis Inc. The list of journal or block disks Page 57 Mirantis Cloud Platform Deployment Guide Deploy MCP DriveTrain To reduce the deployment time and eliminate possible human errors, Mirantis recommends that you use the semi-automated approach to the MCP DriveTrain deployment as described in this section. Caution! The execution of the CLI commands used in the MCP Deployment Guide requires root privileges. Therefore, unless explicitly stated otherwise, run the commands as a root user or use sudo. The deployment of MCP DriveTrain bases on the bootstrap automation of the Salt Master node. On a Reclass model creation, you receive the configuration drives by the email that you specified during the deployment model generation. Depending on the deployment type, you receive the following configuration drives: • For an online and offline deployment, the configuration drive for the cfg01 VM that is used in cloud-init to set up a virtual machine with Salt Master, MAAS provisioner, Jenkins server, and local Git server installed on it. • For an offline deployment, the configuration drive for the APT VM that is used in cloud-init to set up a virtual machine with all required repositories mirrors. The high-level workflow of the MCP DriveTrain deployment # Description 1 Manually deploy and configure the Foundation node. 2 Create the deployment model using the Model Designer web UI. 3 Obtain the pre-built ISO configuration drive(s) with the Reclass deployment metadata model to you email. If required, customize and regenerate the configuration drives. 4 Bootstrap the APT node. Optional, for an offline deployment only. 
5 Bootstrap the Salt Master node that contains MAAS provisioner. 6 Deploy the remaining bare metal servers through MAAS provisioner. 7 Deploy MCP CI/CD using Jenkins. ©2019, Mirantis Inc. Page 58 Mirantis Cloud Platform Deployment Guide Prerequisites for MCP DriveTrain deployment Before you proceed with the actual deployment, verify that you have performed the following steps: 1. Deploy the Foundation physical node using one of the initial versions of Ubuntu Xenial, for example, 16.04.1. Use any standalone hardware node where you can run a KVM-based day01 virtual machine with an access to the deploy/control network. The Foundation node will host the Salt Master node and MAAS provisioner. 2. Depending on your case, proceed with one of the following options: • If you do not have a deployment metadata model: 1. Create a model using the Model Designer UI as described in Create a deployment metadata model using the Model Designer UI. Note For an offline deployment, select the Offline deployment and Local repositories options under the Repositories section on the Infrastructure parameters tab. 2. Customize the obtained configuration drives as described in Generate configuration drives manually. For example, enable custom user access. • If you use an already existing model that does not have configuration drives, or you want to generate updated configuration drives, proceed with Generate configuration drives manually. 3. Configure bridges on the Foundation node: • br-mgm for the management network • br-ctl for the control network 1. Log in to the Foundation node through IPMI. Note If the IPMI network is not reachable from the management or control network, add the br-ipmi bridge for the IPMI network or any other network that is routed to the IPMI network. 2. Create PXE bridges to provision network on the foundation node: ©2019, Mirantis Inc. Page 59 Mirantis Cloud Platform Deployment Guide brctl addbr br-mgm brctl addbr br-ctl 3. Add the bridges definition for br-mgm and br-ctl to /etc/network/interfaces. Use definitions from your deployment metadata model. Example: auto br-mgm iface br-mgm inet static address 172.17.17.200 netmask 255.255.255.192 bridge_ports bond0 4. Restart networking from the IPMI console to bring the bonds up. 5. Verify that the foundation node bridges are up by checking the output of the ip a show command: ip a show br-ctl Example of system response: 8: br-ctl: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 00:1b:21:93:c7:c8 brd ff:ff:ff:ff:ff:ff inet 172.17.45.241/24 brd 172.17.45.255 scope global br-ctl valid_lft forever preferred_lft forever inet6 fe80::21b:21ff:fe93:c7c8/64 scope link valid_lft forever preferred_lft forever 4. Depending on your case, proceed with one of the following options: • If you perform the offline deployment or online deployment with local mirrors, proceed to Deploy the APT node. • If you perform an online deployment, proceed to Deploy the Salt Master node. ©2019, Mirantis Inc. Page 60 Mirantis Cloud Platform Deployment Guide Deploy the APT node MCP enables you to deploy the whole MCP cluster without access to the Internet. On creating the metadata model, along with the configuration drive for the cfg01 VM, you will obtain a preconfigured QCOW2 image that will contain packages, Docker images, operating system images, Git repositories, and other software required specifically for the offline deployment. This section describes how to deploy the apt01 VM using the prebuilt configuration drive. 
Warning
Perform the procedure below only in case of an offline deployment or when using a local mirror from the prebuilt image.

To deploy the APT node:

1. Log in to the Foundation node.

   Note
   Root privileges are required for the following steps. Execute the commands as a root user or use sudo.

2. In the /var/lib/libvirt/images/ directory, create an apt01/ subdirectory where the offline mirror image will be stored:

   Note
   You can create and use a different subdirectory in /var/lib/libvirt/images/. If that is the case, verify that you specify the correct directory for the VM_*DISK variables described in the next steps.

   mkdir -p /var/lib/libvirt/images/apt01/

3. Download the latest version of the prebuilt http://images.mirantis.com/mcp-offline-image- .qcow2 image for the apt node from http://images.mirantis.com.
4. Save the image on the Foundation node as /var/lib/libvirt/images/apt01/system.qcow2.
5. Copy the configuration ISO drive for the APT VM provided with the metadata model for the offline image to, for example, /var/lib/libvirt/images/apt01/.

   Note
   If you are using an already existing model that does not have configuration drives, or you want to generate updated configuration drives, proceed with Generate configuration drives manually.

   cp /path/to/prepared-drive/apt01-config.iso /var/lib/libvirt/images/apt01/apt01-config.iso

6. Create the APT VM domain definition using the example script:

   1. Download the shell script from GitHub:

      export MCP_VERSION="master"
      wget https://raw.githubusercontent.com/Mirantis/mcp-common-scripts/${MCP_VERSION}/predefine-vm/define-vm.sh

   2. Make the script executable and export the required variables:

      chmod +x define-vm.sh
      export VM_NAME="apt01.[CLUSTER_DOMAIN]"
      export VM_SOURCE_DISK="/var/lib/libvirt/images/apt01/system.qcow2"
      export VM_CONFIG_DISK="/var/lib/libvirt/images/apt01/apt01-config.iso"

      The CLUSTER_DOMAIN value is the cluster domain name used for the model. See Basic deployment parameters for details.

      Note
      You may add other optional variables that have default values and change them depending on your deployment configuration. These variables include:

      • VM_MGM_BRIDGE_NAME="br-mgm"
      • VM_CTL_BRIDGE_NAME="br-ctl"
      • VM_MEM_KB="8388608"
      • VM_CPUS="4"

      The br-mgm and br-ctl values are the names of the Linux bridges. See Prerequisites for MCP DriveTrain deployment for details. Custom names can be passed to a VM definition using the VM_MGM_BRIDGE_NAME and VM_CTL_BRIDGE_NAME variables accordingly.

   3. Run the shell script:

      ./define-vm.sh

7. Start the apt01 VM:

   virsh start apt01.[CLUSTER_DOMAIN]

Deploy the Salt Master node

The Salt Master node acts as a central control point for the clients that are called Salt minion nodes. The minions, in their turn, connect back to the Salt Master node.

This section describes how to set up a virtual machine with Salt Master, MAAS provisioner, Jenkins server, and local Git server. The procedure is applicable to both online and offline MCP deployments.

To deploy the Salt Master node:

1. Log in to the Foundation node.

   Note
   Root privileges are required for the following steps. Execute the commands as a root user or use sudo.

2.
In case of an offline deployment, replace the content of the /etc/apt/sources.list file with the following lines: deb [arch=amd64] http:// /ubuntu xenial-security main universe restricted deb [arch=amd64] http:// /ubuntu xenial-updates main universe restricted deb [arch=amd64] http:// /ubuntu xenial main universe restricted 3. Create a directory for the VM system disk: Note You can create and use a different subdirectory in /var/lib/libvirt/images/. If that is the case, verify that you specify the correct directory for the VM_*DISK variables described in next steps. mkdir -p /var/lib/libvirt/images/cfg01/ 4. Download the day01 image for the cfg01 node: wget http://images.mirantis.com/cfg01-day01- .qcow2 -O \ /var/lib/libvirt/images/cfg01/system.qcow2 Substitute with the required MCP Build ID, for example, 2018.11.0. 5. Copy the configuration ISO drive for the cfg01 VM provided with the metadata model for the offline image to, for example, /var/lib/libvirt/images/cfg01/cfg01-config.iso. ©2019, Mirantis Inc. Page 64 Mirantis Cloud Platform Deployment Guide Note If you are using an already existing model that does not have configuration drives, or you want to generate updated configuration drives, proceed with Generate configuration drives manually. cp /path/to/prepared-drive/cfg01-config.iso /var/lib/libvirt/images/cfg01/cfg01-config.iso ©2019, Mirantis Inc. Page 65 Mirantis Cloud Platform Deployment Guide 6. Create the Salt Master VM domain definition using the example script: 1. Download the shell script from GitHub: export MCP_VERSION="master" wget https://raw.githubusercontent.com/Mirantis/mcp-common-scripts/${MCP_VERSION}/predefine-vm/define-vm.sh 2. Make the script executable and export the required variables: chmod 0755 define-vm.sh export VM_NAME="cfg01.[CLUSTER_DOMAIN]" export VM_SOURCE_DISK="/var/lib/libvirt/images/cfg01/system.qcow2" export VM_CONFIG_DISK="/var/lib/libvirt/images/cfg01/cfg01-config.iso" The CLUSTER_DOMAIN value is the cluster domain name used for the model. See Basic deployment parameters for details. Note You may add other optional variables that have default values and change them depending on your deployment configuration. These variables include: • VM_MGM_BRIDGE_NAME="br-mgm" • VM_CTL_BRIDGE_NAME="br-ctl" • VM_MEM_KB="8388608" • VM_CPUS="4" The br-mgm and br-ctl values are the names of the Linux bridges. See Prerequisites for MCP DriveTrain deployment for details. Custom names can be passed to a VM definition using the VM_MGM_BRIDGE_NAME and VM_CTL_BRIDGE_NAME variables accordingly. 3. Run the shell script: ./define-vm.sh 7. Start the Salt Master node VM: virsh start cfg01.[CLUSTER_DOMAIN] 8. Log in to the Salt Master virsh console with the user name and password that you created in step 4 of the Generate configuration drives manually procedure: virsh console cfg01.[CLUSTER_DOMAIN] ©2019, Mirantis Inc. Page 66 Mirantis Cloud Platform Deployment Guide 9. If you use local repositories, verify that mk-pipelines are present in /home/repo/mk and pipeline-library is present in /home/repo/mcp-ci after cloud-init finishes. If not, fix the connection to local repositories and run the /var/lib/cloud/instance/scripts/part-001 script. 10. Verify that the following states are successfully applied during the execution of cloud-init: salt-call state.sls linux.system,linux,openssh,salt salt-call state.sls maas.cluster,maas.region,reclass Otherwise, fix the pillar and re-apply the above states. 11. 
In case of using kvm01 as the Foundation node, perform the following steps on it: 1. Depending on the deployment type, proceed with one of the options below: • For an online deployment, add /etc/apt/sources.list.d/mcp_saltstack.list: the following deb repository to deb [arch=amd64] https://mirror.mirantis.com/ /saltstack-2017.7/xenial/ xenial main • For an offline deployment or local mirrors case, /etc/apt/sources.list.d/mcp_saltstack.list, add the following deb repository: in deb [arch=amd64] http:// / /saltstack-2017.7/xenial/ xenial main 2. Install the salt-minion package. 3. Modify /etc/salt/minion.d/minion.conf: id: master: 4. Restart the salt-minion service: service salt-minion restart 5. Check the output of salt-key command on the Salt Master node to verify that the minion ID of kvm01 is present. ©2019, Mirantis Inc. Page 67 Mirantis Cloud Platform Deployment Guide Verify the Salt infrastructure Before you proceed with the deployment, validate the Reclass model and node pillars. To verify the Salt infrastructure: 1. Log in to the Salt Master node. 2. Verify the Salt Master pillars: reclass -n cfg01. The cluster_domain value is the cluster domain name that you created while preparing your deployment metadata model. See Basic deployment parameters for details. 3. Verify that the Salt version for the Salt minions is the same as for the Salt Master node, that is currently 2017.7: salt-call --version salt '*' test.version ©2019, Mirantis Inc. Page 68 Mirantis Cloud Platform Deployment Guide Enable the management of the APT node through the Salt Master node In compliance with the security best practices, MCP enables you to connect your offline mirror APT VM to the Salt Master node and manage it as any infrastructure VM on your MCP deployment. Generally, the procedure consists of the following steps: 1. In the existing cluster model, configure the pillars required to manage the offline mirror VM. 2. For the MCP releases below the 2018.8.0 Build ID, enable the Salt minion on the existing offline mirror VM. Note This section is only applicable for the offline deployments where all repositories are stored on a specific VM deployed using the MCP apt01 offline image, which is included in the MCP release artifacts. ©2019, Mirantis Inc. Page 69 Mirantis Cloud Platform Deployment Guide Enable the APT node management in the Reclass model This section instructs you on how to configure your existing cluster model to enable the management of the offline mirror VM through the Salt Master node. To configure the APT node management in the Reclass model: 1. Log in to the Salt Master node. 2. Open the cluster level of your Reclass model. 3. In infra/config/nodes.yml, add the following pillars: parameters: reclass: storage: node: aptly_server_node01: name: ${_param:aptly_server_hostname}01 domain: ${_param:cluster_domain} classes: - cluster.${_param:cluster_name}.cicd.aptly - cluster.${_param:cluster_name}.infra params: salt_master_host: ${_param:reclass_config_master} linux_system_codename: xenial single_address: ${_param:aptly_server_control_address} deploy_address: ${_param:aptly_server_deploy_address} 4. 
If the offline mirror VM is in the full offline mode and does not have the cicd/aptly path, create the cicd/aptly.yml file with the following contents: classes: - system.linux.system.repo_local.mcp.apt_mirantis.docker_legacy - system.linux.system.repo.mcp.apt_mirantis.ubuntu - system.linux.system.repo.mcp.apt_mirantis.saltstack - system.linux.system.repo_local.mcp.extra parameters: linux: network: interface: ens3: ${_param:linux_deploy_interface} 5. Add the following pillars to infra/init.yml or verify that they are present in the model: parameters: linux: network: host: apt: address: ${_param:aptly_server_deploy_address} ©2019, Mirantis Inc. Page 70 Mirantis Cloud Platform Deployment Guide names: - ${_param:aptly_server_hostname} - ${_param:aptly_server_hostname}.${_param:cluster_domain} 6. Check out your inventory to be able to resolve any inconsistencies in your model: reclass-salt --top 7. Use the system response of the reclass-salt --top command to define the missing variables and specify proper environment-specific values if any. 8. Generate the storage Reclass definitions for your offline image node: salt-call state.sls reclass.storage -l debug 9. Synchronize pillars and check out the inventory once again: salt '*' saltutil.refresh_pillar reclass-salt --top If your MCP version is Build ID 2018.8.0 or later, your offline mirror node should now be manageable through the Salt Master node. Otherwise, proceed to Enable the Salt minion on an existing APT node. ©2019, Mirantis Inc. Page 71 Mirantis Cloud Platform Deployment Guide Enable the Salt minion on an existing APT node For the deployments managed by the MCP 2018.8.0 Build ID or later, you should not manually enable the Salt minion on the offline image VM as it is configured automaticaly on boot during the APT VM provisioning. Though, if your want to enable the management of the offline image VM through the Salt Master node on an existing deployment managed by the MCP version below the 2018.8.0 Build ID, you need to perform the procedure included in this section. To enable the Salt minion on an existing offline mirror node: 1. Connect to the serial console of your offline image VM, which is included in the pre-built offline APT QCOW image: virsh console $(virsh list --all --name | grep ^apt01) --force Log in with the user name and password that you created in step 4 of the Generate configuration drives manually procedure. Example of system response: Connected to domain apt01.example.local Escape character is ^] 2. Press Enter to drop into the root shell. 3. Configure the Salt minion and start it: echo "" > /etc/salt/minion echo "master: " > /etc/salt/minion.d/minion.conf echo "id: " >> /etc/salt/minion.d/minion.conf service salt-minion stop rm -f /etc/salt/pki/minion/* service salt-minion start 4. Quit the serial console by sending the Ctrl + ] combination. 5. Log in to the Salt Master node. 6. Verify that you have the offline mirror VM Salt minion connected to your Salt Master node: salt-key -L | grep apt The system response should include your offline mirror VM. For example: apt01.example.local 7. Verify that you can access the Salt minion from the Salt Master node: salt apt01\* test.ping ©2019, Mirantis Inc. Page 72 Mirantis Cloud Platform Deployment Guide 8. Verify the Salt states mapped to the offline mirror VM: salt apt01\* state.show_top Now, you can manage your offline mirror APT VM from the Salt Master node. ©2019, Mirantis Inc. 
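For example, once the apt01 minion key is accepted, routine operations can be run against the node from the Salt Master node like against any other minion. The states below are only an illustration; apply the states that your cluster model actually assigns to the node:

   salt 'apt01*' saltutil.refresh_pillar
   salt 'apt01*' state.sls linux,openssh,salt.minion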
Page 73 Mirantis Cloud Platform Deployment Guide Configure MAAS for bare metal provisioning Before you proceed with provisioning of the remaining bare metal nodes, configure MAAS as described below. To configure MAAS for bare metal provisioning: 1. Log in to the MAAS web UI through http:// :5240/MAAS with the following credentials: • Username: mirantis • Password: r00tme 2. Go to the Subnets tab. 3. Select the fabric that is under the deploy network. 4. In the VLANs on this fabric area, click the VLAN under the VLAN column where the deploy network subnet is. 5. In the Take action drop-down menu, select Provide DHCP. 6. Adjust the IP range as required. Note The number of IP addresses should not be less than the number of the planned VCP nodes. 7. Click Provide DHCP to submit. 8. If you use local package mirrors: Note The following steps are required only to specify the local Ubuntu package repositories that are secured by a custom GPG key and used mainly for the offline mirror images prior the MCP version 2017.12. 1. Go to Settings > Package repositories. 2. Click Actions > Edit on the Ubuntu archive repository. 3. Specify the GPG key of the repository in the Key field. The key can be obtained from the aptly_gpg_public_key parameter in the cluster level Reclass model. 4. Click Save. ©2019, Mirantis Inc. Page 74 Mirantis Cloud Platform Deployment Guide Provision physical nodes using MAAS Physical nodes host the Virtualized Control Plane (VCP) of your Mirantis Cloud Platform deployment. This section describes how to provision the physical nodes using the MAAS service that you have deployed on the Foundation node while deploying the Salt Master node. The servers that you must deploy include at least: • For OpenStack: • kvm02 and kvm03 infrastructure nodes • cmp0 compute node • For Kubernetes: • kvm02 and kvm03 infrastructure nodes • ctl01, ctl02, ctl03 controller nodes • cmp01 and cmp02 compute nodes You can provision physical nodes automatically or manually: • An automated provisioning requires you to define IPMI and MAC addresses in your Reclass model. After you enforce all servers, the Salt Master node commissions and provisions them automatically. • A manual provisioning enables commissioning nodes through the MAAS web UI. Before you proceed with the physical nodes provisioning, you may want to customize the commissioning script, for example, to set custom NIC names. For details, see: Add custom commissioning scripts. Warning Before you proceed with the physical nodes provisioning, verify that BIOS settings enable PXE booting from NICs on each physical server. ©2019, Mirantis Inc. Page 75 Mirantis Cloud Platform Deployment Guide Automatically commission and provision the physical nodes This section describes how to define physical nodes in a Reclass model to automatically commission and then provision the nodes through Salt. Automatically commission the physical nodes You must define all IPMI credentials in your Reclass model to access physical servers for automated commissioning. Once you define the nodes, Salt enforces them into MAAS and starts commissioning. To automatically commission physical nodes: 1. Define all physical nodes under classes/cluster/ /infra/maas.yml using the following structure. For example, to define the kvm02 node: maas: region: machines: kvm02: interface: mac: 00:25:90:eb:92:4a power_parameters: power_address: kvm02.ipmi.net power_password: password power_type: ipmi power_user: ipmi_user Note To get MAC addresses from IPMI, you can use the ipmi tool. 
Usage example for Supermicro:

   ipmitool -U ipmi_user -P password -H kvm02.ipmi.net raw 0x30 0x21 1 | tail -c 18

2. (Optional) Define the IP address on the first (PXE) interface. By default, it is assigned automatically and can be used as is.

   For example, to define the kvm02 node:

   maas:
     region:
       machines:
         kvm02:
           interface:
             mac: 00:25:90:eb:92:4a
             mode: "static"
             ip: "2.2.3.15"
             subnet: "subnet1"
             gateway: "2.2.3.2"

3. (Optional) Define a custom disk layout or partitioning per server in MAAS. For more information and examples on how to define it in the model, see: Add a custom disk layout per node in the MCP model.
4. (Optional) Modify the commissioning process as required. For more information and examples, see: Add custom commissioning scripts.
5. Once you have defined all physical servers in your Reclass model, enforce the nodes:

   Caution!
   For an offline deployment, remove the deb-src repositories from commissioning before enforcing the nodes, since these repositories are not present on the reduced offline apt image node. To remove these repositories, you can enforce MAAS to rebuild sources.list. For example:

      export PROFILE="mirantis"
      export API_KEY=$(cat /var/lib/maas/.maas_credentials)
      maas login ${PROFILE} http://localhost:5240/MAAS/api/2.0/ ${API_KEY}
      REPO_ID=$(maas $PROFILE package-repositories read | jq '.[]| select(.name=="main_archive") | .id ')
      maas $PROFILE package-repository update ${REPO_ID} disabled_components=multiverse
      maas $PROFILE package-repository update ${REPO_ID} "disabled_pockets=backports"

   The default PROFILE variable is mirantis. You can find your deployment-specific value for this parameter in parameters:maas:region:admin:username of your Reclass model. For details on building a custom list of repositories, see: MAAS GitHub project.

   salt-call maas.process_machines

   All nodes are automatically commissioned.

6. Verify the status of the servers either through the MAAS web UI or using the salt-call command:

   salt-call maas.machines_status

   The successfully commissioned servers appear in the ready status.

7. Enforce the interfaces configuration defined in the model for the servers:

   salt-call state.sls maas.machines.assign_ip
For example: salt-key Accepted Keys: cfg01.bud.mirantis.net cmp001.bud.mirantis.net cmp002.bud.mirantis.net kvm02.bud.mirantis.net kvm03.bud.mirantis.net ©2019, Mirantis Inc. Page 79 Mirantis Cloud Platform Deployment Guide Manually commission and provision the physical nodes This section describes how to discover, commission, and provision the physical nodes using the MAAS web UI. Manually discover and commission the physical nodes You can discover and commission your physical nodes manually using the MAAS web UI. To discover and commission physical nodes manually: 1. Power on a physical node. 2. In the MAAS UI, verify that the server has been discovered. 3. On the Nodes tab, rename the discovered host accordingly. Click Save after each renaming. 4. In the Settings tab, configure the Commissioning release and the Default Minimum Kernel Version to Ubuntu 16.04 TLS 'Xenial Xerus' and Xenial (hwe-16.04), respectively. Note The above step ensures that the NIC naming convention uses the predictable schemas, for example, enp130s0f0 rather than eth0. 5. In the Deploy area, configure the Default operating system used for deployment and Default OS release used for deployment to Ubuntu and Ubuntu 16.04 LTS 'Xenial Xerus', respectively. 6. Leave the remaining parameters as defaults. 7. (Optional) Modify the commissioning process as required. For more information and examples, see: Add custom commissioning scripts. 8. Commission the node: 1. From the Take Action drop-down list, select Commission. 2. Define a storage schema for each node. 3. On the Nodes tab, click the required node link from the list. 4. Scroll down to the Available disks and partitions section. 5. Select two SSDs using check marks in the left column. 6. Click the radio button to make one of the disks the boot target. 7. Click Create RAID to create an MD raid1 volume. 8. In RAID type, select RAID 1. 9. In File system, select ext4. 10. Set / as Mount point. 11. Click Create RAID. ©2019, Mirantis Inc. Page 80 Mirantis Cloud Platform Deployment Guide The Used disks and partitions section should now look as follows: 9. Repeat the above steps for each physical node. 10. Proceed to Manually provision the physical nodes. ©2019, Mirantis Inc. Page 81 Mirantis Cloud Platform Deployment Guide Manually provision the physical nodes Start the manual provisioning of the physical nodes with the control plane kvm02 and kvm03 physical nodes, and then proceed with the compute cmp01 node deployment. To manually provision the physical nodes through MAAS: 1. Verify that the boot order in the physical nodes' BIOS is set in the following order: 1. PXE 2. The physical disk that was chosen as the boot target in the Maas UI. 2. Log in to the MAAS web UI. 3. Click on a node. 4. Click the Take Action drop-down menu and select Deploy. 5. In the Choose your image area, verify that Ubuntu 16.04 LTS 'Xenial Xerus' with the Xenial(hwe-16.04) kernel is selected. 6. Click Go to deploy the node. 7. Repeat the above steps for each node. Now, your physical nodes are provisioned and you can proceed with configuring and deploying an MCP cluster on them. Seealso • Configure PXE booting over UEFI ©2019, Mirantis Inc. Page 82 Mirantis Cloud Platform Deployment Guide Deploy physical servers This section describes how to deploy physical servers intended for an OpenStack-based MCP cluster. If you plan to deploy a Kubernetes-based MCP cluster, proceed with steps 1-2 of the Kubernetes Prerequisites procedure. To deploy physical servers: 1. Log in to the Salt Master node. 2. 
Verify that the cfg01 key has been added to Salt and your host FQDN is shown properly in the Accepted Keys field in the output of the following command: salt-key 3. Verify that all pillars and Salt data are refreshed: salt "*" saltutil.refresh_pillar salt "*" saltutil.sync_all 4. Verify that the Reclass model is configured correctly. The following command output should show top states for all nodes: python -m reclass.cli --inventory 5. To verify that the rebooting of the nodes, which will be performed further, is successful, create the trigger file: salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway or I@ceph:osd' \ cmd.run "touch /run/is_rebooted" 6. To prepare physical nodes for VCP deployment, apply the basic Salt states for setting up network interfaces and SSH access. Nodes will be rebooted. Warning If you use kvm01 as a Foundation node, the execution of the commands below will also reboot the Salt Master node. Caution! All hardware nodes must be rebooted after executing the commands below. If the nodes do not reboot for a long time, execute the below commands again or reboot the nodes manually. ©2019, Mirantis Inc. Page 83 Mirantis Cloud Platform Deployment Guide Verify that you have a possibility to log in to nodes through IPMI in case of emergency. 1. For KVM nodes: salt --async -C 'I@salt:control' cmd.run 'salt-call state.sls \ linux.system.repo,linux.system.user,openssh,linux.network;reboot' 2. For compute nodes: salt --async -C 'I@nova:compute' pkg.install bridge-utils,vlan salt --async -C 'I@nova:compute' cmd.run 'salt-call state.sls \ linux.system.repo,linux.system.user,openssh,linux.network;reboot' 3. For gateway nodes, execute the following command only for the deployments with OVS setup with physical gateway nodes: salt --async -C 'I@neutron:gateway' cmd.run 'salt-call state.sls \ linux.system.repo,linux.system.user,openssh,linux.network;reboot' The targeted KVM, compute, and gateway nodes will stop responding after a couple of minutes. Wait until all of the nodes reboot. 7. Verify that the targeted nodes are up and running: salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway or I@ceph:osd' \ test.ping 8. Check the previously created trigger file to verify that the targeted nodes are actually rebooted: salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway' \ cmd.run 'if [ -f "/run/is_rebooted" ];then echo "Has not been rebooted!";else echo "Rebooted";fi' All nodes should be in the Rebooted state. 9. Verify that the hardware nodes have the required network configuration. For example, verify the output of the ip a command: salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway or I@ceph:osd' \ cmd.run "ip a" ©2019, Mirantis Inc. Page 84 Mirantis Cloud Platform Deployment Guide Deploy VCP The virtualized control plane (VCP) is hosted by KVM nodes deployed by MAAS. Depending on the cluster type, the VCP runs Kubernetes or OpenStack services, database (MySQL), message queue (RabbitMQ), Contrail, and support services, such as monitoring, log aggregation, and a time-series metric database. VMs can be added to or removed from the VCP allowing for easy scaling of your MCP cluster. After the KVM nodes are deployed, Salt is used to configure Linux networking, appropriate repositories, host name, and so on by running the linux Salt state against these nodes. The libvirt packages configuration, in its turn, is managed by running the libvirt Salt state. ©2019, Mirantis Inc. 
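As a brief sketch of what this stage looks like from the Salt Master node (the exact commands, targeting, and order used in this guide are given in the following subsections):

   # Configure the operating system of the KVM hosts (networking, repositories, host name)
   salt -C 'I@salt:control' state.sls linux
   # Configure libvirt on the KVM hosts
   salt -C 'I@salt:control' state.sls libvirt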
Page 85 Mirantis Cloud Platform Deployment Guide Prepare KVM nodes to run the VCP nodes To prepare physical nodes to run the VCP nodes: 1. On the Salt Master node, prepare the node operating system by running the Salt linux state: salt-call state.sls linux -l info Warning Some formulas may not correctly deploy on the first run of this command. This could be due to a race condition in running the deployment of nodes and services in parallel while some services are dependent on others. Repeat the command execution. If an immediate subsequent run of the command fails again, reboot the affected physical node and re-run the command. 2. Prepare physical nodes operating system to run the controller node: 1. Verify the salt-common and salt-minion versions 2. If necessary, Install the correct versions of salt-common and salt-minion. 3. Proceed to Create and provision the control plane VMs. ©2019, Mirantis Inc. Page 86 Mirantis Cloud Platform Deployment Guide Verify the salt-common and salt-minion versions To verify the version deployed with the state: 1. Log in to the physical node console. 2. To verify the salt-common version, run: apt-cache policy salt-common 3. To verify the salt-minion version, run: apt-cache policy salt-minion The output for the commands above must show the 2017.7 version. If you have different versions installed, proceed with Install the correct versions of salt-common and salt-minion. ©2019, Mirantis Inc. Page 87 Mirantis Cloud Platform Deployment Guide Install the correct versions of salt-common and salt-minion This section describes the workaround for salt.virt to properly inject minion.conf. To manually install the required version of salt-common and salt-minion: 1. Log in to the physical node console 2. Change the version to 2017.7 in /etc/apt/sources.list.d/salt.list: deb [arch=amd64] http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2017.7/dists/ xenial main 3. Sync the packages index files: apt-get update 4. Verify the versions: apt-cache policy salt-common apt-cache policy salt-minion 5. If the wrong versions are installed, remove them: apt-get remove salt-minion apt-get remove salt-common 6. Install the required versions of salt-common and salt-minion: apt-get install salt-common=2017.7 apt-get install salt-minion=2017.7 7. Restart the salt-minion service to ensure connectivity with the Salt Master node: service salt-minion stop && service salt-minion start 8. Verify that the required version is installed: apt-cache policy salt-common apt-cache policy salt-minion 9. Repeat the procedure on each physical node. ©2019, Mirantis Inc. Page 88 Mirantis Cloud Platform Deployment Guide Create and provision the control plane VMs The control plane VMs are created on each node by running the salt state. This state leverages the salt virt module along with some customizations defined in a Mirantis formula called salt-formula-salt. Similarly to how MAAS manages bare metal, the salt virt module creates VMs based on profiles that are defined in the metadata and mounts the virtual disk to add the appropriate parameters to the minion configuration file. After the salt state successfully runs against a KVM node where metadata specifies the VMs placement, these VMs will be started and automatically added to the Salt Master node. To create control plane VMs: 1. Log in to the KVM nodes that do not host the Salt Master node. The correct physical node names used in the installation described in this guide to perform the next step are kvm02 and kvm03. 
Warning Otherwise, on running the command in the step below, you will delete the cfg Salt Master. 2. Verify whether virtual machines are not yet present: virsh list --name --all | grep -Ev '^(mas|cfg|apt)' | xargs -n 1 virsh destroy virsh list --name --all | grep -Ev '^(mas|cfg|apt)' | xargs -n 1 virsh undefine 3. Log in to the Salt Master node console. 4. Verify that the Salt Minion nodes are synchronized by running the following command on the Salt Master node: salt '*' saltutil.sync_all 5. Perform the initial Salt configuration: salt 'kvm*' state.sls salt.minion ©2019, Mirantis Inc. Page 89 Mirantis Cloud Platform Deployment Guide 6. Set up the network interfaces and the SSH access: salt -C 'I@salt:control' cmd.run 'salt-call state.sls \ linux.system.user,openssh,linux.network;reboot' Warning This will also reboot the Salt Master node because it is running on top of kvm01. 7. Log in back to the Salt Master node console. 8. Run the libvirt state: salt 'kvm*' state.sls libvirt 9. For the OpenStack-based MCP clusters, add system.salt.control.cluster.openstack_gateway_single to infra/kvm.yml to enable a gateway VM for your OpenStack environment. Skip this step for the Kubernetes-based MCP clusters. 10. Run salt.control to create virtual machines. This command also inserts minion.conf files from KVM hosts: salt 'kvm*' state.sls salt.control 11. Verify that all your Salt Minion nodes are registered on the Salt Master node. This may take a few minutes. salt-key Example of system response: mon03.bud.mirantis.net msg01.bud.mirantis.net msg02.bud.mirantis.net msg03.bud.mirantis.net mtr01.bud.mirantis.net mtr02.bud.mirantis.net mtr03.bud.mirantis.net nal01.bud.mirantis.net nal02.bud.mirantis.net nal03.bud.mirantis.net ntw01.bud.mirantis.net ntw02.bud.mirantis.net ntw03.bud.mirantis.net prx01.bud.mirantis.net ©2019, Mirantis Inc. Page 90 Mirantis Cloud Platform Deployment Guide prx02.bud.mirantis.net ... ©2019, Mirantis Inc. Page 91 Mirantis Cloud Platform Deployment Guide Deploy CI/CD The automated deployment of the MCP components is performed through CI/CD that is a part of MCP DriveTrain along with SaltStack and Reclass. CI/CD, in its turn, includes Jenkins, Gerrit, and MCP Registry components. This section explains how to deploy a CI/CD infrastructure. For a description of MCP CI/CD components, see: MCP Reference Architecture: MCP CI/CD components To deploy CI/CD automatically: 1. Deploy a customer-specific CI/CD using Jenkins as part of, for example, an OpenStack cloud environment deployment: 1. Log in to the Jenkins web UI available at salt_master_management_address:8081 with the following credentials: • Username: admin • Password: r00tme 2. Use the Deploy - OpenStack pipeline to deploy cicd cluster nodes as described in Deploy an OpenStack environment. Start with Step 7 in case of the online deployment and with Step 8 in case of the offline deployment. 2. Once the cloud environment is deployed, verify that the cicd cluster is up and running. 3. Disable the Jenkins service on the Salt Master node and start using Jenkins on cicd nodes. Seealso • Enable a watchdog ©2019, Mirantis Inc. Page 92 Mirantis Cloud Platform Deployment Guide Deploy an MCP cluster using DriveTrain After you have installed the MCP CI/CD infrastructure as descibed in Deploy CI/CD, you can reach the Jenkins web UI through the Jenkins master IP address. This section contains procedures explaining how to deploy OpenStack environments and Kubernetes clusters using CI/CD pipelines. 
Note
For production environments, CI/CD should be deployed on a per-customer basis. For testing purposes, you can use the central Jenkins lab that is available for Mirantis employees only. To be able to configure and execute Jenkins pipelines using the lab, you need to log in to the Jenkins web UI with your Launchpad credentials.

Deploy an OpenStack environment

This section explains how to configure and launch the OpenStack environment deployment pipeline. This job is run by Jenkins through the Salt API on the functioning Salt Master node and the deployed hardware servers to set up your MCP OpenStack environment.

Run this Jenkins pipeline after you configure the basic infrastructure as described in Deploy MCP DriveTrain. Also, verify that you have successfully applied the linux and salt states to all physical and virtual nodes so that they are not disconnected during the network and Salt Minion setup.

Note
For production environments, CI/CD should be deployed on a per-customer basis. For testing purposes, you can use the central Jenkins lab that is available for Mirantis employees only. To be able to configure and execute Jenkins pipelines using the lab, you need to log in to the Jenkins web UI with your Launchpad credentials.

To automatically deploy an OpenStack environment:

1. Log in to the Salt Master node.

2. For the OpenContrail 4.0 setup, add the following parameters to the opencontrail/init.yml file of your Reclass model:

   parameters:
     _param:
       opencontrail_version: 4.0
       linux_repo_contrail_component: oc40

   Note
   OpenContrail 3.2 is not supported.

3. Set up the network interfaces and the SSH access on all compute nodes:

   salt -C 'I@nova:compute' cmd.run 'salt-call state.sls \
     linux.system.user,openssh,linux.network;reboot'

4. If you run OVS, run the same command on the physical gateway nodes as well:

   salt -C 'I@neutron:gateway' cmd.run 'salt-call state.sls \
     linux.system.user,openssh,linux.network;reboot'

5. Verify that all nodes are ready for deployment:

   salt '*' state.sls linux,ntp,openssh,salt.minion

   Caution!
   If any of these states fails, fix the issue provided in the output and re-apply the state before you proceed to the next step. Otherwise, the Jenkins pipeline will fail.

6. In a web browser, open http://<cicd_control_address>:8081 to access the Jenkins web UI.

   Note
   The IP address is defined in the classes/cluster/<cluster_name>/cicd/init.yml file of the Reclass model under the cicd_control_address parameter.

7. Log in to the Jenkins web UI as admin.

   Note
   The password for the admin user is defined in the classes/cluster/<cluster_name>/cicd/control/init.yml file of the Reclass model under the openldap_admin_password parameter.

8. In the global view, verify that the git-mirror-downstream-mk-pipelines and git-mirror-downstream-pipeline-library pipelines have successfully mirrored all content.

9. Find the Deploy - OpenStack job in the global view.

10. Select the Build with Parameters option from the drop-down menu of the Deploy - OpenStack job.

11. Specify the following parameters:

    Deploy - OpenStack environment parameters

    ASK_ON_ERROR
    If checked, Jenkins asks whether to stop the pipeline or continue execution when a Salt state fails on any task.

    STACK_INSTALL
    Specifies the components you need to install.
The available values include: • core • kvm • cicd • openstack • ovs or contrail depending on the network plugin. • ceph • stacklight • oss Note For the details regarding StackLight LMA (stacklight) with the DevOps Portal (oss) deployment, see Deploy StackLight LMA with the DevOps Portal. SALT_MASTER_CREDENTIALS Specifies credentials to Salt API stored in Jenkins, included by default. See View credentials details used in Jenkins pipelines for details. SALT_MASTER_URL Specifies the reachable IP address of the Salt Master node and port on which Salt API listens. For example, http://172.18.170.28:6969 To find out on which port Salt API listens: 1. Log in to the Salt Master node. 2. Search for the port in the /etc/salt/master.d/_api.conf file. 3. Verify that the Salt Master node is listening on that port: netstat -tunelp | grep STACK_TYPE Specifies the environment type. Use physical for a bare metal deployment 12. Click Build. ©2019, Mirantis Inc. Page 96 Mirantis Cloud Platform Deployment Guide Seealso • View the deployment details • Enable a watchdog ©2019, Mirantis Inc. Page 97 Mirantis Cloud Platform Deployment Guide Deploy a multi-site OpenStack environment MCP DriveTrain enables you to deploy several OpenStack environments at the same time. Note For production environments, CI/CD should be deployed on a per-customer basis. For testing purposes, you can use the central Jenkins lab that is available for Mirantis employees only. To be able to configure and execute Jenkins pipelines using the lab, you need to log in to the Jenkins web UI with your Launchpad credentials. To deploy a multi-site OpenStack environment, repeat the Deploy an OpenStack environment procedure as many times as you need specifying different values for the SALT_MASTER_URL parameter. Seealso View the deployment details ©2019, Mirantis Inc. Page 98 Mirantis Cloud Platform Deployment Guide Deploy a Kubernetes cluster The MCP Containers as a Service architecture enables you to easily deploy a Kubernetes cluster on bare metal with Calico or OpenContrail plugins set for Kubernetes networking. This section explains how to configure and launch the Kubernetes cluster deployment pipeline using DriveTrain. Caution! OpenContrail 3.2 for Kubernetes is not supported. For production environments, use OpenContrail 4.0. For the list of OpenContrail limitations for Kubernetes, see: OpenContrail limitations. You can enable an external Ceph RBD storage in your Kubernetes cluster as required. For new deployments, enable the corresponding parameters while creating your deployment metadata model as described in Create a deployment metadata model using the Model Designer UI. For existing deployments, follow the Enable an external Ceph RBD storage procedure. You can also deploy ExternalDNS to set up a DNS management server in order to control DNS records dynamically through Kubernetes resources and make Kubernetes resources discoverable through public DNS servers. Depending on your cluster configuration, proceed with one of the sections listed below. Note For production environments, CI/CD should be deployed on a per-customer basis. For testing purposes, you can use the central Jenkins lab that is available for Mirantis employees only. To be able to configure and execute Jenkins pipelines using the lab, you need to log in to the Jenkins web UI with your Launchpad credentials. ©2019, Mirantis Inc. 
Page 99 Mirantis Cloud Platform Deployment Guide Prerequisites Before you proceed with an automated deployment of a Kubernetes cluster, follow the steps below: 1. If you have swap enabled on the ctl and cmp nodes, modify your Kubernetes deployment model as described in Add swap configuration to a Kubernetes deployment model. 2. For the OpenContrail 4.0 setup, add the following parameters /opencontrail/init.yml file of your deployment model: to the parameters: _param: opencontrail_version: 4.0 linux_repo_contrail_component: oc40 Caution! OpenContrail 3.2 for Kubernetes is not supported. For production MCP Kubernetes deployments, use OpenContrail 4.0. 3. Deploy DriveTrain as described in Deploy MCP DriveTrain. Now, proceed to deploying Kubernetes as described in Deploy a Kubernetes cluster on bare metal. ©2019, Mirantis Inc. Page 100 Mirantis Cloud Platform Deployment Guide Deploy a Kubernetes cluster on bare metal This section provides the steps to deploy a Kubernetes cluster on bare metal nodes configured using MAAS with Calico or OpenContrail as a Kubernetes networking plugin. Caution! OpenContrail 3.2 for Kubernetes is not supported. For production MCP Kubernetes deployments, use OpenContrail 4.0. To automatically deploy a Kubernetes cluster on bare metal nodes: 1. Verify that you have completed the steps described in Prerequisites. 2. Log in to the Jenkins web UI as Administrator. Note The password for the Administrator is defined in the classes/cluster/ /cicd/control/init.yml file of the Reclass model under the openldap_admin_password parameter variable. 3. Depending on your use case, find the k8s_ha_calico heat or k8s_ha_contrail heat pipeline job in the global view. 4. Select the Build with Parameters option from the drop-down menu of the selected job. 5. Configure the deployment by setting the following parameters as required: Deployment parameters Parameter Defualt value ASK_ON_ERROR False Description If True, Jenkins will stop on any failure and ask either you want to cancel the pipeline or proceed with the execution ignoring the error. SALT_MASTER_CREDENTIALS The Jenkins ID of credentials for logging in to the Salt API. For example, salt-credentials. See View credentials details used in Jenkins pipelines for details. SALT_MASTER_URL ©2019, Mirantis Inc. The URL to access the Salt Master node. Page 101 Mirantis Cloud Platform Deployment Guide STACK_INSTALL • core,k8s,calico for deployment with Calico • core,k8s,contrail deployment OpenContrail STACK_TEST Empty for a Components to install. a with The names of the cluster components to test. By default, nothing is tested. STACK_TYPE physical The type of the cluster. 6. Click Build to launch the pipeline. 7. Click Full stage view to track the deployment process. The following table contains the stages details for the deployment with Calico or OpenContrail as a Kubernetes networking plugin: The deploy pipeline workflow # Title 1 Create infrastructure 2 Install core infrastructure Details Creates a base infrastructure using MAAS. 1. Prepares and validates the Salt Master node and Salt Minion nodes. For example, refreshes pillars and synchronizes custom modules. 2. Applies the linux,openssh,salt.minion,ntp states to all nodes. 3 Install Kubernetes infrastructure 1. Reads the control plane load-balancer address and applies it to the model. 2. Generates the Kubernetes certificates. 3. Installs the Kubernetes support packages that include Keepalived, HAProxy, Docker, and etcd. 
4 Install the Kubernetes control plane and networking plugins • For the Calico deployments: 1. Installs Calico. 2. Sets up etcd. 3. Installs the control plane nodes. • For the OpenContrail deployments: 1. Installs the OpenContrail infrastructure. 2. Configures OpenContrail Kubernetes. to be used by 3. Installs the control plane nodes. ©2019, Mirantis Inc. Page 102 Mirantis Cloud Platform Deployment Guide 8. When the pipeline has successfully executed, log in to any Kubernetes ctl node and verify that all nodes have been registered successfully: kubectl get nodes Seealso View the deployment details ©2019, Mirantis Inc. Page 103 Mirantis Cloud Platform Deployment Guide Deploy ExternalDNS for Kubernetes ExternalDNS deployed on Mirantis Cloud Platform (MCP) allows you to set up a DNS management server for Kubernetes starting with version 1.7. ExternalDNS enables you to control DNS records dynamically through Kubernetes resources and make Kubernetes resources discoverable through public DNS servers. ExternalDNS synchronizes exposed Kubernetes Services and Ingresses with DNS cloud providers, such as Designate, AWS Route 53, Google CloudDNS, and CoreDNS. ExternalDNS retrieves a list of resources from the Kubernetes API to determine the desired list of DNS records. It synchronizes the DNS service according to the current Kubernetes status. ExternalDNS can use the following DNS back-end providers: • AWS Route 53 is a highly available and scalable cloud DNS web service. Amazon Route 53 is fully compliant with IPv6. • Google CloudDNS is a highly available, scalable, cost-effective, and programmable DNS service running on the same infrastructure as Google. • OpenStack Designate can use different DNS servers including Bind9 and PowerDNS that are supported by MCP. • CoreDNS is the next generation of SkyDNS that can use etcd to accept updates to DNS entries. It functions as an on-premises open-source alternative to cloud DNS services (DNSaaS). You can deploy CoreDNS with ExternalDNS if you do not have an active DNS back-end provider yet. This section describes how to configure and set up ExternalDNS on a new or existing MCP Kubernetes-based cluster. ©2019, Mirantis Inc. Page 104 Mirantis Cloud Platform Deployment Guide Prepare a DNS back end for ExternalDNS Depending on your DNS back-end provider, prepare your back end and the metadata model of your MCP cluster before setting up ExternalDNS. If you do not have an active DNS back-end provider yet, you can use CoreDNS that functions as an on-premises open-source alternative to cloud DNS services. To prepare a DNS back end Choose from the following options depending on your DNS back end: • For AWS Route 53: 1. Log in to your AWS Route 53 console. 2. Navigate to the AWS Services page. 3. In the search field, type "Route 53" to find the corresponding service page. 4. On the Route 53 page, find the DNS management icon and click Get started now. 5. On the DNS management page, click Create hosted zone. 6. On the right side of the Create hosted zone window: 1. Add .local name. 2. Choose the Public Hosted Zone type. 3. Click Create. You will be redirected to the previous page with two records of NS and SOA type. Keep the link of this page for verification after the ExernalDNS deployment. 7. Click Back to Hosted zones. 8. Locate and copy the Hosted Zone ID in the corresponding column of your recently created hosted zone. 9. 
Add this ID to the following template:

   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "route53:ChangeResourceRecordSets",
           "route53:ListResourceRecordSets",
           "route53:GetHostedZone"
         ],
         "Resource": [
           "arn:aws:route53:::hostedzone/<your_hosted_zone_id>"
         ]
       },
       {
         "Effect": "Allow",
         "Action": [
           "route53:GetChange"
         ],
         "Resource": [
           "arn:aws:route53:::change/*"
         ]
       },
       {
         "Effect": "Allow",
         "Action": [
           "route53:ListHostedZones"
         ],
         "Resource": [
           "*"
         ]
       }
     ]
   }

10. Navigate to Services > IAM > Customer Managed Policies.
11. Click Create Policy > Create your own policy.
12. Fill in the required fields:
    • Policy Name field: externaldns
    • Policy Document field: use the JSON template provided in step 9
13. Click Validate Policy.
14. Click Create Policy. You will be redirected to the policy view page.
15. Navigate to Users.
16. Click Add user:
    1. Add a user name: externaldns.
    2. Select the Programmatic access check box.
    3. Click Next: Permissions.
    4. Select the Attach existing policy directly option.
    5. Choose the Customer managed policy type in the Filter drop-down menu.
    6. Select the externaldns check box.
    7. Click Next: Review.
    8. Click Create user.
    9. Copy the Access key ID and Secret access key.

• For Google CloudDNS:

  1. Log in to your Google Cloud Platform web console.
  2. Navigate to IAM & Admin > Service accounts > Create service account.
  3. In the Create service account window, configure your new ExternalDNS service account:
     1. Add a service account name.
     2. Assign the DNS Administrator role to the account.
     3. Select the Furnish a new private key check box and the JSON key type radio button. The private key is automatically saved on your computer.
  4. Navigate to NETWORKING > Network services > Cloud DNS.
  5. Click CREATE ZONE to create a DNS zone that will be managed by ExternalDNS.
  6. In the Create a DNS zone window, fill in the following fields:
     • Zone name
     • DNS name, which must contain your MCP domain address in the .local format
  7. Click Create.

  You will be redirected to the Zone details page with two DNS names of the NS and SOA type. Keep this page for verification after the ExternalDNS deployment.

• For Designate:

  1. Log in to the Horizon web UI of your OpenStack environment with Designate.
  2. Create a project with the required admin role as well as generate the access credentials for the project.
  3. Create a hosted DNS zone in this project.

• For CoreDNS, proceed to Configure cluster model for ExternalDNS.

Now, proceed to Configure cluster model for ExternalDNS.

Configure cluster model for ExternalDNS

After you prepare your DNS back end as described in Prepare a DNS back end for ExternalDNS, prepare your cluster model as described below.

To configure the cluster model:

1. Choose from the following options:
   • If you are performing the initial deployment of your MCP Kubernetes cluster:
     1. Use the Model Designer UI to create the Kubernetes cluster model. For details, see: Create a deployment metadata model using the Model Designer UI.
     2. While creating the model, select the Kubernetes externaldns enabled check box in the Kubernetes product parameters section.
   • If you are making changes to an existing MCP Kubernetes cluster, proceed to the next step.
2. Open your Git project repository.
3. In classes/cluster/<cluster_name>/kubernetes/control.yml:
   1.
If you are performing the initial deployment of your MCP Kubernetes cluster, configure the provider parameter in the snippet below depending on your DNS provider: coredns|aws|google|designate. If you are making changes to an existing cluster, add and configure the snippet below. For example: parameters: kubernetes: common: addons: externaldns: enabled: True namespace: kube-system image: mirantis/external-dns:latest domain: domain provider: coredns 2. Set up the pillar data for your DNS provider to configure it as an add-on. Use the credentials generated while preparing your DNS provider. • For Designate: parameters: kubernetes: common: addons: externaldns: externaldns: enabled: True domain: company.mydomain provider: designate designate_os_options: ©2019, Mirantis Inc. Page 108 Mirantis Cloud Platform Deployment Guide OS_AUTH_URL: https://keystone_auth_endpoint:5000 OS_PROJECT_DOMAIN_NAME: default OS_USER_DOMAIN_NAME: default OS_PROJECT_NAME: admin OS_USERNAME: admin OS_PASSWORD: password OS_REGION_NAME: RegionOne • For AWS Route 53: parameters: kubernetes: common: addons: externaldns: externaldns: enabled: True domain: company.mydomain provider: aws aws_options: AWS_ACCESS_KEY_ID: XXXXXXXXXXXXXXXXXXXX AWS_SECRET_ACCESS_KEY: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX • For Google CloudDNS: parameters: kubernetes: common: addons: externaldns: externaldns: enabled: True domain: company.mydomain provider: google google_options: key: '' project: default-123 Note You can export the credentials from the Google console and process them using the cat key.json | tr -d 'n' command. • For CoreDNS: parameters: kubernetes: ©2019, Mirantis Inc. Page 109 Mirantis Cloud Platform Deployment Guide common: addons: coredns: enabled: True namespace: kube-system image: coredns/coredns:latest etcd: operator_image: quay.io/coreos/etcd-operator:v0.5.2 version: 3.1.8 base_image: quay.io/coreos/etcd 4. Commit and push the changes to the project Git repository. 5. Log in to the Salt Master node. 6. Update your Salt formulas and the system level of your repository: 1. Change the directory to /srv/salt/reclass. 2. Run the git pull origin master command. 3. Run the salt-call state.sls salt.master command. 4. Run the salt-call state.sls reclass command. Now, proceed to Deploy ExternalDNS. ©2019, Mirantis Inc. Page 110 Mirantis Cloud Platform Deployment Guide Deploy ExternalDNS Before you deploy ExternalDNS, complete the steps described in Configure cluster model for ExternalDNS. To deploy ExternalDNS Choose from the following options: • If you are performing the initial deployment of your MCP Kubernetes cluster, deploy a Kubernetes cluster as described in Deploy a Kubernetes cluster on bare metal. The ExternalDNS will be deployed automatically by the MCP DriveTrain pipeline job during the Kubernetes cluster deployment. • If you are making changes to an existing MCP Kubernetes cluster, apply the following state: salt --hard-crash --state-output=mixed --state-verbose=False -C \ 'I@kubernetes:master' state.sls kubernetes.master.kube-addons Once the state is applied, the kube-addons.sh script applies the Kubernetes resources and they will shortly appear in the Kubernetes resources list. ©2019, Mirantis Inc. Page 111 Mirantis Cloud Platform Deployment Guide Verify ExternalDNS after deployment After you complete the steps described in Deploy ExternalDNS, verify that ExternalDNS is up and running using the procedures below depending on your DNS back end. ©2019, Mirantis Inc. 
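Regardless of the back end, a quick generic first check is to confirm that the ExternalDNS add-on pod started and is not logging errors. This is only a sketch: the kube-system namespace matches the add-on defaults shown in the cluster model above, and the pod name pattern is an assumption.

   # Run on any Kubernetes Master node: list the ExternalDNS pod and show its recent log lines.
   kubectl get pods -n kube-system | grep external-dns
   kubectl -n kube-system logs $(kubectl get pods -n kube-system -o name | grep external-dns | head -1) --tail=20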
Page 112 Mirantis Cloud Platform Deployment Guide Verify ExternalDNS with Designate back end after deployment After you complete the steps described in Deploy ExternalDNS, verify that ExternalDNS is successfully deployed with Designate back end using the procedure below. To verify ExternalDNS with Designate back end: 1. Log in to any Kubernetes Master node. 2. Source the openrc file of your OpenStack environment: source keystonerc Note If you use Keystone v3, use the source keystonercv3 command instead. 3. Open the Designate shell using the designate command. 4. Create a domain: domain-create --name nginx. .local. --email Example of system response: +-------------+---------------------------------------+ | Field | Value | +-------------+---------------------------------------+ | description | None | | created_at | 2017-10-13T16:23:26.533547 | | updated_at | None | | email | designate@example.org | | ttl | 3600 | | serial | 1423844606 | | id | ae59d62b-d655-49a0-ab4b-ea536d845a32 | | name | nginx.virtual-mcp11-k8s-calico.local. | +-------------+---------------------------------------+ 5. Verify that the domain was successfully created. Use the id parameter value from the output of the command described in the previous step. Keep this value for further verification steps. For example: record-list ae59d62b-d655-49a0-ab4b-ea536d845a32 Example of system response: ©2019, Mirantis Inc. Page 113 Mirantis Cloud Platform Deployment Guide +----+------+---------------------------------------+------------------------+ |id | type | name | data | +----+------+---------------------------------------+------------------------+ |... | NS | nginx.virtual-mcp11-k8s-calico.local. | dns01.bud.mirantis.net.| +----+------+---------------------------------------+------------------------+ 6. Start my-nginx: kubectl run my-nginx --image=nginx --port=80 Example of system response: deployment "my-nginx" created 7. Expose my-nginx: kubectl expose deployment my-nginx --port=80 --type=ClusterIP Example of system response: service "my-nginx" exposed 8. Annotate my-nginx: kubectl annotate service my-nginx \ "external-dns.alpha.kubernetes.io/hostname=nginx. .local." Example of system response: service "my-nginx" annotated 9. Verify that the domain was associated with the IP inside a Designate record by running the record-list [id] command. Use the id parameter value from the output of the command described in step 4. For example: record-list ae59d62b-d655-49a0-ab4b-ea536d845a32 Example of system response: +-----+------+--------------------------------------+---------------------------------------------------------+ | id | type | name | data | +-----+------+--------------------------------------+---------------------------------------------------------+ | ... | NS | nginx.virtual-mcp11-k8s-calico.local.| dns01.bud.mirantis.net. +-----+------+--------------------------------------+---------------------------------------------------------+ ©2019, Mirantis Inc. | Page 114 Mirantis Cloud Platform Deployment Guide | ... | A | nginx.virtual-mcp11-k8s-calico.local.| 10.254.70.16 | +-----+------+--------------------------------------+---------------------------------------------------------+ | ... | TXT | nginx.virtual-mcp11-k8s-calico.local.| "heritage=external-dns,external-dns/owner=my-identifier"| +-----+------+--------------------------------------+---------------------------------------------------------+ ©2019, Mirantis Inc. 
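As an optional final check, you can resolve the new record directly against the Designate back-end name server listed in the record-list output. The host and domain below are the example values from the output above; substitute your own domain and the NS record of your Designate back end.

   nslookup nginx.virtual-mcp11-k8s-calico.local. dns01.bud.mirantis.net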
Page 115 Mirantis Cloud Platform Deployment Guide Verify ExternalDNS with CoreDNS back end after deployment After you complete the steps described in Deploy ExternalDNS, verify that ExternalDNS is successfully deployed with CoreDNS back end using the procedure below. To verify ExternalDNS with CoreDNS back end: 1. Log in to any Kubernetes Master node. 2. Start my-nginx: kubectl run my-nginx --image=nginx --port=80 Example of system response: deployment "my-nginx" created 3. Expose my-nginx: kubectl expose deployment my-nginx --port=80 --type=ClusterIP Example of system response: service "my-nginx" exposed 4. Annotate my-nginx: kubectl annotate service my-nginx \ "external-dns.alpha.kubernetes.io/hostname=nginx. .local." Example of system response: service "my-nginx" annotated 5. Get the IP of DNS service: kubectl get svc coredns -n kube-system | awk '{print $2}' | tail -1 Example of system response: 10.254.203.8 6. Choose from the following options: • If your Kubernetes networking is Calico, run the following command from any Kubernetes Master node. ©2019, Mirantis Inc. Page 116 Mirantis Cloud Platform Deployment Guide • If your Kubernetes networking is OpenContrail, run the following command from any Kubernetes pod. nslookup nginx. .local. Example of system response: Server: 10.254.203.8 Address: 10.254.203.8#53 Name: test.my_domain.local Address: 10.254.42.128 ©2019, Mirantis Inc. Page 117 Mirantis Cloud Platform Deployment Guide Verify ExternalDNS with Google CloudDNS back end after deployment After you complete the steps described in Deploy ExternalDNS, verify that ExternalDNS is successfully deployed with Google CloudDNS back end using the procedure below. To verify ExternalDNS with Google CloudDNS back end: 1. Log in to any Kubernetes Master node. 2. Start my-nginx: kubectl run my-nginx --image=nginx --port=80 Example of system response: deployment "my-nginx" created 3. Expose my-nginx: kubectl expose deployment my-nginx --port=80 --type=ClusterIP Example of system response: service "my-nginx" exposed 4. Annotate my-nginx: kubectl annotate service my-nginx \ "external-dns.alpha.kubernetes.io/hostname=nginx. .local." Example of system response: service "my-nginx" annotated 5. Log in to your Google Cloud Platform web console. 6. Navigate to the Cloud DNS > Zone details page. 7. Verify that your DNS zone now has two more records of the A and TXT type. Both records must point to nginx. .local. ©2019, Mirantis Inc. Page 118 Mirantis Cloud Platform Deployment Guide Verify ExternalDNS with AWS Route 53 back end after deployment After you complete the steps described in Deploy ExternalDNS, verify that ExternalDNS is successfully deployed with AWS Route 53 back end using the procedure below. To verify ExternalDNS with AWS Route 53 back end: 1. Log in to any Kubernetes Master node. 2. Start my-nginx: kubectl run my-nginx --image=nginx --port=80 Example of system response: deployment "my-nginx" created 3. Expose my-nginx: kubectl expose deployment my-nginx --port=80 --type=ClusterIP Example of system response: service "my-nginx" exposed 4. Annotate my-nginx: kubectl annotate service my-nginx \ "external-dns.alpha.kubernetes.io/hostname=nginx. .local." Example of system response: service "my-nginx" annotated 5. Log in to your AWS Route 53 console. 6. Navigate to the Services > Route 53 > Hosted zones > YOUR_ZONE_NAME page. 7. Verify that your DNS zone now has two more records of the A and TXT type. Both records must point to nginx. .local. 
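If you prefer the command line over the AWS console, the same check can be scripted with the AWS CLI. This is a sketch only: it assumes the AWS CLI is installed and configured with the externaldns user credentials created earlier, and that you substitute your real hosted zone ID.

   # List only the A and TXT records of the hosted zone to confirm that ExternalDNS created them.
   aws route53 list-resource-record-sets \
     --hosted-zone-id <your_hosted_zone_id> \
     --query "ResourceRecordSets[?Type=='A' || Type=='TXT']"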
Seealso MCP Operations Guide: Kubernetes operations ©2019, Mirantis Inc. Page 119 Mirantis Cloud Platform Deployment Guide Deploy StackLight LMA with the DevOps Portal This section explains how to deploy StackLight LMA with the DevOps Portal (OSS) using Jenkins. Before you proceed with the deployment, verify that your cluster level model contains configuration to deploy StackLight LMA as well as OSS. More specifically, check whether you enabled StackLight LMA and OSS as described in Services deployment parameters, and specified all the required parameters for these MCP components as described in StackLight LMA product parameters and OSS parameters. Note For production environments, CI/CD should be deployed on a per-customer basis. For testing purposes, you can use the central Jenkins lab that is available for Mirantis employees only. To be able to configure and execute Jenkins pipelines using the lab, you need to log in to the Jenkins web UI with your Launchpad credentials. To deploy StackLight LMA with the DevOps Portal: 1. In a web browser, open http:// :8081 to access the Jenkins web UI. Note The IP address is defined in the classes/cluster/ /cicd/init.yml file of the Reclass model under the cicd_control_address parameter variable. 2. Log in to the Jenkins web UI as admin. Note The password for the admin user is defined in the classes/cluster/ /cicd/control/init.yml file of the Reclass model under the openldap_admin_password parameter variable. 3. Find the Deploy - OpenStack job in the global view. 4. Select the Build with Parameters option from the drop-down menu of the Deploy OpenStack job. 5. For the STACK_INSTALL parameter, specify the stacklight and oss values. ©2019, Mirantis Inc. Page 120 Mirantis Cloud Platform Deployment Guide Warning If you enabled Stacklight LMA and OSS in the Reclass model, you should specify both stacklight and oss to deploy them together. Otherwise, the Runbooks Automation service (Rundeck) will not start due to Salt and Rundeck behavior. Note For the details regarding other parameters for this pipeline, see Deploy - OpenStack environment parameters. 6. Click Build. 7. Once the cluster is deployed, you can access the DevOps Portal at the the IP address specified in the stacklight_monitor_address parameter on port 8800. Seealso • Deploy an OpenStack environment • View the deployment details ©2019, Mirantis Inc. Page 121 Mirantis Cloud Platform Deployment Guide View credentials details used in Jenkins pipelines MCP uses the Jenkins Credentials Plugin that enables users to store credentials in Jenkins globally. Each Jenkins pipeline can operate only the credential ID defined in the pipeline's parameters and does not share any security data. To view the detailed information about all available credentials in the Jenkins UI: 1. Log in to your Jenkins master located at http:// :8081. Note The Jenkins master IP address is defined in the classes/cluster/ /cicd/init.yml file of the Reclass model under the cicd_control_address parameter variable. 2. Navigate to the Credentials page from the left navigation menu. All credentials listed on the Credentials page are defined in the Reclass model. For example, on the system level in the ../../system/jenkins/client/credential/gerrit.yml file. 
Examples of users definitions in the Reclass model: • With the RSA key definition: jenkins: client: credential: gerrit: username: ${_param:gerrit_admin_user} key: ${_param:gerrit_admin_private_key} • With the open password: jenkins: client: credential: salt: username: salt password: ${_param:salt_api_password} ©2019, Mirantis Inc. Page 122 Mirantis Cloud Platform Deployment Guide View the deployment details Once you have enforced a pipeline in CI/CD, you can monitor the progress of its execution on the job progress bar that appears on your screen. Moreover, Jenkins enables you to analyze the details of the deployments process. To view the deployment details: 1. Log in to the Jenkins web UI. 2. Under Build History on the left, click the number of the build you are interested in. 3. Go to Console Output from the navigation menu to view the the deployment progress. 4. When the deployment succeeds, verify the deployment result in Horizon. Note The IP address for Horizon is defined in the classes/cluster/ /openstack/init.yml file of the Reclass model under the openstack_proxy_address parameter variable. To troubleshoot an OpenStack deployment: 1. Log in to the Jenkins web UI. 2. Under Build History on the left, click the number of the build you are interested in. 3. Verify Full log to determine the cause of the error. 4. Rerun the deployment with the failed component only. For example, if StackLight LMA fails, run the deployment with only StackLight selected for deployment. Use steps 6-10 of the Deploy an OpenStack environment instruction. ©2019, Mirantis Inc. Page 123 Mirantis Cloud Platform Deployment Guide Deploy an MCP cluster manually This section explains how to manually configure and install the software required for your MCP cluster. For an easier deployment process, use the automated DriveTrain deployment procedure described in Deploy an MCP cluster using DriveTrain. Note The modifications to the metadata deployment model described in this section provide only component-specific parameters and presuppose the networking-specific parameters related to each OpenStack component, since the networking model may differ depending on a per-customer basis. Deploy an OpenStack environment manually This section explains how to manually configure and install software required by your MCP OpenStack environment, such as support services, OpenStack services, and others. Prepare VMs to install OpenStack This section instructs you on how to prepare the virtual machines for the OpenStack services installation. To prepare VMs for a manual installation of an OpenStack environment: 1. Log in to the Salt Master node. 2. Verify that the Salt Minion nodes are synchronized: salt '*' saltutil.sync_all 3. Configure basic operating system settings on all nodes: salt '*' state.sls salt.minion,linux,ntp,openssh Enable TLS support To assure the confidentiality and integrity of network traffic inside your OpenStack deployment, you should use cryptographic protective measures, such as the Transport Layer Security (TLS) protocol. By default, only the traffic that is transmitted over public networks is encrypted. If you have specific security requirements, you may want to configure internal communications to connect through encrypted channels. This section explains how to enable the TLS support for your MCP cluster. ©2019, Mirantis Inc. Page 124 Mirantis Cloud Platform Deployment Guide Note The procedures included in this section apply to new MCP OpenStack deployments only, unless specified otherwise. 
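Before applying any of the TLS procedures below, it can be useful to confirm which minions are already configured to trust the certificate authority of the Salt Master node, because the certificate definitions in the following sections are generated with engine: salt. A minimal sketch, assuming the trusted_ca_minions pillar key shown later in this section:

   # Run on the Salt Master node; empty output for a node means it does not yet trust the Salt Master CA.
   salt '*' pillar.get salt:minion:trusted_ca_minions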
Encrypt internal API HTTP transport with TLS This section explains how to encrypt the internal OpenStack API HTTP with TLS. Note The procedures included in this section apply to new MCP OpenStack deployments only, unless specified otherwise. To encrypt the internal API HTTP transport with TLS: 1. Verify that the Keystone, Nova Placement, Cinder, Barbican, Gnocchi, Panko, and Manila API services, whose formulas support using Web Server Gateway Interface (WSGI) templates from Apache, are running under Apache by adding the following classes to your deployment model: • In openstack/control.yml: classes: ... - system.apache.server.site.barbican - system.apache.server.site.cinder - system.apache.server.site.gnocchi - system.apache.server.site.manila - system.apache.server.site.nova-placement - system.apache.server.site.panko • In openstack/telemetry.yml: classes: ... - system.apache.server.site.gnocchi - system.apache.server.site.panko 2. Add SSL configuration for each WSGI template by specifying the following parameters: • In openstack/control.yml: parameters: _param: ... ©2019, Mirantis Inc. Page 125 Mirantis Cloud Platform Deployment Guide apache_proxy_ssl: enabled: true engine: salt authority: "${_param:salt_minion_ca_authority}" key_file: "/etc/ssl/private/internal_proxy.key" cert_file: "/etc/ssl/certs/internal_proxy.crt" chain_file: "/etc/ssl/certs/internal_proxy-with-chain.crt" apache_cinder_ssl: ${_param:apache_proxy_ssl} apache_keystone_ssl: ${_param:apache_proxy_ssl} apache_barbican_ssl: ${_param:apache_proxy_ssl} apache_manila_ssl: ${_param:apache_proxy_ssl} apache_nova_placement: ${_param:apache_proxy_ssl} • In openstack/telemetry.yml: parameters: _param: ... apache_gnocchi_api_address: ${_param:single_address} apache_panko_api_address: ${_param:single_address} apache_gnocchi_ssl: ${_param:nginx_proxy_ssl} apache_panko_ssl: ${_param:nginx_proxy_ssl} 3. For services that are still running under Eventlet, configure TLS termination proxy. Such services include Nova, Neutron, Ironic, Glance, Heat, Aodh, and Designate. Depending on your use case, configure proxy on top of either Apache or NGINX by defining the following classes and parameters: • In openstack/control.yml: • To configure proxy on Apache: classes: ... - system.apache.server.proxy.openstack.designate - system.apache.server.proxy.openstack.glance - system.apache.server.proxy.openstack.heat - system.apache.server.proxy.openstack.ironic - system.apache.server.proxy.openstack.neutron - system.apache.server.proxy.openstack.nova parameters: _param: ... # Configure proxy to redirect request to locahost: apache_proxy_openstack_api_address: ${_param:cluster_local_host} apache_proxy_openstack_designate_host: 127.0.0.1 apache_proxy_openstack_glance_host: 127.0.0.1 ©2019, Mirantis Inc. Page 126 Mirantis Cloud Platform Deployment Guide apache_proxy_openstack_heat_host: 127.0.0.1 apache_proxy_openstack_ironic_host: 127.0.0.1 apache_proxy_openstack_neutron_host: 127.0.0.1 apache_proxy_openstack_nova_host: 127.0.0.1 • To configure proxy on NGINX: classes: ... - system.nginx.server.single - system.nginx.server.proxy.openstack_api - system.nginx.server.proxy.openstack.designate - system.nginx.server.proxy.openstack.ironic - system.nginx.server.proxy.openstack.placement # Delete proxy sites that are running under Apache: _param: ... nginx: server: site: nginx_proxy_openstack_api_keystone: enabled: false nginx_proxy_openstack_api_keystone_private: enabled: false ... # Configure proxy to redirect request to locahost _param: ... 
nginx_proxy_openstack_api_address: ${_param:cluster_local_address} nginx_proxy_openstack_cinder_host: 127.0.0.1 nginx_proxy_openstack_designate_host: 127.0.0.1 nginx_proxy_openstack_glance_host: 127.0.0.1 nginx_proxy_openstack_heat_host: 127.0.0.1 nginx_proxy_openstack_ironic_host: 127.0.0.1 nginx_proxy_openstack_neutron_host: 127.0.0.1 nginx_proxy_openstack_nova_host: 127.0.0.1 # Add nginx SSL settings: _param: ... nginx_proxy_ssl: enabled: true engine: salt authority: "${_param:salt_minion_ca_authority}" key_file: "/etc/ssl/private/internal_proxy.key" ©2019, Mirantis Inc. Page 127 Mirantis Cloud Platform Deployment Guide cert_file: "/etc/ssl/certs/internal_proxy.crt" chain_file: "/etc/ssl/certs/internal_proxy-with-chain.crt" • In openstack/telemetry.yml: classes: ... - system.nginx.server.proxy.openstack_aodh ... parameters: _param: ... nginx_proxy_openstack_aodh_host: 127.0.0.1 4. Edit the openstack/init.yml file: 1. Add the following parameters to the cluster model: parameters: _param: ... cluster_public_protocol: https cluster_internal_protocol: https aodh_service_protocol: ${_param:cluster_internal_protocol} barbican_service_protocol: ${_param:cluster_internal_protocol} cinder_service_protocol: ${_param:cluster_internal_protocol} designate_service_protocol: ${_param:cluster_internal_protocol} glance_service_protocol: ${_param:cluster_internal_protocol} gnocchi_service_protocol: ${_param:cluster_internal_protocol} heat_service_protocol: ${_param:cluster_internal_protocol} ironic_service_protocol: ${_param:cluster_internal_protocol} keystone_service_protocol: ${_param:cluster_internal_protocol} manila_service_protocol: ${_param:cluster_internal_protocol} neutron_service_protocol: ${_param:cluster_internal_protocol} nova_service_protocol: ${_param:cluster_internal_protocol} panko_service_protocol: ${_param:cluster_internal_protocol} 2. Depending on your use case, define the following parameters for the OpenStack services to verify that the services running behind TLS proxy are binded to the localhost: • In openstack/control.yml: OpenStack service ©2019, Mirantis Inc. Required configuration Page 128 Mirantis Cloud Platform Deployment Guide Barbican Cinder Designate Glance Heat Horizon Ironic ©2019, Mirantis Inc. bind: address: 127.0.0.1 identity: protocol: https identity: protocol: https osapi: host: 127.0.0.1 glance: protocol: https identity: protocol: https bind: api: address: 127.0.0.1 bind: address: 127.0.0.1 identity: protocol: https registry: protocol: https bind: api: address: 127.0.0.1 api_cfn: address: 127.0.0.1 api_cloudwatch: address: 127.0.0.1 identity: protocol: https identity: encryption: ssl ironic: bind: api: address: 127.0.0.1 Page 129 Mirantis Cloud Platform Deployment Guide Neutron Nova Panko bind: address: 127.0.0.1 identity: protocol: https controller: bind: private_address: 127.0.0.1 identity: protocol: https network: protocol: https glance: protocol: https metadata: bind: address: ${_param:nova_service_host} panko: server: bind: host: 127.0.0.1 • In openstack/telemetry.yml: parameters: _param: ... aodh server: bind: host: 127.0.0.1 identity: protocol: http gnocchi: server: identity: protocol: http panko: server: identity: protocol: https 5. Apply the model changes to your deployment: ©2019, Mirantis Inc. 
   salt -C 'I@haproxy' state.apply haproxy
   salt -C 'I@apache' state.apply apache
   salt 'ctl0*' state.apply keystone,nova,neutron,heat,glance,cinder,designate,manila,ironic
   salt 'mdb0*' state.apply aodh,ceilometer,panko,gnocchi

Enable TLS for RabbitMQ and MySQL back ends

Using TLS protects the communications within your cloud environment from tampering and eavesdropping. This section explains how to configure the OpenStack database and message queue back ends to require TLS.

Caution!
TLS for MySQL is supported starting from the Pike OpenStack release.

Note
The procedures included in this section apply to new MCP OpenStack deployments only, unless specified otherwise.

To encrypt RabbitMQ and MySQL communications:

1. Add the following classes to the cluster model of the nodes where the server is located:

   • For the RabbitMQ server:

     classes:
     ### Enable tls, contains paths to certs/keys
     - service.rabbitmq.server.ssl
     ### Definition of cert/key
     - system.salt.minion.cert.rabbitmq_server

   • For the MySQL server (Galera cluster):

     classes:
     ### Enable tls, contains paths to certs/keys
     - service.galera.ssl
     ### Definition of cert/key
     - system.salt.minion.cert.mysql.server

2. Verify that each node trusts the CA certificates that come from the Salt Master node:

   _param:
     salt_minion_ca_host: cfg01.${_param:cluster_domain}
   salt:
     minion:
       trusted_ca_minions:
       - cfg01.${_param:cluster_domain}

3. Deploy RabbitMQ and MySQL as described in Install support services.

4. Apply the changes by executing the salt.minion state:

   salt -I salt:minion:enabled state.apply salt.minion

Seealso

• Database transport security in the OpenStack Security Guide
• Messaging security in the OpenStack Security Guide

Enable TLS for client-server communications

This section explains how to encrypt the communication paths between the OpenStack services and the message queue service (RabbitMQ) as well as the MySQL database.

Note
The procedures included in this section apply to new MCP OpenStack deployments only, unless specified otherwise.

To enable TLS for client-server communications:

1. For each of the OpenStack services, enable the TLS protocol usage for messaging and database communications by changing the cluster model as shown in the examples below:

   • For a controller node:

     • The database server configuration example:

       classes:
       - system.salt.minion.cert.mysql.server
       - service.galera.ssl
       parameters:
         barbican:
           server:
             database:
               ssl:
                 enabled: True
         heat:
           server:
             database:
               ssl:
                 enabled: True
         designate:
           server:
             database:
               ssl:
                 enabled: True
         glance:
           server:
             database:
               ssl:
                 enabled: True
         neutron:
           server:
             database:
               ssl:
                 enabled: True
         nova:
           controller:
             database:
               ssl:
                 enabled: True
         cinder:
           controller:
             database:
               ssl:
                 enabled: True
           volume:
             database:
               ssl:
                 enabled: True
         keystone:
           server:
             database:
               ssl:
                 enabled: True

     • The messaging server configuration example:

       classes:
       - service.rabbitmq.server.ssl
       - system.salt.minion.cert.rabbitmq_server
Page 133 Mirantis Cloud Platform Deployment Guide parameters: designate: server: message_queue: port: 5671 ssl: enabled: True barbican: server: message_queue: port: 5671 ssl: enabled: True heat: server: message_queue: port: 5671 ssl: enabled: True glance: server: message_queue: port: 5671 ssl: enabled: True neutron: server: message_queue: port: 5671 ssl: enabled: True nova: controller: message_queue: port: 5671 ssl: enabled: True cinder: controller: message_queue: ©2019, Mirantis Inc. Page 134 Mirantis Cloud Platform Deployment Guide port: 5671 ssl: enabled: True volume: message_queue: port: 5671 ssl: enabled: True keystone: server: message_queue: port: 5671 ssl: enabled: True • For a compute node, the messaging server configuration example: parameters: neutron: compute: message_queue: port: 5671 ssl: enabled: True nova: compute: message_queue: port: 5671 ssl: enabled: True • For a gateway node, the messaging configuration example: parameters: neutron: gateway: message_queue: port: 5671 ssl: enabled: True 2. Refresh the pillar data to synchronize the model update at all nodes: salt '*' saltutil.refresh_pillar salt '*' saltutil.sync_all 3. Proceed to Install OpenStack services. ©2019, Mirantis Inc. Page 135 Mirantis Cloud Platform Deployment Guide Enable libvirt control channel and live migration over TLS This section explains how to enable TLS encryption for libvirt. By protecting libvirt with TLS, you prevent your cloud workloads from security compromise. The attacker without an appropriate TLS certificate will not be able to connect to libvirtd and affect its operation. Even if the user does not define custom certificates in their Reclass configuration, the certificates are created automatically. Note The procedures included in this section apply to new MCP OpenStack deployments only, unless specified otherwise. To enable libvirt control channel and live migration over TLS: 1. Log in to the Salt Master node. 2. Select from the following options: • To use dynamically generated pillars from the Salt minion with the automatically generated certificates, add the following class in the classes/cluster/cluster_name/openstack/compute/init.yml of your Recalss model: classes: ... - system.nova.compute.libvirt.ssl • To install the pre-created certificates, define them as follows in the pillar: nova: compute: libvirt: tls: enabled: True key: certificate_content cert: certificate_content cacert: certificate_content client: key: certificate_content cert: certificate_content 3. Apply the changes by running the nova state for all compute nodes: salt 'cmp*' state.apply nova Enable TLS encryption between the OpenStack compute nodes and VNC clients The Virtual Network Computing (VNC) provides a remote console or remote desktop access to guest virtual machines through either the OpenStack dashboard or the command-line interface. ©2019, Mirantis Inc. Page 136 Mirantis Cloud Platform Deployment Guide The OpenStack Compute service users can access their instances using the VNC clients through the VNC proxy. MCP enables you to encrypt the communication between the VNC clients and OpenStack сompute nodes with TLS. Note The procedures included in this section apply to new MCP OpenStack deployments only, unless specified otherwise. To enable TLS encryption for VNC: 1. Open your Reclass model Git repository on the cluster level. ©2019, Mirantis Inc. Page 137 Mirantis Cloud Platform Deployment Guide 2. 
Enable the TLS encryption of communications between the OpenStack compute nodes and VNC proxy: Note The data encryption over TLS between the OpenStack compute nodes and VNC proxy is supported starting with the OpenStack Pike release. 1. In openstack/compute/init.yml, enable the TLS encryption on the OpenStack compute nodes: - system.nova.compute.libvirt.ssl.vnc parameters: _param: ... nova_vncproxy_url: https://${_param:cluster_public_host}:6080 2. In openstack/control.yml, enable the TLS encryption on the VNC proxy: - system.nova.control.novncproxy.tls parameters: _param: ... nova_vncproxy_url: https://${_param:cluster_public_host}:6080 3. In openstack/proxy.yml, define the HTTPS protocol for the nginx_proxy_novnc site: nginx: server: site: nginx_proxy_novnc: proxy: protocol: https 3. Enable the TLS encryption of communications between VNC proxy and VNC clients in openstack/control.yml: Note The data encryption over TLS between VNC proxy and VNC clients is supported starting with the OpenStack Queens release. ©2019, Mirantis Inc. Page 138 Mirantis Cloud Platform Deployment Guide nova: controller: novncproxy: tls: enabled: True 4. Apply the changes: salt 'cmp*' state.apply nova salt 'ctl*' state.apply nova salt 'prx*' state.apply nginx Configure OpenStack APIs to use X.509 certificates for MySQL MCP enables you to enhance the security of your OpenStack cloud by requiring X.509 certificates for authentication. Configuring OpenStack APIs to use X.509 certificates for communicating with the MySQL database provides greater identity assurance of OpenStack clients making the connection to the database and ensures that the communications are encrypted. When configuring X.509 for your MCP cloud, you enable the TLS support for the communications between MySQL and the OpenStack services. The OpenStack services that support X.509 certificates include: Aodh, Barbican, Cinder, Designate, Glance, Gnocchi, Heat, Ironic, Keystone, Manila Neutron, Nova, and Panko. Note The procedures included in this section apply to new MCP OpenStack deployments only, unless specified otherwise. To enable the X.509 and SSL support: 1. Configure the X.509 support on the Galera side: 1. Include the following deployment model: class to cluster_name/openstack/database.yml of your system.galera.server.database.x509. 2. Apply the changes by running the galera state: Note On an existing environment, the already existing database users and their privileges will not be replaced automatically. If you want to replace the existing users, you need to remove them manually before applying the galera state. ©2019, Mirantis Inc. Page 139 Mirantis Cloud Platform Deployment Guide salt -C 'I@galera:master' state.sls galera 2. Configure the X.509 support on the service side: 1. Configure all OpenStack APIs that support X.509 to use X.509 certificates by setting openstack_mysql_x509_enabled: True on the cluster level of your deployment model: parameters: _param: openstack_mysql_x509_enabled: True 2. Define the certificates: 1. Generate certificates automatically using Salt: salt '*' state.sls salt.minion 2. Optional. Define pre-created certificates for particular services in pillars as described in the table below. Note The table illustrates how to define pre-created certificates through paths. Though, you can include a certificate content to a pillar instead. 
For example, for the Aodh, use the following structure: aodh: server: database: x509: cacert: (certificate content) cert: (certificate content) key: (certificate content) OpenStack service Aodh ©2019, Mirantis Inc. Define custom certificates in pillar aodh: server: database: x509: ca_cert: cert_file: key_file: Apply the change salt -C 'I@aodh:server' state.sls aodh Page 140 Mirantis Cloud Platform Deployment Guide Barbican Cinder Designate Glance Gnocchi ©2019, Mirantis Inc. barbican: server: database: x509: ca_cert: cert_file: key_file: cinder: controller: database: x509: ca_cert: cert_file: key_file: volume: database: x509: ca_cert: cert_file: key_file: designate: server: database: x509: ca_cert: cert_file: key_file: glance: server: database: x509: ca_cert: cert_file: key_file: gnocchi: common: database: x509: ca_cert: cert_file: key_file: salt -C 'I@barbican:server' state.sls barbican.server salt -C 'I@cinder:controller' state.sls cinder salt -C 'I@designate:server' state.sls designate salt -C 'I@glance:server' state.sls glance.server salt -C 'I@gnocchi:server' state.sls gnocchi.server Page 141 Mirantis Cloud Platform Deployment Guide Heat Ironic Keystone Manila Neutron ©2019, Mirantis Inc. heat: server: database: x509: ca_cert: cert_file: key_file: ironic: api: database: x509: ca_cert: cert_file: key_file: conductor: database: x509: ca_cert: cert_file: key_file: keystone: server: database: x509: ca_cert: cert_file: key_file: manila: common: database: x509: ca_cert: cert_file: key_file: neutron: server: database: x509: ca_cert: cert_file: key_file: salt -C 'I@heat:server' state.sls heat salt -C 'I@ironic:api' state.sls ironic.api salt -C 'I@ironic:conductor' state.sls ironic.conductor salt -C 'I@keystone:server' state.sls keystone.server salt -C 'I@manila:common' state.sls manila salt -C 'I@neutron:server' state.sls neutron.server Page 142 Mirantis Cloud Platform Deployment Guide Nova Panko nova: controller: database: x509: ca_cert: cert_file: key_file: panko: server: database: x509: ca_cert: cert_file: key_file: salt -C 'I@nova:controller' state.sls nova.controller salt -C 'I@panko:server' state.sls panko 3. To verify that a particular client is able to authorize with X.509, verify the output of the mysql --user-name= on any controller node. For example: mysql --user-name=nova --host=10.11.0.50 --password= --silent \ --ssl-ca=/etc/nova/ssl/mysql/ca-cert.pem \ --ssl-cert=/etc/nova/ssl/mysql/client-cert.pem \ --ssl-key=/etc/nova/ssl/mysql/client-key.pem Configure OpenStack APIs to use X.509 certificates for RabbitMQ MCP enables you to enhance the security of your OpenStack environment by requiring X.509 certificates for authentication. Configuring the OpenStack services to use X.509 certificates for communicating with the RabbitMQ server provides greater identity assurance of OpenStack clients making the connection to message_queue and ensures that the communications are encrypted. When configuring X.509 for your MCP cloud, you enable the TLS support for the communications between RabbitMQ and the OpenStack services. The OpenStack services that support X.509 certificates for communicating with the RabbitMQ server include Aodh, Barbican, Cinder, Designate, Glance, Heat, Ironic, Keystone, Manila, Neutron, and Nova. Note The procedures included in this section apply to new MCP OpenStack deployments only, unless specified otherwise. To enable the X.509 and SSL support for communications between the OpenStack services and RabbitMQ: 1. 
Configure the X.509 support on the RabbitMQ server side: ©2019, Mirantis Inc. Page 143 Mirantis Cloud Platform Deployment Guide 1. Include the following class to /openstack/message_queue.yml of your deployment model: - system.rabbitmq.server.ssl 2. Refresh the pillars: salt -C 'I@rabbitmq:server' saltutil.refresh_pillar 3. Verify the pillars: Note X.509 remains disabled until you enable it on the cluster level as described further in this procedure. salt -C 'I@rabbitmq:server' pillar.get rabbitmq:server:x509 2. Configure the X.509 support on the service side: 1. Configure all OpenStack services that support X.509 to use X.509 certificates for RabbitMQ by setting the following parameters on the cluster level of your deployment model in /openstack/init.yml: parameters: _param: rabbitmq_ssl_enabled: True openstack_rabbitmq_x509_enabled: True openstack_rabbitmq_port: 5671 2. Refresh the pillars: salt '*' saltutil.refresh_pillar 3. Verify that the pillars for the OpenStack services are updated. For example, for the Nova controller: salt -C 'I@nova:controller' pillar.get nova:controller:message_queue:x509 Example of system response: ctl03.example-cookiecutter-model.local: ---------ca_file: /etc/nova/ssl/rabbitmq/ca-cert.pem cert_file: ©2019, Mirantis Inc. Page 144 Mirantis Cloud Platform Deployment Guide /etc/nova/ssl/rabbitmq/client-cert.pem enabled: True key_file: /etc/nova/ssl/rabbitmq/client-key.pem ctl02.example-cookiecutter-model.local: ---------ca_file: /etc/nova/ssl/rabbitmq/ca-cert.pem cert_file: /etc/nova/ssl/rabbitmq/client-cert.pem enabled: True key_file: /etc/nova/ssl/rabbitmq/client-key.pem ctl01.example-cookiecutter-model.local: ---------ca_file: /etc/nova/ssl/rabbitmq/ca-cert.pem cert_file: /etc/nova/ssl/rabbitmq/client-cert.pem enabled: True key_file: /etc/nova/ssl/rabbitmq/client-key.pem 3. Generate certificates automatically using Salt: 1. For the OpenStack services: salt '*' state.sls salt.minion 2. For the RabbitMQ server: salt -C 'I@rabbitmq:server' state.sls salt.minion.cert 4. Verify that the RabbitmMQ cluster is healthy: salt -C 'I@rabbitmq:server' cmd.run 'rabbitmqctl cluster_status' 5. Apply the changes on the server side: salt -C 'I@rabbitmq:server' state.sls rabbitmq 6. Apply the changes for the OpenStack services by running the appropriate service states listed in the Apply the change column of the Definition of custom X.509 certificates for RabbitMQ table in the next step. ©2019, Mirantis Inc. Page 145 Mirantis Cloud Platform Deployment Guide 7. Optional. Define pre-created certificates for particular services in pillars as described in the table below. Note The table illustrates how to define pre-created certificates through paths. Though, you can include a certificate content to a pillar instead. For example, for the Aodh, use the following structure: aodh: server: message_queue: x509: cacert: cert: key: Definition of custom X.509 certificates for RabbitMQ OpenStack service Aodh Barbican Define custom certificates in pillar aodh: server: message_queue: x509: ca_cert: cert_file: key_file: barbican: server: message_queue: x509: ca_cert: cert_file: key_file: ©2019, Mirantis Inc. 
Apply the change salt -C 'I@aodh:server' state.sls aodh salt -C 'I@barbican:server' state.sls barbican.server Page 146 Mirantis Cloud Platform Deployment Guide Cinder Designate Glance Heat salt -C 'I@cinder:controller or I@cinder:volume' state.sls cinder cinder: controller: message_queue: x509: ca_cert: cert_file: key_file: volume: message_queue: x509: ca_cert: cert_file: key_file: designate: server: message_queue: x509: ca_cert: cert_file: key_file: salt -C 'I@designate:server' state.sls designate glance: server: message_queue: x509: ca_cert: cert_file: key_file: salt -C 'I@glance:server' state.sls glance.server heat: server: message_queue: x509: ca_cert: cert_file: key_file: salt -C 'I@heat:server' state.sls heat ©2019, Mirantis Inc. Page 147 Mirantis Cloud Platform Deployment Guide Ironic Keystone Manila Neutron ironic: api: message_queue: x509: ca_cert: cert_file: key_file: conductor: message_queue: x509: ca_cert: cert_file: key_file: keystone: server: message_queue: x509: ca_cert: cert_file: key_file: manila: common: message_queue: x509: ca_cert: cert_file: key_file: cert_file: key_file: neutron: gateway: message_queue: x509: ca_cert: cert_file: key_file: ©2019, Mirantis Inc. Page 148 Mirantis Cloud Platform Deployment Guide Nova nova: controller: message_queue: x509: ca_cert: cert_file: key_file: salt -C 'I@nova:controller or I@nova:compute' state.sls nova nova: compute: message_queue: x509: ca_cert: cert_file: key_file: 8. To verify that a particular client can authorize to RabbitMQ with an X.509 certificate, verify the output of the rabbitmqctl list_connections command on any RabbitMQ node. For example: salt msg01* cmd.run 'rabbitmqctl list_connections peer_host peer_port peer_cert_subject ssl' Install support services Your installation should include a number of support services such as RabbitMQ for messaging; HAProxy for load balancing, proxying, and HA; GlusterFS for storage; and others. This section provides the procedures to install the services and verify they are up and running. Warning The HAProxy state should not be deployed prior to Galera. Otherwise, the Galera deployment will fail because of the ports/IP are not available due to HAProxy is already listening on them attempting to bind to 0.0.0.0. Therefore, verify that your deployment workflow is correct: 1. Keepalived 2. Galera 3. HAProxy Deploy Keepalived Keepalived is a framework that provides high availability and load balancing to Linux systems. Keepalived provides a virtual IP address that network clients use as a main entry point to access the CI/CD services distributed between nodes. Therefore, in MCP, Keepalived is used in HA ©2019, Mirantis Inc. Page 149 Mirantis Cloud Platform Deployment Guide (multiple-node warm-standby) configuration to keep track of services availability and manage failovers. Warning The HAProxy state should not be deployed prior to Galera. Otherwise, the Galera deployment will fail because of the ports/IP are not available due to HAProxy is already listening on them attempting to bind to 0.0.0.0. Therefore, verify that your deployment workflow is correct: 1. Keepalived 2. Galera 3. HAProxy To deploy Keepalived: salt -C 'I@keepalived:cluster' state.sls keepalived -b 1 To verify the VIP address: 1. 
Determine the VIP address for the current environment:

salt -C 'I@keepalived:cluster' pillar.get keepalived:cluster:instance:VIP:address

Example of system output:

ctl03.mk22-lab-basic.local:
    172.16.10.254
ctl02.mk22-lab-basic.local:
    172.16.10.254
ctl01.mk22-lab-basic.local:
    172.16.10.254

Note
You can also find the Keepalived VIP address in the following files of the Reclass model:
• /usr/share/salt-formulas/reclass/service/keepalived/cluster/single.yml, parameter keepalived.cluster.instance.VIP.address
• /srv/salt/reclass/classes/cluster/ /openstack/control.yml, parameter cluster_vip_address

2. Verify that the obtained VIP address is assigned to a network interface on one of the controller nodes:

salt -C 'I@keepalived:cluster' cmd.run "ip a | grep "

Note
Keep in mind that multiple Keepalived clusters are defined. Therefore, verify that all of them are up and running.

Deploy NTP

The Network Time Protocol (NTP) is used to properly synchronize services among your OpenStack nodes.

To deploy NTP:

salt '*' state.sls ntp

Seealso
Enable NTP authentication

Deploy GlusterFS

GlusterFS is a highly scalable distributed network file system that enables you to create reliable and redundant data storage. GlusterFS keeps all important data for the database, Artifactory, and Gerrit in shared storage on separate volumes, which makes the MCP CI infrastructure fully tolerant to failovers.

To deploy GlusterFS:

salt -C 'I@glusterfs:server' state.sls glusterfs.server.service
salt -C 'I@glusterfs:server' state.sls glusterfs.server.setup -b 1

To verify GlusterFS:

salt -C 'I@glusterfs:server' cmd.run "gluster peer status; gluster volume status" -b 1

Deploy RabbitMQ

RabbitMQ is an intermediary for messaging. It provides a platform to send and receive messages for applications and a safe place for messages to live until they are received. All OpenStack services depend on RabbitMQ message queues to communicate and distribute the workload across workers.

To deploy RabbitMQ:

1. Log in to the Salt Master node.
2. Apply the rabbitmq state:

   salt -C 'I@rabbitmq:server' state.sls rabbitmq

3. Verify the RabbitMQ status:

   salt -C 'I@rabbitmq:server' cmd.run "rabbitmqctl cluster_status"

Deploy Galera (MySQL)

Galera cluster is a synchronous multi-master database cluster based on the MySQL storage engine. Galera is an HA service that provides scalability and high system uptime.

Warning
The HAProxy state should not be deployed prior to Galera. Otherwise, the Galera deployment will fail because the required ports and IP addresses are not available: HAProxy will already be listening on them or attempting to bind to 0.0.0.0. Therefore, verify that your deployment workflow is correct:
1. Keepalived
2. Galera
3. HAProxy

To deploy Galera:

1. Log in to the Salt Master node.
2. Apply the galera state:

   salt -C 'I@galera:master' state.sls galera
   salt -C 'I@galera:slave' state.sls galera -b 1

3. Verify that Galera is up and running:

   salt -C 'I@galera:master' mysql.status | grep -A1 wsrep_cluster_size
   salt -C 'I@galera:slave' mysql.status | grep -A1 wsrep_cluster_size
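For a healthy cluster, every Galera node is expected to report a wsrep_cluster_size that matches the number of deployed database nodes. The following response is only an illustrative sketch that assumes a three-node cluster and hypothetical node names; the exact layout of the mysql.status output may differ:

dbs01.example-cookiecutter-model.local:
    wsrep_cluster_size:
        3
dbs02.example-cookiecutter-model.local:
    wsrep_cluster_size:
        3

If the reported size is lower than the number of deployed database nodes, fix the Galera cluster before proceeding to the HAProxy deployment.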
Deploy HAProxy

HAProxy is software that provides load balancing for network connections, while Keepalived is used for configuring the IP address of the VIP.

Warning
The HAProxy state should not be deployed prior to Galera. Otherwise, the Galera deployment will fail because the required ports and IP addresses are not available: HAProxy will already be listening on them or attempting to bind to 0.0.0.0. Therefore, verify that your deployment workflow is correct:
1. Keepalived
2. Galera
3. HAProxy

To deploy HAProxy:

salt -C 'I@haproxy:proxy' state.sls haproxy
salt -C 'I@haproxy:proxy' service.status haproxy
salt -I 'haproxy:proxy' service.restart rsyslog

Deploy Memcached

Memcached is used for caching data for different OpenStack services, such as Keystone.

To deploy Memcached:

salt -C 'I@memcached:server' state.sls memcached

Deploy a DNS back end for Designate

Berkeley Internet Name Domain (BIND9) and PowerDNS are the two underlying Domain Name System (DNS) servers that Designate supports out of the box. You can use either a new or an existing DNS server as a back end for Designate.

Deploy BIND9 for Designate

The Berkeley Internet Name Domain (BIND9) server can be used by Designate as its underlying back end. This section describes how to configure an existing or deploy a new BIND9 server for Designate.

Configure an existing BIND9 server for Designate

If you already have a running BIND9 server, you can configure and use it for the Designate deployment. The example configuration below has three predeployed BIND9 servers.

To configure an existing BIND9 server for Designate:

1. Open your BIND9 server UI.
2. Verify that the BIND9 configuration files contain rndc.key for Designate. The following text is an example of /etc/bind/named.conf.local on the managed BIND9 server with the IPs allowed for Designate and rndc.key:

   key "designate" {
     algorithm hmac-sha512;
     secret "4pc+X4PDqb2q+5o72dISm72LM1Ds9X2EYZjqg+nmsS7F/C8H+z0fLLBunoitw==";
   };
   controls {
     inet 10.0.0.3 port 953
     allow { 172.16.10.101; 172.16.10.102; 172.16.10.103; }
     keys { designate; };
   };

3. Open classes/cluster/cluster_name/openstack in your Git project repository.
4. In init.yml, add the following parameters:

   bind9_node01_address: 10.0.0.1
   bind9_node02_address: 10.0.0.2
   bind9_node03_address: 10.0.0.3
   mysql_designate_password: password
   keystone_designate_password: password
   designate_service_host: ${_param:openstack_control_address}
   designate_bind9_rndc_algorithm: hmac-sha512
   designate_bind9_rndc_key: >
     4pc+X4PDqb2q+5o72dISm72LM1Ds9X2EYZjqg+nmsS7F/C8H+z0fLLBunoitw==
   designate_domain_id: 5186883b-91fb-4891-bd49-e6769234a8fc
   designate_pool_ns_records:
     - hostname: 'ns1.example.org.'
       priority: 10
   designate_pool_nameservers:
     - host: ${_param:bind9_node01_address}
       port: 53
     - host: ${_param:bind9_node02_address}
       port: 53
     - host: ${_param:bind9_node03_address}
       port: 53
   designate_pool_target_type: bind9
   designate_pool_target_masters:
     - host: ${_param:openstack_control_node01_address}
       port: 5354
     - host: ${_param:openstack_control_node02_address}
       port: 5354
     - host: ${_param:openstack_control_node03_address}
       port: 5354
   designate_pool_target_options:
     host: ${_param:bind9_node01_address}
     port: 53
     rndc_host: ${_param:bind9_node01_address}
     rndc_port: 953
     rndc_key_file: /etc/designate/rndc.key
   designate_version: ${_param:openstack_version}

5. In control.yml, modify the parameters section. Add targets according to the number of BIND9 servers that will be managed, three in our case.
Example: designate: server: backend: bind9: rndc_key: ${_param:designate_bind9_rndc_key} rndc_algorithm: ${_param:designate_bind9_rndc_algorithm} pools: default: description: 'test pool' targets: default: description: 'test target1' default1: type: ${_param:designate_pool_target_type} description: 'test target2' masters: ${_param:designate_pool_target_masters} options: host: ${_param:bind9_node02_address} port: 53 rndc_host: ${_param:bind9_node02_address} rndc_port: 953 rndc_key_file: /etc/designate/rndc.key default2: type: ${_param:designate_pool_target_type} description: 'test target3' masters: ${_param:designate_pool_target_masters} options: host: ${_param:bind9_node03_address} port: 53 rndc_host: ${_param:bind9_node03_address} ©2019, Mirantis Inc. Page 155 Mirantis Cloud Platform Deployment Guide rndc_port: 953 rndc_key_file: /etc/designate/rndc.key 6. Add your changes to a new commit. 7. Commit and push the changes. Once done, proceed to deploy Designate as described in Deploy Designate. Prepare a deployment model for a new BIND9 server Before you deploy a BIND9 server as a back end for Designate, prepare your cluster deployment model as described below. The example provided in this section describes the configuration of the deployment model with two BIND9 servers deployed on separate VMs of the infrastructure nodes. To prepare a deployment model for a new BIND9 server: 1. Open the classes/cluster/cluster_name/openstack directory in your Git project repository. 2. Create a dns.yml file with the following parameters: classes: - system.linux.system.repo.mcp.extra - system.linux.system.repo.mcp.apt_mirantis.ubuntu - system.linux.system.repo.mcp.apt_mirantis.saltstack - system.bind.server.single - cluster.cluster_name.infra parameters: linux: network: interface: ens3: ${_param:linux_single_interface} bind: server: key: designate: secret: "${_param:designate_bind9_rndc_key}" algorithm: "${_param:designate_bind9_rndc_algorithm}" allow_new_zones: true query: true control: mgmt: enabled: true bind: address: ${_param:single_address} port: 953 allow: - ${_param:openstack_control_node01_address} - ${_param:openstack_control_node02_address} - ${_param:openstack_control_node03_address} ©2019, Mirantis Inc. Page 156 Mirantis Cloud Platform Deployment Guide - ${_param:single_address} - 127.0.0.1 keys: - designate client: enabled: true option: default: server: 127.0.0.1 port: 953 key: designate key: designate: secret: "${_param:designate_bind9_rndc_key}" algorithm: "${_param:designate_bind9_rndc_algorithm}" Note In the parameters above, substitute cluster_name with the appropriate value. 3. In control.yml, modify the parameters section as follows. Add targets according to the number of the BIND9 servers that will be managed. designate: server: backend: bind9: rndc_key: ${_param:designate_bind9_rndc_key} rndc_algorithm: ${_param:designate_bind9_rndc_algorithm} pools: default: description: 'test pool' targets: default: description: 'test target1' default1: type: ${_param:designate_pool_target_type} description: 'test target2' masters: ${_param:designate_pool_target_masters} options: host: ${_param:openstack_dns_node02_address} port: 53 rndc_host: ${_param:openstack_dns_node02_address} rndc_port: 953 rndc_key_file: /etc/designate/rndc.key ©2019, Mirantis Inc. Page 157 Mirantis Cloud Platform Deployment Guide Note In the example above, the first target that contains default parameters is defined in openstack/init.yml. The second target is defined explicitly. 
You can add more targets in this section as required. 4. In init.yml, modify the parameters section. Example: openstack_dns_node01_hostname: dns01 openstack_dns_node02_hostname: dns02 openstack_dns_node01_deploy_address: 10.0.0.8 openstack_dns_node02_deploy_address: 10.0.0.9 openstack_dns_node01_address: 10.0.0.1 openstack_dns_node02_address: 10.0.0.2 mysql_designate_password: password keystone_designate_password: password designate_service_host: ${_param:openstack_control_address} designate_bind9_rndc_key: > 4pc+X4PDqb2q+5o72dISm72LM1Ds9X2EYZjqg+nmsS7F/C8H+z0fLLBunoitw== designate_bind9_rndc_algorithm: hmac-sha512 designate_domain_id: 5186883b-91fb-4891-bd49-e6769234a8fc designate_pool_ns_records: - hostname: 'ns1.example.org.' priority: 10 designate_pool_nameservers: - host: ${_param:openstack_dns_node01_address} port: 53 - host: ${_param:openstack_dns_node02_address} port: 53 designate_pool_target_type: bind9 designate_pool_target_masters: - host: ${_param:openstack_control_node01_address} port: 5354 - host: ${_param:openstack_control_node02_address} port: 5354 - host: ${_param:openstack_control_node03_address} port: 5354 designate_pool_target_options: host: ${_param:openstack_dns_node01_address} port: 53 rndc_host: ${_param:openstack_dns_node01_address} rndc_port: 953 rndc_key_file: /etc/designate/rndc.key designate_version: ${_param:openstack_version} linux: ©2019, Mirantis Inc. Page 158 Mirantis Cloud Platform Deployment Guide network: host: dns01: address: ${_param:openstack_dns_node01_address} names: - ${_param:openstack_dns_node01_hostname} - ${_param:openstack_dns_node01_hostname}.${_param:cluster_domain} dns02: address: ${_param:openstack_dns_node02_address} names: - ${_param:openstack_dns_node02_hostname} - ${_param:openstack_dns_node02_hostname}.${_param:cluster_domain} 5. In classes/cluster/cluster_name/infra/kvm.yml, add the following class: classes: - system.salt.control.cluster.openstack_dns_cluster 6. In classes/cluster/cluster_name/infra/config.yml, sections. modify the classes and parameters Example: • In the classes section: classes: - system.reclass.storage.system.openstack_dns_cluster • In the parameters section, add the DNS VMs. reclass: storage: node: openstack_dns_node01: params: linux_system_codename: xenial deploy_address: ${_param:openstack_database_node03_deploy_address} openstack_dns_node01: params: linux_system_codename: xenial deploy_address: ${_param:openstack_dns_node01_deploy_address} openstack_dns_node02: params: linux_system_codename: xenial deploy_address: ${_param:openstack_dns_node02_deploy_address} openstack_message_queue_node01: params: linux_system_codename: xenial ©2019, Mirantis Inc. Page 159 Mirantis Cloud Platform Deployment Guide 7. Commit and push the changes. Once done, proceed to deploy the BIND9 server service as described in Deploy a new BIND9 server for Designate. Deploy a new BIND9 server for Designate After you configure the Reclass model for a BIND9 server as the back end for Designate, proceed to deploying the BIND9 server service as described below. To deploy a BIND9 server service: 1. Log in to the Salt Master node. 2. Configure basic operating system settings on the DNS nodes: salt -C ‘I@bind:server’ state.sls linux,ntp,openssh 3. Apply the following state: salt -C ‘I@bind:server’ state.sls bind Once done, proceed to deploy Designate as described in Deploy Designate. Deploy PowerDNS for Designate PowerDNS server can be used by Designate as its underlying back end. 
This section describes how to configure an existing or deploy a new PowerDNS server for Designate. The default PowerDNS configuration for Designate uses the Designate worker role. If you need live synchronization of DNS zones between Designate and PowerDNS servers, you can configure Designate with the pool_manager role. The Designate Pool Manager keeps records consistent across the Designate database and the PowerDNS servers. For example, if a record was removed from the PowerDNS server due to a hard disk failure, this record will be automatically restored from the Designate database. Configure an existing PowerDNS server for Designate If you already have a running PowerDNS server, you can configure and use it for the Designate deployment. The example configuration below has three predeployed PowerDNS servers. To configure an existing PowerDNS server for Designate: 1. Open your PowerDNS server UI. 2. In etc/powerdns/pdns.conf, modify the following parameters: • allow-axfr-ips - must list the IPs of the Designate nodes, which will be located on the OpenStack API nodes • api-key - must coincide with the designate_pdns_api_key parameter for Designate in the Reclass model • webserver - must have the value yes ©2019, Mirantis Inc. Page 160 Mirantis Cloud Platform Deployment Guide • webserver-port - must coincide with the powerdns_webserver_port parameter for Designate in the Reclass model • api - must have the value yes to enable management through API • disable-axfr - must have the value no to enable the axfr zone updates from the Designate nodes Example: allow-axfr-ips=172.16.10.101,172.16.10.102,172.16.10.103,127.0.0.1 allow-recursion=127.0.0.1 api-key=VxK9cMlFL5Ae api=yes config-dir=/etc/powerdns daemon=yes default-soa-name=a.very.best.power.dns.server disable-axfr=no guardian=yes include-dir=/etc/powerdns/pdns.d launch= local-address=10.0.0.1 local-port=53 master=no setgid=pdns setuid=pdns slave=yes soa-minimum-ttl=3600 socket-dir=/var/run version-string=powerdns webserver=yes webserver-address=10.0.0.1 webserver-password=gJ6n3gVaYP8eS webserver-port=8081 3. Open the classes/cluster/cluster_name/openstack directory in your Git project repository. 4. In init.yml, add the following parameters: powerdns_node01_address: 10.0.0.1 powerdns_node02_address: 10.0.0.2 powerdns_node03_address: 10.0.0.3 powerdns_webserver_password: gJ6n3gVaYP8eS powerdns_webserver_port: 8081 mysql_designate_password: password keystone_designate_password: password designate_service_host: ${_param:openstack_control_address} designate_domain_id: 5186883b-91fb-4891-bd49-e6769234a8fc designate_pdns_api_key: VxK9cMlFL5Ae designate_pdns_api_endpoint: > "http://${_param:powerdns_node01_address}:${_param:powerdns_webserver_port}" ©2019, Mirantis Inc. Page 161 Mirantis Cloud Platform Deployment Guide designate_pool_ns_records: - hostname: 'ns1.example.org.' priority: 10 designate_pool_nameservers: - host: ${_param:powerdns_node01_address} port: 53 - host: ${_param:powerdns_node02_address} port: 53 - host: ${_param:powerdns_node03_address} port: 53 designate_pool_target_type: pdns4 designate_pool_target_masters: - host: ${_param:openstack_control_node01_address} port: 5354 - host: ${_param:openstack_control_node02_address} port: 5354 - host: ${_param:openstack_control_node03_address} port: 5354 designate_pool_target_options: host: ${_param:powerdns_node01_address} port: 53 api_token: ${_param:designate_pdns_api_key} api_endpoint: ${_param:designate_pdns_api_endpoint} designate_version: ${_param:openstack_version} 5. 
In control.yml, modify the parameters section. Add targets according to the number of PowerDNS severs that will be managed, three in our case. Example: designate: server: backend: pdns4: api_token: ${_param:designate_pdns_api_key} api_endpoint: ${_param:designate_pdns_api_endpoint} pools: default: description: 'test pool' targets: default: description: 'test target1' default1: type: ${_param:designate_pool_target_type} description: 'test target2' masters: ${_param:designate_pool_target_masters} options: host: ${_param:powerdns_node02_address} port: 53 ©2019, Mirantis Inc. Page 162 Mirantis Cloud Platform Deployment Guide api_endpoint: > "http://${_param:${_param:powerdns_node02_address}}: ${_param:powerdns_webserver_port}" api_token: ${_param:designate_pdns_api_key} default2: type: ${_param:designate_pool_target_type} description: 'test target3' masters: ${_param:designate_pool_target_masters} options: host: ${_param:powerdns_node03_address} port: 53 api_endpoint: > "http://${_param:powerdns_node03_address}: ${_param:powerdns_webserver_port}" api_token: ${_param:designate_pdns_api_key} Once done, proceed to deploy Designate as described in Deploy Designate. Prepare a deployment model for a new PowerDNS server with the worker role Before you deploy a PowerDNS server as a back end for Designate, prepare your deployment model with the default Designate worker role as described below. If you need live synchronization of DNS zones between Designate and PowerDNS servers, configure Designate with the pool_manager role as described in Prepare a deployment model for a new PowerDNS server with the pool_manager role. The examples provided in this section describe the configuration of the deployment model with two PowerDNS servers deployed on separate VMs of the infrastructure nodes. To prepare a deployment model for a new PowerDNS server: 1. Open the classes/cluster/cluster_name/openstack directory of your Git project repository. 2. Create a dns.yml file with the following parameters: classes: - system.powerdns.server.single - cluster.cluster_name.infra parameters: linux: network: interface: ens3: ${_param:linux_single_interface} host: dns01: address: ${_param:openstack_dns_node01_address} names: - dns01 - dns01.${_param:cluster_domain} dns02: ©2019, Mirantis Inc. Page 163 Mirantis Cloud Platform Deployment Guide address: ${_param:openstack_dns_node02_address} names: - dns02 - dns02.${_param:cluster_domain} powerdns: server: enabled: true bind: address: ${_param:single_address} port: 53 backend: engine: sqlite dbname: pdns.sqlite3 dbpath: /var/lib/powerdns api: enabled: true key: ${_param:designate_pdns_api_key} webserver: enabled: true address: ${_param:single_address} port: ${_param:powerdns_webserver_port} password: ${_param:powerdns_webserver_password} axfr_ips: - ${_param:openstack_control_node01_address} - ${_param:openstack_control_node02_address} - ${_param:openstack_control_node03_address} - 127.0.0.1 Note If you want to use the MySQL back end instead of the default SQLite one, modify the backend section parameters accordingly and configure your metadata model as described in Enable the MySQL back end for PowerDNS. 3. 
In init.yml, define the following parameters: Example: openstack_dns_node01_address: 10.0.0.1 openstack_dns_node02_address: 10.0.0.2 powerdns_webserver_password: gJ6n3gVaYP8eS powerdns_webserver_port: 8081 mysql_designate_password: password keystone_designate_password: password designate_service_host: ${_param:openstack_control_address} designate_domain_id: 5186883b-91fb-4891-bd49-e6769234a8fc designate_pdns_api_key: VxK9cMlFL5Ae ©2019, Mirantis Inc. Page 164 Mirantis Cloud Platform Deployment Guide designate_pdns_api_endpoint: > "http://${_param:openstack_dns_node01_address}:${_param:powerdns_webserver_port}" designate_pool_ns_records: - hostname: 'ns1.example.org.' priority: 10 designate_pool_nameservers: - host: ${_param:openstack_dns_node01_address} port: 53 - host: ${_param:openstack_dns_node02_address} port: 53 designate_pool_target_type: pdns4 designate_pool_target_masters: - host: ${_param:openstack_control_node01_address} port: 5354 - host: ${_param:openstack_control_node02_address} port: 5354 - host: ${_param:openstack_control_node03_address} port: 5354 designate_pool_target_options: host: ${_param:openstack_dns_node01_address} port: 53 api_token: ${_param:designate_pdns_api_key} api_endpoint: ${_param:designate_pdns_api_endpoint} designate_version: ${_param:openstack_version} designate_worker_enabled: true 4. In control.yml, define the following parameters in the parameters section: Example: designate: worker: enabled: ${_param:designate_worker_enabled} server: backend: pdns4: api_token: ${_param:designate_pdns_api_key} api_endpoint: ${_param:designate_pdns_api_endpoint} pools: default: description: 'test pool' targets: default: description: 'test target1' default1: type: ${_param:designate_pool_target_type} description: 'test target2' masters: ${_param:designate_pool_target_masters} options: host: ${_param:openstack_dns_node02_address} ©2019, Mirantis Inc. Page 165 Mirantis Cloud Platform Deployment Guide port: 53 api_endpoint: > "http://${_param:openstack_dns_node02_address}: ${_param:powerdns_webserver_port}" api_token: ${_param:designate_pdns_api_key} 5. In classes/cluster/cluster_name/infra/kvm.yml, modify the classes and parameters sections. Example: • In the classes section: classes: - system.salt.control.cluster.openstack_dns_cluster • In the parameters section, add the DNS parameters for VMs with the required location of DNS VMs on kvm nodes and the planned resource usage for them. salt: control: openstack.dns: cpu: 2 ram: 2048 disk_profile: small net_profile: default cluster: internal: node: dns01: provider: kvm01.${_param:cluster_domain} dns02: provider: kvm02.${_param:cluster_domain} 6. In classes/cluster/cluster_name/infra/config.yml, sections. modify the classes and parameters Example: • In the classes section: classes: - system.reclass.storage.system.openstack_dns_cluster • In the parameters section, add the DNS VMs. For example: reclass: storage: node: openstack_dns_node01: params: ©2019, Mirantis Inc. Page 166 Mirantis Cloud Platform Deployment Guide linux_system_codename: xenial openstack_dns_node02: params: linux_system_codename: xenial 7. Commit and push the changes. Once done, proceed to deploy the PowerDNS server service as described in Deploy a new PowerDNS server for Designate. Prepare a deployment model for a new PowerDNS server with the pool_manager role If you need live synchronization of DNS zones between Designate and PowerDNS servers, you can configure Designate with the pool_manager role as described below. 
The Designate Pool Manager keeps records consistent across the Designate database and the PowerDNS servers. For example, if a record was removed from the PowerDNS server due to a hard disk failure, this record will be automatically restored from the Designate database. To configure a PowerDNS server with the default Designate worker role, see Prepare a deployment model for a new PowerDNS server with the worker role. The examples provided in this section describe the configuration of the deployment model with two PowerDNS servers deployed on separate VMs of the infrastructure nodes. To prepare a model for a new PowerDNS server with the pool_manager role: 1. Open the classes/cluster/cluster_name/openstack directory of your Git project repository. 2. Create a dns.yml file with the following parameters: classes: - system.powerdns.server.single - cluster.cluster_name.infra parameters: linux: network: interface: ens3: ${_param:linux_single_interface} host: dns01: address: ${_param:openstack_dns_node01_address} names: - dns01 - dns01.${_param:cluster_domain} dns02: address: ${_param:openstack_dns_node02_address} names: - dns02 - dns02.${_param:cluster_domain} powerdns: server: enabled: true ©2019, Mirantis Inc. Page 167 Mirantis Cloud Platform Deployment Guide bind: address: ${_param:single_address} port: 53 backend: engine: sqlite dbname: pdns.sqlite3 dbpath: /var/lib/powerdns api: enabled: true key: ${_param:designate_pdns_api_key} overwrite_supermasters: ${_param:powerdns_supermasters} supermasters: ${_param:powerdns_supermasters} webserver: enabled: true address: ${_param:single_address} port: ${_param:powerdns_webserver_port} password: ${_param:powerdns_webserver_password} axfr_ips: - ${_param:openstack_control_node01_address} - ${_param:openstack_control_node02_address} - ${_param:openstack_control_node03_address} - 127.0.0.1 Note If you want to use the MySQL back end instead of the default SQLite one, modify the backend section parameters accordingly and configure your metadata model as described in Enable the MySQL back end for PowerDNS. 3. In init.yml, define the following parameters: Example: openstack_dns_node01_address: 10.0.0.1 openstack_dns_node02_address: 10.0.0.2 powerdns_axfr_ips: - ${_param:openstack_control_node01_address} - ${_param:openstack_control_node02_address} - ${_param:openstack_control_node03_address} - 127.0.0.1 powerdns_supermasters: - ip: ${_param:openstack_control_node01_address} nameserver: ns1.example.org account: master - ip: ${_param:openstack_control_node02_address} ©2019, Mirantis Inc. Page 168 Mirantis Cloud Platform Deployment Guide nameserver: ns2.example.org account: master - ip: ${_param:openstack_control_node03_address} nameserver: ns3.example.org account: master powerdns_overwrite_supermasters: True powerdns_webserver_password: gJ6n3gVaYP8eS powerdns_webserver_port: 8081 mysql_designate_password: password keystone_designate_password: password designate_service_host: ${_param:openstack_control_address} designate_domain_id: 5186883b-91fb-4891-bd49-e6769234a8fc designate_mdns_address: 0.0.0.0 designate_mdns_port: 53 designate_pdns_api_key: VxK9cMlFL5Ae designate_pdns_api_endpoint: > "http://${_param:openstack_dns_node01_address}:${_param:powerdns_webserver_port}" designate_pool_manager_enabled: True designate_pool_manager_periodic_sync_interval: '120' designate_pool_ns_records: - hostname: 'ns1.example.org.' priority: 10 - hostname: 'ns2.example.org.' priority: 20 - hostname: 'ns3.example.org.' 
priority: 30 designate_pool_nameservers: - host: ${_param:openstack_dns_node01_address} port: 53 - host: ${_param:openstack_dns_node02_address} port: 53 designate_pool_target_type: pdns4 designate_pool_target_masters: - host: ${_param:openstack_control_node01_address} port: ${_param:designate_mdns_port} - host: ${_param:openstack_control_node02_address} port: ${_param:designate_mdns_port} - host: ${_param:openstack_control_node03_address} port: ${_param:designate_mdns_port} designate_pool_target_options: host: ${_param:openstack_dns_node01_address} port: 53 api_token: ${_param:designate_pdns_api_key} api_endpoint: ${_param:designate_pdns_api_endpoint} designate_version: ${_param:openstack_version} 4. In control.yml, define the following parameters in the parameters section: Example: ©2019, Mirantis Inc. Page 169 Mirantis Cloud Platform Deployment Guide designate: pool_manager: enabled: ${_param:designate_pool_manager_enabled} periodic_sync_interval: ${_param:designate_pool_manager_periodic_sync_interval} server: backend: pdns4: api_token: ${_param:designate_pdns_api_key} api_endpoint: ${_param:designate_pdns_api_endpoint} mdns: address: ${_param:designate_mdns_address} port: ${_param:designate_mdns_port} pools: default: description: 'test pool' targets: default: description: 'test target1' default1: type: ${_param:designate_pool_target_type} description: 'test target2' masters: ${_param:designate_pool_target_masters} options: host: ${_param:openstack_dns_node02_address} port: 53 api_endpoint: > "http://${_param:openstack_dns_node02_address}: ${_param:powerdns_webserver_port}" api_token: ${_param:designate_pdns_api_key} 5. In classes/cluster/cluster_name/infra/kvm.yml, modify the classes and parameters sections. Example: • In the classes section: classes: - system.salt.control.cluster.openstack_dns_cluster • In the parameters section, add the DNS parameters for VMs with the required location of DNS VMs on the kvm nodes and the planned resource usage for them. salt: control: openstack.dns: cpu: 2 ram: 2048 disk_profile: small net_profile: default cluster: ©2019, Mirantis Inc. Page 170 Mirantis Cloud Platform Deployment Guide internal: node: dns01: provider: kvm01.${_param:cluster_domain} dns02: provider: kvm02.${_param:cluster_domain} 6. In classes/cluster/cluster_name/infra/config.yml, sections. modify the classes and parameters Example: • In the classes section: classes: - system.reclass.storage.system.openstack_dns_cluster • In the parameters section, add the DNS VMs. For example: reclass: storage: node: openstack_dns_node01: params: linux_system_codename: xenial openstack_dns_node02: params: linux_system_codename: xenial 7. Commit and push the changes. Once done, proceed to deploy the PowerDNS server service as described in Deploy a new PowerDNS server for Designate. Enable the MySQL back end for PowerDNS You can use PowerDNS with the MySQL back end instead of the default SQLite one if required. Warning If you use PowerDNS in the slave mode, you must run MySQL with a storage engine that supports transactions, for example, InnoDB that is the default storage engine for MySQL in MCP. Using a non-transaction storage engine may negatively affect your database after some actions, such as failures in an incoming zone transfer. For more information, see: PowerDNS documentation. ©2019, Mirantis Inc. Page 171 Mirantis Cloud Platform Deployment Guide Note While following the procedure below, replace ${node} with a short name of the required node where applicable. 
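For instance, assuming a DNS node with the short name dns01 (a hypothetical value used only for illustration), the per-node Galera parameters that you add in the steps below would expand as follows:

parameters:
  _param:
    mysql_powerdns_db_name_dns01: powerdns_dns01
    mysql_powerdns_user_name_dns01: pdns_slave_dns01
    mysql_powerdns_user_password_dns01: <generated password>

The corresponding database class file created in the same steps would then be named powerdns_dns01.yml.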
To enable the MySQL back end for PowerDNS: 1. Open your Reclass model Git repository. 2. Modify nodes/_generated/${full_host_name}.yml, where ${full_host_name} is the FQDN of the particular node. Add the following classes and parameters: classes: ... - cluster. - system.powerdns.server.single ... parameters: ... powerdns: ... server: ... backend: engine: mysql host: ${_param:cluster_vip_address} port: 3306 dbname: ${_param:mysql_powerdns_db_name} user: ${_param:mysql_powerdns_db_name} password: ${_param:mysql_powerdns_password} Substitute with the appropriate value. Warning Do not override the cluster_vip_address parameter. 3. Create a classes/system/galera/server/database/powerdns_${node}.yml file and add the databases to use with the MySQL back end: parameters: mysql: server: database: powerdns_${node}: ©2019, Mirantis Inc. Page 172 Mirantis Cloud Platform Deployment Guide encoding: utf8 users: - name: ${_param:mysql_powerdns_user_name_${node}} password: ${_param:mysql_powerdns_user_password_${node}} host: '%' rights: all - name: ${_param:mysql_powerdns_user_name_${node}} password: ${_param:mysql_powerdns_user_password_${node}} host: ${_param:cluster_local_address} rights: all 4. Add the following class to classes/cluster/ /openstack/control.yml: classes: ... - system.galera.server.database.powerdns_${node} 5. Add the MySQL parameters for classes/cluster/ /openstack/init.yml. For example: Galera to parameters: _param: ... mysql_powerdns_db_name_${node}: powerdns_${node} mysql_powerdns_user_name_${node}: pdns_slave_${node} mysql_powerdns_user_password_${node}: ni1iX1wuf]ongiVu 6. Log in to the Salt Master node. 7. Refresh pillar information: salt '*' saltutil.refresh_pillar 8. Apply the Galera states: salt -C 'I@galera:master' state.sls galera 9. Proceed to deploying PowerDNS as described in Deploy a new PowerDNS server for Designate. 10. Optional. After you deploy PowerDNS: • If you use MySQL InnoDB, add foreign key constraints to the tables. For details, see: PowerDNS documentation. • If you use MySQL replication, to support the NATIVE domains, set binlog_format to MIXED or ROW to prevent differences in data between replicated servers. For details, see: MySQL documentation. Deploy a new PowerDNS server for Designate ©2019, Mirantis Inc. Page 173 Mirantis Cloud Platform Deployment Guide After you configure the Reclass model for PowerDNS server as a back end for Designate, proceed to deploying the PowerDNS server service as described below. To deploy a PowerDNS server service: 1. Log in to the Salt Master node. 2. Configure basic operating system settings on the DNS nodes: salt -C ‘I@powerdns:server’ state.sls linux,ntp,openssh 3. Apply the following state: salt -C ‘I@powerdns:server’ state.sls powerdns Once done, you can proceed to deploy Designate as described in Deploy Designate. Seealso • Deploy Designate • BIND9 documentation • PowerDNS documentation • Plan the Domain Name System Install OpenStack services Many of the OpenStack service states make changes to the databases upon deployment. To ensure proper deployment and to prevent multiple simultaneous attempts to make these changes, deploy a service states on a single node of the environment first. Then, you can deploy the remaining nodes of this environment. Keystone must be deployed before other services. Following the order of installation is important, because many of the services have dependencies of the others being in place. Deploy Keystone To deploy Keystone: 1. Log in to the Salt Master node. 2. 
Set up the Keystone service:

   salt -C 'I@keystone:server and *01*' state.sls keystone.server
   salt -C 'I@keystone:server' state.sls keystone.server

3. Populate the Keystone services, tenants, and admins:

   salt -C 'I@keystone:client' state.sls keystone.client
   salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; openstack service list"

Note
By default, the latest MCP deployments use rsync for fernet and credential keys rotation. To configure rsync on the environments that use GlusterFS as the default fernet and credential keys rotation driver, see MCP Operations Guide: Migrate from GlusterFS to rsync for fernet and credential keys rotation.

Deploy Glance

The OpenStack Image service (Glance) provides a REST API for storing and managing virtual machine images and snapshots.

To deploy Glance:

1. Install Glance and verify that GlusterFS clusters exist:

   salt -C 'I@glance:server and *01*' state.sls glance.server
   salt -C 'I@glance:server' state.sls glance.server
   salt -C 'I@glance:client' state.sls glance.client
   salt -C 'I@glusterfs:client' state.sls glusterfs.client

2. Update the Fernet tokens before making requests to the Keystone server. Otherwise, you will get the following error: No encryption keys found; run keystone-manage fernet_setup to bootstrap one:

   salt -C 'I@keystone:server' state.sls keystone.server
   salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; glance image-list"

Deploy Nova

To deploy Nova:

1. Install Nova:

   salt -C 'I@nova:controller and *01*' state.sls nova.controller
   salt -C 'I@nova:controller' state.sls nova.controller
   salt -C 'I@keystone:server' cmd.run ". /root/keystonercv3; nova --debug service-list"
   salt -C 'I@keystone:server' cmd.run ". /root/keystonercv3; nova --debug list"
   salt -C 'I@nova:client' state.sls nova.client

2. On one of the controller nodes, verify that the Nova services are enabled and running:

   root@cfg01:~# ssh ctl01 "source keystonerc; nova service-list"

Deploy Cinder

To deploy Cinder:

1. Install Cinder:

   salt -C 'I@cinder:controller and *01*' state.sls cinder
   salt -C 'I@cinder:controller' state.sls cinder

2. On one of the controller nodes, verify that the Cinder service is enabled and running:

   salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; cinder list"

Deploy Neutron

To install Neutron:

   salt -C 'I@neutron:server and *01*' state.sls neutron.server
   salt -C 'I@neutron:server' state.sls neutron.server
   salt -C 'I@neutron:gateway' state.sls neutron
   salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; neutron agent-list"

Note
For installations with the OpenContrail setup, see Deploy OpenContrail manually.

Seealso
MCP Operations Guide: Configure Neutron OVS

Deploy Horizon

To install Horizon:

   salt -C 'I@horizon:server' state.sls horizon
   salt -C 'I@nginx:server' state.sls nginx

Deploy Heat

To deploy Heat:

1. Apply the following states:

   salt -C 'I@heat:server and *01*' state.sls heat
   salt -C 'I@heat:server' state.sls heat

2. On one of the controller nodes, verify that the Heat service is enabled and running:

   salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; heat list"

Deploy Tenant Telemetry

Tenant Telemetry collects metrics about the OpenStack resources and provides this data through the APIs.
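For example, once Tenant Telemetry is deployed, the collected metrics, alarms, and events can typically be inspected through the standard OpenStack CLI plugins. The commands below are an illustrative sketch only; they assume that the gnocchiclient, aodhclient, and pankoclient plugins are installed and that admin credentials are sourced:

   openstack metric resource list
   openstack alarm list
   openstack event list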
This section describes how to deploy Tenant Telemetry, which uses its own back ends, such as Gnocchi and Panko, on a new or an existing MCP cluster.

Caution!
The deployment of Tenant Telemetry based on Ceilometer, Aodh, Panko, and Gnocchi is supported starting from the Pike OpenStack release and does not support integration with StackLight LMA. However, you can add the Gnocchi data source to Grafana to view the Tenant Telemetry data.

Note
If you select Ceph as the aggregation metrics storage, the Ceph health warning 1 pools have many more objects per pg than average may appear because Telemetry writes a large number of small files to Ceph. The possible solutions are as follows:
• Increase the number of PGs per pool. This option is suitable only if concurrent access is required together with low request latency.
• Suppress the warning by modifying mon pg warn max object skew depending on the number of objects. For details, see Ceph documentation.

Deploy Tenant Telemetry on a new cluster

Caution!
The deployment of Tenant Telemetry based on Ceilometer, Aodh, Panko, and Gnocchi is supported starting from the Pike OpenStack release and does not support integration with StackLight LMA. However, you can add the Gnocchi data source to Grafana to view the Tenant Telemetry data.

Follow the procedure below to deploy Tenant Telemetry that uses its own back ends, such as Gnocchi and Panko.

To deploy Tenant Telemetry on a new cluster:

1. Log in to the Salt Master node.
2. Set up the aggregation metrics storage for Gnocchi:

   • For Ceph, verify that you have deployed Ceph as described in Deploy a Ceph cluster manually and run the following commands:

     salt -C "I@ceph:osd or I@ceph:osd or I@ceph:radosgw" saltutil.refresh_pillar
     salt -C "I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin" state.sls ceph.mon
     salt -C "I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin" mine.update
     salt -C "I@ceph:mon" state.sls 'ceph.mon'
     salt -C "I@ceph:setup" state.sls ceph.setup
     salt -C "I@ceph:osd or I@ceph:osd or I@ceph:radosgw" state.sls ceph.setup.keyring

   • For the file back end based on GlusterFS, run the following commands:

     salt -C "I@glusterfs:server" saltutil.refresh_pillar
     salt -C "I@glusterfs:server" state.sls glusterfs.server.service
     salt -C "I@glusterfs:server:role:primary" state.sls glusterfs.server.setup
     salt -C "I@glusterfs:server" state.sls glusterfs
     salt -C "I@glusterfs:client" saltutil.refresh_pillar
     salt -C "I@glusterfs:client" state.sls glusterfs.client

3. Create users and databases for Panko and Gnocchi:

   salt-call state.sls reclass.storage
   salt -C 'I@salt:control' state.sls salt.control
   salt -C 'I@keystone:client' state.sls keystone.client
   salt -C 'I@keystone:server' state.sls linux.system.package
   salt -C 'I@galera:master' state.sls galera
   salt -C 'I@galera:slave' state.sls galera
   salt prx\* state.sls nginx

4. Provision the mdb nodes:

   1. Apply basic states:

      salt mdb\* saltutil.refresh_pillar
      salt mdb\* saltutil.sync_all
      salt mdb\* state.sls linux.system
      salt mdb\* state.sls linux,ntp,openssh,salt.minion
      salt mdb\* system.reboot --async

   2. Deploy basic services on the mdb nodes:

      salt mdb01\* state.sls keepalived
      salt mdb\* state.sls keepalived
      salt mdb\* state.sls haproxy
      salt mdb\* state.sls memcached
      salt mdb\* state.sls nginx
      salt mdb\* state.sls apache

   3.
Install packages: • For Ceph: salt mdb\* state.sls ceph.common,ceph.setup.keyring • For GlusterFS: salt mdb\* state.sls glusterfs 5. Update the cluster nodes: salt '*' saltutil.refresh_pillar salt '*' state.sls linux.network.host 6. To use the Redis cluster as coordination back end and storage for Gnocchi, deploy Redis master: salt -C 'I@redis:cluster:role:master' state.sls redis 7. Deploy Redis on all servers: salt -C 'I@redis:server' state.sls redis 8. Deploy Gnocchi: salt -C 'I@gnocchi:server and *01*' state.sls gnocchi.server salt -C 'I@gnocchi:server' state.sls gnocchi.server 9. Deploy Panko: salt -C 'I@panko:server and *01*' state.sls panko salt -C 'I@panko:server' state.sls panko 10. Deploy Ceilometer: salt -C 'I@ceilometer:server and *01*' state.sls ceilometer salt -C 'I@ceilometer:server' state.sls ceilometer salt -C 'I@ceilometer:agent' state.sls ceilometer -b 1 ©2019, Mirantis Inc. Page 179 Mirantis Cloud Platform Deployment Guide 11. Deploy Aodh: salt -C 'I@aodh:server and *01*' state.sls aodh salt -C 'I@aodh:server' state.sls aodh Deploy Tenant Telemetry on an existing cluster Caution! The deployment of Tenant Telemetry based on Ceilometer, Aodh, Panko, and Gnocchi is supported starting from the Pike OpenStack release and does not support integration with StackLight LMA. However, you can add the Gnocchi data source to Grafana to view the Tenant Telemetry data. If you have already deployed an MCP cluster with OpenStack Pike, StackLight LMA, and Ceph (optionally), you can add the Tenant Telemetry as required. Prepare the cluster deployment model Before you deploy Tenant Telemetry on an existing MCP cluster, prepare your cluster deployment model by making the corresponding changes in your Git project repository. To prepare the deployment model: 1. Open your Git project repository. 2. Set up the aggregation metrics storage for Gnocchi: • For the Ceph back end, define the Ceph users and pools: 1. In the classes/cluster/ /ceph/setup.yml file, add the pools: parameters: ceph: setup: pool: telemetry_pool: pg_num: 512 pgp_num: 512 type: replicated application: rgw # crush_rule: sata dev-telemetry: pg_num: 512 pgp_num: 512 type: replicated application: rgw # crush_rule: sata ©2019, Mirantis Inc. Page 180 Mirantis Cloud Platform Deployment Guide 2. In the classes/cluster/ /ceph/init.yml file, specify the Telemetry user names and keyrings: parameters: _param: dev_gnocchi_storage_user: gnocchi_user dev_gnocchi_storage_client_key: "secret_key" Note To generate the keyring, run the salt -C 'I@ceph:mon and *01*' cmd.run 'ceph-authtool --gen-print-key' command from the Salt Master node. 3. In the classes/cluster/ /ceph/common.yml Telemetry user permissions: file, define the parameters: ceph: common: keyring: gnocchi: name: ${_param:gnocchi_storage_user} caps: mon: "allow r" osd: "allow rwx pool=telemetry_pool" dev-gnocchi: name: ${_param:dev_gnocchi_storage_user} key: ${_param:dev_gnocchi_storage_client_key} caps: mon: "allow r" osd: "allow rwx pool=dev-telemetry" • For the file back end with GlusterFS, define the GlusterFS volume in the classes/cluster/ /infra/glusterfs.yml file: classes: - system.glusterfs.server.volume.gnocchi Note Mirantis recommends creating a separate LVM for the Gnocchi GlusterFS volume. The LVM must contain a file system with a large number of inodes. Four million of inodes allow keeping the metrics of 1000 Gnocchi resources with a medium Gnocchi archive policy for two days maximum. ©2019, Mirantis Inc. Page 181 Mirantis Cloud Platform Deployment Guide 3. 
In the classes/cluster/ /infra/config.yml file, add the Telemetry node definitions: classes: - system.reclass.storage.system.openstack_telemetry_cluster parameters: salt: reclass: storage: node: openstack_telemetry_node01: params: linux_system_codename: xenial deploy_address: ${_param:openstack_telemetry_node01_deploy_address} storage_address: ${_param:openstack_telemetry_node01_storage_address} redis_cluster_role: 'master' ceilometer_create_gnocchi_resources: true openstack_telemetry_node02: params: linux_system_codename: xenial deploy_address: ${_param:openstack_telemetry_node02_deploy_address} storage_address: ${_param:openstack_telemetry_node02_storage_address} redis_cluster_role: 'slave' openstack_telemetry_node03: params: linux_system_codename: xenial deploy_address: ${_param:openstack_telemetry_node03_deploy_address} storage_address: ${_param:openstack_telemetry_node03_storage_address} redis_cluster_role: 'slave' 4. In the classes/cluster/ /infra/kvm.yml file, add the Telemetry VM definition: classes: - system.salt.control.cluster.openstack_telemetry_cluster parameters: salt: control: size: openstack.telemetry: cpu: 4 ram: 8192 disk_profile: large net_profile: mdb cluster: internal: node: mdb01: ©2019, Mirantis Inc. Page 182 Mirantis Cloud Platform Deployment Guide name: ${_param:openstack_telemetry_node01_hostname} provider: ${_param:infra_kvm_node01_hostname}.${_param:cluster_domain} image: ${_param:salt_control_xenial_image} size: openstack.telemetry rng: backend: /dev/urandom mdb02: name: ${_param:openstack_telemetry_node02_hostname} provider: ${_param:infra_kvm_node02_hostname}.${_param:cluster_domain} image: ${_param:salt_control_xenial_image} size: openstack.telemetry rng: backend: /dev/urandom mdb03: name: ${_param:openstack_telemetry_node03_hostname} provider: ${_param:infra_kvm_node03_hostname}.${_param:cluster_domain} image: ${_param:salt_control_xenial_image} size: openstack.telemetry rng: backend: /dev/urandom virt: nic: ##Telemetry mdb: eth2: bridge: br-mgm eth1: bridge: br-ctl eth0: bridge: br-storage 5. Define the Panko and Gnocchi secrets: 1. In the classes/cluster/ /infra/secrets.yml Gnocchi and Panko services: file, add passwords for parameters: _param: mysql_gnocchi_password: mysql_panko_password: keystone_gnocchi_password: keystone_panko_password: 2. Optional. If you have configured Ceph as the aggregation metrics storage for Gnocchi, specify the following parameters in the classes/cluster/ /openstack/init.yml file: gnocchi_storage_user: gnocchi_storage_user_name gnocchi_storage_pool: telemetry_storage_pool ©2019, Mirantis Inc. Page 183 Mirantis Cloud Platform Deployment Guide Note Use dev-telemetry for Gnocchi storage pool and devgnocchi for Gnocchi storage user. 6. 
In the classes/cluster/ /openstack/init.yml file, define the global parameters and linux:network:host: parameters: _param: telemetry_public_host: ${_param:openstack_telemetry_address} ceilometer_service_host: ${_param:openstack_telemetry_address} aodh_service_host: ${_param:openstack_control_address} aodh_service_host: ${_param:openstack_telemetry_address} panko_version: ${_param:openstack_version} gnocchi_version: 4.0 gnocchi_service_host: ${_param:openstack_telemetry_address} gnocchi_public_host: ${_param:telemetry_public_host} aodh_public_host: ${_param:telemetry_public_host} ceilometer_public_host: ${_param:telemetry_public_host} panko_public_host: ${_param:telemetry_public_host} panko_service_host: ${_param:openstack_telemetry_address} mysql_gnocchi_password: ${_param:mysql_gnocchi_password_generated} mysql_panko_password: ${_param:mysql_panko_password_generated} keystone_gnocchi_password: ${_param:keystone_gnocchi_password_generated} keystone_panko_password: ${_param:keystone_panko_password_generated} # openstack telemetry openstack_telemetry_address: 172.30.121.65 openstack_telemetry_node01_deploy_address: 10.160.252.66 openstack_telemetry_node02_deploy_address: 10.160.252.67 openstack_telemetry_node03_deploy_address: 10.160.252.68 openstack_telemetry_node01_address: 172.30.121.66 openstack_telemetry_node02_address: 172.30.121.67 openstack_telemetry_node03_address: 172.30.121.68 openstack_telemetry_node01_storage_address: 10.160.196.66 openstack_telemetry_node02_storage_address: 10.160.196.67 openstack_telemetry_node03_storage_address: 10.160.196.68 openstack_telemetry_hostname: mdb openstack_telemetry_node01_hostname: mdb01 openstack_telemetry_node02_hostname: mdb02 openstack_telemetry_node03_hostname: mdb03 linux: network: host: ©2019, Mirantis Inc. Page 184 Mirantis Cloud Platform Deployment Guide mdb: address: ${_param:openstack_telemetry_address} names: - ${_param:openstack_telemetry_hostname} - ${_param:openstack_telemetry_hostname}.${_param:cluster_domain} mdb01: address: ${_param:openstack_telemetry_node01_address} names: - ${_param:openstack_telemetry_node01_hostname} - ${_param:openstack_telemetry_node01_hostname}.${_param:cluster_domain} mdb02: address: ${_param:openstack_telemetry_node02_address} names: - ${_param:openstack_telemetry_node02_hostname} - ${_param:openstack_telemetry_node02_hostname}.${_param:cluster_domain} mdb03: address: ${_param:openstack_telemetry_node03_address} names: - ${_param:openstack_telemetry_node03_hostname} - ${_param:openstack_telemetry_node03_hostname}.${_param:cluster_domain} 7. Add endpoints: 1. In the classes/cluster/ /openstack/control_init.yml file, add the Panko and Gnocchi endpoints: classes: - system.keystone.client.service.panko - system.keystone.client.service.gnocchi 2. In the classes/cluster/ /openstack/proxy.yml file, add the Aodh public endpoint: classes: - system.nginx.server.proxy.openstack.aodh 8. In the classes/cluster/ /openstack/database.yml file, add classes for the Panko and Gnocchi databases: classes: - system.galera.server.database.panko - system.galera.server.database.gnocchi 9. Change the configuration of the OpenStack controller nodes: 1. In the classes/cluster/ /openstack/control.yml file, remove Heka, Ceilometer, and Aodh. Optionally, add the Panko client package to test the OpenStack event CLI command. Additionally, verify that the file includes the ceilometer.client classes. ©2019, Mirantis Inc. 
Page 185 Mirantis Cloud Platform Deployment Guide classes: #- system.ceilometer.server.backend.influxdb #- system.heka.ceilometer_collector.single #- system.aodh.server.cluster #- system.ceilometer.server.cluster - system.keystone.server.notification.messagingv2 - system.glance.control.notification.messagingv2 - system.nova.control.notification.messagingv2 - system.neutron.control.notification.messagingv2 - system.ceilometer.client.nova_control - system.cinder.control.notification.messagingv2 - system.cinder.volume.notification.messagingv2 - system.heat.server.notification.messagingv2 parameters: linux: system: package: python-pankoclient: 2. In the classes/cluster/ /openstack/control_init.yml file, add the following classes: classes: - system.gnocchi.client - system.gnocchi.client.v1.archive_policy.default 3. In the classes/cluster/ /stacklight/telemetry.yml file, remove InfluxDB from the mdb* node definition: classes: #- system.haproxy.proxy.listen.stacklight.influxdb_relay #- system.influxdb.relay.cluster #- system.influxdb.server.single #- system.influxdb.database.ceilometer 10. Change the configuration of compute nodes: 1. Open the classes/cluster/ /openstack/compute.yml file for editing. 2. Verify that ceilometer.client and ceilometer.agent classes are present on the compute nodes: classes: - system.ceilometer.agent.telemetry.cluster - system.ceilometer.agent.polling.default - system.nova.compute.notification.messagingv2 3. Set the following parameters: ©2019, Mirantis Inc. Page 186 Mirantis Cloud Platform Deployment Guide parameters: ceilometer: agent: message_queue: port: ${_param:rabbitmq_port} ssl: enabled: ${_param:rabbitmq_ssl_enabled} identity: protocol: https 11. In the classes/cluster/ /openstack/networking/telemetry.yml file, define the networking schema for the mdb VMs: # Networking template for Telemetry nodes parameters: linux: network: interface: ens2: ${_param:linux_deploy_interface} ens3: ${_param:linux_single_interface} ens4: enabled: true type: eth mtu: 9000 proto: static address: ${_param:storage_address} netmask: 255.255.252.0 12. Define the Telemetry node YAML file: 1. Open the classes/cluster/ /openstack/telemetry.yml file for editing. 2. Specify the classes and parameters depending on the aggregation metrics storage: • For Ceph, specify: classes: - cluster. .ceph.common parameters: gnocchi: common: storage: driver: ceph ceph_pool: ${_param:gnocchi_storage_pool} ceph_username: ${_param:gnocchi_storage_user} • For the file back end with GlusterFS, specify: classes: - system.linux.system.repo.mcp.apt_mirantis.glusterfs ©2019, Mirantis Inc. Page 187 Mirantis Cloud Platform Deployment Guide - system.glusterfs.client.cluster - system.glusterfs.client.volume.gnocchi parameters: _param: gnocchi_glusterfs_service_host: ${_param:glusterfs_service_host} 3. 
Specify the following classes and parameters: classes: - system.linux.system.repo.mcp.extra - system.linux.system.repo.mcp.apt_mirantis.openstack - system.linux.system.repo.mcp.apt_mirantis.ubuntu - system.linux.system.repo.mcp.apt_mirantis.saltstack_2016_3 - system.keepalived.cluster.instance.openstack_telemetry_vip - system.memcached.server.single - system.apache.server.single - system.apache.server.site.gnocchi - system.apache.server.site.panko - service.redis.server.single - system.nginx.server.single - system.nginx.server.proxy.openstack.aodh - system.gnocchi.server.cluster - system.gnocchi.common.storage.incoming.redis - system.gnocchi.common.coordination.redis - system.ceilometer.server.telemetry.cluster - system.ceilometer.server.coordination.redis - system.aodh.server.cluster - system.aodh.server.coordination.redis - system.panko.server.cluster - system.ceilometer.server.backend.gnocchi - system.ceph.common.cluster - cluster. .infra - cluster. .openstack.networking.telemetry parameters: _param: cluster_vip_address: ${_param:openstack_telemetry_address} keepalived_vip_interface: ens3 keepalived_vip_address: ${_param:cluster_vip_address} keepalived_vip_password: secret_password cluster_local_address: ${_param:single_address} cluster_node01_hostname: ${_param:openstack_telemetry_node01_hostname} cluster_node01_address: ${_param:openstack_telemetry_node01_address} cluster_node02_hostname: ${_param:openstack_telemetry_node02_hostname} cluster_node02_address: ${_param:openstack_telemetry_node02_address} cluster_node03_hostname: ${_param:openstack_telemetry_node03_hostname} cluster_node03_address: ${_param:openstack_telemetry_node03_address} cluster_internal_protocol: https redis_sentinel_node01_address: ${_param:openstack_telemetry_node01_address} redis_sentinel_node02_address: ${_param:openstack_telemetry_node02_address} redis_sentinel_node03_address: ${_param:openstack_telemetry_node03_address} openstack_telemetry_redis_url: redis://${_param:redis_sentinel_node01_address}:26379?sentinel=master_1&sentinel_fallback=${_param:redis_sentinel_node02_address}:26379&sentinel_fallback=${_param:redis_sentinel_node03_address}:26379 gnocchi_coordination_url: ${_param:openstack_telemetry_redis_url} gnocchi_storage_incoming_redis_url: ${_param:openstack_telemetry_redis_url} nginx_proxy_openstack_api_host: ${_param:openstack_telemetry_address} nginx_proxy_openstack_api_address: ${_param:single_address} nginx_proxy_openstack_ceilometer_host: 127.0.0.1 nginx_proxy_openstack_aodh_host: 127.0.0.1 nginx_proxy_ssl: enabled: true engine: salt authority: "${_param:salt_minion_ca_authority}" key_file: "/etc/ssl/private/internal_proxy.key" cert_file: "/etc/ssl/certs/internal_proxy.crt" chain_file: "/etc/ssl/certs/internal_proxy-with-chain.crt" apache_gnocchi_api_address: ${_param:single_address} apache_panko_api_address: ${_param:single_address} apache_gnocchi_ssl: ${_param:nginx_proxy_ssl} apache_panko_ssl: ${_param:nginx_proxy_ssl} salt: minion: cert: internal_proxy: host: ${_param:salt_minion_ca_host} authority: ${_param:salt_minion_ca_authority} common_name: internal_proxy signing_policy: cert_open alternative_names: IP:127.0.0.1,IP:${_param:cluster_local_address},IP:${_param:openstack_proxy_address},IP:${_param:openstack_telemetry_address},DNS:${linux:system:name},DNS:${linux:network:fqdn},DNS:${_param:single_address},DNS:${_param:openstack_telemetry_address},DNS:${_param:openstack_proxy_address} key_file: "/etc/ssl/private/internal_proxy.key" cert_file: "/etc/ssl/certs/internal_proxy.crt" all_file: 
"/etc/ssl/certs/internal_proxy-with-chain.crt" redis: server: version: 3.0 bind: address: ${_param:single_address} cluster: enabled: True mode: sentinel role: ${_param:redis_cluster_role} quorum: 2 master: host: ${_param:cluster_node01_address} port: 6379 sentinel: address: ${_param:single_address} apache: server: modules: - wsgi gnocchi: common: database: host: ${_param:openstack_database_address} ssl: enabled: true server: identity: protocol: ${_param:cluster_internal_protocol} pkgs: # TODO: move python-memcache installation to formula - gnocchi-api - gnocchi-metricd - python-memcache panko: server: identity: protocol: ${_param:cluster_internal_protocol} database: ssl: enabled: true aodh: server: bind: host: 127.0.0.1 coordination_backend: url: ${_param:openstack_telemetry_redis_url} identity: protocol: ${_param:cluster_internal_protocol} host: ${_param:openstack_control_address} database: ssl: enabled: true message_queue: port: 5671 ssl: enabled: true ceilometer: server: bind: host: 127.0.0.1 coordination_backend: url: ${_param:openstack_telemetry_redis_url} identity: protocol: ${_param:cluster_internal_protocol} host: ${_param:openstack_control_address} message_queue: port: 5672 ssl: enabled: true haproxy: proxy: listen: panko_api: type: ~ gnocchi_api: type: ~ aodh-api: type: ~ Once done, proceed to Deploy Tenant Telemetry. ©2019, Mirantis Inc. Page 188 Mirantis Cloud Platform Deployment Guide Deploy Tenant Telemetry Once you have performed the steps described in Prepare the cluster deployment model, deploy Tenant Telemetry on an existing MCP cluster as described below. To deploy Tenant Telemetry on an existing MCP cluster: 1. Log in to the Salt Master node. 2. Depending on the type of the aggregation metrics storage, choose from the following options: • For Ceph, deploy the newly created users and pools: salt salt salt salt salt salt -C -C -C -C -C -C "I@ceph:osd or I@ceph:osd or I@ceph:radosgw" saltutil.refresh_pillar "I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin" state.sls ceph.mon "I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin" mine.update "I@ceph:mon" state.sls 'ceph.mon' "I@ceph:setup" state.sls ceph.setup "I@ceph:osd or I@ceph:osd or I@ceph:radosgw" state.sls ceph.setup.keyring • For the file back end with GlusterFS, deploy the Gnocchi GlusterFS configuration: salt -C "I@glusterfs:server" saltutil.refresh_pillar salt -C "I@glusterfs:server" state.sls glusterfs 3. Run the following commands /srv/salt/reclass/nodes/_generated: to generate definitions under salt-call saltutil.refresh_pillar salt-call state.sls reclass.storage 4. Verify that the following files were created: ls -1 /srv/salt/reclass/nodes/_generated | grep mdb mdb01.domain.name mdb02.domain.name mdb03.domain.name 5. Create the mdb VMs: salt -C 'I@salt:control' saltutil.refresh_pillar salt -C 'I@salt:control' state.sls salt.control 6. Verify that the mdb nodes were successfully registered on the Salt Master node: salt-key -L | grep mdb mdb01.domain.name mdb02.domain.name mdb03.domain.name ©2019, Mirantis Inc. Page 189 Mirantis Cloud Platform Deployment Guide 7. Create endpoints: 1. Create additional endpoints for Panko and Gnocchi and update the existing Ceilometer and Aodh endpoints, if any: salt -C 'I@keystone:client' saltutil.refresh_pillar salt -C 'I@keystone:client' state.sls keystone.client 2. Verify the created endpoints: salt salt salt salt -C -C -C -C 'I@keystone:client' 'I@keystone:client' 'I@keystone:client' 'I@keystone:client' cmd.run cmd.run cmd.run cmd.run '. '. '. '. 
/root/keystonercv3 /root/keystonercv3 /root/keystonercv3 /root/keystonercv3 ; ; ; ; openstack openstack openstack openstack endpoint endpoint endpoint endpoint list list list list --service --service --service --service ceilometer' aodh' panko' gnocchi' 3. Optional. Install the Panko client if you have defined it in the cluster model: salt -C 'I@keystone:server' saltutil.refresh_pillar salt -C 'I@keystone:server' state.sls linux.system.package 8. Create databases: 1. Create databases for Panko and Gnocchi: salt -C 'I@galera:master or I@galera:slave' saltutil.refresh_pillar salt -C 'I@galera:master' state.sls galera salt -C 'I@galera:slave' state.sls galera 2. Verify that the databases were successfully created: salt -C 'I@galera:master' cmd.run 'mysql --defaults-extra-file=/etc/mysql/debian.cnf -e "show databases;"' salt -C 'I@galera:master' cmd.run 'mysql --defaults-extra-file=/etc/mysql/debian.cnf -e "select User from mysql.user;"' 9. Update the NGINX configuration on the prx nodes: salt prx\* saltutil.refresh_pillar salt prx\* state.sls nginx 10. Disable the Ceilometer and Aodh services deployed on the ctl nodes: for service in aodh-evaluator aodh-listener aodh-notifier \ ceilometer-agent-central ceilometer-agent-notification \ ceilometer_collector do salt ctl\* service.stop $service salt ctl\* service.disable $service done 11. Provision the mdb nodes: 1. Apply the basic states for the mdb nodes: ©2019, Mirantis Inc. Page 190 Mirantis Cloud Platform Deployment Guide salt salt salt salt salt mdb\* mdb\* mdb\* mdb\* mdb\* saltutil.refresh_pillar saltutil.sync_all state.sls linux.system state.sls linux,ntp,openssh,salt.minion system.reboot --async 2. Install basic services on the mdb nodes: salt salt salt salt salt salt mdb01\* state.sls keepalived mdb\* state.sls keepalived mdb\* state.sls haproxy mdb\* state.sls memcached mdb\* state.sls nginx mdb\* state.sls apache 3. Install packages depending on the aggregation metrics storage: • For Ceph: salt mdb\* state.sls ceph.common,ceph.setup.keyring • For the file back end with GlusterFS: salt mdb\* state.sls glusterfs 4. Install the Redis, Gnocchi, Panko, Ceilometer, and Aodh services on mdb nodes: salt salt salt salt salt salt salt -C -C -C -C -C -C -C 'I@redis:cluster:role:master' state.sls redis 'I@redis:server' state.sls redis 'I@gnocchi:server' state.sls gnocchi -b 1 'I@gnocchi:client' state.sls gnocchi.client -b 1 'I@panko:server' state.sls panko -b 1 'I@ceilometer:server' state.sls ceilometer -b 1 'I@aodh:server' state.sls aodh -b 1 5. Update the cluster nodes: 1. Verify that the mdb nodes were added to /etc/hosts on every node: salt '*' saltutil.refresh_pillar salt '*' state.sls linux.network.host 2. For Ceph, run: salt -C 'I@ceph:common and not mon*' state.sls ceph.setup.keyring 6. Verify that the Ceilometer agent is deployed and up to date: ©2019, Mirantis Inc. Page 191 Mirantis Cloud Platform Deployment Guide salt -C 'I@ceilometer:agent' state.sls ceilometer 7. Update the StackLight LMA configuration: salt salt salt salt salt salt salt mdb\* state.sls telegraf mdb\* state.sls fluentd '*' state.sls salt.minion.grains '*' saltutil.refresh_modules '*' mine.update -C 'I@docker:swarm and I@prometheus:server' state.sls prometheus -C 'I@sphinx:server' state.sls sphinx 12. Verify Tenant Telemetry: Note Metrics will be collected for the newly created resources. Therefore, launch an instance or create a volume before executing the commands below. 1. Verify that metrics are available: salt ctl01\* cmd.run '. 
/root/keystonercv3 ; openstack metric list --limit 50' 2. If you have installed the Panko client on the ctl nodes, verify that events are available: salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack event list --limit 20' 3. Verify that the Aodh endpoint is available: salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack --debug alarm list' The output will not contain any alarm because no alarm was created yet. 4. For Ceph, verify that metrics are saved to the Ceph pool (telemtry_pool for the cloud): salt cmn01\* cmd.run 'rados df' Seealso • MCP Reference Architecture: Tenant Telemetry • MCP Operations Guide: Enable the Gnocchi archive policies in Tenant Telemetry • MCP Operations Guide: Add the Gnocchi data source to Grafana ©2019, Mirantis Inc. Page 192 Mirantis Cloud Platform Deployment Guide Deploy Designate Designate supports underlying DNS servers, such as BIND9 and PowerDNS. You can use either a new or an existing DNS server as a back end for Designate. By default, Designate is deployed on three OpenStack API VMs of the VCP nodes. Prepare a deployment model for the Designate deployment Before you deploy Designate with a new or existing BIND9 or PowerDNS server as a back end, prepare your cluster deployment model by making corresponding changes in your Git project repository. To prepare a deployment model for the Designate deployment: 1. Verify that you have configured and deployed a DNS server as a back end for Designate as described in Deploy a DNS back end for Designate. 2. Open the repository. classes/cluster/ /openstack/ directory in your Git project 3. In control_init.yml, add the following parameter in the classes section: classes: - system.keystone.client.service.designate 4. In control.yml, add the following parameter in the classes section: classes: - system.designate.server.cluster 5. In database.yml, add the following parameter in the classes section: classes: - system.galera.server.database.designate 6. Add your changes to a new commit. 7. Commit and push the changes. Once done, proceed to Install Designate. Install Designate This section describes how to install Designate on a new or existing MCP cluster. Before you proceed to installing Designate: 1. Configure and deploy a DNS back end for Designate as described in Deploy a DNS back end for Designate. 2. Prepare your cluster model for the Designate deployment as described in Prepare a deployment model for the Designate deployment. To install Designate on a new MCP cluster: 1. Log in to the Salt Master node. ©2019, Mirantis Inc. Page 193 Mirantis Cloud Platform Deployment Guide 2. Apply the following states: salt -C 'I@designate:server and *01*' state.sls designate.server salt -C 'I@designate:server' state.sls designate To install Designate on an already deployed MCP cluster: 1. Log in to the Salt Master node. 2. Refresh Salt pillars: salt '*' saltutil.refresh_pillar 3. Create databases for Designate by applying the mysql state: salt -C 'I@galera:master' state.sls galera 4. Create the HAProxy configuration for Designate: salt -C 'I@haproxy:proxy' state.sls haproxy 5. Create endpoints for Designate in Keystone: salt -C 'I@keystone:client' state.sls keystone.client 6. Apply the designate states: salt -C 'I@designate:server and *01*' state.sls designate.server salt -C 'I@designate:server' state.sls designate 7. Verify that the Designate services are up and running: salt -C 'I@designate:server' cmd.run ". 
/root/keystonercv3; openstack dns service list" Example of the system response extract: ctl02.virtual-mcp-ocata-ovs.local: +-------------------+---------+-------------+-------+------+-------------+ | id |hostname |service_name |status |stats |capabilities | +-------------------+---------+-------------+-------+------+-------------+ | 72df3c63-ed26-... | ctl03 | worker | UP | - | | | c3d425bb-131f-... | ctl03 | central | UP | - | | | 1af4c4ef-57fb-... | ctl03 | producer | UP | - | | | 75ac49bc-112c-... | ctl03 | api | UP | - | | | ee0f24cd-0d7a-... | ctl03 | mdns | UP | - | | | 680902ef-380a-... | ctl02 | worker | UP | - | | | f09dca51-c4ab-... | ctl02 | producer | UP | - | | | 26e09523-0140-... | ctl01 | producer | UP | - | | ©2019, Mirantis Inc. Page 194 Mirantis Cloud Platform Deployment Guide | 18ae9e1f-7248-... | ctl01 | worker | UP | - | | | e96dffc1-dab2-... | ctl01 | central | UP | - | | | 3859f1e7-24c0-... | ctl01 | api | UP | - | | | 18ee47a4-8e38-... | ctl01 | mdns | UP | - | | | 4c807478-f545-... | ctl02 | api | UP | - | | | b66305e3-a75f-... | ctl02 | central | UP | - | | | 3c0d2310-d852-... | ctl02 | mdns | UP | - | | +-------------------+---------+-------------+-------+------+-------------+ Seealso Designate operations Seealso • Deploy a DNS back end for Designate • Plan the Domain Name System • Designate operations Deploy Barbican MCP enables you to integrate LBaaSv2 Barbican to OpenContrail. Barbican is an OpenStack service that provides a REST API for secured storage as well as for provisioning and managing of secrets such as passwords, encryption keys, and X.509 certificates. Barbican requires a back end to store secret data in its database. If you have an existing Dogtag back end, deploy and configure Barbican with it as described in Deploy Barbican with the Dogtag back end. Otherwise, deploy a new Dogtag back end as described in Deploy Dogtag. For testing purposes, you can use the simple_crypto back end. Deploy Dogtag Dogtag is one of the Barbican plugins that represents a back end for storing symmetric keys, for example, for volume encryption, as well as passwords, and X.509 certificates. To deploy the Dogtag back end for Barbican: 1. Open the classes/cluster/ / directory of your Git project repository. 2. In openstack/control.yml, add the Dogtag class and specify the required parameters. For example: classes: - system.dogtag.server.cluster ©2019, Mirantis Inc. Page 195 Mirantis Cloud Platform Deployment Guide ... parameters: _param: dogtag_master_host: ${_param:openstack_control_node01_hostname}.${_param:cluster_domain} haproxy_dogtag_bind_port: 8444 cluster_dogtag_port: 8443 # Dogtag listens on 8443 but there is no way to bind it to a # Specific IP, as in this setup Dogtag is installed on ctl nodes # Change port on haproxy side to avoid binding conflict. haproxy_dogtag_bind_port: 8444 cluster_dogtag_port: 8443 dogtag_master_host: ctl01.${linux:system:domain} dogtag_pki_admin_password: workshop dogtag_pki_client_database_password: workshop dogtag_pki_client_pkcs12_password: workshop dogtag_pki_ds_password: workshop dogtag_pki_token_password: workshop dogtag_pki_security_domain_password: workshop dogtag_pki_clone_pkcs12_password: workshop dogtag: server: ldap_hostname: ${linux:network:fqdn} ldap_dn_password: workshop ldap_admin_password: workshop export_pem_file_path: /etc/dogtag/kra_admin_cert.pem 3. Modify classes/cluster/os-ha-ovs/infra/config.yml: 1. Add the - salt.master.formula.pkg.dogtag class to the classes section. 2. 
Specify the dogtag_cluster_role: master parameter in the openstack_control_node01 section, and the dogtag_cluster_role: slave parameter in the openstack_control_node02 and openstack_control_node03 sections. For example: classes: - salt.master.formula.pkg.dogtag ... node: openstack_control_node01: classes: - service.galera.master.cluster - service.dogtag.server.cluster.master params: mysql_cluster_role: master linux_system_codename: xenial dogtag_cluster_role: master openstack_control_node02: classes: - service.galera.slave.cluster - service.dogtag.server.cluster.slave params: ©2019, Mirantis Inc. Page 196 Mirantis Cloud Platform Deployment Guide mysql_cluster_role: slave linux_system_codename: xenial dogtag_cluster_role: slave openstack_control_node03: classes: - service.galera.slave.cluster - service.dogtag.server.cluster.slave params: mysql_cluster_role: slave linux_system_codename: xenial dogtag_cluster_role: slave 4. Commit and push the changes to the project Git repository. 5. Log in to the Salt Master node. 6. Update your Salt formulas at the system level: 1. Change the directory to /srv/salt/reclass. 2. Run the git pull origin master command. 3. Run the salt-call state.sls salt.master command. 7. Apply the following states: salt salt salt salt -C -C -C -C 'I@salt:master' state.sls salt,reclass 'I@dogtag:server and *01*' state.sls dogtag.server 'I@dogtag:server' state.sls dogtag.server 'I@haproxy:proxy' state.sls haproxy 8. Proceed to Deploy Barbican with the Dogtag back end. Note If the dogtag:export_pem_file_path variable is defined, the system imports kra admin certificate to the defined .pem file and to the Salt Mine dogtag_admin_cert variable. After that, Barbican and other components can use kra admin certificate. Seealso Dogtag OpenStack documentation Deploy Barbican with the Dogtag back end You can deploy and configure Barbican to work with the private Key Recovery Agent (KRA) Dogtag back end. ©2019, Mirantis Inc. Page 197 Mirantis Cloud Platform Deployment Guide Before you proceed with the deployment, make sure that you have a running Dogtag back end. If you do not have a Dogtag back end yet, deploy it as described in Deploy Dogtag. To deploy Barbican with the Dogtag back end: 1. Open the classes/cluster/ / directory of your Git project repository. 2. In infra/config.yml, add the following class: classes: - system.keystone.client.service.barbican 3. In openstack/control.yml, modify the classes and parameters sections: classes: - system.apache.server.site.barbican - system.galera.server.database.barbican - system.barbican.server.cluster - service.barbican.server.plugin.dogtag ... parameters: _param: apache_barbican_api_address: ${_param:cluster_local_address} apache_barbican_api_host: ${_param:single_address} apache_barbican_ssl: ${_param:nginx_proxy_ssl} barbican_dogtag_nss_password: workshop barbican_dogtag_host: ${_param:cluster_vip_address} ... barbican: server: enabled: true dogtag_admin_cert: engine: mine minion: ${_param:dogtag_master_host} ks_notifications_enable: True store: software: store_plugin: dogtag_crypto global_default: True plugin: dogtag: port: ${_param:haproxy_dogtag_bind_port} nova: controller: barbican: enabled: ${_param:barbican_integration_enabled} cinder: controller: barbican: enabled: ${_param:barbican_integration_enabled} ©2019, Mirantis Inc. Page 198 Mirantis Cloud Platform Deployment Guide glance: server: barbican: enabled: ${_param:barbican_integration_enabled} 4. In openstack/init.yml, modify the parameters section. 
For example: parameters: _param: ... barbican_service_protocol: ${_param:cluster_internal_protocol} barbican_service_host: ${_param:openstack_control_address} barbican_version: ${_param:openstack_version} mysql_barbican_password: workshop keystone_barbican_password: workshop barbican_dogtag_host: "dogtag.example.com" barbican_dogtag_nss_password: workshop barbican_integration_enabled: true 5. In openstack/proxy.yml, add the following class: classes: - system.nginx.server.proxy.openstack.barbican 6. Optional. Enable image verification: 1. In openstack/compute/init.yml, add the following parameters: parameters: _param: nova: compute: barbican: enabled: ${_param:barbican_integration_enabled} 2. In openstack/control.yml, add the following parameters: parameters: _param: nova: controller: barbican: enabled: ${_param:barbican_integration_enabled} ©2019, Mirantis Inc. Page 199 Mirantis Cloud Platform Deployment Guide Note This configuration changes the requirement to the Glance image upload procedure. All glance images will have to be updated with signature information. For details, see: OpenStack Nova and OpenStack Glance documentation. 7. Optional. In openstack/control.yml, enable volume encryption supported by the key manager: parameters: _param: cinder: volume: barbican: enabled: ${_param:barbican_integration_enabled} 8. Optional. In init.yml, add the following parameters if you plan to use a self-signed certificate managed by Salt: parameters: _param: salt: minion: trusted_ca_minions: - cfg01 9. Distribute the Dogtag KRA certificate from the Dogtag node to the Barbican nodes. Choose from the following options (engines): • Define the KRA admin certificate infra/openstack/control.yml file: manually in pillar by editing the barbican: server: dogtag_admin_cert: engine: manual key: | • Receive the Dogtag certificate from Salt Mine. The Dogtag formula sends the KRA certificate to the dogtag_admin_cert Mine function. Add the following to infra/openstack/control.yml: barbican: server: dogtag_admin_cert: ©2019, Mirantis Inc. Page 200 Mirantis Cloud Platform Deployment Guide engine: mine minion: • If some additional steps were applied to install the KRA certificate and these steps are out of scope of the Barbican formula, the formula has the noop engine to perform no operations. If the noop engine is defined in infra/openstack/control.yml, the Barbican formula does nothing to install the KRA admin certificate. barbican: server: dogtag_admin_cert: engine: noop In this case, manually populate the Dogtag /etc/barbican/kra_admin_cert.pem on the Barbican nodes. 10. Commit and push the changes to the project Git repository. KRA certificate in 11. Log in to the Salt Master node. 12. Update your Salt formulas at the system level: 1. Change the directory to /srv/salt/reclass. 2. Run the git pull origin master command. 3. Run the salt-call state.sls salt.master command. 13. If you enabled the usage of a self-signed certificate managed by Salt, apply the following state: salt -C 'I@salt:minion' state.apply salt.minion 14. Apply the following states: salt salt salt salt salt salt salt -C -C -C -C -C -C -C 'I@keystone:client' state.sls keystone.client 'I@galera:master' state.sls galera.server 'I@galera:slave' state.apply galera 'I@nginx:server' state.sls nginx 'I@barbican:server and *01*' state.sls barbican.server 'I@barbican:server' state.sls barbican.server 'I@barbican:client' state.sls barbican.client 15. 
If you enabled image verification by Nova, apply the following states: salt -C 'I@nova:controller' state.sls nova -b 1 salt -C 'I@nova:compute' state.sls nova 16. If you enabled volume encryption supported by the key manager, apply the following state: salt -C 'I@cinder:controller' state.sls cinder -b 1 ©2019, Mirantis Inc. Page 201 Mirantis Cloud Platform Deployment Guide 17. If you have async workers enabled, restart the Barbican worker service: salt -C 'I@barbican:server' service.restart barbican-worker 18. Restart the Barbican API server: salt -C 'I@barbican:server' service.restart apache2 19. Verify that Barbican works correctly. For example: openstack secret store --name mysecret --payload j4=]d21 Deploy Barbican with the simple_crypto back end Warning The deployment of Barbican with the simple_crypto back end described in this section is intended for testing and evaluation purposes only. For production deployments, use the Dogtag back end. For details, see: Deploy Dogtag. You can configure and deploy Barbican with the simple_crypto back end. To deploy Barbican with the simple_crypto back end: 1. Open the classes/cluster/ / directory of your Git project repository. 2. In openstack/database_init.yml, add the following class: classes: - system.mysql.client.database.barbican 3. In openstack/control_init.yml, add the following class: classes: - system.keystone.client.service.barbican 4. In infra/openstack/control.yml, modify the parameters section. For example: classes: - system.apache.server.site.barbican - system.barbican.server.cluster - service.barbican.server.plugin.simple_crypto parameters: _param: barbican: ©2019, Mirantis Inc. Page 202 Mirantis Cloud Platform Deployment Guide server: store: software: crypto_plugin: simple_crypto store_plugin: store_crypto global_default: True 5. In infra/secret.yml, modify the parameters section. For example: parameters: _param: barbican_version: ${_param:openstack_version} barbican_service_host: ${_param:openstack_control_address} mysql_barbican_password: password123 keystone_barbican_password: password123 barbican_simple_crypto_kek: "base64 encoded 32 bytes as secret key" 6. In openstack/proxy.yml, add the following class: classes: - system.nginx.server.proxy.openstack.barbican ©2019, Mirantis Inc. Page 203 Mirantis Cloud Platform Deployment Guide 7. Optional. Enable image verification: 1. In openstack/compute/init.yml, add the following parameters: parameters: _param: nova: compute: barbican: enabled: ${_param:barbican_integration_enabled} 2. In openstack/control.yml, add the following parameters: parameters: _param: nova: controller: barbican: enabled: ${_param:barbican_integration_enabled} Note This configuration changes the requirement for the Glance image upload procedure. All glance images will have to be updated with signature information. For details, see: OpenStack Nova and OpenStack Glance documentation. 8. Optional. In openstack/control.yml, enable volume encryption supported by the key manager: parameters: _param: cinder: volume: barbican: enabled: ${_param:barbican_integration_enabled} 9. Optional. In init.yml, add the following parameters if you plan to use a self-signed certificate managed by Salt: parameters: _param: salt: minion: trusted_ca_minions: - cfg01 ©2019, Mirantis Inc. Page 204 Mirantis Cloud Platform Deployment Guide 10. Commit and push the changes to the project Git repository. 11. Log in to the Salt Master node. 12. Update your Salt formulas at the system level: 1. Change the directory to /srv/salt/reclass. 2. 
Run the git pull origin master command. 3. Run the salt-call state.sls salt.master command. 13. If you enabled the usage of a self-signed certificate managed by Salt, apply the following state: salt -C 'I@salt:minion' state.apply salt.minion 14. If you enabled image verification by Nova, apply the following states: salt -C 'I@nova:controller' state.sls nova -b 1 salt -C 'I@nova:compute' state.sls nova 15. If you enabled volume encryption supported by the key manager, apply the following state: salt -C 'I@cinder:controller' state.sls cinder -b 1 16. Apply the following states: salt salt salt salt salt salt salt salt -C -C -C -C -C -C -C -C 'I@keystone:client' state.apply keystone.client 'I@galera:master' state.apply galera.server 'I@galera:slave' state.apply galera 'I@nginx:server' state.apply nginx 'I@haproxy:proxy' state.apply haproxy.proxy 'I@barbican:server and *01*' state.sls barbican.server 'I@barbican:server' state.sls barbican.server 'I@barbican:client' state.sls barbican.client Seealso • Integrate Barbican to OpenContrail LBaaSv2 • Barbican OpenStack documentation Deploy Ironic While virtualization provides outstanding benefits in server management, cost efficiency, and resource consolidation, some cloud environments with particularly high I/O rate may require physical servers as opposed to virtual. ©2019, Mirantis Inc. Page 205 Mirantis Cloud Platform Deployment Guide MCP supports bare-metal provisioning for OpenStack environments using the OpenStack Bare Metal service (Ironic). Ironic enables system administrators to provision physical machines in the same fashion as they provision virtual machines. Note This feature is available as technical preview. Use such configuration for testing and evaluation purposes only. By default, MCP does not deploy Ironic, therefore, to use this functionality, you need to make changes to your Reclass model manually prior to deploying an OpenStack environment. Limitations When you plan on using the OpenStack Bare Metal provisioning service (Ironic), consider the following limitations: Specific hardware limitations When choosing hardware (switch) to be used by Ironic, consider hardware limitations of a specific vendor. For example, for the limitations of the Cumulus Supermicro SSE-X3648S/R switch used as an example in this guide, see Prepare a physical switch for TSN. Only iSCSI deploy drivers are enabled Ironic is deployed with only iSCSI deploy drivers enabled which may pose performance limitations for deploying multiple nodes concurrently. You can enable agent-based Ironic drivers manually after deployment if the deployed cloud has a working Swift-compatible object-store service with support for temporary URLs, with Glance configured to use the object store service to store images. For more information on how to configure Glance for temporary URLs, see OpenStack documentation. Modify the deployment model To use the OpenStack Bare Metal service, you need to modify your Reclass model before deploying a new OpenStack environment. You can also deploy the OpenStack Bare Metal service in the existing OpenStack environment by updating the Salt states. Note This feature is available as technical preview. Use such configuration for testing and evaluation purposes only. As bare-metal configurations vary, this section provides examples of deployment model modifications. You may need to tailor them for your specific use case. 
The examples describe: • OpenStack Bare Metal API service running on the OpenStack Controller node • A single-node Bare Metal service for ironic-conductor and other services per the baremetal role residing on the bmt01 node ©2019, Mirantis Inc. Page 206 Mirantis Cloud Platform Deployment Guide To modify the deployment model: 1. Create a deployment model as described in Create a deployment metadata model using the Model Designer UI. 2. In the top Reclass ./init.yml file, add: parameters: _param: openstack_baremetal_node01_address: 172.16.10.110 openstack_baremetal_address: 192.168.90.10 openstack_baremetal_node01_baremetal_address: 192.168.90.11 openstack_baremetal_neutron_subnet_cidr: 192.168.90.0/24 openstack_baremetal_neutron_subnet_allocation_start: 192.168.90.100 openstack_baremetal_neutron_subnet_allocation_end: 192.168.90.150 openstack_baremetal_node01_hostname: bmt01 Note The openstack_baremetal_neutron_subnet_ parameters must match your baremetal network settings. The baremetal nodes must connected to the network before the deployment. During the deployment, MCP automatically registers this network in the OpenStack Networking service. 3. Modify the ./infra/config.yml: classes: - system.salt.master.formula.pkg.baremetal - system.keystone.client.service.ironic - system.reclass.storage.system.openstack_baremetal_single parameters: reclass: storage: class_mapping: expression: < >__startswith__bmt node_class: value_template: - cluster.< >.openstack.baremetal cluster_param: openstack_baremetal_node01_address: value_template: < > node: openstack_baremetal_node01: params: single_baremetal_address: ${_param:openstack_baremetal_node01_baremetal_address} ©2019, Mirantis Inc. Page 207 Mirantis Cloud Platform Deployment Guide keepalived_openstack_baremetal_vip_priority: 100 ironic_api_type: 'deploy' tenant_address: 10.1.0.110 external_address: 10.16.0.110 4. Modify the OpenStack nodes: • ./openstack/init.yml: parameters: _param: ironic_version: ${_param:openstack_version} ironic_api_type: 'public' ironic_service_host: ${_param:cluster_vip_address} cluster_baremetal_local_address: ${_param:cluster_local_address} mysql_ironic_password: workshop keystone_ironic_password: workshop linux: network: host: bmt01: address: ${_param:openstack_baremetal_node01_address} names: - bmt01 - bmt01.${_param:cluster_domain} • ./openstack/control.yml: classes: - system.haproxy.proxy.listen.openstack.ironic - system.galera.server.database.ironic - service.ironic.client - system.ironic.api.cluster - cluster.virtual-mcp11-ovs-ironic • ./openstack/baremetal.yml: classes: - system.linux.system.repo.mcp.openstack - system.linux.system.repo.mcp.extra - system.linux.system.repo.saltstack.xenial - system.keepalived.cluster.instance.openstack_baremetal_vip - system.haproxy.proxy.listen.openstack.ironic_deploy - system.ironic.api.cluster # deploy only api (heartbeat and lookup endpoints are open) - system.ironic.conductor.cluster - system.ironic.tftpd_hpa - system.nova.compute_ironic.cluster - system.apache.server.single ©2019, Mirantis Inc. 
Page 208 Mirantis Cloud Platform Deployment Guide - system.apache.server.site.ironic - system.keystone.client.core - system.neutron.client.service.ironic - cluster.virtual-mcp11-ovs-ironic parameters: _param: primary_interface: ens4 baremetal_interface: ens5 linux_system_codename: xenial interface_mtu: 1450 cluster_vip_address: ${_param:openstack_control_address} cluster_baremetal_vip_address: ${_param:single_baremetal_address} cluster_baremetal_local_address: ${_param:single_baremetal_address} linux_system_codename: xenial linux: network: concat_iface_files: - src: '/etc/network/interfaces.d/50-cloud-init.cfg' dst: '/etc/network/interfaces' bridge: openvswitch interface: dhcp_int: enabled: true name: ens3 proto: dhcp type: eth mtu: ${_param:interface_mtu} primary_interface: enabled: true name: ${_param:primary_interface} proto: static address: ${_param:single_address} netmask: 255.255.255.0 mtu: ${_param:interface_mtu} type: eth baremetal_interface: enabled: true name: ${_param:baremetal_interface} mtu: ${_param:interface_mtu} proto: static address: ${_param:cluster_baremetal_local_address} netmask: 255.255.255.0 type: eth mtu: ${_param:interface_mtu} 5. Proceed to Install the Bare Metal service components. Install the Bare Metal service components ©2019, Mirantis Inc. Page 209 Mirantis Cloud Platform Deployment Guide After you have configured the deployment model as described in Modify the deployment model, install the Bare Metal service components, including Ironic API, Ironic Conductor, Ironic Client, and others. Use the procedure below for both new or existing clusters. Note This feature is available as technical preview. Use such configuration for testing and evaluation purposes only. To install the Bare Metal service components: 1. Install Ironic API: salt -C 'I@ironic:api and *01*' state.sls ironic.api salt -C 'I@ironic:api' state.sls ironic.api 2. Install Ironic Conductor: salt -C 'I@ironic:conductor' state.sls ironic.conductor 3. Install Ironic Client: salt -C 'I@ironic:client' state.sls ironic.client 4. Install software required by Ironic, such as Apache and TFTP server: salt -C 'I@ironic:conductor' state.sls apache salt -C 'I@tftpd_hpa:server' state.sls tftpd_hpa 5. Install nova-compute with ironic virt-driver: salt -C 'I@nova:compute' state.sls nova.compute salt -C 'I@nova:compute' cmd.run 'systemctl restart nova-compute' 6. Log in to an OpenStack Controller node. 7. Verify that the Ironic services are enabled and running: salt -C 'I@ironic:client' cmd.run 'source keystonerc; ironic driver-list' Deploy Manila Manila, also known as the OpenStack Shared File Systems service, provides coordinated access to shared or distributed file systems that a compute instance can consume. Modify the deployment model ©2019, Mirantis Inc. Page 210 Mirantis Cloud Platform Deployment Guide You can enable Manila while generating you deployment metadata model using the Model Designer UI before deploying a new OpenStack environment. You can also deploy Manila on an existing OpenStack environment. The manila-share service may use different back ends. This section provides examples of deployment model modifications for the LVM back end. You may need to tailor these examples depending on the needs of your deployment. Basically, the examples provided in this section describe the following configuration: • The OpenStack Manila API and Scheduler services run on the OpenStack share nodes. 
• The manila-share service and other services per share role may reside on the share or cmp nodes depending on the back end type. The default LVM-based shares reside on the cmp nodes. To modify the deployment model: 1. While generating a deployment metadata model for your new MCP cluster as described in Create a deployment metadata model using the Model Designer UI, select Manila enabled and modify its parameters as required in the Product parameters section of the Model Designer UI. 2. If you have already generated a deployment metadata model without the Manila service or to enable this feature on an existing MCP cluster: 1. Open your Reclass model Git project repository on the cluster level. 2. Modify the ./infra/config.yml file: classes: ... - system.reclass.storage.system.openstack_share_multi - system.salt.master.formula.pkg.manila 3. Modify the ./infra/secrets.yml file: parameters: _param: ... keystone_manila_password_generated: some_password mysql_manila_password_generated: some_password manila_keepalived_vip_password_generated: some_password 4. Modify the ./openstack/compute/init.yml file: classes: ... - system.manila.share - system.manila.share.backend.lvm parameters: _param: ... ©2019, Mirantis Inc. Page 211 Mirantis Cloud Platform Deployment Guide manila_lvm_volume_name: manila_lvm_devices: 5. Modify the ./openstack/control_init.yml file: classes: ... - system.keystone.client.service.manila - system.keystone.client.service.manila2 - system.manila.client parameters: _param: ... manila_share_type_default_extra_specs: driver_handles_share_servers: False snapshot_support: True create_share_from_snapshot_support : True mount_snapshot_support : True revert_to_snapshot_support : True 6. Modify the ./openstack/database.yml file: classes: ... - system.galera.server.database.manila 7. Modify the ./openstack/init.yml file: parameters: _param: ... manila_service_host: ${_param:openstack_share_address} keystone_manila_password: ${_param:keystone_manila_password_generated} mysql_manila_password: ${_param:mysql_manila_password_generated} openstack_share_address: openstack_share_node01_address: openstack_share_node02_address: openstack_share_node03_address: openstack_share_node01_share_address: ${_param:openstack_share_node01_address} openstack_share_node02_share_address: ${_param:openstack_share_node02_address} openstack_share_node03_share_address: ${_param:openstack_share_node03_address} openstack_share_node01_deploy_address: openstack_share_node02_deploy_address: openstack_share_node03_deploy_address: openstack_share_hostname: openstack_share_node01_hostname: openstack_share_node02_hostname: openstack_share_node03_hostname: ©2019, Mirantis Inc. Page 212 Mirantis Cloud Platform Deployment Guide linux: network: host: ... share01: address: ${_param:openstack_share_node01_address} names: - ${_param:openstack_share_node01_hostname} - ${_param:openstack_share_node01_hostname}.${_param:cluster_domain} share02: address: ${_param:openstack_share_node02_address} names: - ${_param:openstack_share_node02_hostname} - ${_param:openstack_share_node02_hostname}.${_param:cluster_domain} share03: address: ${_param:openstack_share_node03_address} names: - ${_param:openstack_share_node03_hostname} - ${_param:openstack_share_node03_hostname}.${_param:cluster_domain} 8. Modify the ./openstack/proxy.yml file: classes: ... - system.nginx.server.proxy.openstack.manila 9. Modify the ./openstack/share.yml file: classes: ... 
- system.linux.system.repo.mcp.extra - system.linux.system.repo.mcp.apt_mirantis.openstack - system.apache.server.single - system.manila.control.cluster - system.keepalived.cluster.instance.openstack_manila_vip parameters: _param: ... manila_cluster_vip_address: ${_param:openstack_control_address} cluster_vip_address: ${_param:openstack_share_address} cluster_local_address: ${_param:single_address} cluster_node01_hostname: ${_param:openstack_share_node01_hostname} cluster_node01_address: ${_param:openstack_share_node01_address} cluster_node02_hostname: ${_param:openstack_share_node02_hostname} cluster_node02_address: ${_param:openstack_share_node02_address} cluster_node03_hostname: ${_param:openstack_share_node03_hostname} cluster_node03_address: ${_param:openstack_share_node03_address} keepalived_vip_interface: ens3 keepalived_vip_address: ${_param:cluster_vip_address} ©2019, Mirantis Inc. Page 213 Mirantis Cloud Platform Deployment Guide keepalived_vip_password: ${_param:manila_keepalived_vip_password_generated} apache_manila_api_address: ${_param:cluster_local_address} manila: common: default_share_type: default 3. Proceed to Install the Manila components. Install the Manila components After you have configured the deployment model as described in Modify the deployment model, install the Manila components that include the manila-api, manila-scheduler, manila-share, manila-data, and other services. To install the Manila components: 1. Log in to the Salt Master node. 2. Refresh your Reclass storage data: salt-call state.sls reclass.storage 3. Install manila-api: salt -C 'I@manila:api and *01*' state.sls manila.api salt -C 'I@manila:api' state.sls manila.api 4. Install manila-scheduler: salt -C 'I@manila:scheduler' state.sls manila.scheduler 5. Install manila-share: salt -C 'I@manila:share' state.sls manila.share 6. Install manila-data: salt -C 'I@manila:data' state.sls manila.data 7. Install the Manila client: salt -C 'I@manila:client' state.sls manila.client 8. Log in to any OpenStack controller node. 9. Verify that the Manila services are enabled and running: salt 'cfg01*' cmd.run 'source keystonercv3; manila list' salt 'cfg01*' cmd.run 'source keystonercv3; manila service-list' ©2019, Mirantis Inc. Page 214 Mirantis Cloud Platform Deployment Guide Deploy a Ceph cluster manually Ceph is a storage back end for cloud environments. This section guides you through the manual deployment of a Ceph cluster. Warning Converged storage is not supported. Note Prior to deploying a Ceph cluster: 1. Verify that you have selected Ceph enabled while generating a deployment model as described in Define the deployment model. 2. If you require Tenant Telemetry, verify that you have set the gnocchi_aggregation_storage option to Ceph while generating the deployment model. 3. Verify that OpenStack services, such as Cinder, Glance, and Nova are up and running. 4. Verify and, if required, adjust the Ceph classes/cluster/ /ceph/osd.yml file. setup for disks in the To deploy a Ceph cluster: 1. Log in to the Salt Master node. 2. Update modules and states on all Minions: salt '*' saltutil.sync_all 3. Run basic states on all Ceph nodes: salt "*" state.sls linux,openssh,salt,ntp,rsyslog 4. Generate admin and mon keyrings: salt -C 'I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin' state.sls ceph.mon salt -C 'I@ceph:mon' saltutil.sync_grains salt -C 'I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin' mine.update 5. Deploy Ceph mon nodes: • If your Ceph version is older than Luminous: ©2019, Mirantis Inc. 
salt -C 'I@ceph:mon' state.sls ceph.mon

• If your Ceph version is Luminous or newer:

salt -C 'I@ceph:mon' state.sls ceph.mon
salt -C 'I@ceph:mgr' state.sls ceph.mgr

6. (Optional) To modify the Ceph CRUSH map:

1. Uncomment the example pillar in the classes/cluster/ /ceph/setup.yml file and modify it as required.
2. Verify the ceph_crush_parent parameters in the classes/cluster/ /infra/config.yml file and modify them if required.
3. If you have modified the ceph_crush_parent parameters, also update the grains:

salt -C 'I@salt:master' state.sls reclass.storage
salt '*' saltutil.refresh_pillar
salt -C 'I@ceph:common' state.sls salt.minion.grains
salt -C 'I@ceph:common' mine.flush
salt -C 'I@ceph:common' mine.update

7. Deploy Ceph osd nodes:

salt -C 'I@ceph:osd' state.sls ceph.osd
salt -C 'I@ceph:osd' saltutil.sync_grains
salt -C 'I@ceph:osd' state.sls ceph.osd.custom
salt -C 'I@ceph:osd' saltutil.sync_grains
salt -C 'I@ceph:osd' mine.update
salt -C 'I@ceph:setup' state.sls ceph.setup

8. Deploy RADOS Gateway:

salt -C 'I@ceph:radosgw' saltutil.sync_grains
salt -C 'I@ceph:radosgw' state.sls ceph.radosgw

9. Set up the Keystone service and endpoints for Swift or S3:

salt -C 'I@keystone:client' state.sls keystone.client

10. Connect Ceph to your MCP cluster:

salt -C 'I@ceph:common and I@glance:server' state.sls ceph.common,ceph.setup.keyring,glance
salt -C 'I@ceph:common and I@glance:server' service.restart glance-api
salt -C 'I@ceph:common and I@glance:server' service.restart glance-glare
salt -C 'I@ceph:common and I@glance:server' service.restart glance-registry
salt -C 'I@ceph:common and I@cinder:controller' state.sls ceph.common,ceph.setup.keyring,cinder
salt -C 'I@ceph:common and I@nova:compute' state.sls ceph.common,ceph.setup.keyring
salt -C 'I@ceph:common and I@nova:compute' saltutil.sync_grains
salt -C 'I@ceph:common and I@nova:compute' state.sls nova

11. If you have deployed Tenant Telemetry, connect Gnocchi to Ceph:

salt -C 'I@ceph:common and I@gnocchi:server' state.sls ceph.common,ceph.setup.keyring
salt -C 'I@ceph:common and I@gnocchi:server' saltutil.sync_grains
salt -C 'I@ceph:common and I@gnocchi:server:role:primary' state.sls gnocchi.server
salt -C 'I@ceph:common and I@gnocchi:server' state.sls gnocchi.server

12. (Optional) If you have modified the CRUSH map as described in step 6:

1. View the CRUSH map generated in the /etc/ceph/crushmap file and modify it as required. Before applying the CRUSH map, verify that the settings are correct.
2. Apply the following state:

salt -C 'I@ceph:setup:crush' state.sls ceph.setup.crush

3. Once the CRUSH map is set up correctly, add the following snippet to the classes/cluster/ /ceph/osd.yml file to make the settings persist even after a Ceph OSD reboots:

ceph:
  osd:
    crush_update: false

4. Apply the following state:

salt -C 'I@ceph:osd' state.sls ceph.osd

Once done, if your Ceph version is Luminous or newer, you can access the Ceph dashboard through http:// :7000/. Run ceph -s on a cmn node to obtain the active mgr node.

Deploy Xtrabackup for MySQL

MCP uses the Xtrabackup utility to back up MySQL databases.

To deploy Xtrabackup for MySQL:

1. Apply the xtrabackup server state:

salt -C 'I@xtrabackup:server' state.sls xtrabackup

2. Apply the xtrabackup client state:

salt -C 'I@xtrabackup:client' state.sls openssh.client,xtrabackup
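Once both states are applied, you can run a quick health check before proceeding with post-deployment verification. The commands below are a minimal sketch: the cmn01 target follows the Ceph monitor naming used in this guide, while the /var/backups/mysql/xtrabackup path is only an assumption and may differ from the backup directory defined in your xtrabackup pillar.

# Verify overall Ceph cluster health and the OSD layout from a monitor node
salt cmn01\* cmd.run 'ceph -s; ceph osd tree'

# Verify that the Xtrabackup server node has a backup location prepared
# (the path below is an assumption; adjust it to your xtrabackup pillar settings)
salt -C 'I@xtrabackup:server' cmd.run 'ls -l /var/backups/mysql/xtrabackup'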
Post-deployment procedures

After your OpenStack environment deployment has been successfully completed, perform a number of steps to verify that all the components are working and that your OpenStack installation is stable and performs correctly at scale.

Run non-destructive Rally tests

Rally is a benchmarking tool that enables you to test the performance and stability of your OpenStack environment at scale. The Tempest and Rally tests are integrated into the MCP CI/CD pipeline and can be managed through the DriveTrain web UI. For debugging purposes, you can manually start Rally tests from the deployed Benchmark Rally Server (bmk01) with the installed Rally benchmark service or run the appropriate Docker container.

To manually run a Rally test on a deployed environment:

1. Validate the input parameters of the Rally scenarios in the task_arguments.yaml file.
2. Create the Cirros image:

Note If you need to run Glance scenarios with an image that is stored locally, download it from https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-i386-disk.img:

wget https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-i386-disk.img

openstack image create --disk-format qcow2 --container-format bare --public --file ./cirros-0.3.5-i386-disk.img cirros

3. Run the Rally scenarios:

rally task start --task-args-file task_arguments.yaml

or

rally task start combined_scenario.yaml --task-args-file task_arguments.yaml

Troubleshoot

This section provides solutions to the issues that may occur while installing Mirantis Cloud Platform. Troubleshooting an MCP installation usually requires using the salt command. The following options may be helpful if you run into an error:

• -l LOG_LEVEL, --log-level=LOG_LEVEL
  Console logging log level. One of all, garbage, trace, debug, info, warning, error, or quiet. Default is warning.
• --state-output=STATE_OUTPUT
  Override the configured STATE_OUTPUT value for minion output. One of full, terse, mixed, changes, or filter. Default is full.

To synchronize all of the dynamic modules from the file server for a specific environment, use the saltutil.sync_all module. For example:

salt '*' saltutil.sync_all

Troubleshooting the server provisioning

This section includes the workarounds for the following issues:

Virtual machine node stops responding

If one of the control plane VM nodes stops responding, you may need to redeploy it.

Workaround:

1. From the physical node where the target VM is located, get a list of the VM domain IDs and VM names:

virsh list

2. Destroy the target VM (ungraceful powering off of the VM):

virsh destroy DOMAIN_ID

3. Undefine the VM (removes the VM configuration from KVM):

virsh undefine VM_NAME

4. Verify that your physical KVM node has the correct salt-common and salt-minion version:

apt-cache policy salt-common
apt-cache policy salt-minion

Note If the salt-common and salt-minion versions are not 2015.8, proceed with Install the correct versions of salt-common and salt-minion.

5. Redeploy the VM from the physical node meant to host the VM:

salt-call state.sls salt.control

6. Verify that the newly deployed VM is listed in the Salt keys:

salt-key

7. Deploy the Salt states to the node:

salt 'HOST_NAME*' state.sls linux,ntp,openssh,salt

8.
Deploy service states to the node:

salt 'HOST_NAME*' state.sls keepalived,haproxy,SPECIFIC_SERVICES

Note You may need to log in to the node itself and run the states locally for higher success rates.

Troubleshoot Ceph

This section includes workarounds for the Ceph-related issues that may occur during the deployment of a Ceph cluster.

Troubleshoot an encrypted Ceph OSD

During the deployment of a Ceph cluster, an encrypted OSD may fail to be prepared or activated and thus fail to join the Ceph cluster. In such a case, remove all the disk partitions as described below.

Workaround:

1. From the Ceph OSD node where the failed encrypted OSD disk resides, erase its partition table:

dd if=/dev/zero of=/dev/< > bs=512 count=1 conv=notrunc

2. Reboot the server:

reboot

3. Run the following command twice to create a partition table for the disk and to remove the disk data:

ceph-disk zap /dev/< >;

4. Remove all disk signatures using wipefs:

wipefs --all --force /dev/< >*;

Deploy a Kubernetes cluster manually

Kubernetes is a system for automated deployment, scaling, and management of containerized applications. This section guides you through the manual deployment of a Kubernetes cluster on bare metal with the Calico or OpenContrail plugins set for Kubernetes networking. For an easier deployment process, use the automated DriveTrain deployment procedure described in Deploy a Kubernetes cluster.

Caution! OpenContrail 3.2 for Kubernetes is not supported. For production MCP Kubernetes deployments, use OpenContrail 4.0.

Note For the list of OpenContrail limitations for Kubernetes, see: OpenContrail limitations.

Prerequisites

The following are the prerequisite steps for a manual MCP Kubernetes deployment:

1. Prepare six nodes:
• 1 x configuration node - a host for the Salt Master node. Can be a virtual machine.
• 3 x Kubernetes Master nodes (ctl) - hosts for the Kubernetes control plane components and etcd.
• 2 x Kubernetes Nodes (cmp) - hosts for the Kubernetes pods, groups of containers that are deployed together on the same host.
2. For easier deployment and testing, the following usage of three NICs is recommended:
• 1 x NIC as a PXE/DHCP/Salt network (PXE and DHCP are third-party services in a data center, unmanaged by SaltStack)
• 2 x NICs as an active-passive or active-active bond with two 10 Gbit slave interfaces
3. Create a project repository.
4. Create a deployment metadata model.
5. Optional. Add additional options to the deployment model as required:
• Enable Virtlet
• Enable the role-based access control (RBAC)
• Enable the MetalLB support
• Enable an external Ceph RBD storage
6. For the OpenContrail 4.0 setup, add the following parameters to the /opencontrail/init.yml file of your deployment model:

parameters:
  _param:
    opencontrail_version: 4.0
    linux_repo_contrail_component: oc40

Caution! OpenContrail 3.2 for Kubernetes is not supported. For production MCP Kubernetes deployments, use OpenContrail 4.0.

7. If you have swap enabled on the ctl and cmp nodes, modify the deployment model as described in Add swap configuration to a Kubernetes deployment model.
8. Define interfaces.
9. Deploy the Salt Master node.

Now, proceed to Deploy a Kubernetes cluster.
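Before you continue, it can help to confirm that the Salt Master node can reach all of the nodes prepared in step 1. The following check is a minimal sketch that assumes the ctl and cmp host naming used in this section; adjust the targets if your deployment model uses different hostnames.

# List the accepted minion keys; all ctl and cmp nodes should appear
salt-key -L

# Verify that the Kubernetes Master nodes and the Kubernetes Nodes respond
salt 'ctl*' test.ping
salt 'cmp*' test.ping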
Salt formulas used in the Kubernetes cluster deployment MCP Kubernetes cluster standard deployment uses the following Salt formulas to deploy and configure a Kubernetes cluster: salt-formula-kubernetes Handles Kubernetes hyperkube binaries, CNI plugins, Calico manifests salt-formula-etcd Provisions etcd clusters salt-formula-docker Installs and configures the Docker daemon salt-formula-bird Customizes BIRD templates used by Calico to provide advanced networking scenarios for route distribution through BGP Add swap configuration to a Kubernetes deployment model If you have swap enabled on the ctl and cmp nodes, configure your Kubernetes model to make kubelet work correctly with swapping. To add swap configuration to a Kubernetes deployment model: 1. Open your Git project repository. 2. In classes/cluster/ /kubernetes/control.yml, add the following snippet: ©2019, Mirantis Inc. Page 223 Mirantis Cloud Platform Deployment Guide ... parameters: kubernetes: master: kubelet: fail_on_swap: False 3. In classes/cluster/ /kubernetes/compute.yml, add the following snippet: ... parameters: kubernetes: pool: kubelet: fail_on_swap: False Now, proceed with further MCP Kubernetes cluster configuration as required. Define interfaces Since Cookiecutter is simply a tool to generate projects from templates, it cannot handle all networking use-cases. Your cluster may include a single interface, two interfaces in bond, bond and management interfaces, and so on. This section explains how to handle 3 interfaces configuration: • eth0 interface for pxe • eth1 and eth2 as bond0 slave interfaces To configure network interfaces: 1. Open your MCP Git project repository. 2. Open the {{ cookiecutter.cluster_name }}/kubernetes/init.yml file for editing. 3. Add the following example definition to this file: parameters: … _param: deploy_nic: eth0 primary_first_nic: eth1 primary_second_nic: eth2 linux: ... network: ... interface: deploy_nic: name: ${_param:deploy_nic} enabled: true ©2019, Mirantis Inc. Page 224 Mirantis Cloud Platform Deployment Guide type: eth proto: static address: ${_param:deploy_address} netmask: 255.255.255.0 primary_first_nic: name: ${_param:primary_first_nic} enabled: true type: slave master: bond0 mtu: 9000 pre_up_cmds: - /sbin/ethtool --offload eth6 rx off tx off tso off gro off primary_second_nic: name: ${_param:primary_second_nic} type: slave master: bond0 mtu: 9000 pre_up_cmds: - /sbin/ethtool --offload eth7 rx off tx off tso off gro off bond0: enabled: true proto: static type: bond use_interfaces: - ${_param:primary_first_nic} - ${_param:primary_second_nic} slaves: ${_param:primary_first_nic} ${_param:primary_second_nic} mode: active-backup mtu: 9000 address: ${_param:single_address} netmask: 255.255.255.0 name_servers: - {{ cookiecutter.dns_server01 }} - {{ cookiecutter.dns_server02 }} Deploy a Kubernetes cluster After you complete the prerequisite steps described in Prerequisites, deploy your MCP Kubernetes cluster manually using the procedure below. To deploy the Kubernetes cluster: 1. Log in to the Salt Master node. 2. Update modules and states on all Minions: salt '*' saltutil.sync_all 3. If you use autoregistration for the compute nodes, register all discovered compute nodes. Run the following command on every compute node: ©2019, Mirantis Inc. 
Deploy a Kubernetes cluster

After you complete the prerequisite steps described in Prerequisites, deploy your MCP Kubernetes cluster manually using the procedure below.

To deploy the Kubernetes cluster:

1. Log in to the Salt Master node.
2. Update modules and states on all Minions:

   salt '*' saltutil.sync_all

3. If you use autoregistration for the compute nodes, register all discovered compute nodes. Run the following command on every compute node:

   salt-call event.send "reclass/minion/classify" \
     "{\"node_master_ip\": \"<config_host>\", \
     \"node_os\": \"<os_codename>\", \
     \"node_deploy_ip\": \"<node_deploy_network_ip>\", \
     \"node_deploy_iface\": \"<node_deploy_network_iface>\", \
     \"node_control_ip\": \"<node_control_network_ip>\", \
     \"node_control_iface\": \"<node_control_network_iface>\", \
     \"node_sriov_ip\": \"<node_sriov_ip>\", \
     \"node_sriov_iface\": \"<node_sriov_iface>\", \
     \"node_tenant_ip\": \"<node_tenant_network_ip>\", \
     \"node_tenant_iface\": \"<node_tenant_network_iface>\", \
     \"node_external_ip\": \"<node_external_network_ip>\", \
     \"node_external_iface\": \"<node_external_network_iface>\", \
     \"node_baremetal_ip\": \"<node_baremetal_network_ip>\", \
     \"node_baremetal_iface\": \"<node_baremetal_network_iface>\", \
     \"node_domain\": \"<node_domain>\", \
     \"node_cluster\": \"<cluster_name>\", \
     \"node_hostname\": \"<node_hostname>\"}"

   Modify the parameters passed with the command above as required. The following list describes the parameters required for a compute node registration.

   • config_host - IP of the Salt Master node
   • os_codename - operating system code name. Check the system response of lsb_release -c for it
   • node_deploy_network_ip - Minion deploy network IP address
   • node_deploy_network_iface - Minion deploy network interface
   • node_control_network_ip - Minion control network IP address
   • node_control_network_iface - Minion control network interface
   • node_sriov_ip - Minion SR-IOV IP address
   • node_sriov_iface - Minion SR-IOV interface
   • node_tenant_network_ip - Minion tenant network IP address
   • node_tenant_network_iface - Minion tenant network interface
   • node_external_network_ip - Minion external network IP address
   • node_external_network_iface - Minion external network interface
   • node_baremetal_network_ip - Minion baremetal network IP address
   • node_baremetal_network_iface - Minion baremetal network interface
   • node_domain - domain of a minion. Check the system response of hostname -d for it
   • cluster_name - value of the cluster_name variable specified in the Reclass model. See Basic deployment parameters for details
   • node_hostname - short hostname without a domain part. Check the system response of hostname -s for it

4. Log in to the Salt Master node.
5. Perform Linux system configuration to synchronize repositories and execute outstanding system maintenance tasks:

   salt -C 'I@docker:host' state.sls linux.system

6. Install the Kubernetes control plane:

   1. Bootstrap the Kubernetes Master nodes:

      salt -C 'I@kubernetes:master' state.sls linux
      salt -C 'I@kubernetes:master' state.sls salt.minion
      salt -C 'I@kubernetes:master' state.sls openssh,ntp
      salt -C 'I@docker:host' state.sls docker.host

   2. Create and distribute SSL certificates for services using the salt state and install etcd with SSL support:

      salt -C 'I@kubernetes:master' state.sls salt.minion.cert,etcd.server.service
      salt -C 'I@etcd:server' cmd.run '. /var/lib/etcd/configenv && etcdctl cluster-health'

   3. Install Keepalived:

      salt -C 'I@keepalived:cluster' state.sls keepalived -b 1

   4. Install HAProxy:

      salt -C 'I@haproxy:proxy' state.sls haproxy
      salt -C 'I@haproxy:proxy' service.status haproxy

   5. Install Kubernetes:

      • For the OpenContrail-based clusters:

        salt -C 'I@kubernetes:master' state.sls kubernetes.pool

      • For the Calico-based clusters:

        salt -C 'I@kubernetes:master' state.sls kubernetes.master.kube-addons
        salt -C 'I@kubernetes:master' state.sls kubernetes.pool

   6. For the Calico setup:

      1. Verify the Calico nodes status:

         salt -C 'I@kubernetes:pool' cmd.run "calicoctl node status"

      2. Set up NAT for Calico:

         salt -C 'I@kubernetes:master' state.sls etcd.server.setup
   7. Apply the following state to simplify namespace creation:

      • For the OpenContrail-based clusters:

        salt -C 'I@kubernetes:master and *01*' state.sls kubernetes.master \
          exclude=kubernetes.master.setup,kubernetes.master.kube-addons

      • For the Calico-based clusters:

        salt -C 'I@kubernetes:master and *01*' state.sls kubernetes.master \
          exclude=kubernetes.master.setup

   8. Apply the following state:

      • For the OpenContrail-based clusters:

        salt -C 'I@kubernetes:master' state.sls kubernetes \
          exclude=kubernetes.master.setup,kubernetes.master.kube-addons

      • For the Calico-based clusters:

        salt -C 'I@kubernetes:master' state.sls kubernetes \
          exclude=kubernetes.master.setup

   9. Run the Kubernetes Master nodes setup:

      salt -C 'I@kubernetes:master' state.sls kubernetes.master.setup

   10. Restart kubelet:

      salt -C 'I@kubernetes:master' service.restart kubelet

7. For the OpenContrail setup, deploy OpenContrail 4.0 as described in Deploy OpenContrail 4.0 for Kubernetes.

   Caution!
   OpenContrail 3.2 for Kubernetes is not supported.

8. Log in to any Kubernetes Master node and verify that all nodes have been registered successfully:

   kubectl get nodes

9. Deploy the Kubernetes Nodes:

   1. Log in to the Salt Master node.
   2. Bootstrap all compute nodes:

      salt -C 'I@kubernetes:pool and not I@kubernetes:master' state.sls linux
      salt -C 'I@kubernetes:pool and not I@kubernetes:master' state.sls salt.minion
      salt -C 'I@kubernetes:pool and not I@kubernetes:master' state.sls openssh,ntp

   3. Create and distribute SSL certificates for services and install etcd with SSL support:

      salt -C 'I@kubernetes:pool and not I@kubernetes:master' state.sls salt.minion.cert,etcd.server.service
      salt -C 'I@etcd:server' cmd.run '. /var/lib/etcd/configenv && etcdctl cluster-health'

   4. Install Docker:

      salt -C 'I@docker:host' state.sls docker.host

   5. Install Kubernetes:

      salt -C 'I@kubernetes:pool and not I@kubernetes:master' state.sls kubernetes.pool

   6. Restart kubelet:

      • For the OpenContrail-based clusters:

        salt -C 'I@kubernetes:master' state.sls kubernetes.master.kube-addons
        salt -C 'I@kubernetes:pool and not I@kubernetes:master' service.restart kubelet

      • For the Calico-based clusters:

        salt -C 'I@kubernetes:pool and not I@kubernetes:master' service.restart kubelet

After you deploy Kubernetes, deploy StackLight LMA to your cluster as described in Deploy StackLight LMA components.
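Before moving on to StackLight LMA or the optional add-ons described below, you can run a quick sanity check from any Kubernetes Master node. These are generic kubectl commands and are not part of the official procedure:

   # All nodes should report Ready and the system pods should be Running
   kubectl get nodes -o wide
   kubectl get pods -n kube-system -o wide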
Enable Virtlet

You can enable Kubernetes to run virtual machines using Virtlet. Virtlet enables you to run unmodified QEMU/KVM virtual machines that do not include an additional Docker layer as in similar solutions in Kubernetes.

Virtlet requires the --feature-gates=MountPropagation=true feature gate to be enabled in the Kubernetes API server and on all kubelet instances. This feature gate is enabled by default in MCP. Using this feature, Virtlet can create or delete network namespaces assigned to VM pods.

Caution!
Virtlet with OpenContrail is available as technical preview. Use such a configuration for testing and evaluation purposes only.

Deploy Virtlet

You can deploy Virtlet on either a new or an existing MCP cluster using the procedures below. By default, Virtlet is deployed on all Kubernetes Nodes (cmp).

To deploy Virtlet on a new MCP cluster:

1. When generating a deployment metadata model using the Model Designer UI, select the Virtlet enabled check box in the Kubernetes Product parameters section.
2. Open your Git project repository.
3. In classes/cluster/<cluster_name>/kubernetes/compute.yml, modify the kubernetes:common:addons:virtlet parameters as required to define the Virtlet namespace and image path as well as the number of compute nodes on which you want to enable Virtlet. For example:

   parameters:
     kubernetes:
       common:
         addons:
           virtlet:
             enabled: true
             namespace: kube-system
             image: mirantis/virtlet:latest

4. If your networking system is OpenContrail, add the following snippet to classes/cluster/<cluster_name>/opencontrail/compute.yml:

   kubernetes:
     pool:
       network:
         hash: 77169cdadb80a5e33e9d9fe093ed0d99

Proceed with further MCP cluster configuration. Virtlet will be automatically deployed during the Kubernetes cluster deployment.

To deploy Virtlet on an existing MCP cluster:

1. Open your Git project repository.
2. In classes/cluster/<cluster_name>/kubernetes/compute.yml, add the following snippet:

   parameters:
     kubernetes:
       common:
         addons:
           virtlet:
             enabled: true
             namespace: kube-system
             image: mirantis/virtlet:latest

   Modify the kubernetes:common:addons:virtlet parameters as required to define the Virtlet namespace and image path as well as the number of compute nodes on which you want to enable Virtlet.

3. If your networking system is OpenContrail, add the following snippet to classes/cluster/<cluster_name>/opencontrail/compute.yml:

   kubernetes:
     pool:
       network:
         hash: 77169cdadb80a5e33e9d9fe093ed0d99

4. Commit and push the changes to the project Git repository.
5. Log in to the Salt Master node.
6. Update your Salt formulas and the system level of your repository:

   1. Change the directory to /srv/salt/reclass.
   2. Run the git pull origin master command.
   3. Run the salt-call state.sls salt.master command.
   4. Run the salt-call state.sls reclass command.

7. Apply the following states:

   salt -C 'I@kubernetes:master' state.sls kubernetes.master.kube-addons
   salt -C 'I@kubernetes:pool' state.sls kubernetes.pool
   salt -C 'I@kubernetes:master' state.sls kubernetes.master.setup

Seealso
Verify Virtlet after deployment

Verify Virtlet after deployment

After you enable Virtlet as described in Deploy Virtlet, proceed with the verification procedure described in this section.

To verify Virtlet after deployment:

1. Verify a basic pod startup:

   1. Start a sample VM:

      kubectl create -f https://raw.githubusercontent.com/Mirantis/virtlet/v1.1.2/examples/cirros-vm.yaml
      kubectl get pods --all-namespaces -o wide -w

   2. Connect to the VM console:

      kubectl attach -it cirros-vm

      If you do not see a command prompt, press Enter.

      Example of system response:

      login as 'cirros' user. default password: 'gosubsgo'. use 'sudo' for root.
      cirros-vm login: cirros
      Password:
      $

      To quit the console, use the ^] key combination.

2. Verify SSH access to the VM pod:

   1. Download the vmssh.sh script with the test SSH key:

      wget https://raw.githubusercontent.com/Mirantis/virtlet/v1.1.2/examples/{vmssh.sh,vmkey}
      chmod +x vmssh.sh
      chmod 600 vmkey

      Note
      The vmssh.sh script requires kubectl to access a cluster.

   2. Access the VM pod using the vmssh.sh script:

      ./vmssh.sh cirros@cirros-vm

3. Verify whether the VM can access the Kubernetes cluster services:

   1. Verify the DNS resolution of the cluster services:

      nslookup kubernetes.default.svc.cluster.local

   2. Verify the service connectivity:

      curl -k https://kubernetes.default.svc.cluster.local

      Note
      The above command raises an authentication error. Ignore this error.
   3. Verify Internet access from the VM. For example:

      curl -k https://google.com
      ping -c 1 8.8.8.8

Enable the role-based access control (RBAC)

Enabling the role-based access control (RBAC) allows you to dynamically configure and control access rights to the cluster resources for users and services.

To enable RBAC on a new MCP cluster:

1. Generate a deployment metadata model for your new MCP Kubernetes deployment as described in Create a deployment metadata model using the Model Designer UI.
2. Open your Git project repository.
3. In classes/cluster/ ...

Seealso
• MCP Reference Architecture: MetalLB support
• Enable the NGINX Ingress controller

Enable the NGINX Ingress controller

The NGINX Ingress controller provides load balancing, SSL termination, and name-based virtual hosting. You can enable the NGINX Ingress controller if you use MetalLB in your MCP Kubernetes-based cluster.

To enable the NGINX Ingress controller on a Kubernetes cluster:

1. While generating a deployment metadata model for your new MCP Kubernetes cluster as described in Create a deployment metadata model using the Model Designer UI, select the following options in the Infrastructure parameters section of the Model Designer UI:

   • Kubernetes ingressnginx enabled
   • Kubernetes metallb enabled as the Kubernetes network engine

2. If you have already generated a deployment metadata model without the NGINX Ingress controller parameter or to enable this feature on an existing Kubernetes cluster:

   1. Enable MetalLB as described in Enable the MetalLB support.
   2. Open your Reclass model Git project repository on the cluster level.
   3. In /kubernetes/control.yml, enable the NGINX Ingress controller:

      parameters:
        kubernetes:
          common:
            addons:
              ...
              ingress-nginx:
                enabled: true

      Note
      If required, you can change the default number of replicas for the NGINX Ingress controller by adding the kubernetes_ingressnginx_controller_replicas parameter to /kubernetes/control.yml. The default value is 1.

3. Select from the following options:

   • If you are performing an initial deployment of your cluster, proceed with further configuration as required. The NGINX Ingress controller will be installed during your Kubernetes cluster deployment.
   • If you are making changes to an existing cluster:

     1. Log in to the Salt Master node.
     2. Refresh your Reclass storage data:

        salt-call state.sls reclass.storage

     3. Apply the kube-addons state:

        salt -C 'I@kubernetes:master' state.sls kubernetes.master.kube-addons
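For reference, once the controller is running, name-based virtual hosting is configured through standard Kubernetes Ingress objects. The following minimal manifest is only an illustration and is not part of the official procedure; the object name, host, and the my-nginx Service it points to are assumptions for the example:

   apiVersion: extensions/v1beta1
   kind: Ingress
   metadata:
     name: my-nginx-ingress        # illustrative name
   spec:
     rules:
     - host: nginx.example.com     # illustrative host name
       http:
         paths:
         - path: /
           backend:
             serviceName: my-nginx # assumes an existing Service exposing port 80
             servicePort: 80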
Enable an external Ceph RBD storage

You can connect your Kubernetes cluster to an existing external Ceph RADOS Block Device (RBD) storage by enabling the corresponding feature in your new or existing Kubernetes cluster.

To enable an external Ceph RBD storage on a Kubernetes cluster:

1. While generating a deployment metadata model for your new MCP Kubernetes cluster as described in Create a deployment metadata model using the Model Designer UI, select the Kubernetes rbd enabled option in the Infrastructure parameters section and define the Kubernetes RBD parameters in the Product parameters section of the Model Designer UI.
2. If you have already generated a deployment metadata model without the Ceph RBD storage parameters or to enable this feature on an existing Kubernetes cluster:

   1. Open your Reclass model Git project repository on the cluster level.
   2. In /kubernetes/control.yml, add the Ceph RBD cluster parameters. For example:

      parameters:
        ...
        kubernetes:
          common:
            addons:
              storageclass:
                rbd:
                  enabled: True
                  default: True
                  provisioner: rbd
                  name: rbd
                  user_id: kubernetes
                  user_key: AQAOoo5bGqtPExAABGSPtThpt5s+iq97KAE+WQ==
                  monitors: cmn01:6789,cmn02:6789,cmn03:6789
                  pool: kubernetes
                  fstype: ext4

3. Choose from the following options:

   • On a new Kubernetes cluster, proceed to further cluster configuration. The external Ceph RBD storage will be enabled during the Kubernetes cluster deployment. For the deployment details, see: Deploy a Kubernetes cluster.
   • On an existing Kubernetes cluster:

     1. Log in to the Salt Master node.
     2. Update your Salt formulas and the system level of your repository:

        1. Change the directory to /srv/salt/reclass.
        2. Run the following commands:

           git pull origin master
           salt-call state.sls salt.master
           salt-call state.sls reclass

     3. Apply the following state:

        salt -C 'I@kubernetes:master' state.sls kubernetes.master.kube-addons

Deploy OpenContrail manually

OpenContrail is a component of MCP that provides overlay networking built on top of a physical IP-based underlay network for cloud environments. OpenContrail provides more flexibility in terms of the network hardware used in cloud environments compared to other enterprise-class networking solutions.

Deploy OpenContrail

This section instructs you on how to manually deploy OpenContrail 4.0 on your Mirantis Cloud Platform (MCP) cluster.

Caution!
New deployments with OpenContrail 3.2 are not supported.

Deploy OpenContrail 4.0 for OpenStack

This section provides instructions on how to manually deploy OpenContrail 4.0 on your OpenStack-based MCP cluster.

To deploy OpenContrail 4.0 on an OpenStack-based MCP cluster:

1. Log in to the Salt Master node.
2. Run the following basic states to prepare the OpenContrail nodes:

   salt -C 'ntw* or nal*' saltutil.refresh_pillar
   salt -C 'I@opencontrail:database' saltutil.sync_all
   salt -C 'I@opencontrail:database' state.sls salt.minion,linux,ntp,openssh

3. Deploy and configure Keepalived and HAProxy:

   salt -C 'I@opencontrail:database' state.sls keepalived,haproxy

4. Deploy and configure Docker:

   salt -C 'I@opencontrail:database' state.sls docker.host

5. Create configuration files for OpenContrail:

   salt -C 'I@opencontrail:database' state.sls opencontrail exclude=opencontrail.client

6. Start the OpenContrail Docker containers:

   salt -C 'I@opencontrail:database' state.sls docker.client

7. Verify the status of the OpenContrail service:

   salt -C 'I@opencontrail:database' cmd.run 'doctrail all contrail-status'

   In the output, the status of the services should be active or backup.

   Note
   It may take some time for all services to finish initializing.

8. Configure the OpenContrail resources:

   salt -C 'I@opencontrail:client and not I@opencontrail:compute' state.sls opencontrail.client

9. Apply the following states to deploy the OpenContrail vRouters:

   salt -C 'cmp*' saltutil.refresh_pillar
   salt -C 'I@opencontrail:compute' saltutil.sync_all
   salt -C 'I@opencontrail:compute' state.highstate exclude=opencontrail.client
   salt -C 'I@opencontrail:compute' cmd.run 'reboot'
   salt -C 'I@opencontrail:compute' state.sls opencontrail.client
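Once the compute nodes come back up from the reboot, you may want to confirm that the vRouter agent registered correctly. This optional check is not part of the official procedure:

   # The contrail-vrouter-agent service should report active on every compute node
   salt -C 'I@opencontrail:compute' cmd.run 'contrail-status'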
Deploy OpenContrail 4.0 for Kubernetes

This section provides instructions on how to manually deploy OpenContrail 4.0 as an add-on on your Kubernetes-based MCP cluster.

To deploy OpenContrail 4.0 on a Kubernetes-based MCP cluster:

1. Log in to the Salt Master node.
2. Run the following basic states to prepare the OpenContrail nodes:

   salt -C 'ctl*' saltutil.refresh_pillar
   salt -C 'I@opencontrail:control' saltutil.sync_all
   salt -C 'I@opencontrail:control' state.sls salt.minion,linux,ntp,openssh

3. Create configuration files for OpenContrail:

   salt -C 'I@opencontrail:control' state.sls opencontrail exclude=opencontrail.client

4. Apply the following states to configure OpenContrail as an add-on for Kubernetes:

   salt -C 'I@kubernetes:pool and not I@kubernetes:master' state.sls kubernetes.pool
   salt -C 'I@kubernetes:master' state.sls kubernetes.master.kube-addons

5. Verify the status of the OpenContrail service:

   salt -C 'I@opencontrail:database' cmd.run 'doctrail all contrail-status'

   In the output, the status of the services should be active or backup.

   Note
   It may take some time for all services to finish initializing.

6. Set up the OpenContrail resources:

   salt -C 'I@opencontrail:database:id:1' state.sls opencontrail.client

7. Apply the following states to deploy the OpenContrail vRouters:

   salt -C 'cmp*' saltutil.refresh_pillar
   salt -C 'I@opencontrail:compute' saltutil.sync_all
   salt -C 'I@opencontrail:compute' state.highstate exclude=opencontrail.client
   salt -C 'I@opencontrail:compute' cmd.run 'reboot'
   salt -C 'I@opencontrail:compute' state.sls opencontrail.client

8. Proceed to step 14 of the Deploy a Kubernetes cluster procedure.

Seealso
OpenContrail limitations

Integrate Barbican with OpenContrail LBaaSv2

The Transport Layer Security (TLS) termination on the OpenContrail HAProxy load balancer requires Barbican. Barbican is a REST API that is used for secured storage as well as for provisioning and managing secrets such as passwords, encryption keys, and X.509 certificates.

To connect to the Barbican API, OpenContrail requires the authentication to be configured in /etc/contrail/contrail-lbaas-auth.conf and the Barbican client library package python-barbicanclient to be installed on the compute nodes.

To install the Barbican client library:

1. Deploy Barbican.
2. Open your Git project repository.
3. Include the following class in classes/cluster/<cluster_name>/openstack/compute/init.yml:

   - service.barbican.client.cluster

4. Commit and push the changes to the project Git repository.
5. Log in to the Salt Master node.
6. Update your Salt formulas at the system level:

   1. Change the directory to /srv/salt/reclass.
   2. Run the git pull origin master command.
   3. Run the salt-call state.sls salt.master command.

7. Apply the following state:

   salt -C 'I@barbican:client' state.apply barbican
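As an optional check that is not part of the official procedure, you can confirm that the client library actually landed on the compute nodes before configuring OpenContrail for Barbican:

   # The package should be listed as installed (ii) on every compute node
   salt -C 'I@opencontrail:compute' cmd.run 'dpkg -l python-barbicanclient'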
To configure OpenContrail for the Barbican authentication:

1. Open the classes/cluster/<cluster_name>/ directory of your Git project repository.
2. In openstack/compute.yml, include the following class:

   - service.opencontrail.compute.lbaas.barbican

3. In openstack/init.yml, edit the following parameters:

   opencontrail_barbican_user: admin
   opencontrail_barbican_password: ${_param:keystone_admin_password}
   opencontrail_barbican_tenant: admin

4. Commit and push the changes to the project Git repository.
5. Log in to the Salt Master node.
6. Update your Salt formulas at the system level:

   1. Change the directory to /srv/salt/reclass.
   2. Run the git pull origin master command.
   3. Run the salt-call state.sls salt.master command.

7. Log in to the Salt Master node.
8. Apply the following state:

   salt -C 'I@opencontrail:compute' state.apply opencontrail

Seealso
Use HTTPS termination in OpenContrail load balancer

Enable TSN support

While deploying your MCP cluster with OpenContrail, you can connect the OpenContrail virtual network to a bare metal server through a top-of-rack (ToR) switch. On large deployments, this feature enhances the performance of tenant-to-tenant networking and simplifies communication with the virtual instances that run on the OpenContrail cluster.

A basic ToR services node (TSN) setup of the OpenContrail cluster consists of two physical servers that host the ToR services node and ToR agents. The TSN is the multicast controller of the ToR switches.

No modification of the MCP DriveTrain pipeline is required, since deploying a TSN is the same as deploying a compute node. You only have to modify the TARGET_SERVERS field when enabling TSN on an existing MCP cluster. The configuration of the TSN and the ToR agent is part of the OpenContrail compute role along with Keepalived and HAProxy.

Add a ToR services node to MCP

This section describes how to add one top-of-rack (ToR) services node (TSN) with one ToR agent to manage one ToR switch for the OpenContrail cluster in MCP.

Before you proceed with the procedure:

• If you are performing the initial deployment of your MCP cluster, verify that you have created the deployment metadata model as described in Create a deployment metadata model.
• If you are making changes to an existing MCP cluster:

  1. Verify that the two physical servers dedicated for TSN are provisioned by MAAS as described in Provision physical nodes using MAAS.
  2. Verify that these two nodes are ready for deployment:

     salt 'tor*' state.sls linux,ntp,openssh,salt.minion

     Caution!
     If any of these states fail, fix the issue provided in the output and re-apply the state before you proceed to the procedure below.

To add a TSN to an MCP cluster:

1. Open your Git project repository.
2. In classes/cluster/