MongoDB Ops Manager Manual
Release 1.6

MongoDB, Inc.
Dec 04, 2017

© MongoDB, Inc. 2008 - 2016

Contents

1 Ops Manager Introduction
    1.1 Functional Overview
    1.2 Ops Manager Components
    1.3 Install a Simple Test Ops Manager Installation
2 Install Ops Manager
    2.1 Installation Checklist
    2.2 Example Installation Diagrams
    2.3 Ops Manager Hardware and Software Requirements
    2.4 Deploy Backing MongoDB Replica Sets
    2.5 Install Ops Manager
    2.6 Upgrade Ops Manager
    2.7 Configure Local Mode if Ops Manager has No Internet Access
    2.8 Configure High Availability
    2.9 Configure Backup Jobs and Storage
    2.10 Test Ops Manager Monitoring
3 Create a New MongoDB Deployment
    3.1 Add Servers for Use by Automation
    3.2 Deploy a Replica Set
    3.3 Deploy a Sharded Cluster
    3.4 Deploy a Standalone MongoDB Instance
    3.5 Connect to a MongoDB Process
4 Import an Existing MongoDB Deployment
    4.1 Add Existing MongoDB Processes to Monitoring
    4.2 Add Monitored Processes to Automation
    4.3 Reactivate Monitoring for a Process
    4.4 Remove Hosts
5 Manage Deployments
    5.1 Edit a Replica Set
    5.2 Migrate a Replica Set Member to a New Server
    5.3 Move or Add a Monitoring or Backup Agent
    5.4 Change the Version of MongoDB
    5.5 Restart a MongoDB Process
    5.6 Shut Down MongoDB Processes
    5.7 Remove Processes from Monitoring
    5.8 Alerts
    5.9 Monitoring Metrics
    5.10 View Logs
6 Back Up MongoDB Deployments
    6.1 Backup Flows
    6.2 Backup Preparations
    6.3 Activate Backup
    6.4 Edit a Backup’s Settings
    6.5 Restore MongoDB Deployments
    6.6 Backup Maintenance
7 Security
    7.1 Security Overview
    7.2 Firewall Configuration
    7.3 Change the Ops Manager Ports
    7.4 Configure SSL Connections to Ops Manager
    7.5 Configure the Connections to the Backing MongoDB Instances
    7.6 Configure SSL for MongoDB
    7.7 Configure Users and Groups with LDAP for Ops Manager
    7.8 Configure MongoDB Authentication and Authorization
    7.9 Manage Two-Factor Authentication for Ops Manager
    7.10 Manage Your Two-Factor Authentication Options
8 Administration
    8.1 Manage Your Account
    8.2 Administer the System
    8.3 Manage Groups
    8.4 Manage Ops Manager Users and Roles
    8.5 Manage MongoDB Users and Roles
    8.6 Configure Available MongoDB Versions
    8.7 Backup Alerts
    8.8 Start and Stop Ops Manager Application
    8.9 Back Up Ops Manager
9 API
    9.1 Public API Principles
    9.2 Public API Resources
    9.3 Public API Tutorials
10 Troubleshooting
    10.1 Getting Started Checklist
    10.2 Installation
    10.3 Monitoring
    10.4 Authentication
    10.5 Backup
    10.6 System
    10.7 Automation Checklist
11 Frequently Asked Questions
    11.1 Monitoring FAQs
    11.2 Backup FAQs
    11.3 Administration FAQs
12 Reference
    12.1 Ops Manager Configuration Files
    12.2 Automation Agent
    12.3 Monitoring Agent
    12.4 Backup Agent
    12.5 Audit Events
    12.6 Monitoring Reference
    12.7 Supported Browsers
    12.8 Advanced Options for MongoDB Deployments
    12.9 Automation Configuration
    12.10 Supported MongoDB Options for Automation
13 Release Notes
    13.1 Ops Manager Server Changelog
    13.2 Automation Agent Changelog
    13.3 Monitoring Agent Changelog
    13.4 Backup Agent Changelog

Ops Manager is a package for managing MongoDB deployments. Ops Manager provides Ops Manager Monitoring
and Ops Manager Backup, which help users optimize clusters and mitigate operational risk.
You can also download a PDF edition of the Ops Manager Manual.
Introduction Describes Ops Manager components and provides steps to install a test deployment.
Install Ops Manager Install Ops Manager.
Create New Deployments Set up servers and create MongoDB deployments.
Import Existing Deployments Import your existing MongoDB deployments to Ops Manager.
Manage Deployments Monitor, update, and manage your deployments.
Back Up Deployments Initiate and restore backups.
Security Describes Ops Manager security features.
Administration Configure and manage Ops Manager.
API Manage Ops Manager through the API.
Troubleshooting Troubleshooting advice for common issues.
Frequently Asked Questions Common questions about the operation and use of Ops Manager.
Reference Reference material for Ops Manager components and operations.
Release Notes Changelogs and notes on Ops Manager releases.

1 Ops Manager Introduction
Functional Overview Describes Ops Manager services and operations.
Ops Manager Components Describes Ops Manager components.
Install a Simple Test Ops Manager Set up a simple test installation in minutes.

1.1 Functional Overview
On this page
• Overview
• Monitoring
• Automation
• Backup

Overview
MongoDB Ops Manager is a service for managing, monitoring, and backing up a MongoDB infrastructure. Ops
Manager provides the services described here.
Monitoring
Ops Manager Monitoring provides real-time reporting, visualization, and alerting on key database and hardware indicators.
How it Works: A lightweight Monitoring Agent runs within your infrastructure and collects statistics from the nodes
in your MongoDB deployment. The agent transmits database statistics back to Ops Manager to provide real-time
reporting. You can set alerts on indicators you choose.
Automation
Ops Manager Automation provides an interface for configuring MongoDB nodes and clusters and for upgrading your
MongoDB deployment.

How it Works: Automation Agents on each server maintain your deployments. The Automation Agent also maintains
the Monitoring and Backup agents and starts, restarts, and upgrades the agents as needed.
Automation allows only one agent of each type per machine and will remove additional agents. For example, when
maintaining Backup Agents, automation will remove a Backup Agent from a machine that has two Backup Agents.
Backup
Ops Manager Backup provides scheduled snapshots and point-in-time recovery of your MongoDB replica sets and
sharded clusters.

How it Works: A lightweight Backup Agent runs within your infrastructure and backs up data from the MongoDB
processes you have specified.
Data Backup
When you start Backup for a MongoDB deployment, the agent performs an initial sync of the deployment’s data as
if it were creating a new, “invisible” member of a replica set. For a sharded cluster the agent performs a sync of
each shard’s primary and of each config server. The agent ships initial sync and oplog data over HTTPS back to Ops
Manager.
The Backup Agent then tails each replica set’s oplog to maintain on disk a standalone database, called a head database.
Ops Manager maintains one head database for each backed-up replica set. The head database is consistent with the
original primary up to the last oplog entry supplied by the agent.
Backup performs the initial sync and the tailing of the oplog using standard MongoDB queries. The production replica
set is unaware of the backup copy of its data.
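As an illustration of the kind of standard query involved, the following mongo shell session opens a tailable cursor on
a replica set member’s oplog. This is a sketch of the mechanism, not the agent’s actual code; the timestamp is a
placeholder for the last entry already copied:

use local
db.oplog.rs.find(
    { ts: { $gt: Timestamp(1435000000, 1) } }   // resume after the last entry already seen
).addOption(DBQuery.Option.tailable).addOption(DBQuery.Option.awaitData)

Because the read uses an ordinary query on the local.oplog.rs collection, it requires no special cooperation from
the replica set being backed up.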
Backup uses a mongod with a version equal to or greater than the version of the replica set it backs up.
Backup takes and stores snapshots based on a user-defined snapshot retention policy. For sharded clusters, snapshots
temporarily stop the balancer via the mongos so that Ops Manager can insert a marker token into all shards and config
servers in the cluster. Ops Manager takes a snapshot when the marker tokens appear in the backup data.
Compression and block-level de-duplication technology reduce snapshot data size. Each snapshot stores only the
differences from the previous snapshot, so snapshots use only a fraction of the disk space that full snapshots would require.
Data Restoration
Ops Manager Backup lets you restore data from a scheduled snapshot or from a selected point between snapshots. For
sharded clusters you can restore from checkpoints between snapshots. For replica sets, you can restore from selected
points in time.
When you restore from a snapshot, Ops Manager reads directly from the Backup Blockstore database and transfers
files either through an HTTPS download link or by sending them via HTTPS or SCP.
When you restore from checkpoint or point in time, Ops Manager first creates a local restore of a snapshot from the
blockstore and then applies stored oplogs until the specified point is reached. Ops Manager delivers the backup via
the same HTTPS or SCP mechanisms.
The amount of oplog to keep per backup is configurable and affects the time window available for checkpoint and
point-in-time restores.

1.2 Ops Manager Components
On this page
• Network Diagram
• Ops Manager Application
• Backup Daemon
• Dedicated MongoDB Databases for Operational Data


An Ops Manager installation consists of the Ops Manager Application and the optional Backup Daemon. Each package
also requires a dedicated MongoDB database to hold its operational data.
Network Diagram

Ops Manager Application
The front-end Ops Manager Application contains the UI the end user interacts with, as well as HTTPS services used
by the Monitoring Agent and Backup Agent to transmit data to and from Ops Manager. All three components start
automatically when the Ops Manager Application starts. These components are stateless. Multiple instances of the
front-end package can run as long as each instance has the same configuration. Users and agents can interact with any
instance.
For Monitoring, you only need to install the application package. The application package consists of the following
components:
• Ops Manager HTTP Service
• Backup HTTP Service
• Backup Alert Service


Ops Manager HTTP Service
The HTTP server runs on port 8080 by default. This component contains the web interface for managing Ops
Manager users, monitoring MongoDB servers, and managing those servers’ backups. Users can sign up, create new
accounts and groups, and join existing groups.
Backup HTTP Service
The HTTP server runs on port 8081 by default. The Backup HTTP Service contains a set of web services used by
the Backup Agent. The agent retrieves its configuration from this service and sends back initial sync and
oplog data through this interface. There is no user interaction with this service.
The Backup HTTP Service exposes an endpoint that reports on the state of the service and the underlying database
to support monitoring of the Backup service. This status also checks the connections from the service to the Ops
Manager Application Database and the Backup Blockstore Database. See Backup HTTP Service Endpoint.
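For example, assuming the Backup HTTP Service runs on its default port 8081, you could poll the status endpoint
with curl. The hostname and the /health path shown here are placeholders; see Backup HTTP Service Endpoint for
the documented path:

curl http://opsmanager.example.net:8081/health

Polling this endpoint from an external monitoring tool is a simple way to detect when the service loses its
connection to either backing database.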
Backup Alert Service
The Backup Alert Service watches the state of all agents, local copies of backed up databases, and snapshots. It sends
email alerts as problems occur. The Backup Alert Service exposes a health-check endpoint. See Backup Alert Service
Endpoint.
Backup Daemon
The Backup Daemon manages both the local copies of the backed-up databases and each backup’s snapshots. The
daemon does scheduled work based on data coming in to the Backup HTTP Service from the Backup Agents. No
client applications talk directly to the daemon. Its state and job queues come from the Ops Manager Application
Database.
The Backup Daemon’s local copy of a backed-up deployment is called the head database. The daemon stores all its
head databases in its rootDirectory path. To create each head database, the daemon’s server acts as though it
were an “invisible” secondary for each replica set designated for backup.
If you run multiple Backup Daemons, Ops Manager selects the Backup Daemon to use when a user enables backup
for a deployment. The local copy of the deployment resides with that daemon’s server.
The daemon will take scheduled snapshots and store the snapshots in the Backup Blockstore database. It will also act
on restore requests by retrieving data from the Blockstore and delivering it to the requested destination.
Multiple Backup Daemons can increase your storage by scaling horizontally and can provide manual failover.
The Backup Daemon exposes a health-check endpoint. See Backup Daemon Endpoint.
Dedicated MongoDB Databases for Operational Data
Ops Manager uses dedicated MongoDB databases to store the Ops Manager Application’s monitoring data and the
Backup Daemon’s snapshots. To ensure redundancy and high availability, the backing databases run as replica sets.
The replica sets host only Ops Manager data. You must set up the backing replica sets before installing Ops Manager.
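For example, a minimal three-member backing replica set could be initiated from the mongo shell after each
mongod has been started with a matching --replSet name. This is a sketch; the replica set name and hostnames
are placeholders for your own:

rs.initiate({
    _id: "opsManagerDb",
    members: [
        { _id: 0, host: "db1.example.net:27017" },
        { _id: 1, host: "db2.example.net:27017" },
        { _id: 2, host: "db3.example.net:27017" }
    ]
})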


Ops Manager Application Database
This database contains application metadata used by the Ops Manager Application. The database stores:
• Monitoring data collected from Monitoring Agents.
• Metadata for Ops Manager users, groups, hosts, monitoring data, and backup state.
For topology and specifications, see Ops Manager Application Database Hardware.
Backup Blockstore Database
This database contains all snapshots of backed-up databases and the oplogs retained for point-in-time restores. The Backup
Blockstore database requires disk space proportional to the backed-up databases.
Configure the Blockstore as a replica set to provide durability and automatic failover for the backup and restore components. The replica set must have at least three members that hold data.
You cannot back up the Blockstore database with Ops Manager Backup. To back up Ops Manager Backup, see Back
Up Ops Manager.
For additional specifications, see Ops Manager Backup Blockstore Database Hardware.

1.3 Install a Simple Test Ops Manager Installation
On this page
• Overview
• Procedure

Overview
To evaluate Ops Manager, you can quickly create a test installation by installing the Ops Manager Application and
Ops Manager Application Database on a single server. This setup provides all the functionality of Ops Manager
monitoring and automation but provides no failover or high availability. This is not a production setup.
Unlike a production installation, the simple test installation uses only one mongod for the Ops Manager Application
database. In production, the database requires a dedicated replica set.
This procedure includes optional instructions to install Ops Manager Backup, in which case you would install the
Backup Daemon and Backup Blockstore database on the same server as the other Ops Manager components. The
Backup Blockstore database uses only one mongod and not a dedicated replica set, as it would in production.
This procedure installs the test deployment on servers running either RHEL 6+ or Amazon Linux.
Procedure

Warning: This setup is not suitable for a production deployment.
To install Ops Manager for evaluation:


Step 1: Set up a RHEL 6+ or Amazon Linux server that meets the following requirements:
• The server must have 15 GB of memory and 50 GB of disk space for the root partition. You can meet the size
requirements by using an Amazon Web Services EC2 m3.xlarge instance and changing the size of the root
partition from 8 GB to 50 GB. When you log into the instance, execute “df -h” to verify the root partition has
50 GB of space.
• You must have root access to the server.
Step 2: Configure the yum package management system to install the latest stable release of MongoDB.
Issue the following command to set up a yum repository definition:
echo "[MongoDB]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1" | sudo tee /etc/yum.repos.d/mongodb.repo

Step 3: Install MongoDB.
Issue the following command to install the latest stable release of MongoDB:
sudo yum install -y mongodb-org mongodb-org-shell

Step 4: Create the data directory for the Ops Manager Application database.
Issue the following two commands to create the data directory and change its ownership:
sudo mkdir -p /data/db
sudo chown -R mongod:mongod /data

OPTIONAL: To also install the Backup feature, issue the following additional commands for the Backup Blockstore database:
sudo mkdir -p /data/backup
sudo chown mongod:mongod /data/backup

Step 5: Start the MongoDB backing instance for the Ops Manager Application database.
Issue the following command to start MongoDB as the mongod user. Start MongoDB on port 27017 and specify /data/db for both data files and logs. Include the --fork option to run the process in the background and maintain control of the terminal.
sudo -u mongod mongod --port 27017 --dbpath /data/db --logpath /data/db/mongodb.log --fork

OPTIONAL: To also install the Backup feature, issue the following command to start a MongoDB instance similar to the other but on port 27018 and with the data directory and log path of the Backup Blockstore database:
sudo -u mongod mongod --port 27018 --dbpath /data/backup --logpath /data/backup/mongodb.log --fork

Step 6: Download the Ops Manager Application package.
1. In a browser, go to http://www.mongodb.com/download.
2. Fill out and submit the subscription form.
3. On the MongoDB Enterprise Downloads page, go to the MongoDB Ops Manager section and click the here
link.
4. On the Ops Manager Download page, acknowledge the recommendation to contact MongoDB for production
installs.
5. On the MongoDB Ops Manager Downloads page, copy the link address of the RPM link for Monitoring, Automation and Core. OPTIONAL: If you will install Backup, copy the link address of the RPM link for Backup
as well.
6. Open a system prompt.
7. Download the Ops Manager Application package by issuing a curl command that uses the link address copied
for the RPM for Monitoring, Automation and Core:
curl -OL <link address>

OPTIONAL: Download the Backup Daemon package by issuing a curl command that uses the link address
copied for the Backup RPM:
curl -OL <link address>

Step 7: Install the Ops Manager Application.
Install the Monitoring, Automation and Core RPM package that you downloaded. Issue the rpm --install command with root privileges and specify the package name:
sudo rpm --install <package name>

OPTIONAL: To also install Backup, issue the rpm --install command with root privileges and specify the
Backup RPM package:
sudo rpm --install <Backup package name>

Step 8: Get your server’s public IP address.
If you are using an EC2 instance, this is available on the instance’s Description tab.
Alternately, you can get the public IP address by issuing the following:
curl -s http://whatismijnip.nl |cut -d " " -f 5


Step 9: Configure the Ops Manager Application.
Edit /opt/mongodb/mms/conf/conf-mms.properties with root privileges and set the following options.
For detailed information on each option, see Ops Manager Configuration Files.
Set mms.centralUrl and mms.backupCentralUrl as follows, where <ip_address> is the IP address of the server running the Ops Manager Application.
mms.centralUrl=http://<ip_address>:8080
mms.backupCentralUrl=http://<ip_address>:8081

Set the following Email Address Settings as appropriate. You can use the same email address throughout, or specify a
different address for each field.
mms.fromEmailAddr=<email address>
mms.replyToEmailAddr=<email address>
mms.adminFromEmailAddr=<email address>
mms.adminEmailAddr=<email address>
mms.bounceEmailAddr=<email address>

Set the mongo.mongoUri option to the port hosting the Ops Manager Application database:
mongo.mongoUri=mongodb://localhost:27017

OPTIONAL: If you installed the Backup Daemon, edit /opt/mongodb/mms-backup-daemon/conf/conf-daemon.properties with root privileges and set the mongo.mongoUri value to the port hosting the Ops Manager Application database:
mongo.mongoUri=mongodb://localhost:27017

Step 10: Start the Ops Manager Application.
To start the Ops Manager Application, issue the following:
sudo service mongodb-mms start

OPTIONAL: To start the Backup Daemon, issue the following:
sudo service mongodb-mms-backup-daemon start

Step 11: Open the Ops Manager home page.
In a browser, enter the following URL, where <ip_address> is the IP address of the server:
http://<ip_address>:8080

Step 12: To begin testing Ops Manager, click Register and follow the prompts to create the first user
and group.
The first user receives Global Owner permissions for the test install.


Step 13: At the Welcome page, follow the prompts to set up Automation or Monitoring.
Automation lets you define a MongoDB deployment through the Ops Manager interface and rely on the Automation
Agent to construct the deployment. If you select Automation, Ops Manager prompts you to download the Automation
Agent and Monitoring Agent to the server.
Monitoring lets you manage a MongoDB deployment through the Ops Manager interface. If you select Monitoring,
Ops Manager prompts you to download only the Monitoring Agent to the server.
OPTIONAL: If you installed the Backup Daemon, do the following to enable Backup: click the Admin link at the top right of the Ops Manager page and click the Backup tab. In the <hostname>:<port> field, enter localhost:27018 and click Save.

2 Install Ops Manager
Installation Checklist Prepare for your installation.
Example Installation Diagrams Provides diagrams of Ops Manager deployments.
Hardware and Software Requirements Describes the hardware and software requirements for the servers that run the
Ops Manager components, including the servers that run the backing MongoDB replica sets.
Deploy Application and Backup Databases Set up the Ops Manager Application Database and Backup Database.
Install Ops Manager Operating-system specific instructions for installing the Ops Manager Application and the
Backup Daemon.
Upgrade Ops Manager Operating-system specific instructions for upgrading the Ops Manager Application and the
Backup Daemon.
Configure Offline Binary Access Configure local mode for an installation that uses Automation but has no internet
access for downloading the MongoDB binaries.
Configure High Availability Configure the Ops Manager application and components to be highly available.
Configure Backup Jobs and Storage Manage and control the jobs used by the Backup system to create snapshots.
Test Ops Manager Monitoring Set up a replica set for testing Ops Manager Monitoring.

2.1 Installation Checklist
On this page
• Overview
• Topology Decisions
• Security Decisions
• Backup Decisions

Overview
You must make the following decisions before you install Ops Manager. During the install procedures you will make
choices based on your decisions here.


If you have not yet read the Ops Manager Components page, please do so for a description of the system’s components.
The sequence for installing Ops Manager is to:
• Plan your installation according to the questions on this page.
• Provision servers that meet the Hardware and Software Requirements
• Set up the Ops Manager Application Database and optional Backup Database.
• Install the Ops Manager Application and optional Backup Daemon.
Note: To install a simple evaluation deployment on a single server, see Install a Simple Test Ops Manager Installation.

Topology Decisions
Do you require durability and/or high availability?
Ops Manager stores application metadata and snapshots in the Ops Manager Application Database and Backup
Database respectively. To provide data durability, run each database as a three-member replica set on multiple servers.
To provide high availability for write operations to the databases, set up each replica set so that all three members hold
data. This way, if a member is unreachable the replica set can still write data. Ops Manager uses w:2 write concern,
which requires acknowledgement from the primary and one secondary for each write operation.
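The following mongo shell sketch illustrates what w:2 means in practice; the host, database, and collection names are examples only, and the writeConcern option shown requires a MongoDB 2.6 or later shell:
# The insert is acknowledged only after both the primary and one
# secondary have applied it; with only one data-bearing member
# available, the wtimeout would expire instead.
mongo --host mongodb1.example.net --port 27017 --eval '
    db.getSiblingDB("test").writeCheck.insert(
        { ping: new Date() },
        { writeConcern: { w: 2, wtimeout: 5000 } }
    )'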
To provide high availability for the Ops Manager Application, run at least two instances of the application and use a
load balancer. For more information, see Configure a Highly Available Ops Manager Application.
The following tables describe the pros and cons for each combination of durability and high availability.
Non-Durable, Test Install
This is a non-durable install that runs on one server. If you lose the server, you must start over from scratch.
Pros: Needs only one server.
Cons: If you lose the server, you lose everything: users and groups, metadata, backups, automation configurations, stored monitoring metrics, etc.

Durable Production Install
This install runs on at least three servers and provides durability for your metadata and snapshots. The replica sets for
the Ops Manager Application Database and the Backup Database are each made up of two data-bearing members and
an arbiter. This installation does not provide high availability.


Pros: Can run on as few as three servers. Ops Manager metadata and backups are durable from the perspective of the Ops Manager Application.
Cons: No high availability, neither for the databases nor the application:
1. If the Ops Manager Application Database or the Backup Database loses a data-bearing member, the data is durable but you must restart the member to gain back full Ops Manager functionality. For the Backup Database, Ops Manager will not write new snapshots until the member is again running.
2. Loss of the Ops Manager Application requires you to manually start a new Ops Manager Application. No Ops Manager functionality is available while the application is down.

Durable Production Install with Highly Available Backup and Application Data
This install requires at least three servers. The replica sets for the Ops Manager Application Database and the Backup
Database each comprise at least three data-bearing members. This requires more storage and memory than for the
Durable Production Install.
Pros: You can lose a member of the Ops Manager Application Database or Backup Database and still maintain Ops Manager availability. No Ops Manager functionality is lost while the member is down.
Cons: Loss of the Ops Manager Application requires you to manually start a new Ops Manager Application. No Ops Manager functionality is available while the application is down.

Durable Production Install with a Highly Available Ops Manager Application
This runs multiple Ops Manager Applications behind a load balancer and requires infrastructure outside of what Ops
Manager offers. For details, see Configure a Highly Available Ops Manager Application.
Pros: Ops Manager continues to be available even when any individual server is lost.
Cons: Requires a larger number of servers, and requires a load balancer capable of routing traffic to available application servers.

Will you deploy managed MongoDB instances on servers that have no internet access?
If you use Automation and if the servers where you will deploy MongoDB do not have internet access, then you must
configure Ops Manager to locally store and share the binaries used to deploy MongoDB so that the Automation agents
can download them directly from Ops Manager.
You must configure local mode and store the binaries before you create the first managed MongoDB deployment from
Ops Manager. For more information, see Configure Local Mode if Ops Manager has No Internet Access.
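As a sketch, local mode is enabled through settings in conf-mms.properties. The property names and directory below are assumptions here; confirm them against Configure Local Mode if Ops Manager has No Internet Access before relying on them:
# Assumed local-mode settings; verify against the local-mode tutorial.
automation.versions.source=local
automation.versions.directory=/opt/mongodb/mms/mongodb-releases/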
Will you use a proxy for the Ops Manager application’s outbound network connections?
If Ops Manager will use a proxy server to access external services, you must configure the proxy settings in Ops
Manager’s conf-mms.properties configuration file. If you have already started Ops Manager, you must restart
after configuring the proxy settings.


Security Decisions
Will you use authentication and/or SSL for the connections to the backing databases?
If you will use authentication or SSL for connections to the Ops Manager Application Database and Backup Database, you must configure those options on each database when deploying the database, and then you must configure Ops Manager with the necessary certificate information for accessing the databases. For details, see Configure the Connections to the Backing MongoDB Instances.
Will you use LDAP for user authentication to Ops Manager?
If you will use LDAP for user management, you must configure LDAP authentication before you register any Ops
Manager user or group. If you have already created an Ops Manager user or group, you must start from scratch with a
fresh Ops Manager install.
During the procedure to install Ops Manager, you are given the option to configure LDAP before creating users or
groups. For details on LDAP authentication, see Configure Users and Groups with LDAP for Ops Manager.
Will you use SSL (HTTPS) for connections to the Ops Manager application?
If you will use SSL for connections to Ops Manager from agents, users, and the API, then you must configure Ops
Manager to use SSL. The procedure to install Ops Manager includes the option to configure SSL access.
Backup Decisions
Will the servers that run your Backup Daemons have internet access?
If the servers that run your Backup Daemons have no internet access, you must configure offline binary access for the
Backup Daemon before running the Daemon. The install procedure includes the option to configure offline binary
access.
Are certain backups required to be in certain data centers?
If you need to assign backups of particular MongoDB deployments to particular data centers, then each data center requires its own Ops Manager Application, Backup Daemon, and Backup Agent. The separate Ops Manager Application
instances must share a single dedicated Ops Manager Application Database. The Backup Agent in each data center
must use the URL for its local Ops Manager Application, which you can configure through either different hostnames
or split-horizon DNS. For detailed requirements, see Configure Multiple Blockstores in Multiple Data Centers.

2.2 Example Installation Diagrams
On this page
• Overview
• Non-Durable, Test Install on a Single Server
• Durable Production Install


• Durable, Highly Available Install with Multiple Backup Databases

Overview
The following diagrams show example Ops Manager deployments.
Non-Durable, Test Install on a Single Server
For a test deployment, you can deploy all of the Ops Manager components to a single server, as described in Install a
Simple Test Ops Manager Installation. Ensure you configure the appropriate ulimits for the deployment.

The head databases are dynamically created and maintained by the Backup Daemon. They reside on the disk partition
specified in the conf-daemon.properties file.
Durable Production Install
The basic deployment provides durability in case of failure by keeping a redundant copy of the application data and snapshots. However, the basic deployment does not provide high availability and cannot accept writes to the backing databases in the event that a replica set member is lost. See Durable, Highly Available Install with Multiple Backup Databases for a deployment that can continue to accept writes with the loss of a member.
Server 1 must satisfy the combined hardware and software requirements for the Ops Manager Application hardware
and Ops Manager Application Database hardware.


Server 2 must satisfy the combined hardware and software requirements for the Backup Daemon hardware and Backup
Blockstore database hardware. The Backup Daemon automatically creates and maintains the head databases. These
databases reside on the disk partition specified in the conf-daemon.properties file. Do not place the head
databases on the same disk partition as the Backup Blockstore database, as this will reduce Backup’s performance.
Server 3 hosts replica set members for the Backup Blockstore and Ops Manager Application databases. Replica sets
provide data redundancy and are strongly recommended, but are not required for Ops Manager. Server 3 must satisfy
the combined hardware and software requirements for the Ops Manager Application database hardware and Backup
Blockstore database hardware.
For an example tutorial on installing the minimally viable Ops Manager installation on RHEL 6+ or Amazon Linux, see Install a Simple Test Ops Manager Installation.
Durable, Highly Available Install with Multiple Backup Databases
The following is a highly available deployment that you can scale out to add additional Backup Databases.
The deployment includes two servers that host the Ops Manager Application and the Ops Manager Application
Database, four servers that host two Backup deployments, and an additional server to host the arbiters for each replica
set.
Deploy an HTTP Load Balancer to balance the HTTP traffic for the Ops Manager HTTP Service and Backup service.
Ops Manager does not supply an HTTP Load Balancer: you must deploy and configure it yourself.
All of the software services need to be able to communicate with the Ops Manager Application databases, and the
Backup Blockstore databases. Configure your firewalls to allow traffic between these servers on the appropriate ports.
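For example, on a Linux database server protected with iptables, a rule of the following shape admits the other Ops Manager servers; the subnet and port range are illustrative, not requirements:
# Allow the Ops Manager servers (assumed here to share 10.0.0.0/24) to
# reach the backing MongoDB instances on the assumed ports 27017-27019.
sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 27017:27019 -j ACCEPT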
• Server 1 and Server 2 must satisfy the combined hardware and software requirements for the Ops Manager
Application hardware and Ops Manager Application Database hardware.
• Server 3, Server 4, Server 5, and Server 6 must satisfy the combined hardware and software requirements for
the Backup Daemon hardware and Backup Database hardware.
The Backup Daemon creates and maintains the head databases. They reside on the disk partition specified in the conf-daemon.properties file. Only the Backup Daemon needs to communicate with the head databases. As such, their net.bindIp value is 127.0.0.1 to prevent external communication. net.bindIp specifies the IP address that mongod and mongos listen on for incoming connections; a configuration sketch follows this list.


For best performance, each Backup server should have two partitions: one for the Backup Blockstore database and one for the head databases.
• Server 7 and Server 8 host secondaries for the Ops Manager Application database, and for the two Backup
Blockstore databases. They must satisfy the combined hardware and software requirements for the databases.
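For reference, this is how net.bindIp appears in a mongod configuration file. The Backup Daemon manages the head databases itself, so this is only an illustration of the setting; the port and paths are examples:
net:
   bindIp: 127.0.0.1            # accept connections from this host only
   port: 27500                  # example port
storage:
   dbPath: /data/heads          # example head-database partition
systemLog:
   destination: file
   path: /data/heads/mongod.log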
To deploy Ops Manager with high availability, see: Configure a Highly Available Ops Manager Application.

2.3 Ops Manager Hardware and Software Requirements
On this page
• Hardware Requirements
• EC2 Security Groups
• Software Requirements
This page describes the hardware and software requirements for the servers that run the Ops Manager Components,
including the servers that run the backing MongoDB replica sets.
The servers that run the Backup Daemon and the backing replica sets must also meet the configuration requirements
in the MongoDB Production Notes in addition to the requirements on this page. The Production Notes include information on ulimits, NUMA, Transparent Huge Pages (THP), and other configuration options.
Warning: Failure to configure servers according to the MongoDB Production Notes can lead to production failure.

This page also includes requirements for the EC2 security group used when installing on AWS servers.
Hardware Requirements
Each server must meet the sum of the requirements for all its components.
Ops Manager Monitoring and Automation require servers for the following components:
• Ops Manager Application.
• Ops Manager Application Database replica set members
Note: Usually the Ops Manager Application and one of the Application database’s replica set members run on
the same server.
If you run Backup, Ops Manager also requires servers for the following:
• Backup Daemon
• Backup Blockstore database replica set members
Note: The following requirements are specific to a given component. You must add together the requirements for the
components you will install. For example, the requirements for the Ops Manager Application do not cover the Ops
Manager Application database.


Ops Manager Application Hardware
The Ops Manager Application requires the hardware listed here.
• Up to 400 monitored hosts: 4+ CPU cores and 15 GB of RAM.
• Up to 2000 monitored hosts: 8+ CPU cores and 15 GB of RAM.
• More than 2000 monitored hosts: Contact your MongoDB Account Manager.

Ops Manager Application Database Hardware
The Ops Manager Application Database holds monitoring and other metadata for the Ops Manager Application.
The database runs as a three-member replica set. If you cannot allocate space for three data-bearing members, the third
member can be an arbiter, but keep in mind that Ops Manager uses w:2 write concern, which reports a write operation
as successful after acknowledgement from the primary and one secondary. If you use a replica set with fewer than 3
data-bearing members, and if you lose one of the data-bearing members, MongoDB blocks write operations, meaning
the Ops Manager Application Database has durability but not high availability.
Run the replica set on dedicated servers. You can optionally run one member of the replica set on the same physical
server as the Ops Manager Application.
For a test deployment, you can use a MongoDB standalone in place of a replica set.
Each server that hosts a MongoDB process for the Ops Manager Application database must comply with the Production Notes in the MongoDB manual. The Production Notes include important information on ulimits, NUMA, Transparent Huge Pages (THP), and other configuration options.
Warning: Failure to configure servers according to the MongoDB Production Notes can lead to production failure.

Each server also requires the following:
• Up to 400 monitored hosts: 8 GB of RAM beyond the RAM required for the Ops Manager Application, and 200 GB of storage space.
• Up to 2000 monitored hosts: 15 GB of RAM beyond the RAM required for the Ops Manager Application, and 500 GB of storage space.
• More than 2000 monitored hosts: Contact your MongoDB Account Manager.

For the best results use SSD-backed storage.
Ops Manager Backup Daemon Hardware
The Backup Daemon server must meet the requirements in the table below and also must meet the configuration
requirements in the MongoDB Production Notes. The Production Notes include information on ulimits, NUMA,
Transparent Huge Pages (THP), and other options.


Warning: Failure to configure servers according to the MongoDB Production Notes can lead to production failure.

If you wish to install the Backup Daemon on the same physical server as the Ops Manager Application, the server
must satisfy these requirements separately from the requirements in Ops Manager Application Hardware.
The server running the Backup Daemon acts like a hidden secondary for every replica set assigned to it, receiving the streamed oplog entries from each replica set's primary. However, the Backup Daemon differs from a hidden secondary in that the replica set is not aware of it.
The server must have the disk space and write capacity to maintain the replica sets plus the space to store an additional
copy of the data to support point-in-time restore. Typically, the Backup Daemon must be able to store 2 to 2.5 times the
sum of the size on disk of all the backed-up replica sets, as it also needs space locally to build point-in-time restores.
Before installing the Backup Daemon, we recommend contacting your MongoDB Account Manager for assistance in
estimating the storage requirements for your Backup Daemon server.
For up to 200 hosts:
• CPU Cores: 4+ cores at 2 GHz+.
• RAM: 15 GB additional RAM.
• Disk Space: Contact your MongoDB Account Manager.
• Storage IOPS: Contact your MongoDB Account Manager.

Ops Manager Backup Blockstore Database Hardware
Blockstore servers store snapshots of MongoDB deployments. Only provision Blockstore servers if you are deploying
Ops Manager Backup.
Replica Set for the Blockstore Database
Backup requires a separate, dedicated MongoDB replica set to hold snapshot data. This cannot be a replica set used
for any purpose other than holding the snapshots.
For durability, the replica set must have at least two data-bearing members. For high availability the replica set must
have at least three data-bearing members.
Note: Ops Manager uses w:2 write concern, which reports a write operation as successful after acknowledgement
from the primary and one secondary. If you use a replica set with two data-bearing members and an arbiter, and you
lose one of the data-bearing members, write operations will be blocked.
For testing only you may use a standalone MongoDB deployment in place of a replica set.
Server Size for the Blockstore Database
Snapshots are compressed and de-duplicated at the block level in the Blockstore database. Typically, depending on
data compressibility and change rate, the replica set must run on servers with enough capacity to store 2 to 3 times the
total backed-up production data size.
Contact your MongoDB Account Manager for assistance in estimating the use-case and workload-dependent storage
requirements for your Blockstore servers.


Configuration Requirements from the MongoDB Production Notes
Each server that hosts a MongoDB process for the Blockstore database must comply with the Production Notes in the
MongoDB manual. The Production Notes include important information on ulimits, NUMA, Transparent Huge Pages
(THP), and other configuration options.
Warning: Failure to configure servers according to the MongoDB Production Notes can lead to production failure.

Other Requirements for the Blockstore Database
For each data-bearing member of the replica set:
• CPU Cores: 4 x 2 GHz+.
• RAM: 8 GB of RAM for every 1 TB disk of Blockstore to provide good snapshot and restore speed. Ops Manager defines 1 TB of Blockstore as 1024^4 bytes.
• Disk Space: Contact your MongoDB Account Manager.
• Storage IOPS: Medium grade HDDs should have enough I/O throughput to handle the load of the Blockstore.
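As a worked example of the RAM rule above, a data-bearing member hosting 3 TB of Blockstore data would need roughly:
3 TB x 8 GB of RAM per TB = 24 GB of RAM
for the Blockstore alone, in addition to the RAM required by any other components on the same server.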

EC2 Security Groups
If you install on AWS servers, you must have at least one EC2 security group configured with the following inbound
rules:
• An SSH rule on the ssh port, usually port 22, that allows traffic from all IPs. This is to provide administrative
access.
• A custom TCP rule that allows connection on ports 8080 and 8081 on the server that runs the Ops Manager
Application. This lets users connect to Ops Manager.
• A custom TCP rule that allows traffic on all MongoDB ports from any member of the security group. This
allows communication between the various Ops Manager components. MongoDB usually uses ports between
27000 and 28000.
Software Requirements
Operating System
The Ops Manager Application and Backup Daemon(s) can run on 64-bit versions of the following operating systems:
• CentOS 5 or later
• Red Hat Enterprise Linux 5 or later
• SUSE 11 or later
• Amazon Linux AMI (latest version only)
• Ubuntu 12.04 or later
• Windows Server 2008 R2 or later


Warning: Ops Manager supports Monitoring and Backup on Windows but does not support Automation
on Windows.

Ulimits
The Ops Manager packages automatically raise the open file, max user processes, and virtual memory ulimits. On Red
Hat, be sure to check for a /etc/security/limits.d/90-nproc.conf file that may override the max user
processes limit. If the /etc/security/limits.d/90-nproc.conf file exists, remove it before continuing.
See MongoDB ulimit Settings for recommended ulimit settings.
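To verify the effective limits for the Ops Manager user, you can inspect them from a shell. The limits.conf values shown in the comments are common MongoDB recommendations, not Ops Manager-specific minimums:
# Show the current open-file and max-user-process limits.
ulimit -n
ulimit -u

# Typical /etc/security/limits.conf entries raising them for the
# mongodb-mms user (illustrative values):
#   mongodb-mms  soft  nofile  64000
#   mongodb-mms  hard  nofile  64000
#   mongodb-mms  soft  nproc   64000
#   mongodb-mms  hard  nproc   64000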
Warning: Always refer to the MongoDB Production Notes to ensure healthy server configurations.

Authentication
If you are using LDAP for user authentication to Ops Manager (as described in Configure Users and Groups with LDAP for Ops Manager), you must enable LDAP before performing any Ops Manager setup beyond starting the service.
Important: You cannot enable LDAP once you have opened the Ops Manager user interface and registered the first
user. You can enable LDAP only on a completely blank, no-hosts, no-users installation.

MongoDB
The Ops Manager Application Database and Backup Blockstore Database must run on MongoDB 2.4.9 or later.
Note: Ops Manager 1.8.0, when released, will not support MongoDB 2.4 for the Ops Manager Application database
and Backup Blockstore database.
Your backed-up sharded cluster deployments must run MongoDB 2.4.3 or later. Your backed-up replica set deployments must run MongoDB 2.2 or later.
Web Browsers
Ops Manager supports clients using the following browsers:
• Chrome 8 and greater
• Firefox 12 and greater
• IE 9 and greater
• Safari 6 and greater
The Ops Manager Application will display a warning on non-supported browsers.


SMTP
Ops Manager requires email for fundamental server functionality such as password reset and alerts.
Many Linux server-oriented distributions include a local SMTP server by default, for example, Postfix, Exim, or
Sendmail. You also may configure Ops Manager to send mail via third party providers, including Gmail and Sendgrid.
SNMP
If your environment includes SNMP, you can configure an SNMP trap receiver with periodic heartbeat traps to monitor the internal health of Ops Manager. Ops Manager uses SNMP v2c.
For more information, see Configure SNMP Heartbeat Support.

2.4 Deploy Backing MongoDB Replica Sets
On this page
• Overview
• Replica Sets Requirements
• Server Prerequisites
• Procedures

Overview
Ops Manager uses two dedicated replica sets to store operational data: one replica set to store the Ops Manager Application Database and another to store the Backup Blockstore Database. If you use multiple application or blockstore
databases, set up separate replica sets for each database.
The backing MongoDB replica sets are dedicated to operational data and must store no other data.
Replica Sets Requirements
Each replica set that hosts the Ops Manager Application Database or a Backup Database:
• Stores data in support of Ops Manager operations only and stores no other data.
• Must run on a server that meets the server prerequisites below.
• Must run MongoDB 2.4.9 or later.
• Must use the MMAPv1 storage engine.
• Must have the MongoDB security.javascriptEnabled setting set to true, which is the default. The Ops Manager Application uses the $where query and requires this setting be enabled on the Ops Manager Application database.
• Must not run with the MongoDB notablescan option.


Server Prerequisites
The servers that run the replica sets must meet the following requirements.
• The hardware requirements described in Ops Manager Application Database Hardware or Ops Manager Backup
Blockstore Database Hardware, depending on which database the server will host. If a server hosts other Ops
Manager components in addition to the database, you must sum the hardware requirements for each component
to determine the requirements for the server.
• The system requirements in the MongoDB Production Notes. The Production Notes include important information on ulimits, NUMA, Transparent Huge Pages (THP), and other configuration options.
• The MongoDB ports requirements described in Firewall Configuration. Each server’s firewall rules must allow
access to the required ports.
• RHEL servers only: if the /etc/security/limits.d directory contains the 90-nproc.conf file,
remove the file. The file overrides limits.conf, decreasing ulimit settings. Issue the following command
to remove the file:
sudo rm /etc/security/limits.d/90-nproc.conf

Procedures
Install MongoDB on Each Server

Warning: Failure to configure servers according to the MongoDB Production Notes can lead to production failure.

Use servers that meet the above prerequisites.
The following procedure assumes that you are installing MongoDB on a server running Red Hat Enterprise Linux
(RHEL). If you are installing MongoDB on another Linux operating system, or you prefer to use cURL rather than
yum, refer to the installation guides in the MongoDB manual.
Step 1: Create a repository file on each server by issuing the following command:
echo "[mongodb-org-3.0]
name=MongoDB Repository
baseurl=http://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1" | sudo tee -a /etc/yum.repos.d/mongodb-org-3.0.repo

Step 2: Install MongoDB on each server by issuing the following command:
sudo yum install -y mongodb-org mongodb-org-shell

Deploy a Backing Replica Set
This procedure deploys a three-member replica set dedicated to store either the Ops Manager Application database or
Backup Blockstore database. Deploy the replica set to three servers that meet the requirements for the database.

Repeat this procedure for each backing replica set that your installation requires.
Step 1: Create a data directory on each server.
Create a data directory on each server and set mongod as the directory’s owner. For example:
sudo mkdir -p /data
sudo chown mongod:mongod /data

Step 2: Start each MongoDB process.
For each replica set member, start the mongod process and specify mongod as the user. Start each process on its own
dedicated port and point to the data directory you created in the previous step. Specify the same replica set for all three
members.
The following command starts a MongoDB process as part of a replica set named operations and specifies the
/data directory.
sudo -u mongod mongod --port 27017 --dbpath /data --replSet operations --logpath /data/mongodb.log --fork

Step 3: Connect to the MongoDB process on the server that will initially host the database’s primary.
For example, the following command connects on port 27017 to a MongoDB process running on mongodb1.example.net:
mongo --host mongodb1.example.net --port 27017

When you connect, the MongoDB shell displays its version as well as the database to which you are connected.
Step 4: Initiate the replica set.
To initiate the replica set, issue the following command:
rs.initiate()

MongoDB returns 1 upon successful completion, and creates a replica set with the current mongod as the initial
member.
The server on which you initiate the replica set becomes the initial primary. The mongo shell displays the replica set
member’s state in the prompt.
MongoDB supports automatic failover, so this mongod may not always be the primary.
Step 5: Add the remaining members to the replica set.
In a mongo shell connected to the primary, use the rs.add() method to add the other two replica set members. For example, the following adds the mongod instances running on mongodb2.example.net:27017 and mongodb3.example.net:27017 to the replica set:


rs.add('mongodb2.example.net:27017')
rs.add('mongodb3.example.net:27017')

Step 6: Verify the replica set configuration.
To verify that the configuration includes the three members, issue rs.conf():
rs.conf()

The method returns output similar to the following:
{
    "_id" : "operations",
    "version" : 3,
    "members" : [
        {
            "_id" : 0,
            "host" : "mongodb1.example.net:27017"
        },
        {
            "_id" : 1,
            "host" : "mongodb2.example.net:27017"
        },
        {
            "_id" : 2,
            "host" : "mongodb3.example.net:27017"
        }
    ]
}

Optionally, run rs.status() to check the state of each replica set member.
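For example, the following one-liner (host name illustrative) prints each member's name and state; a healthy set shows one PRIMARY and two SECONDARY members:
mongo --host mongodb1.example.net --port 27017 --eval '
    rs.status().members.forEach(function(m) {
        print(m.name + " : " + m.stateStr);
    })'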

The downloaded package is named mongodb-mms-<version>.x86_64.deb, where <version> is the version number.
Step 2: Install the Ops Manager Application package.
Install the .deb package by issuing the following command, where <version> is the version of the .deb package:
sudo dpkg --install mongodb-mms_<version>_x86_64.deb

When installed, the base directory for the Ops Manager software is /opt/mongodb/mms/. The .deb package
creates a new system user mongodb-mms under which the server will run.
Step 3: Configure Monitoring.
Open /opt/mongodb/mms/conf/conf-mms.properties with root privileges and set values for the settings described in this step. For detailed information on each setting, see the Ops Manager Configuration Files page.
Set mms.centralUrl and mms.backupCentralUrl as follows, where <fqdn> is the fully qualified domain name of the server running the Ops Manager Application.
mms.centralUrl=http://<fqdn>:8080
mms.backupCentralUrl=http://<fqdn>:8081

Set the following Email Address Settings as appropriate. Each may be the same or different.


mms.fromEmailAddr=<email address>
mms.replyToEmailAddr=<email address>
mms.adminFromEmailAddr=<email address>
mms.adminEmailAddr=<email address>
mms.bounceEmailAddr=<email address>

Set mongo.mongoUri to the servers and ports hosting the Ops Manager Application database. For example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

If you use HTTPS to encrypt user connections to Ops Manager, set mms.https.PEMKeyFile to a PEM file
containing an X509 certificate and private key, and set mms.https.PEMKeyFilePassword to the password for
the certificate. For example:
mms.https.PEMKeyFile=<path to PEM file>
mms.https.PEMKeyFilePassword=<PEM file password>
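If your certificate and private key live in separate files, one way to assemble the PEM file is to concatenate them; all file names below are illustrative:
# Combine the X509 certificate and private key into a single PEM file
# readable only by the mongodb-mms user.
cat /etc/ssl/mms/opsmanager.crt /etc/ssl/mms/opsmanager.key | \
    sudo tee /etc/ssl/mms/opsmanager.pem > /dev/null
sudo chown mongodb-mms:mongodb-mms /etc/ssl/mms/opsmanager.pem
sudo chmod 600 /etc/ssl/mms/opsmanager.pem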

To configure authentication, email, and other optional settings, see Ops Manager Configuration Files. To run the Ops
Manager application in a highly available configuration, see Configure a Highly Available Ops Manager Application.
Step 4: Start the Ops Manager Application.
Issue the following command:
sudo service mongodb-mms start

Step 5: Open the Ops Manager home page and register the first user.
To open the home page, enter the following URL in a browser, where <fqdn> is the fully qualified domain name of the server:
http://<fqdn>:8080

Click the Register link and follow the prompts to register the first user and create the first group. The first user is
automatically assigned the Global Owner role. When you finish, you are logged into the Ops Manager Application as
the new user. For more information on creating and managing users, see Manage Ops Manager Users.
Step 6: At the Welcome page, follow the prompts to set up Automation (which includes Monitoring)
or just Monitoring alone.
Automation lets you deploy and configure MongoDB processes from Ops Manager as well as monitor and manage
them. If you select Automation, Ops Manager prompts you to download the Automation Agent and Monitoring Agent.
Monitoring lets you manage a MongoDB deployment from Ops Manager. If you select Monitoring, Ops Manager
prompts you to download only the Monitoring Agent.
Install the Backup Daemon (Optional)
If you use Backup, install the Backup Daemon.


Step 1: Download the Backup Daemon package.
1. In a browser, go to http://www.mongodb.com/download.
2. Submit the subscription form.
3. On the MongoDB Enterprise Downloads page, go to the MongoDB Ops Manager section and click the here
link.
4. On the Ops Manager Download page, acknowledge the recommendation to contact MongoDB for production
installs.
5. On the MongoDB Ops Manager Downloads page, copy the link address of the “Backup” DEB link.
6. Open a system prompt.
7. Download the Backup Daemon package by issuing a curl command that uses the copied link address:
curl -OL <link address>

The downloaded package is named mongodb-mms-backup-daemon-<version>.x86_64.deb, where <version> is replaced by the version number.
Step 2: Install the Backup Daemon package.
Issue dpkg --install with root privileges and specify the name of the downloaded package:
sudo dpkg --install <package name>

When installed, the base directory for the Ops Manager software is /opt/mongodb/mms/. The .deb package
creates a new system user mongodb-mms under which the server will run.
Step 3: Point the Backup Daemon to the Ops Manager Application database.
Open the /opt/mongodb/mms-backup-daemon/conf/conf-daemon.properties file with root privileges and set the mongo.mongoUri value to the servers and ports hosting the Ops Manager Application database.
For example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

Step 4: Synchronize the gen.key file
Synchronize the /etc/mongodb-mms/gen.key file from an Ops Manager Application server. This is only required if the Backup Daemon was installed on a different server than the Ops Manager Application.
Step 5: Start the back-end software package.
To start the Backup Daemon run:
sudo service mongodb-mms-backup-daemon start

If everything worked, the following displays:
Start Backup Daemon                                    [  OK  ]

If you run into any problems, the log files are at /opt/mongodb/mms-backup-daemon/logs.
Step 6: Open Ops Manager and access the Backup configuration page.
Open the Ops Manager home page and log in as the user you first registered when installing the Ops Manager Application. (This user is the global owner.) Then click the Admin link at the top right of the page. Then click the Backup
tab.
Step 7: Enter configuration information for the Backup database.
Enter the configuration information described below, and then click Save. Ops Manager uses this information to create
the connection string URI used to connect to the database.
<hostname>:<port>: Enter a comma-separated list of the fully qualified domain names and port numbers for all replica set members for the Backup database.
MongoDB Auth Username and MongoDB Auth Password: Enter the user credentials if the database uses authentication.
Encrypted Credentials: Check this if the user credentials use the Ops Manager credentialstool. For more information, see Encrypt MongoDB User Credentials.
Use SSL: Check this if the MongoDB database uses SSL. If you select this, you must configure SSL settings for both
the Ops Manager Application and Backup Daemon. See Ops Manager Configuration Files.
Connection Options: To add additional connection options, enter them using the MongoDB Connection String URI
Format.
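For example, options in that field take the standard query-string form of the MongoDB Connection String URI Format; the option values here are illustrative:
replicaSet=backupdb&connectTimeoutMS=30000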
Install Ops Manager with rpm Packages

On this page
• Overview
• Prerequisites
• Install Procedures

Overview
To install Ops Manager you install the Ops Manager Application and the optional Backup Daemon. This tutorial
describes how to install both using rpm packages. The Ops Manager Application monitors MongoDB deployments,
and the Backup Daemon creates and stores deployment snapshots.
If you are instead upgrading an existing deployment, please see Upgrade Ops Manager.


Warning: To use Ops Manager 1.6 Automation to manage MongoDB Enterprise deployments, you must first
install the MongoDB Enterprise dependencies to each server that runs MongoDB Enterprise. You must install the
dependencies to the servers before using Automation.
Note that Automation does not yet support use of the MongoDB Enterprise advanced security features (SSL,
LDAP, Kerberos, and auditing). Automation will support these features in the next major Ops Manager release.
To run MongoDB Enterprise with advanced security now, deploy MongoDB Enterprise manually (not through
Automation) and use Ops Manager only for Monitoring and Backup.

Prerequisites
Deploy Servers
Prior to installation, you must set up servers for the entire Ops Manager deployment, including the Ops Manager
Application, the optional Backup Daemon, and the backing replica sets. For deployment diagrams, see Example
Installation Diagrams.
Deploy servers that meet the hardware requirements described in Ops Manager Hardware and Software Requirements.
Servers for the Backup Daemon and the backing replica sets must also comply with the Production Notes in the
MongoDB manual. Configure as many servers as needed for your deployment.
Warning: Failure to configure servers according to the MongoDB Production Notes can lead to production failure.

Deploy MongoDB
Install MongoDB on the servers that will store the Ops Manager Application Database and Backup Blockstore
Database. The Backup Blockstore database is required only if you run the Backup Daemon. The databases require
dedicated MongoDB instances. Do not use MongoDB installations that store other data.
Install separate MongoDB instances for the two databases and install the instances as replica sets. Ensure that firewall rules on the servers allow access to the ports that the instances run on.
The Ops Manager Application and Backup Daemon must authenticate to their databases as a MongoDB user with appropriate access. The user must have the following roles (see the example after this list):
• readWriteAnyDatabase
• dbAdminAnyDatabase
• clusterAdmin if the database is a sharded cluster, otherwise clusterMonitor
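A sketch of creating such a user in the mongo shell follows; the user name and password are placeholders, and createUser requires MongoDB 2.6 or later (2.4 deployments use db.addUser instead):
mongo --host mongodb1.example.net --port 27017 --eval '
    db.getSiblingDB("admin").createUser({
        user: "mms-user",   /* placeholder user name */
        pwd: "changeMe",    /* placeholder password */
        roles: [ "readWriteAnyDatabase",
                 "dbAdminAnyDatabase",
                 "clusterMonitor" ]  /* clusterAdmin for sharded clusters */
    })'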
Install Procedures
You must have administrative access on the machines to which you install.


Install and Start the Ops Manager Application
Step 1: Download the latest version of the Ops Manager Application package.
1. In a browser, go to http://www.mongodb.com/download.
2. Submit the subscription form.
3. On the MongoDB Enterprise Downloads page, go to the MongoDB Ops Manager section and click the here
link.
4. On the Ops Manager Download page, acknowledge the recommendation for production installs.
5. On the MongoDB Ops Manager Downloads page, copy the link address of the “Monitoring, Automation and
Core” RPM link.
6. Open a system prompt.
7. Download the Ops Manager Application package by issuing a curl command that uses the copied link address:
curl -OL <link address>

The downloaded package is named mongodb-mms-<version>.x86_64.rpm, where <version> is the version number.
Step 2: Install the Ops Manager Application package.
Install the .rpm package by issuing the following command, where <version> is the version of the .rpm package:
sudo rpm -ivh mongodb-mms-<version>.x86_64.rpm

When installed, the base directory for the Ops Manager software is /opt/mongodb/mms/. The RPM package
creates a new system user mongodb-mms under which the server runs.
Step 3: Configure Monitoring.
Open /opt/mongodb/mms/conf/conf-mms.properties with root privileges and set values for the settings described in this step. For detailed information on each setting, see the Ops Manager Configuration Files page.
Set mms.centralUrl and mms.backupCentralUrl as follows, where <fqdn> is the fully qualified domain name of the server running the Ops Manager Application.
mms.centralUrl=http://<fqdn>:8080
mms.backupCentralUrl=http://<fqdn>:8081

Set the following Email Address Settings as appropriate. Each may be the same or different.
mms.fromEmailAddr=<email address>
mms.replyToEmailAddr=<email address>
mms.adminFromEmailAddr=<email address>
mms.adminEmailAddr=<email address>
mms.bounceEmailAddr=<email address>

Set mongo.mongoUri to the servers and ports hosting the Ops Manager Application database. For example:


mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

If you use HTTPS to encrypt user connections to Ops Manager, set mms.https.PEMKeyFile to a PEM file
containing an X509 certificate and private key, and set mms.https.PEMKeyFilePassword to the password for
the certificate. For example:
mms.https.PEMKeyFile=<path to PEM file>
mms.https.PEMKeyFilePassword=<PEM file password>

To configure authentication, email, and other optional settings, see Ops Manager Configuration Files. To run the Ops
Manager application in a highly available configuration, see Configure a Highly Available Ops Manager Application.
Step 4: Start the Ops Manager Application.
Issue the following command:
sudo service mongodb-mms start

Step 5: Open the Ops Manager home page and register the first user.
To open the home page, enter the following URL in a browser, where <fqdn> is the fully qualified domain name of the server:
http://<fqdn>:8080

Click the Register link and follow the prompts to register the first user and create the first group. The first user is
automatically assigned the Global Owner role. When you finish, you are logged into the Ops Manager Application as
the new user. For more information on creating and managing users, see Manage Ops Manager Users.
Step 6: At the Welcome page, follow the prompts to set up Automation (which includes Monitoring)
or just Monitoring alone.
Automation lets you deploy and configure MongoDB processes from Ops Manager as well as monitor and manage
them. If you select Automation, Ops Manager prompts you to download the Automation Agent and Monitoring Agent.
Monitoring lets you manage a MongoDB deployment from Ops Manager. If you select Monitoring, Ops Manager
prompts you to download only the Monitoring Agent.
Install the Backup Daemon (Optional)
If you use Backup, install the Backup Daemon.
Step 1: Download the Backup Daemon package.
To download the Backup Daemon package:
1. In a browser, go to http://www.mongodb.com/download.
2. Submit the subscription form.


3. On the MongoDB Enterprise Downloads page, go to the MongoDB Ops Manager section and click the here
link.
4. On the Ops Manager Download page, acknowledge the recommendation to contact MongoDB for production
installs.
5. On the MongoDB Ops Manager Downloads page, copy the link address of the “Backup” RPM link.
6. Open a system prompt.
7. Download the Backup Daemon package by issuing a curl command that uses the copied link address:
curl -OL <link address>

The downloaded package is named mongodb-mms-backup-daemon-<version>.x86_64.rpm, where <version> is replaced by the version number.
Step 2: Install the Backup Daemon package.
Issue rpm --install with root privileges and specify the name of the downloaded package:
sudo rpm --install <package name>

The software is installed to /opt/mongodb/mms-backup-daemon.
Step 3: Point the Backup Daemon to the Ops Manager Application database.
Open the /opt/mongodb/mms-backup-daemon/conf/conf-daemon.properties file with root privileges and set the mongo.mongoUri value to the servers and ports hosting the Ops Manager Application database.
For example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

Step 4: Synchronize the gen.key file
Synchronize the /etc/mongodb-mms/gen.key file from an Ops Manager Application server. This is only required if the Backup Daemon was installed on a different server than the Ops Manager Application.
Step 5: Start the back-end software package.
To start the Backup Daemon run:
sudo service mongodb-mms-backup-daemon start

If everything worked, the following displays:
Start Backup Daemon                                    [  OK  ]

If you run into any problems, the log files are at /opt/mongodb/mms-backup-daemon/logs.


Step 6: Open Ops Manager and access the Backup configuration page.
Open the Ops Manager home page and log in as the user you first registered when installing the Ops Manager Application. (This user is the global owner.) Then click the Admin link at the top right of the page. Then click the Backup
tab.
Step 7: Enter configuration information for the Backup database.
Enter the configuration information described below, and then click Save. Ops Manager uses this information to create
the connection string URI used to connect to the database.
<hostname>:<port>: Enter a comma-separated list of the fully qualified domain names and port numbers for all replica set members for the Backup database.
MongoDB Auth Username and MongoDB Auth Password: Enter the user credentials if the database uses authentication.
Encrypted Credentials: Check this if the user credentials use the Ops Manager credentialstool. For more
information, see Encrypt MongoDB User Credentials.
Use SSL: Check this if the MongoDB database uses SSL. If you select this, you must configure SSL settings for both
the Ops Manager Application and Backup Daemon. See Ops Manager Configuration Files.
Connection Options: To add additional connection options, enter them using the MongoDB Connection String URI
Format.
Install Ops Manager from tar.gz or zip Archives

On this page
• Overview
• Prerequisites
• Install Procedures

Overview
To install Ops Manager you install the Ops Manager Application and the optional Backup Daemon. This tutorial
describes how to install both using tar.gz or zip packages. The tutorial installs to a Linux OS. The Ops Manager
Application monitors MongoDB deployments, and the Backup Daemon creates and stores deployment snapshots.
If you are instead upgrading an existing deployment, please see Upgrade Ops Manager.
Warning: To use Ops Manager 1.6 Automation to manage MongoDB Enterprise deployments, you must first
install the MongoDB Enterprise dependencies to each server that runs MongoDB Enterprise. You must install the
dependencies to the servers before using Automation.
Note that Automation does not yet support use of the MongoDB Enterprise advanced security features (SSL,
LDAP, Kerberos, and auditing). Automation will support these features in the next major Ops Manager release.
To run MongoDB Enterprise with advanced security now, deploy MongoDB Enterprise manually (not through
Automation) and use Ops Manager only for Monitoring and Backup.


Prerequisites
Deploy Servers
Prior to installation, you must set up servers for the entire Ops Manager deployment, including the Ops Manager
Application, the optional Backup Daemon, and the backing replica sets. For deployment diagrams, see Example
Installation Diagrams.
Deploy servers that meet the hardware requirements described in Ops Manager Hardware and Software Requirements.
Servers for the Backup Daemon and the backing replica sets must also comply with the Production Notes in the
MongoDB manual. Configure as many servers as needed for your deployment.
Warning: Failure to configure servers according to the MongoDB Production Notes can lead to production failure.

Deploy MongoDB
Install MongoDB on the servers that will store the Ops Manager Application Database and Backup Blockstore
Database. The Backup Blockstore database is required only if you run the Backup Daemon. The databases require
dedicated MongoDB instances. Do not use MongoDB installations that store other data.
Install separate MongoDB instances for the two databases and install the instances as replica sets. Ensure that firewall rules on the servers allow access to the ports that the instances run on.
The Ops Manager Application and Backup Daemon must authenticate to their databases as a MongoDB user with
appropriate access. The user must have the following roles:
• readWriteAnyDatabase
• dbAdminAnyDatabase
• clusterAdmin if the database is a sharded cluster, otherwise clusterMonitor
Install Procedures
You must have administrative access on the machines to which you install.
Install and Start the Ops Manager Application
Step 1: Download the Ops Manager Application package.
1. In a browser, go to http://www.mongodb.com/download.
2. Submit the subscription form.
3. On the MongoDB Enterprise Downloads page, go to the MongoDB Ops Manager section and click the here
link.
4. On the Ops Manager Download page, acknowledge the recommendation to contact MongoDB for production
installs.
5. On the MongoDB Ops Manager Downloads page, copy the link address of the “Monitoring, Automation and
Core” TAR.GZ link.


6. Open a system prompt.
7. Download the Ops Manager Application package by issuing a curl command that uses the copied link address:
curl -OL <link address>

The downloaded package is named mongodb-mms-<version>.x86_64.tar.gz, where <version> is the version number.
Step 2: Extract the Ops Manager Application package.
Navigate to the directory to which to install the Ops Manager Application. Extract the archive to that directory:
tar -zxf mongodb-mms-<version>.x86_64.tar.gz

When complete, Ops Manager is installed.
Step 3: Configure Monitoring.
Open <install-directory>/conf/conf-mms.properties with root privileges and set values for the settings described in this step. For detailed information on each setting, see the Ops Manager Configuration Files page.
Set mms.centralUrl and mms.backupCentralUrl as follows, where <fqdn> is the fully qualified domain
name of the server running the Ops Manager Application.
mms.centralUrl=http://<fqdn>:8080
mms.backupCentralUrl=http://<fqdn>:8081

Set the following Email Address Settings as appropriate. Each may be the same or different.
mms.fromEmailAddr=<email-address>
mms.replyToEmailAddr=<email-address>
mms.adminFromEmailAddr=<email-address>
mms.adminEmailAddr=<email-address>
mms.bounceEmailAddr=<email-address>

Set mongo.mongoUri to the servers and ports hosting the Ops Manager Application database. For example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

If you use HTTPS to encrypt user connections to Ops Manager, set mms.https.PEMKeyFile to a PEM file
containing an X509 certificate and private key, and set mms.https.PEMKeyFilePassword to the password for
the certificate. For example:
mms.https.PEMKeyFile=<path-to-pem-file>
mms.https.PEMKeyFilePassword=<password>

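If you do not yet have a certificate, you can generate a self-signed certificate and key for testing with the standard openssl tool. A sketch; the file names and subject are illustrative, and a self-signed certificate is not suitable for production:
# Generate a self-signed certificate and key (testing only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout mms-key.pem -out mms-cert.pem -subj "/CN=opsmanager.example.net"
# Ops Manager expects the certificate and private key in a single PEM file
cat mms-cert.pem mms-key.pem > mms.pem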
To configure authentication, email, and other optional settings, see Ops Manager Configuration Files. To run the Ops
Manager application in a highly available configuration, see Configure a Highly Available Ops Manager Application.
Step 4: Start the Ops Manager Application.
To start Ops Manager, issue the following command:


<install-directory>/bin/mongodb-mms start

Step 5: Open the Ops Manager home page and register the first user.
To open the home page, enter the following URL in a browser, where <fqdn> is the fully qualified domain name of
the server:
http://<fqdn>:8080

Click the Register link and follow the prompts to register the first user and create the first group. The first user is
automatically assigned the Global Owner role. When you finish, you are logged into the Ops Manager Application as
the new user. For more information on creating and managing users, see Manage Ops Manager Users.
Step 6: At the Welcome page, follow the prompts to set up Automation (which includes Monitoring)
or just Monitoring alone.
Automation lets you deploy and configure MongoDB processes from Ops Manager as well as monitor and manage
them. If you select Automation, Ops Manager prompts you to download the Automation Agent and Monitoring Agent.
Monitoring lets you manage a MongoDB deployment from Ops Manager. If you select Monitoring, Ops Manager
prompts you to download only the Monitoring Agent.
Install the Backup Daemon (Optional)
If you use Backup, install the Backup Daemon.
Step 1: Download the Backup Daemon package.
1. In a browser, go to http://www.mongodb.com/download.
2. Submit the subscription form.
3. On the MongoDB Enterprise Downloads page, go to the MongoDB Ops Manager section and click the here
link.
4. On the Ops Manager Download page, acknowledge the recommendation to contact MongoDB for production
installs.
5. On the MongoDB Ops Manager Downloads page, copy the link address of the “Backup” TAR.GZ link.
6. Open a system prompt.
7. Download the Backup Daemon package by issuing a curl command that uses the copied link address:
curl -OL <download-link>

The downloaded package is named mongodb-mms-backup-daemon-<version>.x86_64.tar.gz,
where <version> is replaced by the version number.


Step 2: To install the Backup Daemon, extract the downloaded archive file.
tar -zxf mongodb-mms-backup-daemon-<version>.x86_64.tar.gz

Step 3: Point the Backup Daemon to the Ops Manager Application database.
Open the <install-directory>/conf/conf-daemon.properties file and set the mongo.mongoUri value to
the servers and ports hosting the Ops Manager Application database. For example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

Additionally, ensure that the file system that holds the rootDirectory has sufficient space to accommodate the
current snapshots of all backed up instances.
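For example, you might check the free space with df; the path shown is hypothetical and should be whatever the rootDirectory setting in conf-daemon.properties points to:
# Confirm free space on the file system holding the Backup Daemon's rootDirectory
df -h /data/backup-daemon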
Step 4: Synchronize the mms-gen-key file.
Synchronize the <install-directory>/bin/mms-gen-key file from the Ops Manager Application server. This is
required only if the Backup Daemon is installed on a different server than the Ops Manager Application.
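For example, you might pull the file from the application server with scp; the host name and user are illustrative:
# Run on the Backup Daemon server; copy mms-gen-key from the application server
scp admin@opsmanager.example.net:<install-directory>/bin/mms-gen-key <install-directory>/bin/mms-gen-key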
Step 5: Start the Backup Daemon.
To start the Backup Daemon run:
<install-directory>/bin/mongodb-mms-backup-daemon start

If you run into any problems, the log files are at <install-directory>/logs.
Step 6: Open Ops Manager and access the Backup configuration page.
Open the Ops Manager home page and log in as the user you first registered when installing the Ops Manager Application. (This user is the global owner.) Then click the Admin link at the top right of the page. Then click the Backup
tab.
Step 7: Enter configuration information for the Backup database.
Enter the configuration information described below, and then click Save. Ops Manager uses this information to create
the connection string URI used to connect to the database.
<hostname>:<port>: Enter a comma-separated list of the fully qualified domain names and port numbers for all
replica set members for the Backup database.
MongoDB Auth Username and MongoDB Auth Password: Enter the user credentials if the database uses authentication.
Encrypted Credentials: Check this if the user credentials use the Ops Manager credentialstool. For more
information, see Encrypt MongoDB User Credentials.
Use SSL: Check this if the MongoDB database uses SSL. If you select this, you must configure SSL settings for both
the Ops Manager Application and Backup Daemon. See Ops Manager Configuration Files.


Connection Options: To add additional connection options, enter them using the MongoDB Connection String URI
Format.
Install Ops Manager on Windows

On this page
• Overview
• Prerequisites
• Procedures
• Next Step

Overview
This tutorial describes how to install the Ops Manager Application, which monitors MongoDB deployments, and the
optional Backup Daemon, which creates and stores deployment snapshots. This tutorial installs to Windows servers.
Ops Manager supports Monitoring and Backup on Windows but does not support Automation on Windows.
Prerequisites
Prior to installation you must:
• Configure Windows servers that meet the hardware and software requirements. Configure as many servers as
needed for your deployment. For deployment diagrams, see Example Installation Diagrams.
• Deploy the dedicated MongoDB instances that store the Ops Manager Application Database and Backup Blockstore Database. Do not use MongoDB instances that store other data. Ensure that firewall rules allow access to
the ports the instances run on. See Deploy Backing MongoDB Replica Sets.
• Optionally install an SMTP email server.
Procedures
You must have administrative access on the machines to which you install.
Install and Start the Ops Manager Application
Step 1: Download Monitoring.
1. In a browser, go to http://www.mongodb.com/download.
2. Submit the subscription form.
3. On the MongoDB Enterprise Downloads page, go to the MongoDB Ops Manager section and click the here
link.
4. On the Ops Manager Download page, acknowledge the recommendation to contact MongoDB for production
installs.

5. On the MongoDB Ops Manager Downloads page, click the “Monitoring and Core” MSI link.
Step 2: Install the Ops Manager Application.
Right-click on the mongodb-mms-<version>.msi file and select Install. Follow the instructions in the Setup
Wizard.
During setup, the Configuration/Log Folder screen prompts you to specify a folder for configuration and log files. The
installation restricts access to the folder to administrators only.
Step 3: Configure the Ops Manager Application.
In the folder you selected for configuration and log files, navigate to \Server\Config. For example, if you chose
C:\MMSData for configuration and log files, navigate to C:\MMSData\Server\Config.
Open the conf-mms.properties file and configure the required settings below, as well as any additional settings
your deployment uses, such as authentication settings. For descriptions of all settings, see Ops Manager Configuration
Files.
Set mms.centralUrl and mms.backupCentralUrl as follows, where <fqdn> is the fully qualified domain
name of the server running the Ops Manager Application.
mms.centralUrl=http://<fqdn>:8080
mms.backupCentralUrl=http://<fqdn>:8081

Set the following Email Address Settings as appropriate. Each can be the same or different values.
mms.fromEmailAddr=<email-address>
mms.replyToEmailAddr=<email-address>
mms.adminFromEmailAddr=<email-address>
mms.adminEmailAddr=<email-address>
mms.bounceEmailAddr=<email-address>

Set the mongo.mongoUri option to the servers and ports hosting the Ops Manager Application database. For
example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

Step 4: Start the MongoDB Ops Manager HTTP Service.
Before starting the service, make sure the MongoDB instances that store the Ops Manager Application Database are
running and that they are reachable from the Ops Manager Application's host machine. Ensure that firewall rules allow
access to the ports the MongoDB instances run on.
To start the service, open Control Panel, then System and Security, then Administrative Tools,
and then Services.
In the Services list, right-click on the MongoDB Ops Manager HTTP Service and select Start.
Step 5: If you will also run MMS Backup, start the two Backup services.
In the Services list, right-click on the following services and select Start:


• MMS Backup HTTP Service
• MMS Backup Alert Service
Step 6: Open the Ops Manager home page and register the first user.
To open the home page, enter the following URL in a browser, where <fqdn> is the fully qualified domain name of
the server:
http://<fqdn>:8080

Click the Register link and follow the prompts to register the first user and create the first group. The first user is
automatically assigned the Global Owner role. When you finish, you are logged into the Ops Manager Application as
the new user. For more information on creating and managing users, see Manage Ops Manager Users.
Step 7: At the Welcome page, follow the prompts to set up Monitoring.
Ops Manager prompts you to download only the Monitoring Agent.
Install the Backup Daemon (Optional)
If you use Backup, install the Backup Daemon.
Step 1: Download the Backup Daemon package.
1. In a browser, go to http://www.mongodb.com/download.
2. Submit the subscription form.
3. On the MongoDB Enterprise Downloads page, go to the MongoDB Ops Manager section and click the here
link.
4. On the Ops Manager Download page, acknowledge the recommendation to contact MongoDB for production
installs.
5. On the MongoDB Ops Manager Downloads page, copy the link address of the “Backup” MSI link.
Step 2: Install the Backup Daemon.
Right-click on the mongodb-mms-backup-daemon-<version>.msi file and select Install. Follow the instructions in the Setup Wizard.
During setup, the Daemon Paths screen prompts you to specify the following folders. The installer will restrict access
to these folders to administrators only:
• Configuration/Log Path. The location of the Backup Daemon’s configuration and log files.
• Backup Data Root Path. The path where the Backup Daemon stores the local copies of the backed-up databases.
This location must have enough storage to hold a full copy of each database being backed up.
• MongoDB Releases Path. The location of the MongoDB software releases required to replicate the backed up
databases. These releases will be downloaded from mongodb.org by default.


Step 3: Configure the Backup Daemon.
In the folder you selected for storing configuration and log files, navigate to \BackupDaemon\Config. For example, if you chose C:\MMSData, navigate to C:\MMSData\BackupDaemon\Config.
Open the conf-daemon.properties file and configure the mongo.mongoUri property to point the Backup
Daemon to the servers and ports hosting the Ops Manager Application database. For example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017
Step 4: Copy the gen-key file from the Ops Manager Application server to the Backup Daemon
server.
Important: You must copy the file as a whole. Do not open the file and copy its content.
Copy the gen.key file from the C:\MMSData\Secrets folder on the Ops Manager Application server to the
empty C:\MMSData\Secrets folder on the Backup Daemon server.
Step 5: If you have not already done so, start the Backup services on the Ops Manager Application
server.
On the Ops Manager Application server, open Control Panel, then System and Security, then
Administrative Tools, and then Services. Right-click on the following services and select Start:
• MMS Backup HTTP Service
• MMS Backup Alert Service
Step 6: Start the Backup Daemon.
On the Backup Daemon server, open Control Panel, then System and Security, then
Administrative Tools, and then Services. Right-click on the MMS Backup Daemon Service
and select Start.
Step 7: Open Ops Manager and access the Backup configuration page.
Open the Ops Manager home page and log in as the user you registered when installing the Ops Manager Application.
Then click the Admin link at the top right of the page. Then click the Backup tab.
Step 8: Enter configuration information for the Backup database.
Enter the configuration information described here, and then click Save. Ops Manager uses this information to create
the connection string URI used to connect to the database.
<hostname>:<port>: Enter a comma-separated list of the fully qualified domain names and port numbers for all
replica set members for the Backup database. For test deployments, you can use a standalone MongoDB instance for
the database.
MongoDB Auth Username and MongoDB Auth Password: Enter the user credentials if the database uses authentication.


Encrypted Credentials: Check this if the user credentials use the Ops Manager credentialstool. For more
information, see Encrypt MongoDB User Credentials.
Use SSL: Check this if the MongoDB database uses SSL. If you select this, you must configure SSL settings for both
the Ops Manager Application and Backup Daemon. See Ops Manager Configuration Files.
Connection Options: To add additional connection options, enter them using the MongoDB Connection String URI
Format.
Next Step
Set up security for your Ops Manager servers, Ops Manager agents, and MongoDB deployments.
Note: To set up a deployment for a test environment, see Test Ops Manager Monitoring. The tutorial populates the
replica set with test data, registers a user, and installs the Monitoring and Backup Agents on a client machine in order
to monitor the test replica set.

2.6 Upgrade Ops Manager
Upgrade with DEB Packages Upgrade Ops Manager on Debian and Ubuntu systems.
Upgrade with RPM Packages Upgrade Ops Manager on Red Hat, Fedora, CentOS, and Amazon AMI Linux.
Upgrade from Archives on Linux Upgrade Ops Manager on other Linux systems, without using package management.
Upgrade from Version 1.2 and Earlier Upgrade from a version before 1.3.
Upgrade Ops Manager with deb Packages

On this page
• Overview
• Prerequisite
• Procedures

Warning: To use Ops Manager 1.6 Automation to manage MongoDB Enterprise deployments, you must first
install the MongoDB Enterprise dependencies to each server that runs MongoDB Enterprise. You must install the
dependencies to the servers before using Automation.
Note that Automation does not yet support use of the MongoDB Enterprise advanced security features (SSL,
LDAP, Kerberos, and auditing). Automation will support these features in the next major Ops Manager release.
To run MongoDB Enterprise with advanced security now, deploy MongoDB Enterprise manually (not through
Automation) and use Ops Manager only for Monitoring and Backup.


Overview
This tutorial describes how to upgrade an existing Ops Manager Application and Backup Daemon using deb packages.
Prerequisite
You must have administrative access on the machines on which you perform the upgrade.
You must have the download link available on the customer downloads page provided to you by MongoDB. If you do
not have this link, you can access the download page for evaluation at http://www.mongodb.com/download.
Procedures
If your version is earlier than 1.3, please see instead Upgrade from Version 1.2 and Earlier.
Upgrade the Ops Manager Application from Version 1.3 and Later
If you have an existing Ops Manager Application, use the following procedure to upgrade to the latest release. There
are no supported downgrade paths for Ops Manager.
Step 1: Recommended. Take a full backup of the Ops Manager database before beginning the
upgrade procedure.
Step 2: Shut down Ops Manager.
For example:
sudo service mongodb-mms stop

Step 3: If you are running Ops Manager Backup, shut down the Ops Manager Backup Daemon.
The daemon may be installed on a different server. It is critical that this is also shut down. To shut down, issue a
command similar to the following:
sudo service mongodb-mms-backup-daemon stop

Step 4: Save a copy of your previous configuration file.
For example:
sudo cp /opt/mongodb/mms/conf/conf-mms.properties ~/.


Step 5: Upgrade the package.
For example:
sudo dpkg -i mongodb-mms_<version>_x86_64.deb

Step 6: Edit the new configuration file.
Fill in the new configuration file at /opt/mongodb/mms/conf/conf-mms.properties using your old file
as a reference point.
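For example, you can compare the copy saved in Step 4 against the newly installed file to see which settings to carry over:
# Show settings that differ between the old and new configuration files
diff -u ~/conf-mms.properties /opt/mongodb/mms/conf/conf-mms.properties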
Step 7: Start Ops Manager.
For example:
sudo service mongodb-mms start

Step 8: Update all Monitoring Agents.
See Install Monitoring Agent for more information.
Step 9: Update the Backup Daemon and any Backup Agent, as appropriate.
If you are running Backup, update the Backup Daemon package and any Backup Agent.
See Install Backup Agent for more information.
Upgrade the Backup Daemon
Step 1: Stop the currently running instance.
sudo service mongodb-mms-backup-daemon stop

Step 2: Download the latest version of the Backup Daemon.
Download the new version of the Backup Daemon package by issuing a curl command with the download link
available on the customer downloads page provided to you by MongoDB.
curl -OL <download-link>

Step 3: Point the Backup Daemon to the Ops Manager Application database.
Open the /opt/mongodb/mms-backup-daemon/conf/conf-daemon.properties file with root privileges and set the mongo.mongoUri value to the servers and ports hosting the Ops Manager Application database.
For example:


mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

Step 4: Synchronize the gen.key file.
Synchronize the /etc/mongodb-mms/gen.key file from an Ops Manager Application server. This is only required if the Backup Daemon was installed on a different server than the Ops Manager Application.
Step 5: Start the back-end software package.
To start the Backup Daemon run:
sudo service mongodb-mms-backup-daemon start

If everything worked, the following displays:
Start Backup Daemon                                                    [  OK  ]

If you run into any problems, the log files are at /opt/mongodb/mms-backup-daemon/logs.
Step 6: Open Ops Manager and access the Backup configuration page.
Open the Ops Manager home page and log in as the user you first registered when installing the Ops Manager Application. (This user is the global owner.) Then click the Admin link at the top right of the page. Then click the Backup
tab.
Step 7: Enter configuration information for the Backup database.
Enter the configuration information described below, and then click Save. Ops Manager uses this information to create
the connection string URI used to connect to the database.
<hostname>:<port>: Enter a comma-separated list of the fully qualified domain names and port numbers for all
replica set members for the Backup database.
MongoDB Auth Username and MongoDB Auth Password: Enter the user credentials if the database uses authentication.
Encrypted Credentials: Check this if the user credentials use the Ops Manager credentialstool. For more
information, see Encrypt MongoDB User Credentials.
Use SSL: Check this if the MongoDB database uses SSL. If you select this, you must configure SSL settings for both
the Ops Manager Application and Backup Daemon. See Ops Manager Configuration Files.
Connection Options: To add additional connection options, enter them using the MongoDB Connection String URI
Format.
Upgrade Ops Manager with rpm Packages

On this page


• Overview
• Prerequisite
• Procedures

Warning: To use Ops Manager 1.6 Automation to manage MongoDB Enterprise deployments, you must first
install the MongoDB Enterprise dependencies to each server that runs MongoDB Enterprise. You must install the
dependencies to the servers before using Automation.
Note that Automation does not yet support use of the MongoDB Enterprise advanced security features (SSL,
LDAP, Kerberos, and auditing). Automation will support these features in the next major Ops Manager release.
To run MongoDB Enterprise with advanced security now, deploy MongoDB Enterprise manually (not through
Automation) and use Ops Manager only for Monitoring and Backup.

Overview
This tutorial describes how to upgrade an existing Ops Manager Application and Backup Daemon using rpm packages.
Prerequisite
You must have administrative access on the machines on which you perform the upgrade.
You must have the download link available on the customer downloads page provided to you by MongoDB. If you do
not have this link, you can access the download page for evaluation at http://www.mongodb.com/download.
Procedures
If your version is earlier than 1.3, please see instead Upgrade from Version 1.2 and Earlier.
Upgrade the Ops Manager Application from Version 1.3 and Later
If you have an existing Ops Manager Application, use the following procedure to upgrade to the latest release. There
are no supported downgrade paths for Ops Manager.
Step 1: Recommended. Take a full backup of the Ops Manager database before beginning the
upgrade procedure.
Step 2: Shut down Ops Manager.
For example:
sudo service mongodb-mms stop


Step 3: If you are running Ops Manager Backup, shut down the Ops Manager Backup Daemon.
The daemon may be installed on a different server. It is critical that this is also shut down. To shut down, issue a
command similar to the following:
sudo service mongodb-mms-backup-daemon stop

Step 4: Save a copy of your previous configuration file.
For example:
sudo cp /opt/mongodb/mms/conf/conf-mms.properties ~/.

Step 5: Upgrade the package.
For example:
sudo rpm -U mongodb-mms-<version>.x86_64.rpm

Step 6: Move the new version of the configuration file into place.
Move the conf-mms.properties configuration file to the following location:
/opt/mongodb/mms/conf/conf-mms.properties

Step 7: Edit the new configuration file.
Fill in the new configuration file at /opt/mongodb/mms/conf/conf-mms.properties using your old file
as a reference point.
Step 8: Start Ops Manager.
For example:
sudo service mongodb-mms start

Step 9: Update all Monitoring Agents.
See Install Monitoring Agent for more information.
Step 10: Update the Backup Daemon and any Backup Agent, as appropriate.
If you are running Backup, update the Backup Daemon package and any Backup Agent.
See Install Backup Agent for more information.


Upgrade the Backup Daemon
Step 1: Stop the currently running instance.
sudo service mongodb-mms-backup-daemon stop

Step 2: Download the latest version of the Backup Daemon.
Download the new version of the Backup Daemon package by issuing a curl command with the download link
available on the customer downloads page provided to you by MongoDB.
curl -OL <download-link>

Step 3: Point the Backup Daemon to the Ops Manager Application database.
Open the /opt/mongodb/mms-backup-daemon/conf/conf-daemon.properties file with root privileges and set the mongo.mongoUri value to the servers and ports hosting the Ops Manager Application database.
For example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017

Step 4: Synchronize the gen.key file.
Synchronize the /etc/mongodb-mms/gen.key file from an Ops Manager Application server. This is only required if the Backup Daemon was installed on a different server than the Ops Manager Application.
Step 5: Start the back-end software package.
To start the Backup Daemon run:
sudo service mongodb-mms-backup-daemon start

If everything worked, the following displays:
Start Backup Daemon                                                    [  OK  ]

If you run into any problems, the log files are at /opt/mongodb/mms-backup-daemon/logs.
Step 6: Open Ops Manager and access the Backup configuration page.
Open the Ops Manager home page and log in as the user you first registered when installing the Ops Manager Application. (This user is the global owner.) Then click the Admin link at the top right of the page. Then click the Backup
tab.


Step 7: Enter configuration information for the Backup database.
Enter the configuration information described below, and then click Save. Ops Manager uses this information to create
the connection string URI used to connect to the database.
<hostname>:<port>: Enter a comma-separated list of the fully qualified domain names and port numbers for all
replica set members for the Backup database.
MongoDB Auth Username and MongoDB Auth Password: Enter the user credentials if the database uses authentication.
Encrypted Credentials: Check this if the user credentials use the Ops Manager credentialstool. For more
information, see Encrypt MongoDB User Credentials.
Use SSL: Check this if the MongoDB database uses SSL. If you select this, you must configure SSL settings for both
the Ops Manager Application and Backup Daemon. See Ops Manager Configuration Files.
Connection Options: To add additional connection options, enter them using the MongoDB Connection String URI
Format.
Upgrade Ops Manager from tar.gz or zip Archives

On this page
• Overview
• Prerequisite
• Procedures

Warning: To use Ops Manager 1.6 Automation to manage MongoDB Enterprise deployments, you must first
install the MongoDB Enterprise dependencies to each server that runs MongoDB Enterprise. You must install the
dependencies to the servers before using Automation.
Note that Automation does not yet support use of the MongoDB Enterprise advanced security features (SSL,
LDAP, Kerberos, and auditing). Automation will support these features in the next major Ops Manager release.
To run MongoDB Enterprise with advanced security now, deploy MongoDB Enterprise manually (not through
Automation) and use Ops Manager only for Monitoring and Backup.

Overview
This tutorial describes how to upgrade an existing Ops Manager Application and Backup Daemon using tar.gz or zip
files.
Prerequisite
You must have administrative access on the machines on which you perform the upgrade.
You must have the download link available on the customer downloads page provided to you by MongoDB. If you do
not have this link, you can access the download page for evaluation at http://www.mongodb.com/download.


Procedures
If your version is earlier than 1.3, please see instead Upgrade from Version 1.2 and Earlier.
Upgrade the Ops Manager Application from Version 1.3 and Later
If you have an existing Ops Manager Application, use the following procedure to upgrade to the latest release. There
are no supported downgrade paths for Ops Manager.
To upgrade a tarball installation, back up the configuration file and logs, and then re-install the Ops Manager server.
Important: It is crucial that you back up the existing configuration because the upgrade process will delete existing
data.
In more detail:
Step 1: Shut down the Ops Manager server and take a backup of your existing configuration and
logs.
For example:
<install-directory>/bin/mongodb-mms stop
cp -a <install-directory>/conf ~/mms_conf.backup
cp -a <install-directory>/logs ~/mms_logs.backup

Step 2: If you are running Ops Manager Backup, shut down the Ops Manager Backup Daemon.
The daemon may be installed on a different server. It is critical that this is also shut down. To shut down, issue a
command similar to the following:
<install-directory>/bin/mongodb-mms-backup-daemon stop

Step 3: Remove your existing Ops Manager server installation entirely and extract the latest release
in its place.
For example:
cd <install-directory>/../
rm -rf <install-directory>
tar -zxf /path/to/mongodb-mms-<version>.x86_64.tar.gz -C .

Step 4: Compare and reconcile any changes in configuration between versions.
For example:
diff -u ~/mms_conf.backup/conf-mms.properties <install-directory>/conf/conf-mms.properties
diff -u ~/mms_conf.backup/mms.conf <install-directory>/conf/mms.conf


Step 5: Edit your configuration to resolve any conflicts between the old and new versions.
Make configuration changes as appropriate. Changes to mms.centralUrl, the email addresses, and the MongoDB
connection settings are the most common configuration changes.
Step 6: Restart the Ops Manager server.
For example:
<install-directory>/bin/mongodb-mms start

Step 7: Update all Monitoring Agents.
See Install Monitoring Agent for more information.
Step 8: Update the Backup Daemon and any Backup Agent, as appropriate.
If you are running Backup, update the Backup Daemon package and any Backup Agent.
See Install Backup Agent for more information.
Upgrade the Backup Daemon
Step 1: Stop the currently running instance.
<install-directory>/bin/mongodb-mms-backup-daemon stop

Step 2: Download the latest version of the Backup Daemon.
Download the new version of the Backup Daemon archive by issuing a curl command with the download link
available on the customer downloads page provided to you by MongoDB.
curl -OL <download-link>

Step 3: To install the Backup Daemon, extract the downloaded archive file.
tar -zxf mongodb-mms-backup-daemon-<version>.x86_64.tar.gz

Step 4: Point the Backup Daemon to the Ops Manager Application database.
Open the <install-directory>/conf/conf-daemon.properties file and set the mongo.mongoUri value to
the servers and ports hosting the Ops Manager Application database. For example:
mongo.mongoUri=mongodb://mongodb1.example.net:27017,mongodb2.example.net:27017,mongodb3.example.net:27017


Additionally, ensure that the file system that holds the rootDirectory has sufficient space to accommodate the
current snapshots of all backed up instances.
Step 5: Synchronize the mms-gen-key file.
Synchronize the <install-directory>/bin/mms-gen-key file from the Ops Manager Application server. This is
required only if the Backup Daemon is installed on a different server than the Ops Manager Application.
Step 6: Start the Backup Daemon.
To start the Backup Daemon run:
<install-directory>/bin/mongodb-mms-backup-daemon start

If you run into any problems, the log files are at <install-directory>/logs.
Step 7: Open Ops Manager and access the Backup configuration page.
Open the Ops Manager home page and log in as the user you first registered when installing the Ops Manager Application. (This user is the global owner.) Then click the Admin link at the top right of the page. Then click the Backup
tab.
Step 8: Enter configuration information for the Backup database.
Enter the configuration information described below, and then click Save. Ops Manager uses this information to create
the connection string URI used to connect to the database.
<hostname>:<port>: Enter a comma-separated list of the fully qualified domain names and port numbers for all
replica set members for the Backup database.
MongoDB Auth Username and MongoDB Auth Password: Enter the user credentials if the database uses authentication.
Encrypted Credentials: Check this if the user credentials use the Ops Manager credentialstool. For more
information, see Encrypt MongoDB User Credentials.
Use SSL: Check this if the MongoDB database uses SSL. If you select this, you must configure SSL settings for both
the Ops Manager Application and Backup Daemon. See Ops Manager Configuration Files.
Connection Options: To add additional connection options, enter them using the MongoDB Connection String URI
Format.
Upgrade from Version 1.2 and Earlier

On this page
• Overview
• Procedure


Warning: To use Ops Manager 1.6 Automation to manage MongoDB Enterprise deployments, you must first
install the MongoDB Enterprise dependencies to each server that runs MongoDB Enterprise. You must install the
dependencies to the servers before using Automation.
Note that Automation does not yet support use of the MongoDB Enterprise advanced security features (SSL,
LDAP, Kerberos, and auditing). Automation will support these features in the next major Ops Manager release.
To run MongoDB Enterprise with advanced security now, deploy MongoDB Enterprise manually (not through
Automation) and use Ops Manager only for Monitoring and Backup.

Overview
Because of a company name change, the name of the Ops Manager package changed between versions 1.2 and 1.3.
Therefore, to upgrade from any version before 1.3, use the following procedure:
Procedure
1. Recommended. Take a full backup of the MMS database before beginning the upgrade procedure.
2. Shut down MMS, using the following command:
/etc/init.d/10gen-mms stop

3. Download the latest Ops Manager package from the downloads page and proceed with the instructions for a
fresh install. Do not attempt to use your package manager to do an upgrade.
4. Follow the procedure for a new install, including steps to configure the conf-mms.properties file. If
you used encrypted authentication credentials you will need to regenerate these manually. Do not copy the
credentials from your old properties file. Old credentials will not work.
5. Start Ops Manager using the new package name.
For upgrades using rpm or deb packages, issue:
sudo /etc/init.d/mongodb-mms start

For upgrades using tar.gz or zip archives, issue:
<install-directory>/bin/mongodb-mms start

6. Update the Monitoring Agent. See Install Monitoring Agent for more information.

2.7 Configure Local Mode if Ops Manager has No Internet Access
On this page
• Overview
• Prerequisites
• Required Access
• Procedure


Overview
The Automation Agent requires access to MongoDB binaries in order to install MongoDB on new deployments or
change MongoDB versions on existing ones. In a default configuration, the agents access the binaries over the internet
from MongoDB Inc. If you deploy MongoDB on servers that have no internet access, you can run Automation by
configuring Ops Manager to run in “local” mode, in which case the Automation Agents access the binaries from a
directory on the Ops Manager Application server.
Specify the directory in the conf-mms.properties configuration file and then place .tgz archives of the binaries in that
directory. The Automation Agents will use these archives for all MongoDB installs. The “mongodb-mms” user must
have permission to read the .tgz files in the “binaries” directory.
The following shows the ls -l output of the “binaries” directory for an example Ops Manager install that deploys
only the versions of MongoDB listed:
$ cd /opt/mongodb/mms/mongodb-releases
$ ls -l
total 355032
-rw-r----- 1 mongodb-mms staff 116513825 Apr 27 15:06 mongodb-linux-x86_64-2.6.9.tgz
-rw-r----- 1 mongodb-mms staff  50194275 Apr 27 15:05 mongodb-linux-x86_64-3.0.2.tgz
-rw-r----- 1 mongodb-mms staff  95800685 Apr 27 15:05 mongodb-linux-x86_64-enterprise-amzn64-2.6.9.tgz
-rw-r----- 1 mongodb-mms staff  50594134 Apr 27 15:04 mongodb-linux-x86_64-enterprise-amzn64-3.0.2.tgz
-rw-r----- 1 mongodb-mms staff  50438645 Apr 27 15:04 mongodb-linux-x86_64-enterprise-suse11-3.0.2.tgz

Prerequisites
Download Binaries Before Importing a Deployment
Populate the binaries directory with all required MongoDB versions before you import the deployment. If a version
is missing, the Automation Agents will not be able to take control of the deployment.
Determine Which Binaries to Store
Your binaries directory will require archives of the following versions:
• versions used by existing deployments that you will import
• versions you will use to create new deployments
• versions you will use during an intermediary step in an upgrade. For example, if you will import an existing
MongoDB 2.6 Community deployment and upgrade it first to MongoDB 3.0 Community and then to MongoDB
3.0 Enterprise, you must include all those editions and versions.
If you use both the MongoDB Community edition and the MongoDB Enterprise subscription edition, you must include
the required versions of both.
The following table describes the archives required for specific versions:

Edition      Version            Archive
Community    2.6+, 2.4+         Linux archive at http://www.mongodb.org/downloads.
Community    3.0+               Linux 64-bit legacy version at http://www.mongodb.org/downloads.
Enterprise   3.0+, 2.6+, 2.4+   Platform-specific archive available from http://mongodb.com/download.

Warning: If you run Ops Manager 1.6 and use Automation to deploy MongoDB Enterprise, then you must
pre-install the MongoDB Enterprise Dependencies on your servers.
If your MongoDB Enterprise deployments use Enterprise’s advanced security features (SSL, LDAP, Kerberos, and
auditing), you can use Ops Manager for Monitoring and Backup only. You cannot currently use Automation with
the advanced security features. Automation will support these features in the next major Ops Manager release.

Prerequisite Packages Required for MongoDB Enterprise
If you use MongoDB Enterprise you must install the following prerequisite packages to each server that will run
MongoDB Enterprise:
• net-snmp
• net-snmp-libs
• openssl
• net-snmp-utils
• cyrus-sasl
• cyrus-sasl-lib
• cyrus-sasl-devel
• cyrus-sasl-gssapi
To install these packages on RHEL, CentOS and Amazon Linux, you can issue the following command:
sudo yum install openssl net-snmp net-snmp-libs net-snmp-utils cyrus-sasl cyrus-sasl-lib cyrus-sasl-devel cyrus-sasl-gssapi

Version Manifest
When you run in local mode, you must provide Ops Manager with the MongoDB version manifest, which lists all
released MongoDB versions. You provide the manifest to make Ops Manager aware of MongoDB versions, but
to make a version available to the Automation Agent, you must install the version’s archive on the Ops Manager
Application server and then select it for use in the associated group’s Version Manager.
To provide Ops Manager with the manifest, in the Ops Manager web interface, click Admin in the upper right,
then click General, and then click Version Manifest. Here, you can update the manifest directly, if your browser has
internet access. Otherwise, copy the manifest from https://opsmanager.mongodb.com/static/version_manifest/1.6.json
and click the link for pasting in the manifest.
When MongoDB releases new versions, update the manifest.
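For example, from a machine with internet access you can fetch the manifest file and then paste its contents into the Version Manifest page:
# Download the 1.6 version manifest JSON
curl -OL https://opsmanager.mongodb.com/static/version_manifest/1.6.json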


Required Access
You must have Global Automation Admin or Global Owner access to perform this procedure.
Procedure
Step 1: Stop the Ops Manager Application if not yet running in local mode.
Use the command appropriate to your operating system.
On a Linux system installed with a package manager:
sudo service mongodb-mms stop

On a Linux system installed with a .tar file:
<install-directory>/bin/mongodb-mms stop

Step 2: Edit the conf-mms.properties configuration file to enable local mode and to specify the
local directory for MongoDB binaries.
Open conf-mms.properties with root privileges and set the following automation.versions values:
Set the automation.versions.source setting to the value local:
automation.versions.source=local

Set automation.versions.directory to the directory on the Ops Manager Application server where you
will store .tgz archives of the MongoDB binaries for access by the Automation Agent.
For example:
automation.versions.directory=/opt/mongodb/mms/mongodb-releases/

Step 3: Start the Ops Manager Application.
Use the command appropriate to your operating system.
On a Linux system installed with a package manager:
sudo service mongodb-mms start

On a Linux system installed with a .tar file:
<install-directory>/bin/mongodb-mms start

Step 4: Populate the Ops Manager Application server directory with the .tgz files for the MongoDB
binaries.
Populate the directory you specified in the automation.versions.directory setting with the necessary versions of MongoDB as determined by the Determine Which Binaries to Store topic on this page.


Important: If you have not yet read the Determine Which Binaries to Store topic on this page, please do so before
continuing with this procedure.
For example, to download MongoDB Enterprise 3.0 on Amazon Linux, issue a command similar to the following,
replacing <download-url> with the download URL for the archive:
sudo curl -OL <download-url>

Step 5: Ensure that the “mongodb-mms” user can read the MongoDB binaries.
The “mongodb-mms” user must be able to read the .tgz files placed in the directory you specified in the
automation.versions.directory setting.
For example, if on a Linux platform you place the .tgz files in the /opt/mongodb/mms/mongodb-releases/
directory, you could use the following sequence of commands to change ownership for all files in that directory to
“mongodb-mms”:
cd /opt/mongodb/mms/mongodb-releases/
sudo chown mongodb-mms:mongodb-mms ./*

Step 6: Open Ops Manager.
If you have not yet registered a user, click the Register link and follow the prompts to register a user and create the
first group. The first registered user is automatically assigned the Global Owner role.
Step 7: Copy the version manifest to Ops Manager.
1. If you have not already done so, copy the version manifest from https://opsmanager.mongodb.com/static/
version_manifest/1.6.json.
2. Click the Admin link in the upper right to go to the system-wide Administration settings.
3. Click the General tab and then click Version Manifest.
4. Do one of the following:
• If your browser has internet access, click Update the MongoDB Version Manifest and paste in the manifest.
• If your browser does not have internet access, follow the instructions on the page.
Step 8: Specify which versions are available for download by Automation Agents associated with
each group.
1. Click Ops Manager in the upper left to leave the system-wide Administration settings.
2. Click Deployment and then click Version Manager.
3. Select the checkboxes for the versions of MongoDB that you have made available on the Ops Manager Application server.
4. Click Review & Deploy at the top of the page.
5. Click Confirm & Deploy.


Step 9: Install the Automation Agent on each server on which you will manage MongoDB processes.
1. Click Administration and then Agents.
2. In the Automation section of the page, click the link for the operating system to which you will install. Follow
the installation instructions.
3. Pre-install the MongoDB Enterprise Dependencies if deploying MongoDB Enterprise.

2.8 Configure High Availability
Application High Availability Outlines the process for achieving a highly available Ops Manager deployment.
Backup High Availability Make the Backup system highly available.
Configure a Highly Available Ops Manager Application

On this page
• Overview
• Prerequisites
• Procedure
• Additional Information

Overview
The Ops Manager Application provides high availability through horizontal scaling and through use of a replica set
for the backing MongoDB instance that hosts the Ops Manager Application Database.
Horizontal Scaling
The components of the Ops Manager Application are stateless between requests. Any Ops Manager Application server
can handle requests as long as all the servers read from the same backing MongoDB instance. If one Ops Manager
Application becomes unavailable, another fills requests.
To take advantage of this for high availability, configure a load balancer to balance between the pool of Ops Manager Application servers. Use the load balancer of your choice. Configure each application server's conf-mms.properties
file to point the mms.centralUrl and mms.backupCentralUrl properties to the load balancer.
For more information, see Ops Manager Configuration Files.
The mms.remoteIp.header property should reflect the HTTP header set by the load balancer that contains the
original client's IP address, for example X-Forwarded-For. The load balancer then manages the Ops Manager HTTP
Service and Backup HTTP Service each application server provides.
The Ops Manager Application uses the client’s IP address for auditing, logging, and white listing for the API.
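For example, each application server's conf-mms.properties might contain entries like the following; the load balancer host name is illustrative:
# Point both URLs at the load balancer, not at an individual application server
mms.centralUrl=http://opsmanager-lb.example.net:8080
mms.backupCentralUrl=http://opsmanager-lb.example.net:8081
# The header in which the load balancer forwards the original client IP
mms.remoteIp.header=X-Forwarded-For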


Replica Set for the Backing Instance
Deploy a replica set rather than a standalone as the backing MongoDB instance for monitoring. Replica sets have
automatic failover if the primary becomes unavailable.
When deploying a replica set with members in multiple facilities, ensure that a single facility has enough votes to elect
a primary if needed. Choose the facility that hosts the core application systems. Place a majority of voting members
and all the members that can become primary in this facility. Otherwise, network partitions could prevent the set from
being able to form a majority. For details on how replica sets elect primaries, see Replica Set Elections.
To deploy a replica set, see Deploy a Replica Set.
You can create backups of the replica set using file system snapshots. File system snapshots use system-level tools to
create copies of the device that holds the replica set's data files.
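For example, on Linux with LVM you might snapshot the logical volume that holds the replica set's data files. A sketch; the volume group and volume names are illustrative, and the snapshot size must be large enough to absorb writes while the snapshot exists:
# Create a snapshot of the volume that holds the data files
sudo lvcreate --size 100G --snapshot --name mdb-snapshot /dev/vg0/mongodb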
Prerequisites
Deploy a replica set for the backing instance for the Ops Manager Application Database. To deploy a replica set, see
Deploy a Replica Set.
Procedure
To configure multiple application servers with load balancing:
Step 1: Configure a load balancer with the pool of Ops Manager Application servers.
This configuration depends on the general configuration of your load balancer and environment.
Step 2: Update each Ops Manager Application server with the load balanced URL.
On each server, edit the conf-mms.properties file to configure the mms.centralUrl and mms.backupCentralUrl
properties to point to the load balancer URL.
The conf-mms.properties file is located in the <install-directory>/conf/ directory. See Ops Manager Configuration Files for more information.
Step 3: Update each Ops Manager Application server with the replication hosts information.
On each server, edit the conf-mms.properties file to set the mongo.mongoUri property to the connection
string of the Ops Manager Application Database. You must specify at least 3 hosts in the mongo.mongoUri
connection string. For example:
mongo.mongoUri=mongodb://<host1>:27017,<host2>:27017,<host3>:27017/?maxPoolSize=100

Step 4: Synchronize the gen.key file across all the Ops Manager Application servers.
Synchronize the /etc/mongodb-mms/gen.key file across all Application Servers. The Ops Manager Application server uses this file to encrypt sensitive information before storing the data in a database.


Additional Information
For information on making Ops Manager Backup highly available, see Configure a Highly Available Ops Manager
Backup Service.
Configure a Highly Available Ops Manager Backup Service

On this page
• Overview
• Additional Information

Overview
The Backup Daemon maintains copies of the data from your backed up mongod instances and creates snapshots used
for restoring data. The file system that the Backup Daemon uses must have sufficient disk space and write capacity to
store the backed up instances.
For replica sets, the local copy is equivalent to an additional secondary replica set member. For sharded clusters the
daemon maintains a local copy of each shard as well as a copy of the config database.
To configure high availability
• scale your deployment horizontally by using multiple backup daemons, and
• provide failover for your Ops Manager Application Database and Backup Database by deploying replica sets
for the dedicated MongoDB processes that host the databases.
Multiple Backup Daemons
To increase your storage and to scale horizontally, you can run multiple instances of the Backup Daemon. With
multiple daemons, Ops Manager binds each backed up replica set or shard to a particular Backup Daemon. For
example, if you run two Backup Daemons for a cluster that has three shards, and if Ops Manager binds two shards to
the first daemon, then that daemon’s server replicates only the data of those two shards. The server running the second
daemon replicates the data of the remaining shard.
Multiple Backup Daemons allow for manual failover should one daemon become unavailable. You can instruct Ops
Manager to transfer the daemon’s backup responsibilities to another Backup Daemon. Ops Manager reconstructs the
data on the new daemon’s server and binds the associated replica sets or shards to the new daemon. See Move Jobs
from a Lost Backup Service to another Backup Service for a description of this process.
Ops Manager reconstructs the data using a snapshot and the oplog from the Backup Blockstore Database. Installing
the Backup Daemon is part of the procedure to Install Ops Manager. Select the procedure specific to your operating
system.
Replica Sets for Application and Backup Data
Deploy replica sets rather than standalones for the dedicated MongoDB processes that host the Ops Manager Application Database and Backup Database. Replica sets provide automatic failover should the primary become unavailable.


When deploying a replica set with members in multiple facilities, ensure that a single facility has enough votes to elect
a primary if needed. Choose the facility that hosts the core application systems. Place a majority of voting members
and all the members that can become primary in this facility. Otherwise, network partitions could prevent the set from
being able to form a majority. For details on how replica sets elect primaries, see Replica Set Elections.
To deploy a replica set, see Deploy a Replica Set.
Additional Information
To move jobs from a lost Backup server to another Backup server, see Move Jobs from a Lost Backup Service to
another Backup Service.
For information on making the Ops Manager Application highly available, see Configure a Highly Available Ops
Manager Application.

2.9 Configure Backup Jobs and Storage
Backup Data Locality Use multiple Backup daemons and blockstore instances to improve backup data locality.
Manage Backup Daemon Jobs Manage job assignments among the Backup Daemons.
Configure Multiple Blockstores in Multiple Data Centers

On this page
• Overview
• Prerequisites
• Procedures

Overview
The Backup Blockstore databases are the primary storage systems for the backup data of your MongoDB deployments.
You can add new blockstores to your data center if existing ones have reached capacity.
If needed, you can also deploy blockstores in multiple data centers and assign backups of particular MongoDB deployments to particular data centers, as described on this page. You assign backups to data centers by attaching specific
Ops Manager groups to specific blockstores.
Deploy blockstores to multiple data centers when:
• Two sets of backed up data cannot have co-located storage for regulatory reasons.
• You have multiple data centers and want to reduce cross-data center network traffic by keeping each blockstore
in the data center it backs.
This tutorial sets up two blockstores in two separate data centers and attaches a separate group to each.


Prerequisites
Each data center hosts a Backup Blockstore database and requires its own Ops Manager Application, Backup Daemon,
and Backup Agent.
The two Ops Manager Application instances must share a single dedicated Ops Manager Application Database. You
can put members of the Ops Manager Application database replica set in each data center.
Configure each Backup Agent to use the URL for its local Ops Manager Application. You can configure each Ops
Manager Application to use a different hostname, or you can use split-horizon DNS to point each agent to its local
Ops Manager Application.
The Ops Manager Application database and the Backup Blockstore databases are MongoDB databases and can run as
standalones or replica sets. For production deployments, use replica sets to provide database high availability.
Procedures
Provision Servers in Each Data Center
Each server must meet the cumulative hardware and software requirements for the components it runs. See Ops
Manager Hardware and Software Requirements.
Servers that run the Backup Daemon, Ops Manager Application database, and the Backup Blockstore databases all run
MongoDB. They must meet the configuration requirements in the MongoDB Production Notes.
Install MongoDB
Install MongoDB on the servers that host the:
• Backup Daemon
• Ops Manager Application database
• Backup Blockstore databases
See Install MongoDB in the MongoDB manual to find the correct install procedure for your operating system. To run
replica sets for the Ops Manager Application database and Backup Blockstore databases, see Deploy a Replica Set in
the MongoDB manual.
Install the Ops Manager Application
Install the Ops Manager Application in each data center but do not perform the step to start the service. See Install
Ops Manager to find the procedure for your operating system.
In the step for configuring the conf-mms.properties file, set the same Ops Manager Application URL in both
data centers. For example, in both data centers, set mms.centralUrl and mms.backupCentralUrl to point to
the server in Data Center 1:
mms.centralUrl=http://<data-center-1-fqdn>:8080
mms.backupCentralUrl=http://<data-center-1-fqdn>:8081


Start the Ops Manager Application
The Ops Manager Application creates a gen.key file on initial startup. You must start the Ops Manager Application
in one data center and copy its gen.key file before starting the other Ops Manager Application. Ops Manager uses
the same gen.key file for all servers in both data centers.
The gen.key file is binary. You cannot copy the contents: you must copy the file itself, for example with scp, as shown below.
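A sketch; the host name and user are illustrative:
# Run on the Data Center 1 application server; push gen.key to Data Center 2
sudo scp /etc/mongodb-mms/gen.key admin@opsmanager-dc2.example.net:/etc/mongodb-mms/gen.key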
Step 1: Start the Ops Manager Application in Data Center 1.
Issue the following:
service mongodb-mms start

Step 2: Copy the gen.key file.
Copy the /etc/mongodb-mms/gen.key file from the Ops Manager Application server in Data Center 1 to the:
• /etc/mongodb-mms directory on the Ops Manager Application server in Data Center 2.
• /etc/mongodb-mms directory of each Backup Daemon server in each data center.
Step 3: Start the Ops Manager Application server in Data Center 2.
Issue the following:
service mongodb-mms start

Install the Backup Daemon
Install and start the Backup Daemon in each data center. See Install Ops Manager for instructions for your operating
system.
Bind Groups to Backup Resources
Step 1: In a web browser, open Ops Manager.
Step 2: Create a new Ops Manager group for the first data center.
To create a group, select the Administration tab and open the My Groups page. Then, click Add Group and specify the
group name.
Step 3: Create a second Ops Manager group for the second data center.
Step 4: Open the Admin interface.
In Ops Manager, select the Admin link in the upper right.


Step 5: Configure backup resources.
Select the Backup tab and do the following:
• Select the Daemons page and ensure there are two daemons listed.
• Select the Blockstores page. Add a blockstore with the hostname and port of the blockstore for the second data
center and click Save.
• Select the Sync Stores page. Add a sync store with the hostname and port of the blockstore for the second data
center and click Save.
• Select the Oplog Stores page. Add an oplog store with the hostname and port of the blockstore for the second
data center and click Save.
Step 6: Assign resources to the data centers.
Open the General tab, then the Groups page. Select the group you will house in the first data center, and then select
the View link for the Backup Configuration.
For each of the following, click the drop-down box and select the local option for the group:
• Backup Daemons
• Sync Stores
• Oplog Stores
• Block Stores
Repeat the above steps for the second group.
Step 7: Install agents.
If you are using Automation, install the Automation Agent for the group in Data Center 1 on each server in Data Center
1. Install the Automation Agent for Data Center 2 on each server in Data Center 2. The Automation Agent will then
install Monitoring and Backup agents as needed.
If you are not using Automation, download and install the Monitoring and Backup agents for the group assigned to
Data Center 1 by navigating to the Administration and then Agents page while viewing that group in Ops Manager.
Then, switch to the group in Data Center 2 by choosing it from the drop-down menu in the top navigation bar in Ops
Manager, and download and install its Monitoring and Backup agents.
See the following pages for the procedures for installing the agents manually:
• Install Monitoring Agent
• Install Backup Agent
Move Jobs from a Lost Backup Service to another Backup Service

On this page
• Overview
• Procedure


Overview
If the server running a Backup Daemon fails, and if you run multiple Backup Daemons, then an administrator with the
global owner or global backup admin role can move all the daemon’s jobs to another Backup Daemon.
The new daemon takes over the responsibility to back up the associated shards and replica sets.
When you move jobs, the destination daemon reconstructs the data using a snapshot and the oplog from the Backup
Blockstore database. Reconstruction of data takes time, depending on the size of the databases on the source.
During the time it takes to reconstruct the data and reassign the backups to the new Backup Daemon:
• Ops Manager Backup does not take new snapshots of the jobs that are moving until the move is complete. Jobs
that are not moving are not affected.
• Ops Manager Backup does save incoming oplog data. Once the jobs are on the new Backup Daemon’s server,
Ops Manager Backup takes the missed snapshots at the regular snapshot intervals.
• Restores of previous snapshots are still available.
• Ops Manager can produce restore artifacts using existing snapshots with point-in-time recovery for replica sets
or checkpoints for sharded clusters.
Procedure
With administrative privileges, you can move jobs between Backup Daemons using the following procedure:
Step 1: Click the Admin link at the top of Ops Manager.
Step 2: Select Backup and then select Daemons.
The Daemons page lists all active Backup Daemons.
Step 3: Locate the failed Backup Daemon and click the Move all heads link.
Ops Manager displays a drop-down list from which to choose the destination daemon. The list displays only those
daemons with more free space than there is used space on the source daemon.
Step 4: Move the jobs to the new daemon.
Select the destination daemon and click the Move all heads button.

2.10 Test Ops Manager Monitoring
On this page
• Overview
• Procedure


Overview
The following procedure creates a MongoDB replica set and sets up a database populated with random data for use in
testing an Ops Manager installation. Create the replica set on a separate machine from Ops Manager.
These instructions create the replica set on a server running RHEL 6+ or Amazon Linux. The procedure installs all
the members to one server.
Procedure
Step 1: Increase each server’s default ulimit settings.
If you are installing to RHEL, check whether the /etc/security/limits.d directory contains the 90-nproc.conf
file. If the file exists, remove it. (The 90-nproc.conf file overrides limits.conf.) Issue the following
command to remove the file:
sudo rm /etc/security/limits.d/90-nproc.conf

For more information, see UNIX ulimit Settings in the MongoDB manual.
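After removing the file and starting a new login session, you can confirm the effective limit; for example, ulimit -u prints the current maximum number of user processes:

ulimit -u
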
Step 2: Install MongoDB on each server.
First, set up a repository definition by issuing the following command:
echo "[mongodb-org-3.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1" | sudo tee -a /etc/yum.repos.d/mongodb-org-3.0.repo

Second, install MongoDB by issuing the following command:
sudo yum install -y mongodb-org mongodb-org-shell
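
To confirm the installation, you can check the installed server version:

mongod --version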

Step 3: Create the data directories for the replica set.
Create a data directory for each replica set member and set mongod:mongod (user and group) as each data directory’s owner.
For example, the following command creates the directory /data and then creates a data directory for each member
of the replica set. You can use different directory names:
sudo mkdir -p /data /data/mdb1 /data/mdb2 /data/mdb3

The following command sets mongod:mongod as owner of the new directories:
sudo chown mongod:mongod /data /data/mdb1 /data/mdb2 /data/mdb3

Step 4: Start a separate MongoDB instance for each replica set member.
Start each mongod instance on its own dedicated port number and with the data directory you created in the last step.
For each instance, specify mongod as the user. Start each instance with the replSet command-line option specifying
the name of the replica set.

For example, the following three commands start three members of a new replica set named rs0:
sudo -u mongod mongod --port 27017 --dbpath /data/mdb1 --replSet rs0 --logpath /data/mdb1/mongodb.log --fork
sudo -u mongod mongod --port 27018 --dbpath /data/mdb2 --replSet rs0 --logpath /data/mdb2/mongodb.log --fork
sudo -u mongod mongod --port 27019 --dbpath /data/mdb3 --replSet rs0 --logpath /data/mdb3/mongodb.log --fork
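
To confirm that all three members started, you can list the running mongod processes and inspect one of the log files created above:

ps -ef | grep mongod
tail /data/mdb1/mongodb.log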

Step 5: Connect to one of the members.
For example, the following command connects to the member running on port 27017:
mongo --port 27017

Step 6: Initiate the replica set and add members.
In the mongo shell, issue the rs.initiate() and rs.add() methods, as shown in the following example. Replace the
hostnames in the example with the hostnames of your servers:
rs.initiate()
rs.add("mdb.example.net:27018")
rs.add("mdb.example.net:27019")

Step 7: Verify the replica set configuration.
Issue the rs.conf() method and verify that the members array lists the three members:
rs.conf()
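
As an additional check, you can run rs.status(), which reports each member’s state; one member should report PRIMARY and the other two SECONDARY:

rs.status()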

Step 8: Add data to the replica set.
Issue the following for loop to create a collection titled testData and populate it with 25,000 documents, each
with an _id field and a field x set to a random string.
for (var i = 1; i <= 25000; i++) {
    db.testData.insert( { x : Math.random().toString(36).substr(2, 15) } );
    sleep(0.1);
}

Step 9: Confirm data entry.
After the script completes, you can view a document in the testData collection by issuing the following:
db.testData.findOne()

To confirm that the script inserted 25,000 documents into the collection, issue the following:


db.testData.count()
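
If all of the inserts succeeded, the command returns 25000.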

Step 10: Open the Ops Manager home page in a web browser and register the first user.
The first user created for Ops Manager is automatically assigned the Global Owner role.
Enter the following URL in a web browser, where <ip_address> is the IP address of the server:
http://<ip_address>:8080

Click the Register link and enter information for the new Global Owner user. When you finish, you are logged into
the Ops Manager Application as that user. For more information on creating and managing users, see Manage Ops
Manager Users.
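If the page does not load, you can first verify from the server itself that the application is listening; for example, with curl and the default port:

curl -I http://localhost:8080
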
Step 11: Set up monitoring for the replica set.
If you have installed the Backup Daemon, click the Get Started button for Backup and follow the instructions. This
will set up both monitoring and backup. Otherwise click the Get Started button for Monitoring and follow instructions.
When prompted to add a host, enter the hostname and port of one of the replica set members in the form
<hostname>:<port>. For example: mdb.example.net:27018
When you finish the instructions, Ops Manager is running and monitoring the replica set.

3 Create a New MongoDB Deployment
Add Servers for Use by Automation Add servers to Ops Manager.
Deploy a Replica Set Use Ops Manager to deploy a managed replica set.
Deploy a Sharded Cluster Use Ops Manager to deploy a managed sharded cluster.
Deploy a Standalone For testing and deployment, create a new standalone MongoDB instance.
Connect to a MongoDB Process Connect to a MongoDB deployment managed by Ops Manager.

3.1 Add Servers for Use by Automation
This section describes how to add servers for use by Ops Manager Automation.
Overview How to add servers to Ops Manager.
Add Existing Servers to Ops Manager Add your existing servers to Ops Manager.
Overview

On this page
• Add Servers for Use by Automation
• Server Requirements


Add Servers for Use by Automation
You can add servers to Automation in the following ways:
• Provision existing systems and infrastructure. Ops Manager can deploy and manage MongoDB on your existing
servers. To use your existing hardware, you must deploy the Automation Agent to each server on which Ops
Manager will deploy MongoDB. See Add Existing Servers to Ops Manager.
• Use your local system for testing. Ops Manager can deploy MongoDB to your laptop or desktop system. Do
not use local systems for production deployments. To use your local system, you must deploy the Automation
Agent.
Server Requirements
The following are the minimum requirements for the servers that will host your MongoDB deployments:
Hardware:
• At least 10 GB of free disk space plus whatever space is necessary to hold your MongoDB data.
• At least 4 GB of RAM.
• If you use Amazon Web Services (AWS) EC2 instances, we recommend at least an m3.medium instance.
Networking:
• For each server that hosts a MongoDB process managed by Ops Manager, the output of hostname -f must
generate a hostname that is resolvable from all other servers in the MongoDB deployment.
Software:
If you provision your own servers and if you use MongoDB Enterprise, you must install the prerequisite packages on
the servers before deploying MongoDB Enterprise to the servers.
Add Existing Servers to Ops Manager

On this page
• Overview
• Prerequisites
• Procedure

Overview
You can allow Ops Manager to install, manage, and discover MongoDB processes on your existing servers. To do so,
you install the Automation Agent on each server.
Prerequisites
Ensure that the directories used by the Automation Agent have appropriate permissions for the user running the agent.
You can install the Automation Agent on the operating systems listed in Ops Manager on the Administration tab’s
Agents page.

To install the agent using rpm or deb packages, you must have root access.
Important: If a server runs MongoDB Enterprise, you must install the prerequisite packages on the server before
deploying MongoDB Enterprise on the servers.

Procedure
Install the Automation Agent on each server that you want Ops Manager to manage. The following procedure
applies to all operating systems. For instructions for a specific operating system, see Install the Automation Agent.
Step 1: Select the Administration tab and then select Agents.
Step 2: Under Automation at the bottom of the page, click your operating system and follow the
instructions to install and run the agent.
If the install file is a tar.gz archive file, make sure to extract the archive after download.
Step 3: Ensure that the directories used by the Automation Agent have appropriate permissions for
the user that runs the agent.
Set the required permissions described in Directory and File Permissions.
Once you have installed the agent to all your servers, you can deploy your first replica set, cluster, or standalone.

3.2 Deploy a Replica Set
On this page
• Overview
• Consideration
• Prerequisites
• Procedure

Overview
A replica set is a group of MongoDB deployments that maintain the same data set. Replica sets provide redundancy and
high availability and are the basis for all production deployments. See the Replication Introduction in the MongoDB
manual for more information about replica sets.
Use this procedure to deploy a new replica set managed by Ops Manager. After deployment, use Ops Manager to
manage the replica set, including such operations as adding, removing, and reconfiguring members.


Consideration
Use unique replica set names for different replica sets within an Ops Manager group. Do not give different replica sets
the same name. Ops Manager uses the replica set name to identify which set a member belongs to.
Prerequisites
You must have an existing set of servers to which to deploy, and Ops Manager must have access to the servers.
The servers can exist on your own system or on Amazon Web Services (AWS). To give Ops Manager access to servers
on your system, install the Automation Agent to each server.
Important: If you provision your own servers and if you use MongoDB Enterprise, you must install the prerequisite
packages on the servers before deploying MongoDB Enterprise on the servers.

Procedure
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click Add and select Create New Replica Set.
Step 3: Configure the replica set.
Enter information as required and click Apply.
To select specific servers to use for the deployment, enter the prefix of the servers in the Eligible Server RegExp field.
You can use regular expressions. To use all provisioned servers, enter a period (“.”). To run the deployment on your
local machine, enter the name of the machine.
For information on replica set options in the Member Options box, see Replica Set Members in the MongoDB Manual.
The votes field applies only to pre-2.6 versions of MongoDB.
To configure additional mongod runtime options, such as specifying the oplog size, or modifying journaling settings,
click Advanced Options. For option descriptions, see Advanced Options for MongoDB Deployments.
Step 4: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.
Step 5: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.


3.3 Deploy a Sharded Cluster
On this page
• Overview
• Prerequisites
• Procedure

Overview
Sharded clusters provide horizontal scaling for large data sets and enable high throughput operations by distributing
the data set across a group of servers. See the Sharding Introduction in the MongoDB manual for more information.
Use this procedure to deploy a new sharded cluster managed by Ops Manager. Later, you can use Ops Manager to add
shards and perform other maintenance operations on the cluster.
Prerequisites
You must have an existing set of servers to which to deploy, and Ops Manager must have access to the servers.
The servers can exist on your own system or on Amazon Web Services (AWS). To give Ops Manager access to servers
on your system, install the Automation Agent to each server.
Important: If you provision your own servers and if you use MongoDB Enterprise, you must install the prerequisite
packages on the servers before deploying MongoDB Enterprise on the servers.

Procedure
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click Add and select Create New Cluster.
Step 3: Configure the sharded cluster.
Enter information as required and click Apply.
To select specific servers to use for the deployment, enter the prefix of the servers in the Eligible Server RegExp field.
You can use regular expressions. To use all provisioned servers, enter a period (“.”). To run the deployment on your
local machine, enter the name of the machine.
For information on replica set options in the Member Options box, see Replica Set Members in the MongoDB Manual.
The votes field applies only to pre-2.6 versions of MongoDB.
To configure additional mongod or mongos options, such as specifying the oplog size, or modifying journaling
settings, click Advanced Options. For option descriptions, see Advanced Options for MongoDB Deployments.


Step 4: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.
Step 5: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.

3.4 Deploy a Standalone MongoDB Instance
On this page
• Overview
• Prerequisites
• Procedure

Overview
You can deploy a standalone MongoDB instance managed by Ops Manager. Use standalone instances for testing and
development. Do not use these deployments, which lack replication and high availability, for production systems. For
all production deployments use replica sets. See Deploy a Replica Set for production deployments.
Prerequisites
You must have an existing server to which to deploy. For testing purposes, you can use your localhost, or another
machine to which you have access.
Important: If you provision your own server and if you use MongoDB Enterprise, you must install the prerequisite
packages to the server before deploying MongoDB Enterprise on it.


Procedure
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and click Add and select Create New Standalone.
Step 3: Configure the standalone MongoDB instance.
Enter information as required and click Apply. For descriptions of Advanced Options, see Advanced Options for
MongoDB Deployments.
Step 4: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.
Step 5: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.

3.5 Connect to a MongoDB Process
On this page
• Overview
• Firewall Rules
• Procedures

Overview
To connect to a MongoDB instance, retrieve the hostname and port information from the Ops Manager console and
then use a MongoDB client, such as the mongo shell or a MongoDB driver, to connect to the instance. You can
connect to a cluster, replica set, or standalone.
Firewall Rules
Firewall rules and user authentication affect your access to MongoDB. You must have access to the server and port of
the MongoDB process.
If your MongoDB instance runs on Amazon Web Services (AWS), then the security group associated with the AWS
servers also affects access. AWS security groups control inbound and outbound traffic to their associated servers.


Procedures
Get the Connection Information for the MongoDB Instance
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the process.
For a replica set, click the primary. For a sharded cluster, click the mongos.
Ops Manager displays the hostname and port of the process at the top of the charts page.
Connect to a Deployment Using the Mongo Shell
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the process.
For a replica set, click the primary. For a sharded cluster, click the mongos.
Ops Manager displays the hostname and port of the process above the status charts.
Step 4: On a system shell, run mongo and specify the host and port of the deployment.
Issue a command in the following form:
mongo --username <user> --password <password> --host <hostname> --port <port>
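
For example, assuming a hypothetical user admin with password secret on the host mdb.example.net:

mongo --username admin --password secret --host mdb.example.net --port 27017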

Connect to a Deployment Using a MongoDB Driver
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the process.
For a replica set, click the primary. For a sharded cluster, click the mongos.
Ops Manager displays the hostname and port of the process at the top of the charts on the page.
Step 4: Connect from your driver.
Use your driver to create a connection string that specifies the hostname and port of the deployment. The connection
string for your driver will resemble the following:


mongodb://[<username>:<password>@]hostname0<:port>[,hostname1<:port>][,hostname2<:port>][...][,hostnameN<:port>]

If you specify a seed list of all hosts in a replica set in the connection string, your driver will automatically connect to
the primary.
For standalone deployments, you will only specify a single host. For sharded clusters, only specify a single mongos
instance.
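For example, a hypothetical seed list for the three-member test replica set used earlier in this manual would resemble:

mongodb://mdb.example.net:27017,mdb.example.net:27018,mdb.example.net:27019

The [<username>:<password>@] portion is needed only when the deployment requires authentication.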
Retrieve the Command to Connect Directly from the Process’s Server
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the instance’s gear icon and select Connect to this Instance.
Ops Manager provides a mongo shell command that you can use to connect to the MongoDB process if you are
connecting from the system where the deployment runs.

4 Import an Existing MongoDB Deployment
Add Existing Processes to Monitoring Add existing MongoDB processes to Ops Manager Monitoring.
Add Monitored Processes to Automation Add an existing MongoDB deployment to be managed through Ops Manager Automation.
Reactivate Monitoring for a Process Reactivate a deactivated MongoDB process.
Remove Hosts Remove processes you no longer use from monitoring.

4.1 Add Existing MongoDB Processes to Monitoring
On this page
• Overview
• Prerequisite
• Add MongoDB Processes

Overview
You can monitor existing MongoDB processes in Ops Manager by adding the hostnames and ports of the processes.
Ops Manager will start monitoring the mongod and mongos instances.
If you add processes from an environment that uses authentication, you must add each mongod instance separately
and explicitly set the authentication credentials on each.


If you add processes in an environment that does not use authentication, you can manually add one process from a
replica set or a sharded cluster as a seed. Once the Monitoring Agent has the seed, it automatically discovers all the
other nodes in the replica set or sharded cluster.
Unique Replica Set Names
Do not add two different replica sets with the same name. Ops Manager uses the replica set name to identify which
set a member belongs to.
Preferred Hostnames
If the MongoDB process is accessible only by specific hostname or IP address, or if you need to specify the hostname
to use for servers with multiple aliases, set up a preferred hostname. For details, see the Preferred Hostnames setting
in Group Settings.
Prerequisite
You must install a Monitoring Agent to your infrastructure.
To monitor or back up MongoDB 3.0 deployments, you must install Ops Manager 1.6 or higher. To monitor a MongoDB 3.0 deployment, you must also run Monitoring Agent version 2.7.0 or higher.
Important: If you provision your own servers and if you use MongoDB Enterprise, you must install the prerequisite
packages on the servers before deploying MongoDB Enterprise on the servers.

Add MongoDB Processes
If your deployments use authentication, perform this procedure for each process. If your deployment does not use
authentication, add one process from a replica set or sharded cluster and Ops Manager will discover the other nodes in
the replica set or sharded cluster.
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Select view mode and then select Add Host.
Step 3: Enter information for the MongoDB process.
Enter the following information, as appropriate:
• Host Type: The type of MongoDB deployment.
• Internal Hostname: The hostname of the MongoDB instance as seen from the Monitoring Agent.
• Port: The port on which the MongoDB instance runs.
• Auth Mechanism: The authentication mechanism used by the host: MONGODB-CR, LDAP (PLAIN), or
Kerberos (GSSAPI). See Add Monitoring Agent User for MONGODB-CR, Configure Monitoring Agent for
LDAP, or Configure the Monitoring Agent for Kerberos for setting up user credentials.
• DB Username: If the authentication mechanism is MONGODB-CR or LDAP, the username used to authenticate
the Monitoring Agent to the MongoDB deployment.
• DB Password: If the authentication mechanism is MONGODB-CR or LDAP, the password used to authenticate
the Monitoring Agent to the MongoDB deployment.
• My deployment supports SSL for MongoDB connections: If checked, the Monitoring Agent must have a trusted
CA certificate in order to connect to the MongoDB instances. See Configure Monitoring Agent for SSL.

Step 4: Click Add.
To view agent output logs, click the Administration tab, then Agents, and then view logs for the agent.
To view process logs, click the Deployment tab, then the Deployment page, then the process, and then the Logs tab.
For more information on logs, see View Logs.

4.2 Add Monitored Processes to Automation
On this page
• Overview
• Prerequisites
• Procedures

Overview
Ops Manager can automate operations for your monitored MongoDB processes. Adding your processes to Automation
lets you reconfigure, stop, and restart MongoDB through the Ops Manager interface.
Adding monitored processes involves two steps. First, install the Automation Agent on each server hosting a monitored
MongoDB process. Second, add the processes to Automation through the Ops Manager interface.
Prerequisites
Automation Agents can run only on 64-bit architectures.


Automation supports most but not all available MongoDB options. Automation supports the options described in
Supported MongoDB Options for Automation.
The user running the Automation Agent must be the same as the user running the MongoDB process to be managed.
The servers that host the MongoDB processes must have full networking access to each other through their fully
qualified domain names (retrieved on each server by issuing hostname -f). Each server must be able to reach
every other server through the FQDN.
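As a quick check, you can compare the FQDN each server reports with the address its peers resolve; for example, assuming a hypothetical server named mdb1.example.net:

hostname -f
getent hosts mdb1.example.net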
Ensure that your network configuration allows each Automation Agent to connect to every MongoDB process listed
on the Deployment tab. Ensure that the network and security systems, including all interfaces and firewalls, allow
these connections.
Ops Manager must be currently monitoring the MongoDB processes, and the Monitoring Agent must be running. The
processes must appear in the Ops Manager Deployment tab.
Procedures
Install the Automation Agent on Each Server
Install the Automation Agent on each server that hosts a monitored MongoDB process. Ensure that the Automation
Agent has adequate permissions to stop and restart the existing Monitoring Agent so that it can update the Monitoring
Agent as new versions are released.
On each server, you must download the agent, create the necessary directories, and configure the agent’s
local.config file with the Group ID and API key.
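A minimal sketch of those settings, assuming the conventional mmsGroupId and mmsApiKey key names and placeholder values:

# sketch only; key names are assumed, and the actual values come from the Agents page
mmsGroupId=<your Group ID>
mmsApiKey=<your API key>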
Step 1: In Ops Manager, select the Administration tab and then select Agents.
Step 2: Under Automation at the bottom of the page, click your operating system and follow the
instructions to install and run the agent.
For additional information on installing the agent, see Install the Automation Agent.
Add the Processes to Automation
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click the Add button and select Attach to Existing.
Step 3: Select the MongoDB processes.
Click the Deployment Item field to display your currently monitored processes. Select the cluster, replica set or
standalone to add.
Step 4: Click Start Import.
Ops Manager displays the progress of the import for each MongoDB process, including any errors. If you need to
correct errors, click Stop Import, correct them, and restart this procedure.


Step 5: Click Confirm Import.
Step 6: Click Review & Deploy.
Step 7: Click Confirm & Deploy.
Ops Manager Automation takes over the management of the processes and performs a rolling restart. To view progress,
click View Agent Logs.
If you diagnose an error that causes Automation to fail to complete the deployment, click Edit Configuration to correct
the error.

4.3 Reactivate Monitoring for a Process
On this page
• Overview
• Procedure

Overview
If the Monitoring Agent cannot collect information from a MongoDB process, Ops Manager stops monitoring the
process. By default, Ops Manager stops monitoring a mongos that is unreachable for 24 hours and a mongod that is
unreachable for 7 days. Your group might have different default behavior. Ask your system administrator.
When the system stops monitoring a process, the Deployment page marks the process with an x in the Last Ping
column. If the instance is a mongod, Ops Manager displays a caution icon at the top of each Deployment page.
You can reactivate monitoring for the process whether or not the process is running. When you reactivate monitoring,
the Monitoring Agent has an hour to reestablish contact and provide a ping to Ops Manager. If a process is running
and reachable, it appears marked with a green circle in the Last Ping column. If it is unavailable, it appears marked
with a red square. If it remains unreachable for an hour, Ops Manager again stops monitoring the process.
You can optionally remove a process that you are no longer using. Removed processes are permanently hidden from
Ops Manager. For more information, see Remove Hosts.
Procedure
To reactivate monitoring for a process:
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click the warning icon at the top of the page.
Step 3: Click Reactivate ALL hosts.
The processes that are now running and reachable by the Monitoring Agent will appear marked with green circles in
the Last Ping column.


The processes that are unavailable or unreachable will appear marked with a red square. If a process does not send a
ping within an hour after reactivation, it is deactivated again.
Step 4: Add the mongos instances.
To activate the mongos instances, click the Add Host button and enter the hostname, port, and optionally an admin
database username and password. Then click Add.

4.4 Remove Hosts
On this page
• Overview
• Procedure

Overview
You can remove hosts that you no longer use; removed hosts are hidden permanently. If you run the instance again,
Ops Manager will not discover it, and if you add the host again, Ops Manager will not display it.
Only a global administrator can undelete the host so that it will again display if added. The administrator can add the
host back through the Deleted Hosts page in the Ops Manager Admin interface.
Instead of removing a host, you can optionally disable alerts for the host, which does not remove it from the Deployment pages. See Manage Host Alerts.
Procedure
To remove a host from Ops Manager:
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: On the line listing the process, click the gear icon and select Remove Host.
Step 3: Click Delete.
Step 4: If prompted for a two-factor authentication code, enter it, click Verify, and then click Delete
again.

5 Manage Deployments
Edit a Replica Set Add hosts to, remove hosts from, or modify the configuration of hosts in an Ops Manager managed
replica set.
Migrate a Replica Set Member to a New Server Migrate replica sets to new underlying systems by adding members
to the set and decommissioning existing members.


Move or Add a Monitoring or Backup Agent Migrate the Backup and Monitoring Agents to different servers.
Change the Version of MongoDB Upgrade or downgrade MongoDB deployments managed by Ops Manager.
Restart a MongoDB Process Restart MongoDB deployments managed by Ops Manager.
Shut Down MongoDB Processes Shut down MongoDB deployments managed by Ops Manager.
Remove Processes from Monitoring Remove MongoDB deployments from management by Ops Manager.
Alerts Set up and manage alert configurations.
Monitoring Metrics Interpret the monitoring metrics.
Logs View host and agent logs.

5.1 Edit a Replica Set
On this page
• Overview
• Procedures
• Additional Information

Overview
You can add, remove, and reconfigure members in a replica set directly in the Ops Manager console.
Procedures
Add a Replica Set Member
You must have an existing server to which to deploy the new replica set member. To add a member to an existing
replica set, increasing the size of the set:
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Edit the replica set.
Click edit mode. Then click the arrow to the right of the replica set and click the Edit button.
Step 3: Add the member.
In the MongoDs Per Replica Set field, click + to increase the number of members for the replica set.
Configure the new members as desired in the Member Options box.
Then click Apply.


Step 4: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.
Step 5: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.
Edit a Replica Set Member
Use this procedure to:
• Reconfigure a member as hidden
• Reconfigure a member as delayed
• Reset a member’s priority level in elections
• Reset a member’s votes (for pre-2.6 versions of MongoDB)
To reconfigure a member as an arbiter, see Replace a Member with an Arbiter.
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click the arrow to the right of the replica set and click the Edit
button.
Step 3: In the Member Options box, configure each member as needed.
For information on member options, see the following in the MongoDB manual:
• Hidden Members
• Delayed Members
• Elections, which describes priority levels.
The Votes field applies to pre-2.6 versions of MongoDB.
Step 4: Click Apply.
Step 5: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.


Step 6: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.
Replace a Member with an Arbiter
You cannot directly reconfigure a member as an arbiter. Instead, you must add a new member to the replica set
as an arbiter. Then you must shut down an existing secondary.
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode.
Step 3: Click the arrow to the right of the replica set and then click the Edit button.
Step 4: Add an arbiter.
In the MongoDs Per Replica Set field, increase the number of members by 1.
In the Member Options box, click the member’s drop-down arrow and select Arbiter.
Click Apply.
Step 5: Click Review & Deploy.
Step 6: Click Confirm & Deploy.
Step 7: Remove the secondary.
When the deployment completes, click the arrow to the right of the secondary you want to remove. Click the gear icon drop-down list, and select Remove Member.
Step 8: Click Review & Deploy.
Step 9: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.


Upon completion, Ops Manager removes the member from the replica set, but it will continue to run as a standalone
MongoDB instance. To shut down the standalone, see Shut Down MongoDB Processes.
Remove a Replica Set Member
Removing a member from a replica set does not shut down the member or remove it from Ops Manager. Ops Manager
still monitors the mongod as a standalone instance. To remove a member:
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click the arrow to the right of the member to be removed.
Step 3: Click the gear icon drop-down list and select Remove Member.
Step 4: Click Remove to confirm.
Step 5: Click Review & Deploy.
Step 6: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.
Upon completion, Ops Manager removes the member from the replica set, but it will continue to run as a standalone
MongoDB instance. To shut down the standalone, see Shut Down MongoDB Processes.
Additional Information
To view data from all replica set members at once, see Replica Set Statistics.
For more information on replica set configuration options, see Replica Set Configuration in the MongoDB manual.

5.2 Migrate a Replica Set Member to a New Server
On this page
• Overview
• Considerations
• Procedure


Overview
For Ops Manager managed replica sets, you can replace one member of a replica set with another new member from
the Ops Manager console. Use this process to migrate members of replica sets to new underlying servers. From a high
level, this procedure requires that you add a member to the replica set on the new server and then shut down the
existing member on the old server. Specifically, you will:
1. Provision the new server.
2. Add an extra member to the replica set.
3. Shut down the old member of the replica set.
4. Un-manage the old member (Optional).
Considerations
Initial Sync
When you add a new replica set member, the member must perform an initial sync, which takes time to complete,
depending on the size of your data set. For more information on initial sync, see Replica Set Data Synchronization.
Migrating Multiple Members
If you are moving multiple members to new servers, migrate each member separately to keep the replica set available.
Procedure
Perform this procedure separately for each member of a replica set to migrate.
Step 1: Provision the new server.
See Add Existing Servers to Ops Manager.
Step 2: Select the Deployment tab and then the Deployment page.
Step 3: Click edit mode, and then click the arrow to the right of the replica set and click the Edit
button.
Step 4: Add a member to the replica set.
In the MongoDs Per Replica Set field, increase the number of members by 1, and then click Apply.
Step 5: Verify changes.
Verify the server to which Ops Manager will deploy the new replica set member. If necessary, select a different server.
The Deployment page’s Topology View lists the new replica set member and indicates the server to which Ops Manager
will deploy it.


If Ops Manager has not chosen the server you intended, click the Deployment page’s Server View to display your
available servers and their processes. Processes not yet deployed can be dragged to different servers. Drag the new
replica set member to the server to which to deploy it.
Step 6: Click Review & Deploy.
Step 7: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.
Step 8: Verify that the new member has synchronized.
Select the Deployment page’s view mode to view the new member’s status. Verify that the new member has synchronized and is no longer in the Recovering state.
Step 9: Remove the old member from the replica set.
Select the Deployment page’s edit mode, and then click the arrow to the right of the replica set member. Then click
the gear icon and select Remove Member. Then click Review & Deploy. Then click Confirm & Deploy.
Step 10: Shut down the old member.
Select the arrow to the right of the removed replica set member, and then click the gear icon and select Shut Down.
Then click Review & Deploy. Then click Confirm & Deploy.
Step 11: Optionally, unmanage the old member.
Select the arrow to the right of the removed replica set member, and then click the gear icon and select Unmanage.
Then click Review & Deploy. Then click Confirm & Deploy.

5.3 Move or Add a Monitoring or Backup Agent
On this page
• Overview
• Procedures


Overview
When you deploy MongoDB as a replica set or sharded cluster to a group of servers, Ops Manager selects one server
to run the Monitoring Agent. If you enable Ops Manager Backup, Ops Manager also selects a server to run the Backup
Agent.
You can move the Monitoring and Backup Agents to different servers in the deployment. You might choose to do this,
for example, if you are terminating a server.
You also can add additional instances of each agent as hot standbys for high availability. However, this is not standard
practice. A single Monitoring Agent and single Backup Agent are sufficient and strongly recommended. If you run
multiple agents, only one Monitoring Agent and one Backup Agent per group or environment are primary. Only the
primary agent reports cluster status and performs backups. If you run multiple agents, see Confirm Only One Agent is
Actively Monitoring.
Procedures
Move a Monitoring or Backup Agent to a Different Server
To move an agent to a new server, you install a new instance of the agent on the target server, and then remove the
agent from its original server.
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then select the Server View.
The Server View displays each provisioned server that is currently running one or more agents.
Step 3: On the server to which to move the agent, click the gear icon and select to install that type
of agent.
Step 4: On the server from which to remove the agent, click the gear icon and remove the agent.
Step 5: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.
Step 6: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.


Install Additional Agent as Hot Standby for High Availability
In general, using only one Monitoring Agent and one Backup Agent is sufficient and strongly recommended. If you
run multiple agents, see Confirm Only One Agent is Actively Monitoring to ensure no conflicts.
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then select the Server View.
The Server View displays each provisioned server that is currently running one or more agents.
Step 3: On the server to which to add an additional agent, click the gear icon and select the agent
to add.
Step 4: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.
Step 5: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.

5.4 Change the Version of MongoDB
On this page
• Overview
• Considerations
• Procedure

Overview
For Ops Manager managed MongoDB, Ops Manager supports safe automatic upgrade and downgrade operations
between releases of MongoDB while maximizing the availability of your deployment. Ops Manager supports upgrade
and downgrade operations for sharded clusters, replica sets, and standalone MongoDB instances.


Considerations
Before changing a deployment’s MongoDB version:
• Consult the following documents for any special considerations or application compatibility issues:
– The MongoDB Release Notes
– The documentation for your driver.
• Plan the version change during a predefined maintenance window.
• Before changing version on a production environment, change versions on a staging environment that reproduces
your production environment to ensure your configuration is compatible with all changes.
• To monitor or back up MongoDB 3.0 deployments, you must install Ops Manager 1.6 or higher. To monitor a
MongoDB 3.0 deployment, you must also run Monitoring Agent version 2.7.0 or higher.
Procedure
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click the arrow to the right of the deployment.
Step 3: In the configuration screen, click the gear icon and select Edit.
Step 4: In the Version field select the version. Then click Apply.
If the drop-down menu does not include the desired MongoDB version, you must first enable it in the Version Manager.
Step 5: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.
Step 6: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.

5.5 Restart a MongoDB Process
On this page
• Overview
• Considerations


• Procedure

Overview
If an Ops Manager-managed MongoDB process is not currently running, you can restart it directly from the Ops
Manager console.
Considerations
If the Monitoring Agent cannot collect information from a MongoDB process, Ops Manager stops monitoring the
process. By default, Ops Manager stops monitoring a mongos that is unreachable for 24 hours and a mongod that is
unreachable for 7 days. Your group might have different default behavior. Ask your system administrator.
Procedure
To restart a process:
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click the arrow to the right of the deployment.
Step 3: In the box displaying the deployment configuration, click the gear icon and select Start Up.
Step 4: Click Startup to confirm.
Step 5: Click Review & Deploy to review the configuration.
Ops Manager displays the full configuration for you to review.
Step 6: Click Confirm & Deploy.
To view deployment progress, click View Agent Logs and select an agent at the top of the Agent Logs page. To check
for updated entries, refresh the page.
If you diagnose an error and need to correct the deployment configuration, click Edit Configuration and then click Edit
Configuration again. Reconfigure the deployment through the deployment arrow button or through the Add button. If
you cannot find a solution, shut down the deployment. When you complete your changes, click Review & Deploy and
then Confirm & Deploy.

5.6 Shut Down MongoDB Processes
On this page
• Overview
• Procedure


• Additional Information

Overview
You can shut down selected mongod and mongos processes, or you can shut down all the processes at once for a
sharded cluster or replica set. Ops Manager continues to monitor the processes, and you can later restart them. If you
no longer want Ops Manager to monitor the processes, you can remove them from monitoring.
Procedure
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click the arrow to the right of the deployment.
Step 3: Click the gear icon drop-down list and select Shut Down.
Step 4: Click Shutdown to confirm.
Step 5: Click Review & Deploy.
Step 6: Click Confirm & Deploy.
Additional Information
To restart processes after shutting them down, see Restart a MongoDB Process.
To remove processes from Ops Manager monitoring, see Remove Processes from Monitoring.

5.7 Remove Processes from Monitoring
On this page
• Overview
• Considerations
• Procedure

Overview
You can remove mongod or mongos processes from Ops Manager monitoring by “unmanaging” them. Unmanaging
a process removes it from the Deployment page and from management by Ops Manager but does not shut it down.
When you unmanage a mongod or mongos process, it continues to run. It simply isn’t managed by Ops Manager
anymore.


Considerations
If you intend to shut down a process through Ops Manager, do so before unmanaging the process. See Shut Down
MongoDB Processes.
Instead of removing processes from monitoring, you can optionally disable their alerts, which allows you to continue
to view the processes in the Deployment page. See Manage Host Alerts.
Procedure
To unmanage a process:
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click edit mode, and then click the arrow to the right of the deployment.
Step 3: In the box displaying the deployment configuration, click the gear icon and select Unmanage.
Step 4: Click Review & Deploy.
Step 5: Click Confirm & Deploy.

5.8 Alerts
Manage Host Alerts Procedure to enable/disable alerts for hosts.
Create an Alert Configuration Procedures to create alert configurations.
Manage Alert Configuration Procedures for managing alert configurations.
Manage Alerts Procedures for managing alerts.
Alert Conditions Identifies all available alert triggers and conditions.
Manage Host Alerts
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: On the line listing the process, click the gear icon and select Edit Host.
To narrow or expand the list of processes, click the drop-down box above the list and select the type of processes to
display.
Step 3: Select Alert Status and then modify the alert settings.
Create an Alert Configuration


On this page
• Overview
• Procedures

Overview
An alert configuration defines the conditions that trigger an alert and defines the notifications to be sent.
You can create an alert configuration from scratch or clone it from an existing alert.
Considerations
Costs
Costs to send alerts depend on your telephone service contract. Many factors may affect alert delivery, including do
not call lists, caps for messages sent or delivered, delivery time of day, and message caching.
Alert Intervals
To implement alert escalation, you can create multiple alert configurations with different minimum frequencies. Ops
Manager processes alerts on a 5-minute interval. Therefore, the minimum frequency for an alert is 5 minutes. The time
between re-notifications increases by the frequency amount every alert cycle (e.g. 5 minutes, 10 minutes, 15 minutes,
20 minutes, etc.) up to a maximum of 24 hours. The default frequency for a new alert configuration is 60 minutes.
When an alert state triggers, you can set a time to elapse before Ops Manager sends alert messages at the specified
interval. This helps eliminate false positives. In the after waiting field, enter the number of minutes to wait before
sending the alert at the specified interval for each recipient.
Procedures
You can create a new alert configuration or clone an existing one. This section provides both procedures.
Create an Alert Configuration
Step 1: Select the Activity tab and then select Alert Settings.
Step 2: Click the Add Alert button.
Step 3: Select the component to monitor and the condition that triggers the alert.
In Alert if, select the target component. If you select Host, you must also select the type of host.
Next, select the condition and, if applicable, specify the threshold for the metric. For explanations of alert conditions
and metrics, see Alert Conditions.


If the options in the For section are available, you can optionally filter the alert to apply only to a subset of the
monitored targets. is, is not, contains, does not contain, starts with, and ends with use direct string comparison, while
matches uses regular expressions. These options are available only if the targets are hosts or replica sets.
Step 4: Select the alert recipients and choose how they receive the alerts.
In Send to, specify the alert interval and distribution method for each alert recipient. Click Add to add more recipients.
To receive an SMS alert, a user must have correctly entered their telephone number in their Account page on the
Administration tab. Ops Manager removes all punctuation and letters and only uses the digits for the telephone
number.
If you are outside of the United States or Canada, you will need to include ‘011’ and your country code. For instance,
for New Zealand (country code 64), you would enter ‘01164’ followed by your phone number. Alternatively, you can
sign up for a Google Voice number and use that number for your authentication.
For HipChat alerts, enter the HipChat room name and API token. Alerts will appear in the HipChat room message
stream. See the Group Settings page to define default group settings for HipChat.
For PagerDuty alerts, enter only the service key. Define escalation rules and alert assignments in PagerDuty. See the
Group Settings page to define default group settings for PagerDuty.
For SNMP alerts, specify the hostname that will receive the v2c trap on standard port 162. The MIB file for SNMP
is available for download here.
Step 5: Click Save.
Clone an Alert Configuration
You can create new alert configurations by cloning an existing one then editing it.
Step 1: Select the Activity tab and then select Alert Settings.
Step 2: Click the gear icon to the right of an alert and then select Clone.
Step 3: Select the component to monitor and the condition that triggers the alert.
In Alert if, select the target component. If you select Host, you must also select the type of host.
Next, select the condition and, if applicable, specify the threshold for the metric. For explanations of alert conditions
and metrics, see Alert Conditions.
If the options in the For section are available, you can optionally filter the alert to apply only to a subset of the
monitored targets. is, is not, contains, does not contain, starts with, and ends with use direct string comparison, while
matches uses regular expressions. These options are available only if the targets are hosts or replica sets.
Step 4: Select the alert recipients and choose how they receive the alerts.
In Send to, specify the alert interval and distribution method for each alert recipient. Click Add to add more recipients.
To receive an SMS alert, a user must have correctly entered their telephone number in their Account page on the
Administration tab. Ops Manager removes all punctuation and letters and only uses the digits for the telephone
number.


If you are outside of the United States or Canada, you will need to include ‘011’ and your country code. For instance,
for New Zealand (country code 64), you would enter ‘01164’ followed by your phone number. Alternatively, you can
sign up for a Google Voice number and use that number for your authentication.
For HipChat alerts, enter the HipChat room name and API token. Alerts will appear in the HipChat room message
stream. See the Group Settings page to define default group settings for HipChat.
For PagerDuty alerts, enter only the service key. Define escalation rules and alert assignments in PagerDuty. See the
Group Settings page to define default group settings for PagerDuty.
For SNMP alerts, specify the hostname that will receive the v2c trap on standard port 162. The MIB file for SNMP
is available for download here.
Step 5: Click Save.
Manage Alert Configuration

On this page
• Overview
• Manage Alert Configurations

Overview
You can manage alert configurations from the Activity tab. An alert configuration defines the conditions that trigger an
alert and defines the notifications to be sent.
Manage Alert Configurations
View Alert Configurations
To view alert configurations, click the Activity tab and then select the Alert Settings page.
Alert configurations define the conditions that trigger alerts and the notifications sent when alerts are triggered.
Ops Manager creates the following alert configurations automatically when you create a new group:
• Users awaiting approval to join group
• Host is exposed to the public internet
• User added to group
• Monitoring Agent is down
If you enable backup, Ops Manager creates the following alert configurations for the group if they do not already exist:
• Oplog Behind
• Resync Required
• Cluster Mongos Is Missing


Create or Clone an Alert Configuration
To create or clone an alert configuration, see Create an Alert Configuration.
Modify an Alert Configuration
Each alert configuration has a distribution list, a frequency for sending the alert, and a waiting period after an alert
state triggers before sending the first alert.
By default, an alert configuration sends alerts at 60-minute intervals. You can modify the interval. The minimum
interval is 5 minutes.
Step 1: Select the Activity tab and then select Alert Settings.
Step 2: Click the gear icon to the right of an alert and then select Edit.
Step 3: Select the component to monitor and the condition that triggers the alert.
In Alert if, select the target component. If you select Host, you must also select the type of host.
Next, select the condition and, if applicable, specify the threshold for the metric. For explanations of alert conditions
and metrics, see Alert Conditions.
If the options in the For section are available, you can optionally filter the alert to apply only to a subset of the
monitored targets. is, is not, contains, does not contain, starts with, and ends with use direct string comparison, while
matches uses regular expressions. These options are available only if the targets are hosts or replica sets.
Step 4: Select the alert recipients and choose how they receive the alerts.
In Send to, specify the alert interval and distribution method for each alert recipient. Click Add to add more recipients.
To receive an SMS alert, a user must have correctly entered their telephone number in their Account page on the
Administration tab. Ops Manager removes all punctuation and letters and only uses the digits for the telephone
number.
If you are outside of the United States or Canada, you will need to include ‘011’ and your country code. For instance,
for New Zealand (country code 64), you would need to enter ‘01164’, followed by your phone number. Alternately,
you can sign up for a Google Voice number, and use that number for your authentication.
For HipChat alerts, enter the HipChat room name and API token. Alerts will appear in the HipChat room message
stream. See the Group Settings page to define default group settings for HipChat.
For PagerDuty alerts, enter only the service key. Define escalation rules and alert assignments in PagerDuty. See the
Group Settings page to define default group settings for PagerDuty.
For SNMP alerts, specify the hostname that will receive the v2c trap on standard port 162. The MIB file for SNMP
is available for download here.


Step 5: Click Save.
Delete an Alert Configuration
Step 1: Select the Activity tab and then select Alert Settings.
Step 2: Click the gear icon to the right of an alert and then select Delete.
Step 3: Click Confirm.
When you delete an alert configuration that has open alerts associated to it, Ops Manager cancels the open alerts and
sends no further notifications. This is true whether users have acknowledged the alerts or not.
Disable or Enable an Alert Configuration
Step 1: Select the Activity tab and then select Alert Settings.
Step 2: Click the gear icon to the right of an alert and then select either Disable or Enable.
When you disable an alert configuration it remains visible in a grayed out state. Ops Manager automatically cancels
active alerts related to a disabled alert configuration. You can reactivate disabled alerts.
For example, if you have an alert configured for Host Down and you currently have an active alert telling you a host is
down, Ops Manager automatically cancels active Host Down alerts if you disable the default Host Down configuration.
Ops Manager will send no further alerts of this type unless the disabled alert is re-enabled.
Manage Alerts

On this page
• Overview
• Manage Alerts

Overview
You can manage alerts from the Activity tab.
When a condition triggers an alert, users receive the alert at regular intervals until the alert is resolved or canceled.
Users can mark the alert as acknowledged for a period of time but will again receive notifications when the acknowledgment period ends if the alert condition still exists.
Alerts end when the alert is resolved or canceled. An alert is resolved, also called “closed,” when the condition that
triggered the alert has been corrected. Ops Manager sends users a notification at the time the alert is resolved.
An alert is canceled if the alert configuration that triggered the alert is deleted or disabled, or if the target of the alert is
removed from the system. For example, if you have an open alert for “Host Down” and you delete that host from Ops
Manager, then the alert is canceled. When an alert is canceled, Ops Manager does not send a notification and does not
record an entry in the activity feed.


Manage Alerts
View Open Alerts
To view open alerts, click the Activity tab and then select All Activity. The All Activity page displays a feed of all events
tracked by Ops Manager. If you have open alerts, the page displays them above the feed.
Filter Activity Feed
You can filter the event feed by date.
Step 1: Select the Activity tab and then select All Activity.
Step 2: Click the gear icon and specify a date range.
Download Activity Feed
You can download the event feed as a CSV file (comma-separated values).
Step 1: Select the Activity tab and then select All Activity.
Step 2: Click the gear icon and select Download as CSV File.
You can download all events or choose to filter the feed before downloading. Ops Manager limits the number of events
returned to 10,000.
Acknowledge an Open Alert
Step 1: Select the Activity tab.
The All Activity page appears.
Step 2: On the line item for the alert, click Acknowledge.
Step 3: Select the time period for which to acknowledge the alert.
Ops Manager will send no further alert messages for the period of time you select.
Step 4: Click Acknowledge.
After you acknowledge the alert, Ops Manager sends no further notifications to the alert’s distribution list until the
acknowledgement period has passed or until the alert is resolved. The distribution list receives no notification of the
acknowledgment.


If the alert condition ends during the acknowledgment period, Ops Manager sends a notification of the resolution. For
example, if you acknowledge a host-down alert and the host comes back up during the acknowledgement period, Ops
Manager sends you a notification that the host is up.
If you configure an alert with PagerDuty, a third-party incident management service, you can only acknowledge the
alert on your PagerDuty dashboard.
Unacknowledge an Acknowledged Alert
Step 1: Select the Activity tab.
The All Activity page appears.
Step 2: On the line item for the alert, click Unacknowledge.
Step 3: Click Confirm.
If the alert condition continues to exist, Ops Manager will resend alerts.
View Closed Alerts
To view closed alerts, click the Activity tab and then select Closed Alerts. The Closed Alerts page displays alerts that
users have closed explicitly or where the metric has dropped below the threshold of the alert.
Alert Conditions

On this page
• Overview
• Host Alerts
• Replica Set Alerts
• Agent Alerts
• Backup Alerts
• User Alerts
• Group Alert

Overview
Ops Manager provides configurable alert conditions that you can apply to Ops Manager components, such as hosts,
clusters, or agents. This document groups the conditions according to the target components to which they apply.
You select alert conditions when configuring alerts. For more information on configuring alerts, see the Create an Alert
Configuration and Manage Alerts documents.


Host Alerts
Host alerts are applicable to MongoDB hosts (i.e., mongos and mongod instances) and are grouped here
according to the category monitored.
Host Status
is down
Sends an alert when Ops Manager does not receive a ping from a host for more than 9 minutes. Under normal
operation the Monitoring Agent connects to each monitored host about once per minute. Ops Manager will
not alert immediately, however, but waits nine minutes in order to minimize false positives, as would occur, for
example, during a host restart.
is recovering
Sends an alert when a secondary member of a replica set enters the RECOVERING state. For information on the
RECOVERING state, see Replica Set Member States.
does not have latest version
This does not apply to Ops Manager.
Sends an alert when the version of MongoDB running on a host is more than two releases behind. For example,
if the current production version of MongoDB is 2.6.0 and the previous release is 2.4.9, then a host running
version 2.4.8 will trigger this alert, but a host running 2.4.9 (previous), 2.6.0 (current), or 2.6.1-rc2 (nightly)
will not.
Asserts
These alert conditions refer to the metrics found on the host’s asserts chart. To view the chart, see Accessing a
Host’s Statistics.
Asserts: Regular is
Sends an alert if the rate of regular asserts meets the specified threshold.
Asserts: Warning is
Sends an alert if the rate of warnings meets the specified threshold.
Asserts: Msg is
Sends an alert if the rate of message asserts meets the specified threshold. Message asserts are internal server
errors. Stack traces are logged for these.
Asserts: User is
Sends an alert if the rate of errors generated by users meets the specified threshold.
Opcounter
These alert conditions refer to the metrics found on the host’s opcounters chart. To view the chart, see Accessing
a Host’s Statistics.
Opcounter: Cmd is
Sends an alert if the rate of commands performed meets the specified threshold.
Opcounter: Query is
Sends an alert if the rate of queries meets the specified threshold.
Opcounter: Update is
Sends an alert if the rate of updates meets the specified threshold.

Opcounter: Delete is
Sends an alert if the rate of deletes meets the specified threshold.
Opcounter: Insert is
Sends an alert if the rate of inserts meets the specified threshold.
Opcounter: Getmores is
Sends an alert if the rate of getmore (i.e. cursor batch) operations meets the specified threshold. For more
information on getmore operations, see the Cursors page in the MongoDB manual.
Opcounter - Repl
These alert conditions apply to hosts that are secondary members of replica sets. The alerts use the metrics found on
the host’s opcounters - repl chart. To view the chart, see Accessing a Host’s Statistics.
Opcounter: Repl Cmd is
Sends an alert if the rate of replicated commands meets the specified threshold.
Opcounter: Repl Update is
Sends an alert if the rate of replicated updates meets the specified threshold.
Opcounter: Repl Delete is
Sends an alert if the rate of replicated deletes meets the specified threshold.
Opcounter: Repl Insert is
Sends an alert if the rate of replicated inserts meets the specified threshold.
Memory
These alert conditions refer to the metrics found on the host’s memory and non-mapped virtual memory
charts. To view the charts, see Accessing a Host’s Statistics. For additional information about these metrics, click the
i icon for each chart.
Memory: Resident is
Sends an alert if the size of the resident memory meets the specified threshold. It is typical over time, on a
dedicated database server, for the size of the resident memory to approach the amount of physical RAM on the
box.
Memory: Virtual is
Sends an alert if the size of virtual memory for the mongod process meets the specified threshold. You can
use this alert to flag excessive memory outside of memory mapping. For more information, click the memory
chart’s i icon.
Memory: Mapped is
Sends an alert if the size of mapped memory, which maps the data files, meets the specified threshold. As
MongoDB memory-maps all the data files, the size of mapped memory is likely to approach total database size.
Memory: Computed is
Sends an alert if the size of virtual memory that is not accounted for by memory-mapping meets the specified
threshold. If this number is very high (multiple gigabytes), it indicates that excessive memory is being used outside of memory mapping. For more information on how to use this metric, view the non-mapped virtual
memory chart and click the chart’s i icon.
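If you want to spot-check these figures outside of Ops Manager, the serverStatus command reports the same memory measurements, in megabytes, for MMAPv1 deployments. This is a minimal sketch; the mappedWithJournal field is present only when journaling is enabled (use mem.mapped otherwise):

    // Resident, virtual, and mapped memory as reported by the server.
    // Non-mapped ("computed") memory is approximately virtual memory
    // minus mapped memory (mappedWithJournal when journaling is enabled).
    var mem = db.serverStatus().mem;
    print("resident: " + mem.resident + " MB, virtual: " + mem.virtual + " MB");
    print("non-mapped (approx): " + (mem.virtual - mem.mappedWithJournal) + " MB");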


B-tree
These alert conditions refer to the metrics found on the host’s btree chart. To view the chart, see Accessing a Host’s
Statistics.
B-tree: accesses is
Sends an alert if the number of accesses to B-tree indexes meets the specified average.
B-tree: hits is
Sends an alert if the number of times a B-tree page was in memory meets the specified average.
B-tree: misses is
Sends an alert if the number of times a B-tree page was not in memory meets the specified average.
B-tree: miss ratio is
Sends an alert if the ratio of misses to hits meets the specified threshold.
Lock %
This alert condition refers to metric found on the host’s lock % chart. To view the chart, see Accessing a Host’s
Statistics.
Effective Lock % is
Sends an alert if the amount of time the host is write locked meets the specified threshold. For details on this
metric, view the lock % chart and click the chart’s i icon.
Background
This alert condition refers to metric found on the host’s background flush avg chart. To view the chart, see
Accessing a Host’s Statistics.
Background Flush Average is
Sends an alert if the average time for background flushes meets the specified threshold. For details on this
metric, view the background flush avg chart and click the chart’s i icon.
Connections
The following alert condition refers to a metric found on the host’s connections chart. To view the chart, see
Accessing a Host’s Statistics.
Connections is
Sends an alert if the number of active connections to the host meets the specified average.
Queues
These alert conditions refer to the metrics found on the host’s queues chart. To view the chart, see Accessing a
Host’s Statistics.
Queues: Total is
Sends an alert if the number of operations waiting on a lock of any type meets the specified average.
Queues: Readers is
Sends an alert if the number of operations waiting on a read lock meets the specified average.


Queues: Writers is
Sends an alert if the number of operations waiting on a write lock meets the specified average.
Page Faults
These alert conditions refer to metrics found on the host’s Record Stats and Page Faults charts. To view the
charts, see Accessing a Host’s Statistics.
Accesses Not In Memory: Total is
Sends an alert if the rate of disk accesses meets the specified threshold. MongoDB must access data on disk if
your working set does not fit in memory. This metric is found on the host’s Record Stats chart.
Page Fault Exceptions Thrown: Total is
Sends an alert if the rate of page fault exceptions thrown meets the specified threshold. This metric is found on
the host’s Record Stats chart.
Page Faults is
Sends an alert if the rate of page faults (whether or not an exception is thrown) meets the specified threshold.
This metric is found on the host’s Page Faults chart.
Cursors
These alert conditions refer to the metrics found on the host’s cursors chart. To view the chart, see Accessing a
Host’s Statistics.
Cursors: Open is
Sends an alert if the number of cursors the server is maintaining for clients meets the specified average.
Cursors: Timed Out is
Sends an alert if the number of timed-out cursors the server is maintaining for clients meets the specified average.
Cursors: Client Cursors Size is
Sends an alert if the cumulative size of the cursors the server is maintaining for clients meets the specified
average.
Network
These alert conditions refer to the metrics found on the host’s network chart. To view the chart, see Accessing a
Host’s Statistics.
Network: Bytes In is
Sends an alert if the number of bytes sent to the database server meets the specified threshold.
Network: Bytes Out is
Sends an alert if the number of bytes sent from the database server meets the specified threshold.
Network: Num Requests is
Sends an alert if the number of requests sent to the database server meets the specified average.
Replication
These alert conditions refer to the metrics found on a primary’s replication oplog window chart or a secondary’s replication lag chart. To view the charts, see Accessing a Host’s Statistics.


Replication Oplog Window is
Sends an alert if the approximate amount of time available in the primary’s replication oplog meets the specified
threshold.
Replication Lag is
Sends an alert if the approximate amount of time that the secondary is behind the primary meets the specified
threshold.
Replication Headroom is
Sends an alert when the difference between the primary oplog window and the replication lag time on a secondary meets the specified threshold.
Oplog Data per Hour is
Sends an alert when the amount of data per hour being written to a primary’s oplog meets the specified threshold.
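You can inspect the same measurements from the mongo shell; the standard replication helpers report the oplog window and per-secondary lag:

    // Run on the primary: reports the oplog size and the time range it
    // covers (the "replication oplog window").
    rs.printReplicationInfo()
    // Reports how far each secondary is behind the primary (replication lag).
    rs.printSlaveReplicationInfo()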
DB Storage
This alert condition refers to the metric displayed on the host’s db storage chart. To view the chart, see Accessing
a Host’s Statistics.
DB Storage is
Sends an alert if the amount of on-disk storage space used by extents meets the specified threshold. Extents are
contiguously allocated chunks of datafile space.
DB storage size is larger than DB data size because storage size measures the entirety of each extent, including
space not used by documents. For more information on extents, see the collStats command.
DB Data Size is
Sends an alert if the approximate size of all documents (and their padding) meets the specified threshold.
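Both figures are visible per collection through the collStats command. In this sketch, mycollection is a placeholder collection name:

    // "size" is the data size (documents plus padding); "storageSize" is
    // the total size of the allocated extents, so storageSize >= size.
    db.runCommand({ collStats: "mycollection" })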
Journaling
These alert conditions refer to the metrics found on the host’s journal - commits in write lock chart and
journal stats chart. To view the charts, see Accessing a Host’s Statistics.
Journaling Commits in Write Lock is
Sends an alert if the rate of commits that occurred while the database was in write lock meets the specified
average.
Journaling MB is
Sends an alert if the average amount of data written to the recovery log meets the specified threshold.
Journaling Write Data Files MB is
Sends an alert if the average amount of data written to the data files meets the specified threshold.
Replica Set Alerts
These alert conditions are applicable to replica sets.
Primary Elected
Sends an alert when a set elects a new primary. Each time Ops Manager receives a ping, it inspects the output
of the replica set’s rs.status() method for the status of each replica set member. From this output, Ops Manager
determines which replica set member is the primary. If the primary found in the ping data is different than the
current primary known to Ops Manager, this alert triggers.


Primary Elected does not always mean that the set elected a new primary. Primary Elected may
also trigger when the same primary is re-elected. This can happen when Ops Manager processes a ping in the
midst of an election.
No Primary
Sends an alert when a replica set does not have a primary. Specifically, when none of the members of a replica
set have a status of PRIMARY, the alert triggers. For example, this condition may arise when a set has an even
number of voting members resulting in a tie.
If the Monitoring Agent collects data during an election for primary, this alert might send a false positive. To
prevent such false positives, set the alert configuration’s after waiting interval (in the configuration’s Send to
section).
Number of Healthy Members is below
Sends an alert when a replica set has fewer than the specified number of healthy members. If the replica set has
the specified number of healthy members or more, Ops Manager triggers no alert.
A replica set member is healthy if its state, as reported in the rs.status() output, is either PRIMARY or
SECONDARY. Hidden secondaries and arbiters are not counted.
As an example, if you have a replica set with one member in the PRIMARY state, two members in
the SECONDARY state, one hidden member in the SECONDARY state, one ARBITER, and one member in the
RECOVERING state, then the healthy count is 3. The sketch after this list shows one way to reproduce this
count from the mongo shell.
Number of Unhealthy Members is above
Sends an alert when a replica set has more than the specified number of unhealthy members. If the replica set
has the specified number or fewer, Ops Manager sends no alert.
Replica set members are unhealthy when the agent cannot connect to them, or the member is in a rollback or
recovering state.
Hidden secondaries are not counted.
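As a rough illustration of the counting rules above, the following mongo shell sketch counts members in the PRIMARY or SECONDARY state while skipping members that the replica set configuration marks as hidden; arbiters are already excluded by the state check:

    // Count healthy members: state 1 (PRIMARY) or 2 (SECONDARY),
    // excluding members configured as hidden.
    var status = rs.status();
    var conf = rs.conf();
    var hidden = {};
    conf.members.forEach(function (m) {
        if (m.hidden) hidden[m.host] = true;
    });
    var healthy = status.members.filter(function (m) {
        return (m.state === 1 || m.state === 2) && !hidden[m.name];
    }).length;
    print("healthy members: " + healthy);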
Agent Alerts
These alert conditions are applicable to Monitoring Agents and Backup Agents.
Monitoring Agent is down
Sends an alert if the Monitoring Agent has been down for at least 7 minutes. Under normal operation, the
Monitoring Agent sends a ping to Ops Manager roughly once per minute. If Ops Manager does not receive a
ping for at least 7 minutes, this alert triggers. However, this alert will never trigger for a group that has no hosts
configured.
Important: When the Monitoring Agent is down, Ops Manager will trigger no other alerts. For example, if a
host is down there is no Monitoring Agent to send data to Ops Manager that could trigger new alerts.
Backup Agent is down
Sends an alert if the Backup Agent has been down for at least 15 minutes. Under normal operation, the Backup
Agent periodically sends data to Ops Manager. This alert is never triggered for a group that has no running
backups.
Monitoring Agent is out of date
Sends an alert when the Monitoring Agent is not running the latest version of the software.
Backup Agent is out of date
Sends an alert when the Backup Agent is not running the latest version of the software.


Backup Alerts
These alert conditions are applicable to the Ops Manager Backup service.
Oplog Behind
Sends an alert if the most recent oplog data received by Ops Manager is more than 75 minutes old. The
example after this list shows how to check the age of a member’s newest oplog entry from the mongo shell.
Resync Required
Sends an alert if the replication process for a backup falls too far behind the oplog to catch up. This occurs when
the host overwrites oplog entries that backup has not yet replicated. When this happens, backup must be fully
resynced.
Cluster Mongos Is Missing
Sends an alert if Ops Manager cannot reach a mongos for the cluster.
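To check by hand how old a member’s newest oplog entry is, the measurement behind the Oplog Behind condition, you can query the oplog directly from the mongo shell:

    // Fetch the newest oplog entry and print its age in seconds.
    var last = db.getSiblingDB("local").oplog.rs
                 .find({}, { ts: 1 })
                 .sort({ $natural: -1 })
                 .limit(1)
                 .next();
    // ts.t is the entry's timestamp in seconds since the UNIX epoch.
    print("newest oplog entry is " +
          (Math.floor(Date.now() / 1000) - last.ts.t) + " seconds old");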
User Alerts
These alert conditions are applicable to Ops Manager users.
Added to Group
Sends an alert when a new user joins the group.
Removed from Group
Sends an alert when a user leaves the group.
Changed Roles
Sends an alert when a user’s roles have been changed.
Group Alert
This alert condition applies to Ops Manager groups.
Users awaiting approval to join group
Sends an alert if there are users who have asked to join the group. A user can ask to join a group when first
registering for Ops Manager.

5.9 Monitoring Metrics
Deployment Description of the Deployment tab, which lists all hosts that are currently being monitored.
Host Statistics In-depth guide to host statistics and the options that you can specify to customize your view.
Aggregated Cluster Statistics Compare hosts dynamically across the cluster.
Replica Set Statistics Compare hosts dynamically across a replica set.
Profile Databases Collect profile data for the host.
Deployment

On this page
• Deployment Page


• Host Mapping Page
Deployment provides access to all your monitored objects. The Deployment tab includes the pages described here.
Deployment Page
The Deployment page provides access to all monitored mongod and mongos instances. The page includes the
following information:
Last Ping
    The last time this agent sent a ping to the Ops Manager servers. Click a ping to view a detailed status from
    the ping.
Host
    The hostname and port of the instance. Click the hostname to view host statistics.
Orange triangle icon under a host name
    The startup warning indicator for the host. Only displayed when warnings exist. Click the host’s last ping
    for warning details. Ops Manager startup warnings can include the following:
    • Ops Manager suspects the host has a low ulimit setting of less than 1024. Ops Manager infers the
      host’s ulimit setting using the total number of available and current connections. See the UNIX
      ulimit Settings reference page.
    • Ops Manager flags a deactivated host.
    Important: If you have deactivated hosts, review all deactivated hosts to ensure that they are still in use,
    and remove all hosts that are not active. Then click the warning icon and select Reactivate ALL hosts.
Type
    The type of host. Possible types include the following:
    • PRIMARY
    • SECONDARY
    • STANDALONE
    • ARBITER
    When the host is recovering, the rectangle flag turns yellow and displays RECOVERING. When the host
    returns a fatal error, the flag displays FATAL. The flag can also display NO DATA.
Cluster
    The name of the cluster to which this instance belongs. Only cluster members display this value. Click the
    cluster name to display aggregated information on the cluster’s replica sets. See Aggregated Cluster
    Statistics for details.
Shard
    The name of the shard.
Repl Set
    The name of the shard’s replica set. Click the replica set name to display replica set statistics. See Replica
    Set Statistics for details.
Up Since
    The date the host first pinged Ops Manager.
Version
    The version of MongoDB running on this instance.
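The connection totals that Ops Manager uses to infer the ulimit setting are visible in the serverStatus output; the sum of current and available connections approximates the host’s connection limit:

    // Current and available connections; their sum approximates the
    // connection limit derived from the host's open-file ulimit.
    var c = db.serverStatus().connections;
    print("current: " + c.current + ", available: " + c.available +
          ", approximate limit: " + (c.current + c.available));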

Host Mapping Page
The Host Mapping page shows the mapping between system hostnames and the names provided by the monitored
mongod and mongos processes.
Host Statistics


On this page
• Accessing a Host’s Statistics
• Accessing Information on a Host’s Chart
• Chart Annotations
For each host, Ops Manager provides an extensive set of charts for analyzing the statistics collected by the Monitoring
Agent.
Accessing a Host’s Statistics
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the name of the cluster, replica set or process for which to view statistics.
Step 4: Hover the mouse pointer over a chart to display chart controls.
To use the controls, see Accessing Information on a Host’s Chart.
Accessing Information on a Host’s Chart
Hover the mouse cursor over the chart to display the chart controls.
• Click the i icon for a description of the chart.
• Click-and-drag to select a portion of the chart to zoom into. All charts on the page zoom to the same level.
• Double-click to revert the charts back to the default zoom setting.
• Hover the mouse pointer over a point on the chart to display statistics for that point in time.
• Click the two-way arrow to open an expanded version of the chart.
• Click the curved arrow for a list of additional actions:
– Chart Permalink opens a page that displays only this chart.
– Email Chart opens a dialogue box where you can input an email address and short message to send the
chart by email.
• Click and hold the upper-left triangular grabber to move the chart to a different place on the page.
Chart Annotations
Annotations may appear as colored vertical lines on your charts to indicate server events. The following color/event
combinations are:
• A red bar indicates a server restart.
• A purple bar indicates the server is now a primary.
• A yellow bar indicates the server is now a secondary.


If you do not wish to see the chart annotations, you can disable them on the Administration tab’s Personalization page.
Aggregated Cluster Statistics

On this page
• Overview
• Procedure

Overview
Cluster statistics provide an interface to view data for an entire cluster at once. You can compare components dynamically across the cluster and view host-specific and aggregated data, as well as pinpoint moments in time and isolate
data to specific components.
Procedure
To view cluster statistics:
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the name of the sharded cluster.
Ops Manager displays a chart and table with an initial set of cluster statistics. At the top of the chart, the DATA SIZE
field measures the cluster’s data size on disk. For more information, see the explanation of dataSize on the dbStats
page.
If Backup is enabled, hover the mouse pointer over the “clock” icon to view the time of the last snapshot and time of
the next scheduled snapshot. Click the icon to view snapshots.
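The DATA SIZE figure comes from the dbStats command. You can check the same value for any single database from the mongo shell:

    // db.stats() is the shell helper for { dbStats: 1 }; by default the
    // sizes, including dataSize, are reported in bytes.
    db.stats()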
Step 4: Select the components to display.
In the buttons above the chart, select whether to display the cluster’s shards, mongos instances, or config servers.
If you select shards, select whether to display Primaries, Secondaries, or Both using the buttons at the chart’s lower
right.
The chart displays a different colored line for each component. The table below displays additional data for each
component, using the same colors.
Step 5: Select the data to display.
Select the type of data in the CHART drop-down list. Ops Manager graphs the data for each component individually.
You can instead graph the data as an average or sum by clicking the Averaged or Sum button at the chart’s lower right.


Step 6: Change the granularity and zoom.
To the right of the chart, select a GRANULARITY for the data. The option you select determines the available ZOOM
options. Whenever you change the granularity, the selected zoom level changes to the closest zoom level available for
that granularity.
To zoom further and isolate a specific region of data, click-and-drag on that region of the chart. To reset the zoom
level, double-click anywhere on the chart.
Step 7: View metrics for a specific date and time.
Move the mouse pointer over the chart to view data for a point in time. The data in the table below the chart changes
as you move the pointer.
Step 8: Isolate certain components for display.
To remove a component from the chart, click its checkmark in the table below the chart. To display the component
again, click its checkmark once more.
To quickly isolate just a few components from a large number displayed, select the None button below the chart
and then select the checkmark for the individual components to display. Alternatively, select the All button and then
deselect the checkmark for individual components not to display.
Step 9: View statistics for a specific component.
In the table below the chart, click a component’s name to display its statistics page.
If you are viewing shards, you can click the replica set name in the SHARDS column to display replica set statistics,
or you can click the P or S icon in the MEMBERS column to display host statistics for a primary or secondary. Hover
over an icon for tooltip information.
Step 10: Change the name of the cluster.
If you want to change the name of the cluster, hover the mouse pointer over the cluster name. A pencil icon appears.
Click the icon and enter the new name.
Replica Set Statistics

On this page
• Overview
• Procedure

Overview
The replica set statistics interface makes it possible to view data from all replica set members at once.


Procedure
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the name of the replica set.
Ops Manager displays a separate chart for each replica set member.
Step 4: Select the members to display.
In the TOGGLE MEMBERS section at the top of the page, click the P and S icons to choose which members to display.
Hover the mouse pointer over an icon to display member information.
Step 5: Select the granularity and the zoom.
Select the GRANULARITY of the data displayed. The selected granularity option determines the available ZOOM
options.
To isolate a specific region of data, click-and-drag on that region of the chart. All other charts automatically zoom to
the same region.
To reset the zoom level, double-click anywhere on the chart.
Step 6: Add, remove, and reorder charts.
Add and remove charts using either the Add Chart drop-down list or the buttons at the bottom of the page.
Move a chart within the display by hovering the mouse over the chart, clicking the grabber in the upper left corner,
and dragging the chart to the new position.
Step 7: View an explanation of a chart’s data.
Hover the mouse pointer over the chart and click the i icon.
Step 8: View metrics for a specific date and time.
Move the mouse pointer over the chart to view data for a point in time.
Profile Databases

On this page
• Overview
• Considerations


• Procedures

Overview
Monitoring can collect data from MongoDB’s profiler to provide statistics about performance and database operations.
Considerations
Before enabling profiling, be aware of these issues:
• Profile data can include sensitive information, including the content of database queries. Ensure that exposing
this data to Monitoring is consistent with your information security practices.
• The profiler can consume resources which may adversely affect MongoDB performance. Consider the implications before enabling profiling.
Procedures
Enable Profiling
To allow Monitoring to collect profile data for a specific process:
Note: The Monitoring Agent attempts to minimize its effect on the monitored systems. If resource-intensive operations, like polling profile data, begin to impact the performance of the database, Monitoring will throttle the frequency
at which it collects data. See How does Ops Manager gather database statistics? for more information about the
agent’s throttling process.
When enabled, Monitoring samples profiling data from monitored processes. The agent sends only the most recent 20
entries from the last minute.
With profiling enabled, configuration changes made in Ops Manager can take up to 2 minutes to propagate to the agent
and another minute before profiling data appears in the Ops Manager interface.
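The sampled entries come from the database’s system.profile capped collection, which you can also inspect directly from the mongo shell:

    // The most recent profile entries, newest first; this mirrors the
    // window that the agent samples.
    db.system.profile.find().sort({ ts: -1 }).limit(20)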
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode, and then click the process’s gear icon and select Edit Host.
Step 3: Click the Profiling tab.
Step 4: Turn on profiling.
Click the button to toggle between Off and On. When the button is On, Ops Manager receives database profile
statistics.


Step 5: Start database profiling by running the setProfilingLevel helper in the mongo shell.
See the database profiler documentation for instructions for using the profiler.
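For example, to profile only operations slower than 100 milliseconds on the current database (profiling level 1; level 2 profiles all operations):

    // Enable level-1 profiling with a 100 ms slow-operation threshold.
    db.setProfilingLevel(1, 100)
    // Confirm the current settings, e.g. { "was" : 1, "slowms" : 100 }.
    db.getProfilingStatus()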
Display Profiling Levels
When profiling is on, the Profile Data tab displays profiled data. For more information on profiling, see the database
profiler documentation in the MongoDB manual.
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Select the Profile Data tab.
Delete Profile Data
Deleting profile data deletes the Web UI cache of the current profiling data. You must then disable profiling or drop or
clear the source collection, or Ops Manager will repopulate the profiling data.
If Monitoring is storing a large amount of profile data for your instance, the removal process will not be instantaneous.
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode, and then click the process.
Step 3: Select the Profile Data tab.
Step 4: Click the Delete Profile Data button at the bottom of the page.
Step 5: Confirm the deletion.
Ops Manager begins removing stored profile data from the server’s record. Ops Manager removes only the Web UI
cache of the current profiling data. The cache quickly re-populates with the same data if you do not disable profiling
or drop or clear the profiled collection.

5.10 View Logs
On this page
• Overview
• MongoDB Real-Time Logs
• Agent Logs


Overview
Ops Manager collects log information for both MongoDB and the Ops Manager agents. For MongoDB deployments,
Ops Manager provides access to both real-time logs and on-disk logs.
The MongoDB logs provide the diagnostic logging information for your mongod and mongos instances. The Agent
logs provide insight into the behavior of your Ops Manager agents.
MongoDB Real-Time Logs
The Monitoring Agent collects real-time log information from each MongoDB deployment by issuing the getLog
command with every monitoring ping. The getLog command collects log entries from the MongoDB RAM cache.
Ops Manager enables real-time log collection by default. You can disable log collection for either the whole Ops
Manager group or for individual MongoDB instances. If you disable log collection, Ops Manager continues to display
previously collected log entries.
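The same RAM-cached log lines that the agent collects are available directly from the mongo shell through the getLog command:

    // List the available log names, then fetch the most recent entries
    // from the server's in-memory log buffer.
    db.adminCommand({ getLog: "*" })
    db.adminCommand({ getLog: "global" })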
View MongoDB Real-Time Logs
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the process.
Step 4: Click the Logs tab.
The tab displays log information. If the tab instead displays the Collect Logs For Host option, toggle the option to On
and refresh the page.
Step 5: Refresh the browser window to view updated entries.
Enable or Disable Log Collection for a Deployment
Step 1: Select the Deployment tab and then the Deployment page.
Step 2: Click view mode.
Step 3: Click the deployment’s gear icon and select Edit Host.
Step 4: Click the Logs tab and toggle the Off /On button as desired.
Step 5: Click X to close the Edit Host box.
The deployment’s previously existing log entries will continue to appear in the Logs tab, but Ops Manager will not
collect new entries.


Enable or Disable Log Collection for the Group
Step 1: Select the Administration tab, then the Group Settings page.
Step 2: Set the Collect Logs For All Hosts option to On or Off, as desired.
MongoDB On-Disk Logs
Ops Manager can collect on-disk logs even if the MongoDB instance is not running. The Automation Agent collects
the logs from the location specified by the MongoDB systemLog.path configuration option. The MongoDB
on-disk logs are a subset of the real-time logs and therefore less verbose.
You can configure log rotation for the on-disk logs. Ops Manager enables log rotation by default.
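To confirm where a running mongod writes its on-disk log, you can read the parsed configuration back from the server itself:

    // Shows the systemLog settings, including systemLog.path, that the
    // running process was started with (empty if the defaults were used).
    db.adminCommand({ getCmdLineOpts: 1 }).parsed.systemLog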
View MongoDB On-Disk Logs
Step 1: Select the Deployment tab and then the Mongo Logs page.
Alternatively, you can select the Deployment page’s edit mode, then the arrow to the right of a deployment, then the
gear icon drop-down list, and then Request Logs.
Step 2: Request the latest logs.
To request the latest logs:
1. Click the Manage drop-down button and select Request Server Logs.
2. Select the checkboxes for the logs you want to request, then click Request Logs.
Step 3: To view a log, select the Show Log link for the desired date and hostname.
Configure Log Rotation
Step 1: Select the Deployment tab and then the Mongo Logs page.
Step 2: Click the Manage drop-down button and select MongoDB Log Settings.
Step 3: Configure the log rotation settings and click Save.
Step 4: Click Review & Deploy.
Step 5: Click Confirm & Deploy.
Agent Logs
Ops Manager collects logs for all your Automation Agents, Monitoring Agents, and Backup Agents.


View Agent Logs
Step 1: From any page, click an agent icon at the top of the page and select Logs.
Ops Manager opens the Agent Logs page and displays the log entries for agents of the same type.
You can also open the Agent Logs page by selecting the Administration tab, then Agents page, and then view logs link
for a particular agent. The page displays the agent’s log entries.
Step 2: Filter the log entries.
Use the drop-down list at the top of the page to display different types of agents.
Use the gear icon to the right of the page to clear filters and to export logs.
Configure Agent Log Rotation
Step 1: Select the Administration tab and then Agents page.
Step 2: Edit the log settings.
Under the Agent Log Settings header, click the pencil icon to edit the log settings for the appropriate agent.
You can modify the following fields:
Linux Log Path (string)
    The path to which the agent writes its logs.
Rotate Logs (boolean)
    Specifies whether Ops Manager should rotate the logs for the agent.
Size Threshold (MB) (number)
    Max size in MB for an individual log file before rotation.
Time Threshold (Hours) (integer)
    Max time in hours for an individual log file before rotation.
Max Uncompressed Files (integer)
    Optional. Max number of total log files to leave uncompressed, including the current log file.
Max Percent of Disk (number)
    Optional. Max percent of the total disk space all log files can occupy before deletion.

When you are done modifying the agent log settings, click Confirm.
Step 3: Return to the Deployment page.
Step 4: Click Review & Deploy.
Step 5: Click Confirm & Deploy.

6 Back Up MongoDB Deployments
Backup Flows Describes how Ops Manager backs up MongoDB deployments.


Backup Preparations Before backing up your cluster or replica set, decide how to back up the data and what data to
back up.
Activate Backup Activate Backup for a cluster or replica set.
Edit a Backup’s Settings Modify a backup’s schedule, storage engines, and excluded namespaces.
Restore MongoDB Deployments Procedures to restore complete MongoDB deployments using Backup data.
Restore MongoDB Data Procedures to restore data from Backup to MongoDB instances.
Backup Maintenance Procedures to manage backup operations for maintenance.

6.1 Backup Flows
On this page
• Introduction
• Initial Sync
• Routine Operation
• Snapshots
• Grooms

Introduction
The Backup service’s process for keeping a backup in sync with your deployment is analogous to the process used by
a secondary to replicate data in a replica set. Backup first performs an initial sync to catch up with your deployment
and then tails the oplog to stay caught up. Backup takes scheduled snapshots to keep a history of the data.
Initial Sync
Transfer of Data and Oplog Entries
When you start a backup, the Backup Agent streams your deployment’s existing data to the Backup HTTP Service
in batches of documents totaling roughly 10MB. The batches are called “slices.” The Backup HTTP Service stores
the slices in a sync store for later processing. The sync store contains only the data as it existed when you started the
backup.
While transferring the data, the Backup Agent also tails the oplog and streams the oplog updates to the Backup
HTTP Service. The service places the entries in the oplog store for later offline processing.
By default, both the sync store and oplog store reside on the backing MongoDB replica set that hosts the Backup
Blockstore database.
Building the Backup
When the Backup HTTP Service has received all of the slices, a Backup Daemon creates a local database on its server
and inserts the documents that were captured as slices during the initial sync. The daemon then applies the oplog
entries from the oplog store.


The Backup Daemon then validates the data. If there are missing documents, Ops Manager queries the deployment
for the documents and the Backup Daemon inserts them. A missing document could occur because of an update that
caused a document to move during the initial sync.
Once the Backup Daemon validates the accuracy of the data directory, it removes the data slices from the sync store.
At this point, Backup has completed the initial sync process and proceeds to routine operation.
Routine Operation
The Backup Agent tails the deployment’s oplog and routinely batches and transfers new oplog entries to the Backup
HTTP Service, which stores them in the oplog store. The Backup Daemon applies all newly received oplog entries in
batches to its local replica of the backed-up deployment.
Snapshots
During a preset interval, the Backup Daemon takes a snapshot of the data directory for the backed-up deployment,
breaks it into blocks, and transfers the blocks to the Backup Blockstore database. For a sharded cluster, the daemon
takes a snapshot of each shard and of the config servers. The daemon uses checkpoints to synchronize the shards and
config servers for the snapshots.
When a user requests a snapshot, a Backup Daemon retrieves the data from the Backup Blockstore database and
delivers it to the requested destination. See: Restore Flows for an overview of the restore process.


Grooms
Groom jobs perform periodic “garbage collection” on the Backup Blockstore database to remove unused blocks and
reclaim space. Unused blocks are those that are no longer referenced by a live snapshot. A scheduling process
determines when grooms are necessary.

6.2 Backup Preparations
On this page
• Overview
• Snapshot Frequency and Retention Policy
• Excluded Namespaces
• Storage Engine
• Resyncing Production Deployments
• Checkpoints
• Snapshots when Agent Cannot Stop Balancer
• Snapshots when Agent Cannot Contact a mongod

Overview
Before backing up your cluster or replica set, decide how to back up the data and what data to back up. This page
describes items you must consider before starting a backup.
For an overview of how Backup works, see Backup.
Snapshot Frequency and Retention Policy
By default, Ops Manager takes a base snapshot of your data every 6 hours. If desired, administrators can change the
frequency of base snapshots to 8, 12, or 24 hours. Ops Manager creates snapshots automatically on a schedule. You
cannot take snapshots on demand.
Ops Manager retains snapshots for the time periods listed in the following table. If you terminate a backup, Ops
Manager immediately deletes the backup’s snapshots.
Snapshot            Default Retention Policy    Maximum Retention Setting
Base snapshot       2 days                      5 days
Daily snapshot      1 week                      1 year
Weekly snapshot     1 month                     1 year
Monthly snapshot    1 year                      3 years

You can change a backed-up deployment’s schedule through its Edit Snapshot Schedule menu option, available through
the Backup tab. Administrators can change snapshot frequency and retention through the snapshotSchedule resource
in the API. If you change the schedule to save fewer snapshots, Ops Manager does not delete existing snapshots
to conform to the new schedule. To delete unneeded snapshots, see Delete Snapshots for Replica Sets and Sharded
Clusters.


Excluded Namespaces
Excluded namespaces are databases or collections that Ops Manager will not back up. Exclude namespaces to prevent
backing up collections that contain logging data, caches, or other ephemeral data. Excluding these kinds of databases
and collections will allow you to reduce backup time and costs.
Storage Engine
When you enable backups for a cluster or replica set that runs on MongoDB 3.0 or higher, you can choose the
storage engine for the backups. Your choices are the MMAPv1 engine or WiredTiger engine. If you do not specify a
storage engine, Ops Manager uses MMAPv1 by default. For more information on storage engines, see Storage in the
MongoDB manual.
You can choose a different storage engine for a backup than you do for the original data. There is no requirement
that the storage engine for a backup match that of the data it replicates. If your original data uses MMAPv1, you can
choose WiredTiger for backing up, and vice versa.
You can change the storage engine for a cluster or replica set’s backups at any time, but doing so requires an initial
sync of the backup on the new engine.
If you choose the WiredTiger engine to back up a collection that already uses WiredTiger, the initial sync replicates
all the collection’s WiredTiger options. For information on these options, see the
storage.wiredTiger.collectionConfig section of the Configuration File Options page in the MongoDB manual.
For collections created after the initial sync, the Backup Daemon uses its own defaults for storing data. The Daemon
will not replicate any WiredTiger options for a collection created after the initial sync.
Important: The storage engine chosen for a backup is independent from the storage engine used by the Backup
Database. If the Backup Database uses the MMAPv1 storage engine, it can store backup snapshots for WiredTiger
backup jobs in its blockstore.
Index collection options are never replicated.
Resyncing Production Deployments
For production deployments, it is a recommended best practice to resync all backed-up replica sets periodically
(annually). When you resync, data is read from a secondary in each replica set. During resync, no new snapshots are
generated.
Checkpoints
For sharded clusters, checkpoints provide additional restore points between snapshots. With checkpoints enabled, Ops
Manager Backup creates restoration points at configurable intervals of every 15, 30 or 60 minutes between snapshots.
To create a checkpoint, Ops Manager Backup stops the balancer and inserts a token into the oplog of each shard
and config server in the cluster. These checkpoint tokens are lightweight and do not have a consequential impact on
performance or disk use.
Backup does not require checkpoints, and they are disabled by default.
Restoring from a checkpoint requires Ops Manager Backup to apply the oplog of each shard and config server to
the last snapshot captured before the checkpoint. Restoration from a checkpoint takes longer than restoration from a
snapshot.


Snapshots when Agent Cannot Stop Balancer
For sharded clusters, Ops Manager disables the balancer before taking a cluster snapshot. In certain situations, such
as a long migration or no running mongos, Ops Manager tries to disable the balancer but cannot. In such cases, Ops
Manager will continue to take cluster snapshots but will flag the snapshots with a warning that data may be incomplete
and/or inconsistent. Cluster snapshots taken during an active balancing operation run the risk of data loss or orphaned
data.
Snapshots when Agent Cannot Contact a mongod
For sharded clusters, if the Backup Agent cannot reach a mongod instance, whether a shard or config server, then the
agent cannot insert a synchronization oplog token. If this happens, Ops Manager will not create the snapshot and will
display a warning message.

6.3 Activate Backup
On this page
• Overview
• Prerequisites
• Procedure

Overview
You can back up a sharded cluster or replica set. To back up a standalone mongod process, you must first convert it
to a single-member replica set. You can choose to back up all databases and collections on the deployment or specific
ones.
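As a minimal sketch of that conversion: after restarting the standalone mongod with a replica set name (for example, with --replSet rs0, where rs0 is a placeholder name), initiate the set from the mongo shell:

    // Initiate a one-member replica set on the restarted mongod.
    rs.initiate()
    // Verify that the member now reports itself as PRIMARY.
    rs.status()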
Prerequisites
• Ops Manager must be monitoring the deployment. For a sharded cluster, Ops Manager must also be monitoring
at least one mongos in the cluster.
• A replica set must be MongoDB version 2.2.0 or later.
• A sharded-cluster must be MongoDB version 2.4.3 or later.
• Each replica set must have an active primary.
• For a sharded cluster, all config servers must be running and the balancing round must have completed within
the last hour.
• If you explicitly select a sync target, ensure that the sync target is accessible on the network and keeping up with
replication.
Procedure
Before using this procedure, see the Backup Preparations to decide how to back up the data and what data to back up.


Step 1: Select the Backup tab.
Step 2: Select the replica set or cluster to back up.
If you have not yet enabled Backup, select Begin Setup and follow the prompts. Skip the rest of this procedure.
If you have already enabled Backup, navigate to either the Sharded Cluster Status or Replica Set Status page. Then
click the Start button for the replica set or cluster to back up.
Step 3: Select which process to use for the initial Sync Source.
To minimize the impact on the primary, sync off a secondary.
Step 4: If using access control, specify mechanism and credentials, as needed.

Auth Mechanism
    The authentication mechanism used by the host. Can specify MONGODB-CR, LDAP (PLAIN), or
    Kerberos (GSSAPI).
Current DB Username
    If the authentication mechanism is MONGODB-CR or LDAP, the username used to authenticate the
    Monitoring Agent to the MongoDB deployment. See Configure Backup Agent for MONGODB-CR,
    Configure Backup Agent for LDAP Authentication, or Configure the Backup Agent for Kerberos for
    setting up user credentials.
Current DB Password
    If the authentication mechanism is MONGODB-CR or LDAP, the password used to authenticate the
    Monitoring Agent to the MongoDB deployment. See Configure Backup Agent for MONGODB-CR,
    Configure Backup Agent for LDAP Authentication, or Configure the Backup Agent for Kerberos for
    setting up user credentials.
My deployment supports SSL for MongoDB connections
    If checked, the Monitoring Agent must have a trusted CA certificate in order to connect to the MongoDB
    instances. See Configure Monitoring Agent for SSL.

You can optionally configure authentication credentials later through the deployment’s gear icon.
Step 5: To optionally select a storage engine or exclude namespaces, click Show Advanced Options.
Select the following as desired:
Storage Engine: Select MongoDB Memory Mapped for the MongoDB default MMAPv1 engine or WiredTiger for
the 64-bit WiredTiger engine available beginning with MongoDB 3.0. Before selecting a storage engine, see the
considerations in Storage Engines.
Manage Excluded Namespaces: Click Manage Excluded Namespaces and enter the databases and collections to exclude. For collections, enter the full namespace: <database>.<collection>. Click Save. You can later add or
remove namespaces from the backup, as needed. For more information, see Excluded Namespaces.
Step 6: Click Start Backup.

6.4 Edit a Backup’s Settings


On this page
• Overview
• Procedure

Overview
You can modify a backup’s schedule, excluded namespaces, and storage engine.
Procedure
To edit Backup’s settings, select the Backup tab and then the Overview page. The Overview page lists all available
backups. You can then access the settings for each backup using the ellipsis icon.
Enable Cluster Checkpoints
Step 1: Select Edit Snapshot Schedule.
On the line listing the process, click the ellipsis icon and click Edit Snapshot Schedule.
Step 2: Enable cluster checkpoints.
Select Create cluster checkpoint every and set the interval. Then click Submit.
Change Snapshot Settings
Step 1: Select Edit Snapshot Schedule
On the line listing the process, click the ellipsis icon and click Edit Snapshot Schedule.
Step 2: Configure the Snapshot
Enter the following information as needed and click Submit.


Take snapshots every . . . and save for
    Sets how often Ops Manager takes a base snapshot of the deployment and how long Ops Manager retains
    base snapshots. For information on how these settings affect Ops Manager, see Snapshot Frequency and
    Retention Policy.
Create cluster checkpoint every (Sharded Clusters only)
    Sets how often Ops Manager creates a checkpoint in between snapshots of a sharded cluster. Checkpoints
    provide restore points that you can use to create custom “point in time” snapshots. For more information,
    see Checkpoints.
Store daily snapshots for
    Sets the time period that Ops Manager retains daily snapshots. For defaults, see Snapshot Frequency and
    Retention Policy.
Store weekly snapshots for
    Sets the time period that Ops Manager retains weekly snapshots. For defaults, see Snapshot Frequency and
    Retention Policy.
Store monthly snapshots for
    Sets the time period that Ops Manager retains monthly snapshots. For defaults, see Snapshot Frequency and
    Retention Policy.

Configure Excluded Namespaces
Step 1: Select Edit Excluded Namespaces
On the line listing the process, click the ellipsis icon and click Edit Excluded Namespaces. Excluded namespaces are
databases and collections that Ops Manager will not back up.
Step 2: Modify the excluded namespaces.
Add or remove excluded namespaces as desired and click Submit.
Modify the Storage Engine Used for Backups
Step 1: Select Edit Storage Engine
On the line listing the process, click the ellipsis icon and click Edit Storage Engine.
Step 2: Select the storage engine.
Select the storage engine. See: Storage Engine for more about choosing an appropriate storage engine for your backup.
Step 3: Select the sync source.
Select the Sync source from which to create the new backup. In order to use the new storage engine, Ops Manager
must resync the backup on the new storage engine.


Step 4: Click Submit.

6.5 Restore MongoDB Deployments
Use these procedures to restore an entire MongoDB deployment using Backup artifacts. For more specific tutorials
for restoration, please see the Restore MongoDB Instances with Backup procedures.
Restore Flows Overview of the different restore types, and how they operate internally.
Restore Sharded Cluster Restore a sharded cluster from a stored snapshot.
Restore Replica Set Restore a replica set from a stored snapshot or custom point-in-time snapshot.
Restore Flows

On this page
• Overview
• Restore Flows

Overview
Ops Manager Backup enables you to restore your mongod instance, replica set, or sharded cluster using stored snapshots, or from a point in time as far back as your oldest retained snapshot.
The general flows and options are the same whether you are restoring a mongod, replica set, or sharded cluster; the
only major difference is that sharded cluster restores result in the production of multiple restore files that must be
copied to the correct destination.
This page describes the different types of restores and different delivery options, and then provides some insight into
the actual process that occurs when you request a restore through Ops Manager.
For a step-by-step guide to restoring a replica set or sharded cluster using Backup, see: Restore MongoDB Deployments.
Restore Types
With Backup, you can restore from a stored snapshot or build a custom snapshot reflecting a specific point in time.
For all backups, restoring from a stored snapshot is faster than restoring from a specific point in time.
Snapshots provide a complete backup of the state of your MongoDB deployment at a given point in time. You can
take snapshots every 6, 8, 12, or 24 hours and set a retention policy that determines how long the snapshots are stored.
Point-in-time restores let you restore your mongod instance or replica set to a specific point in time in the past. You
can restore to any point back to your oldest retained snapshot. For sharded clusters, point-in-time restores let you
restore to a checkpoint. See Checkpoints for more information.
Point-in-time restores take longer to perform than snapshot restores, but allow you to restore more granularly. When
you perform a point-in-time restore, Ops Manager takes the most recent snapshot that occurred prior to that point and
then applies the oplog to bring the database up to the state it was in at that point in time. This way, Ops Manager
creates a custom snapshot, which you can then use in your restore.
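To make the mechanism concrete, the following mongo shell sketch illustrates the idea of replaying oplog entries that fall between the base snapshot’s timestamp and the requested point in time. It is illustrative only: the timestamp values are hypothetical, and Ops Manager performs this work internally on the Backup Daemon rather than through the shell.

// Illustrative only: replay oplog entries from the base snapshot's
// timestamp up to, but not including, the requested point in time.
var snapshotTs = Timestamp(1430000000, 1)   // hypothetical snapshot time
var targetTs = Timestamp(1430003600, 1)     // hypothetical restore target
db.getSiblingDB("local").oplog.rs.find(
    { ts: { $gt: snapshotTs, $lt: targetTs } }
).forEach(function(entry) {
    // applyOps replays a single oplog entry against the restored data.
    db.adminCommand({ applyOps: [ entry ] })
})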


Delivery Methods and File Formats
Ops Manager provides two delivery methods: HTTP and SCP.
With HTTP delivery, Ops Manager creates a link that you can use to download the snapshot file or files. See: Advanced
Backup Restore Settings for information about configuring the restore behaviors.
With the SCP delivery option, the Backup Daemon securely copies the restore file or files directly to your system. The
Backup File Format and Method tutorial describes how to select a restore’s delivery method and file format.
For SCP delivery, you can choose your file format to better suit your restore needs. With the Individual DB Files
format, Ops Manager transmits the MongoDB data files directly to the target directory. The individual files format
only requires sufficient space on the destination server for the data files.
In contrast, the Archive (tar.gz) option bundles the database files into a single tar.gz archive file, which you must
extract before reconstructing your databases. This option is generally faster than the individual files option but
requires temporary space on the server hosting the Backup Daemon and sufficient space on the destination server to
hold the archive file and extract it.
Windows does not include SCP and requires additional setup outside the scope of this manual.
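For example, once an SCP-delivered archive arrives on the destination server, you would extract it into the desired data directory. The archive name and target path below are placeholders:

mkdir -p /data/restore
tar -xzf <backup-restore-name>.tar.gz -C /data/restore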
Restore Flows
Regardless of the delivery method and restore type, Ops Manager’s restore flow follows a consistent pattern: when
you request a restore, the MMS HTTP service calls out to the Backup Daemon, which prepares the snapshot you will
receive. You then either download the files from the MMS HTTP service, or the Backup Daemon securely copies the
files to the destination server.
The following sections describe the restore flows for both snapshot restores and point-in-time restores, for each delivery and file format option.
HTTP Restore
Snapshot
With the HTTP PULL snapshot restore, the Backup Daemon simply creates a link to the appropriate snapshot in the
Backup Blockstore database. When the user clicks the download link, they download the snapshot from the MMS
HTTP Service, which streams the file out of the Backup Blockstore.
This restore method has the advantage of taking up no space on the server hosting the Backup Daemon: the file passes
directly from the Backup Blockstore to the destination server.
Point-In-Time
The HTTP PULL point-in-time restore follows the same pattern as the HTTP PULL snapshot restore, with added steps
for applying the oplog. When the user requests the restore, the Backup Daemon retrieves the snapshot that immediately
precedes the point in time and writes that snapshot to disk. The Backup Daemon then retrieves oplog entries from the
Backup Blockstore and applies them, creating a custom snapshot from that point in time. The Daemon then writes
the snapshot back to the Backup Blockstore. Finally, when the user clicks the download link, the user downloads the
snapshot from the MMS HTTP Service, which streams the file out of the Backup Blockstore.
This restore method requires that you have adequate space on the server hosting the Backup Daemon for the snapshot
files and oplog.


Archive SCP Restore
Snapshot
For a snapshot restore, with SCP archive delivery, the Backup Daemon simply retrieves the snapshot from the Backup
Blockstore and writes it to its disk. The Backup Daemon then combines and compresses the snapshot into a .tar.gz
archive and securely copies the archive to the destination server.
This restore method requires that you have adequate space on the server hosting the Backup Daemon for the snapshot
files and archive.
Point-In-Time
The point-in-time restore with SCP archive delivery follows the same pattern as the snapshot restore, but with added
steps for applying the oplog.
When the user requests the restore, the Backup Daemon retrieves the snapshot that immediately precedes the point in
time and writes that snapshot to disk. The Backup Daemon then retrieves oplog entries from the Backup Blockstore and
applies them, creating a custom snapshot for that point in time. The Backup Daemon then combines and compresses
the snapshot into a tar.gz archive and securely copies the archive to the destination server.
This restore method requires that you have adequate space on the server hosting the Backup Daemon for the snapshot
files, oplog, and archive.


Individual Files SCP Restore
Snapshot
For a snapshot restore, with SCP individual files delivery, the Backup Daemon simply retrieves the snapshot from the
Backup Blockstore and securely copies the data files to the target directory on the destination server.
This restore method also has the advantage of taking up no space on the server hosting the Backup Daemon: the file
passes directly from the Backup Blockstore to the destination server. The destination server requires only sufficient
space for the uncompressed data files. The data is compressed during transmission.
Point-In-Time
The point-in-time restore with SCP individual files delivery follows the same pattern as the snapshot restore, but with
added steps for applying the oplog.
When the user requests the restore, the Backup Daemon retrieves the snapshot that immediately precedes the point in
time and writes that snapshot to disk. The Backup Daemon then retrieves oplog entries from the Backup Blockstore
and applies them, creating a custom snapshot for that point in time. The Backup Daemon then securely copies the data
files to the target directory on the destination server.
This restore method also requires that you have adequate space on the server hosting the Backup Daemon for the
snapshot files and oplog. The destination server requires only sufficient space for the uncompressed data files. The
data is compressed during transmission.


Restore a Sharded Cluster from a Backup

On this page
• Overview
• Sequence
• Considerations
• Procedures

Overview
You can restore a sharded cluster onto new hardware from the artifacts captured by Backup.
You can restore from a snapshot or checkpoint. You must enable checkpoints to use them. When you restore from
a checkpoint, Ops Manager takes the snapshot previous to the checkpoint and applies the oplog to create a custom
snapshot. Checkpoint recovery takes longer than recovery from a stored snapshot.
Ops Manager provides restore files as a downloadable archive; Ops Manager can also scp files directly to your system.
The scp delivery method requires additional configuration but provides faster delivery.
Ops Manager provides a separate backup artifact for each shard and one file for the config servers.


Sequence
The sequence to restore a snapshot is to:
• select and download the restore files,
• distribute the restore files to their new locations,
• start the mongod instances,
• configure each shard’s replica set, and
• configure and start the cluster.
Considerations
Client Requests During Restoration
You must ensure that the MongoDB deployment does not receive client requests during restoration. You must either:
• restore to new systems with new hostnames and reconfigure your application code once the new deployment is
running, or
• ensure that the MongoDB deployment will not receive client requests while you restore data.


Snapshots when Agent Cannot Stop Balancer
Ops Manager displays a warning next to cluster snapshots taken while the balancer is enabled. If you restore from
such a snapshot, you run the risk of lost or orphaned data. For more information, see Snapshots when Agent Cannot
Stop Balancer.
Procedures
Select and Download the Snapshot Files
Step 1: Select the Backup tab and then select Sharded Cluster Status.
Step 2: Click the name of the sharded cluster to restore.
Ops Manager displays your selection’s stored snapshots.
Step 3: Click the Restore button and select the snapshot from which to restore.
Click the Restore button at the top of the page. On the resulting page, choose between restoring from a stored snapshot
and creating a custom snapshot.
To select a stored snapshot, click the Restore this snapshot link next to the snapshot.


To create a custom snapshot from a checkpoint, select the Use Custom Point In Time checkbox and enter the point in
time in the Date and Time fields, and click Next.
Step 4: Select the checkpoint.
Ops Manager lists the checkpoints that are the closest match to the point in time that you selected. Select a checkpoint
from which to create the snapshot, and click Next.
Step 5: Select HTTP as the delivery method for the snapshot.
In the Delivery Method field, select Pull via Secure HTTP (HTTPS).
Optionally, you can instead choose SCP as the delivery method. See: Retrieve a Snapshot with SCP Delivery for
the SCP delivery option’s configuration. If you choose SCP, you must provide the hostname and port of the server to
receive the files and provide access to the server through a username and password or through an SSH key. Follow the
instructions on the Ops Manager screen.
Step 6: Select tar.gz as the download format.
In the Format drop-down list, select Archive (tar.gz).


Step 7: Finalize the request.
Click Finalize Request and confirm your identity via two-factor verification. Then click Finalize Request again.
Step 8: Retrieve the snapshot.
Ops Manager creates one-time links to tar files for the snapshot. The links are available for one download each, and
each expires after an hour.
To download the tar files, select the Ops Manager Backup tab and then Restore Jobs. When the restore job completes,
the download link appears for every config server and shard in the cluster. Click each link to download the tar files
and copy each tar file to its server. For a shard, copy the file to every member of the shard’s replica set.
If you optionally chose SCP as the delivery method, the files are copied to the server directory you specified. To verify
that the files are complete, see the section on how to validate an SCP restore.
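For example, you might distribute a shard’s archive to every member of that shard’s replica set with a loop such as the following; the host names, file name, and target path are hypothetical:

for host in rs0-member1 rs0-member2 rs0-member3; do
    scp shard0-snapshot.tar.gz "$host":/restore/
done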
Restore Each Shard’s Primary
For all shards, restore the primary. You must have a copy of the snapshot on the server that provides the primary:
Step 1: Shut down the entire replica set.
Shut down the replica set’s mongod processes using one of the following methods, depending on your configuration:
• Automated Deployment:
If you use Ops Manager Automation to manage the replica set, you must shut down through the Ops Manager
console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
Connect to each member of the set and issue the following:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
Connect to each member of the set and issue the following:
use admin
db.shutdownServer( { force: true } )

Step 2: Restore the snapshot data files to the primary.
Extract the data files to the location where the mongod instance will access them through the dbpath setting. If
you are restoring to existing hardware, use a different data directory than used previously. The following are example
commands:
tar -xvf <backup-restore-name>.tar.gz
mv <backup-restore-name> /data


Step 3: Start the primary with the new dbpath.
For example:
mongod --dbpath <data-directory> --replSet <replica-set-name> --logpath <log-directory>/mongodb.log --fork

Step 4: Connect to the primary and initiate the replica set.
For example, first issue the following to connect:
mongo

And then issue rs.initiate():
rs.initiate()

Step 5: Restart the primary as a standalone, without the --replSet option.
Use the following sequence:
1. Shut down the process using one of the following methods:
• Automated Deployment:
Shut down through the Ops Manager console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
use admin
db.shutdownServer( { force: true } )

2. Restart the process as a standalone:
mongod --dbpath <data-directory> --logpath <log-directory>/mongodb.log --fork

Step 6: Connect to the primary and drop the oplog.
For example, first issue the following to connect:
mongo

And then drop the oplog collection:
use local
db.oplog.rs.drop()


Step 7: Run the seedSecondary.sh script on the primary.
The seedSecondary.sh script re-creates the oplog collection and seeds it with the timestamp of the snapshot’s creation.
This allows the secondary to catch up without requiring a full initial sync. Ops Manager customizes this script for this
particular snapshot and includes it in the backup restore file.
To run the script, issue the following command at the system prompt, where <mongod-port> is the port of the
mongod instance and <oplog-size-in-gb> is the size of the replica set’s oplog, in gigabytes:
./seedSecondary.sh <mongod-port> <oplog-size-in-gb>
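Although you should always use the script Ops Manager generates, the following mongo shell sketch shows, in simplified and assumed form, what seedSecondary.sh does: it re-creates a capped oplog collection and seeds it with a no-op entry carrying the snapshot’s timestamp. The size and timestamp values are illustrative only.

var local = db.getSiblingDB("local")
// Re-create the oplog as a capped collection of the size you supply.
local.createCollection("oplog.rs", { capped: true, size: 10 * 1024 * 1024 * 1024 })
// Seed it with a no-op ("n") entry stamped with the snapshot's creation
// time, so the member can sync forward from that point rather than
// performing a full initial sync.
local.oplog.rs.insert({
    ts: Timestamp(1430000000, 1), h: NumberLong(0), v: 2,
    op: "n", ns: "", o: { msg: "seed oplog from snapshot" }
})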

Step 8: Restart the primary as part of a replica set.
Use the following sequence:
1. Shut down the process using one of the following methods:
• Automated Deployment:
Shut down through the Ops Manager console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
use admin
db.shutdownServer( { force: true } )

2. Restart the process as part of a replica set:
mongod --dbpath <data-directory> --replSet <replica-set-name>

Restore All Secondaries
After you have restored the primary for a shard you can restore all secondaries. You must have a copy of the snapshot
on all servers that provide the secondaries:
Step 1: Connect to the server where you will create the new secondary.
Step 2: Restore the snapshot data files to the secondary.
Extract the data files to the location where the mongod instance will access them through the dbpath setting. If
you are restoring to existing hardware, use a different data directory than used previously. The following are example
commands:
tar -xvf <backup-restore-name>.tar.gz
mv <backup-restore-name> /data


Step 3: Start the secondary as a standalone, without the --replSet option.
Use the following sequence:
1. Shut down the process using one of the following methods:
• Automated Deployment:
Shut down through the Ops Manager console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
use admin
db.shutdownServer( { force: true } )

2. Restart the process as a standalone:
mongod --dbpath <data-directory> --logpath <log-directory>/mongodb.log --fork

Step 4: Run the seedSecondary.sh script on the secondary.
The seedSecondary.sh script re-creates the oplog collection and seeds it with the timestamp of the snapshot’s creation.
This allows the secondary to catch up without requiring a full initial sync. Ops Manager customizes this script for this
particular snapshot and includes it in the backup restore file.
To run the script, issue the following command at the system prompt, where <mongod-port> is the port of the
mongod instance and <oplog-size-in-gb> is the size of the replica set’s oplog, in gigabytes:
./seedSecondary.sh <mongod-port> <oplog-size-in-gb>

Step 5: Restart the secondary as part of the replica set.
Use the following sequence:
1. Shut down the process using one of the following methods:
• Automated Deployment:
Shut down through the Ops Manager console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
use admin
db.shutdownServer( { force: true } )

2. Restart the process as part of a replica set:
mongod --dbpath <data-directory> --replSet <replica-set-name>

Step 6: Connect to the primary and add the secondary to the replica set.
Connect to the primary and use rs.add() to add the secondary to the replica set.
rs.add(":")

Repeat this operation for each member of the set.
Restore Each Config Server
Perform this procedure separately for each config server. Each config server must have a copy of the tar file with the
config server data.
Step 1: Restore the snapshot to the config server.
Extract the data files to the location where the config server’s mongod instance will access them. This is the location
you will specify as the dbPath when running mongod for the config server.
tar -xvf <backup-restore-name>.tar.gz
mv <backup-restore-name> /data

Step 2: Start the config server.
The following example starts the config server using the new data:
mongod --configsvr --dbpath /data

Step 3: Update the sharded cluster metadata.
If the new shards do not have the same hostnames and ports as the original cluster, you must update the shard metadata.
To do this, connect to each config server and update the data.
First connect to the config server with the mongo shell. For example:
mongo

Then access the shards collection in the config database. For example:
use config
db.shards.find().pretty()

The find() method returns the documents in the shards collection. The collection contains a document for each
shard in the cluster. The host field for a shard displays the name of the shard’s replica set and then the hostname and
port of the shard. For example:
{ "_id" : "shard0000", "host" : "shard1/localhost:30000" }


To change a shard’s hostname and port, use the update() method to modify the documents in the shards collection.
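For example, to point the shard0000 entry at a hypothetical new replica set member, issue the following in the mongo shell connected to the config server:

use config
db.shards.update(
    { _id: "shard0000" },
    { $set: { host: "shard1/mongodb1.example.net:30000" } }
)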
Start the mongos
Start the cluster’s mongos bound to your new config servers.
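For example, assuming three hypothetical config server hosts, you might start each mongos as follows:

mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019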
Restore a Replica Set from a Backup

On this page
• Overview
• Sequence
• Prerequisites
• Procedures

Overview
You can restore a replica set from the artifacts captured by Ops Manager Backup. You can restore either a stored
snapshot or a point in time in the last 24 hours between snapshots. If you restore from a point in time, Ops Manager
Backup creates a custom snapshot for the selected point by applying the oplog to the previous regular snapshot. Point-in-time recovery takes longer than recovery from a stored snapshot.
When you select a snapshot to restore, Ops Manager creates a link to download the snapshot as a tar file. The link
is available for one download only and times out after an hour. You can optionally have Ops Manager scp the tar
file directly to your system. The scp delivery method requires additional configuration but provides faster delivery.
Windows does not include scp and requires additional setup outside the scope of this manual.
You can restore either to new hardware or existing hardware. If you restore to existing hardware, use a different data
directory than used previously.
Sequence
The sequence used here to restore a replica set is to download the restore file and distribute it to each server, restore
the primary, and then restore the secondaries. For additional approaches to restoring replica sets, see the procedure
from the MongoDB Manual to Restore a Replica Set from a Backup.
Prerequisites
Oplog Size
To seed each replica set member, you will use the seedSecondary.sh script included in the backup restore file.
When you run the script, you will provide the replica set’s oplog size, in gigabytes. If you do not have the size, see the
section titled “Check the Size of the Oplog” on the Troubleshoot Replica Sets page of the MongoDB manual.
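For example, you can report the configured oplog size by connecting to a replica set member with the mongo shell:

use local
db.printReplicationInfo()

The first line of the output reports the configured oplog size in megabytes; divide by 1024 to obtain the value in gigabytes for seedSecondary.sh.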


Client Requests
You must ensure that the MongoDB deployment does not receive client requests during restoration. You must either:
• restore to new systems with new hostnames and reconfigure your application code once the new deployment is
running, or
• ensure that the MongoDB deployment will not receive client requests while you restore data.
Procedures
Select and Download the Snapshot
Step 1: Select the Backups tab and then select Replica Set Status.
Step 2: Click the name of the replica set to restore.
Ops Manager displays your selection’s stored snapshots.
Step 3: Select the snapshot from which to restore.
To select a stored snapshot, click the Restore this snapshot link next to the snapshot.
To select a custom snapshot, click the Restore button at the top of the page. In the resulting page, select a snapshot
as the starting point. Then select the Use Custom Point In Time checkbox and enter the point in time in the Date and
Time fields. Ops Manager includes all operations up to but not including the point in time. For example, if you select
12:00, the last operation in the restore is 11:59:59 or earlier. Click Next.
Step 4: Select HTTP as the delivery method for the snapshot.
In the Delivery Method field, select Pull via Secure HTTP (HTTPS).
Optionally, you can instead choose SCP as the delivery method. See: Retrieve a Snapshot with SCP Delivery for
the SCP delivery option’s configuration. If you choose SCP, you must provide the hostname and port of the server to
receive the files and provide access to the server through a username and password or through an SSH key. Follow the
instructions on the Ops Manager screen.
Step 5: Finalize the request.
Click Finalize Request and confirm your identity via two-factor verification. Then click Finalize Request again.
Step 6: Retrieve the snapshot.
Ops Manager creates a one-time link to a tar file of the snapshot. The link is available for one download and times out
after an hour.
To download the snapshot, select the Ops Manager Backup tab and then select Restore Jobs. When the restore job
completes, select the download link next to the snapshot.
If you optionally chose SCP as the delivery method, the files are copied to the server directory you specified. To verify
that the files are complete, see the section on how to validate an SCP restore.

Step 7: Copy the snapshot to each server to restore.
Restore the Primary
You must have a copy of the snapshot on the server that provides the primary:
Step 1: Shut down the entire replica set.
Shut down the replica set’s mongod processes using one of the following methods, depending on your configuration:
• Automated Deployment:
If you use Ops Manager Automation to manage the replica set, you must shut down through the Ops Manager
console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
Connect to each member of the set and issue the following:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
Connect to each member of the set and issue the following:
use admin
db.shutdownServer( { force: true } )

Step 2: Restore the snapshot data files to the primary.
Extract the data files to the location where the mongod instance will access them through the dbpath setting. If
you are restoring to existing hardware, use a different data directory than used previously. The following are example
commands:
tar -xvf <backup-restore-name>.tar.gz
mv <backup-restore-name> /data

Step 3: Start the primary with the new dbpath.
For example:
mongod --dbpath <data-directory> --replSet <replica-set-name> --logpath <log-directory>/mongodb.log --fork

Step 4: Connect to the primary and initiate the replica set.
For example, first issue the following to connect:
mongo

And then issue rs.initiate():
rs.initiate()

Step 5: Restart the primary as a standalone, without the --replSet option.
Use the following sequence:
1. Shut down the process using one of the following methods:
• Automated Deployment:
Shut down through the Ops Manager console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
use admin
db.shutdownServer( { force: true } )

2. Restart the process as a standalone:
mongod --dbpath <data-directory> --logpath <log-directory>/mongodb.log --fork

Step 6: Connect to the primary and drop the oplog.
For example, first issue the following to connect:
mongo

And then drop the oplog collection:
use local
db.oplog.rs.drop()

Step 7: Run the seedSecondary.sh script on the primary.
The seedSecondary.sh script re-creates the oplog collection and seeds it with the timestamp of the snapshot’s creation.
This allows the secondary to catch up without requiring a full initial sync. Ops Manager customizes this script for this
particular snapshot and includes it in the backup restore file.
To run the script, issue the following command at the system prompt, where <mongod-port> is the port of the
mongod instance and <oplog-size-in-gb> is the size of the replica set’s oplog, in gigabytes:
./seedSecondary.sh <mongod-port> <oplog-size-in-gb>

Step 8: Restart the primary as part of a replica set.
Use the following sequence:


1. Shut down the process using one of the following methods:
• Automated Deployment:
Shut down through the Ops Manager console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
use admin
db.shutdownServer( { force: true } )

2. Restart the process as part of a replica set:
mongod --dbpath <data-directory> --replSet <replica-set-name>

Restore Each Secondary
After you have restored the primary you can restore all secondaries. You must have a copy of the snapshot on all
servers that provide the secondaries:
Step 1: Connect to the server where you will create the new secondary.
Step 2: Restore the snapshot data files to the secondary.
Extract the data files to the location where the mongod instance will access them through the dbpath setting. If
you are restoring to existing hardware, use a different data directory than used previously. The following are example
commands:
tar -xvf <backup-restore-name>.tar.gz
mv <backup-restore-name> /data

Step 3: Start the secondary as a standalone, without the --replSet option.
Use the following sequence:
1. Shut down the process using one of the following methods:
• Automated Deployment:
Shut down through the Ops Manager console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
use admin
db.shutdownServer( { force: true } )


2. Restart the process as a standalone:
mongod --dbpath <data-directory> --logpath <log-directory>/mongodb.log --fork

Step 4: Run the seedSecondary.sh script on the secondary.
The seedSecondary.sh script re-creates the oplog collection and seeds it with the timestamp of the snapshot’s creation.
This allows the secondary to catch up without requiring a full initial sync. Ops Manager customizes this script for this
particular snapshot and includes it in the backup restore file.
To run the script, issue the following command at the system prompt, where <mongod-port> is the port of the
mongod instance and <oplog-size-in-gb> is the size of the replica set’s oplog, in gigabytes:
./seedSecondary.sh <mongod-port> <oplog-size-in-gb>

Step 5: Restart the secondary as part of the replica set.
Use the following sequence:
1. Shut down the process using one of the following methods:
• Automated Deployment:
Shut down through the Ops Manager console. See Shut Down MongoDB Processes.
• Non-Automated Deployment on MongoDB 2.6 or Later:
use admin
db.shutdownServer()

• Non-Automated Deployment on MongoDB 2.4 or earlier:
use admin
db.shutdownServer( { force: true } )

2. Restart the process as part of a replica set:
mongod --dbpath <data-directory> --replSet <replica-set-name>

Step 6: Connect to the primary and add the secondary to the replica set.
Connect to the primary and use rs.add() to add the secondary to the replica set.
rs.add(":")

Repeat this operation for each member of the set.

6.6 Restore MongoDB Instances with Backup
Use the procedures in these tutorials to restore data from Backup artifacts to MongoDB instances. If you want to
restore an entire MongoDB deployment, use the Restore MongoDB Deployments tutorials.
Restore from a Stored Snapshot with HTTP Pull Restore a replica set or sharded cluster from a stored snapshot with
HTTP Pull delivery.

Restore from a Stored Snapshot with SCP Delivery Restore a replica set or sharded cluster from a stored snapshot
with SCP delivery.
Restore from a Point in the Last Day Restore a replica set from a custom snapshot from any point within a 24-hour
period of time.
Restore a Single Database Restore only a portion of a backup to a new mongod instance.
Seed a New Secondary Use Ops Manager Backup to seed a new secondary in an existing replica set.
Restore from a Stored Snapshot

On this page
• Overview
• Procedure
• Additional Information

Overview
With Backup, you can restore from a stored snapshot or build a custom snapshot reflecting a specific point in time
within the last 24 hours. For all backups, restoring from a stored snapshot is faster than restoring from a custom
point-in-time snapshot.
By default, Backup automatically takes and stores a snapshot every 6 hours. Snapshots remain available for restoration
following the snapshot retention policy.
For replica sets, you will receive one .tar.gz file containing your data; for sharded clusters, you will receive a series
of .tar.gz files.
Procedure
Step 1: Select the Backups tab, and then select either Sharded Cluster Status or Replica Set Status.
Step 2: Click the name of the sharded cluster or replica set to restore.
Ops Manager displays your selection’s stored snapshots.
Step 3: Select the snapshot from which to restore.
To select a stored snapshot, click the Restore this snapshot link next to the snapshot.
To select a custom snapshot, click the Restore button at the top of the page. In the resulting page, select a snapshot
as the starting point. Then select the Use Custom Point In Time checkbox and enter the point in time in the Date and
Time fields. Ops Manager includes all operations up to but not including the point in time. For example, if you select
12:00, the last operation in the restore is 11:59:59 or earlier. Click Next.


Step 4: Select HTTP as the delivery method for the snapshot.
In the Delivery Method field, select Pull via Secure HTTP (HTTPS).
Optionally, you can instead choose SCP as the delivery method. See: Retrieve a Snapshot with SCP Delivery for
the SCP delivery option’s configuration. If you choose SCP, you must provide the hostname and port of the server to
receive the files and provide access to the server through a username and password or through an SSH key. Follow the
instructions on the Ops Manager screen.
Step 5: Finalize the request.
Click Finalize Request and confirm your identity via two-factor verification. Then click Finalize Request again.
Step 6: Retrieve the snapshot.
To download the snapshot, select the Ops Manager Backup tab and then select Restore Jobs. When the restore job
completes, select the download link next to the snapshot.
For sharded clusters, Ops Manager provides a download link for each of the .tar.gz files.
Step 7: Extract the data files from the .tar.gz archive created by the backup service.
tar -zxvf <backup-restore-name>.tar.gz

Step 8: Select the location where the mongod will access the data files.
The directory you choose will become the mongod’s data directory. You can either create a new directory or use the
existing location of the extracted data files.
If you create a new directory, move the files to that directory.
If you use the existing location of the extracted data files, you can optionally create a symbolic link to the location
using the following command, where --