Managing Your MongoDB Deployment

Overview

When you host your MongoDB database with mLab, you have access to special tools and processes that help you manage routine MongoDB tasks. Some of these tasks are described here, with detailed instructions and links for further reading.

MongoDB version management

As MongoDB, Inc. continues to roll out new versions of MongoDB, it’s important to keep your database up-to-date to take advantage of new features, bug fixes, and performance improvements.

Versions currently available at mLab

The current default version of MongoDB at mLab is 3.2 (as of September 27, 2016). However, you have the option of selecting other supported versions.

During the scheduled maintenance window starting on September 27, 2016, all free Sandbox databases running MongoDB 3.0.x will be upgraded to MongoDB 3.2.x, and all for-pay Shared databases running MongoDB 2.6.x will be upgraded to MongoDB 3.0.x.

An email notification with details about this maintenance was sent to potentially affected users in July 2016. However, note that ALL deployments created prior to September 27 that meet the stated criteria will also be affected.

Plan Type           Currently Supported Versions
Sandbox             3.2.x
For-pay Shared      3.2.x, 3.0.x
For-pay Dedicated   3.2.x, 3.0.x, 2.6.x (notifications about the de-support of 2.6.x will likely go out in Q1 2017)

Determining your current MongoDB version

Follow these steps to see which version of MongoDB your deployment is currently running:

  1. Log in to the mLab management portal
  2. Navigate to the MongoDB deployment whose version you want to determine
  3. At the top of the screen, you will see a box with the connection information; the MongoDB version is indicated in the lower right-hand corner of this box

Alternatively, you can run the db.version() method from the mongo shell to see which version your deployment is running:

> db.version()
3.0.7

How to change MongoDB versions

Not available for Sandbox databases

If you have a for-pay deployment, you can upgrade (or change) the version of MongoDB you are running directly from the mLab management portal. The process is seamless if you are making a replica set connection to one of our Cluster plans.

Before doing so, review the release notes for the target version and confirm that your driver version is compatible with it. If you are changing release (major) versions, we highly recommend thorough testing in a Staging environment first (see the FAQ below).

When you’re ready to upgrade, follow these steps:

  1. Log in to the mLab management portal
  2. From your account’s Home page, navigate to the deployment that will be modified
  3. Click the “Tools” tab
  4. Select the desired version in the drop-down menu that appears in the “Upgrade MongoDB version” section
  5. Click the “Upgrade to…” button

Version change process

If you have a replica set cluster with auto-failover, we will first restart your non-primary nodes (e.g., the arbiter and secondary node(s)) with the new version. Then we will intentionally fail you over in order to upgrade your original primary. Finally, we will fail you back over so that your original primary is once again primary. You should experience no downtime if your drivers and client code have been properly configured for failover. Note that during failover, it may take 5-30 seconds for a new member to be elected primary.

If you are on a single-node plan, your database server will be restarted which typically involves approximately 20 seconds of downtime.
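If you are connected to a replica set cluster with the mongo shell, you can watch members change state during this process by polling rs.status(). Here’s a minimal sketch; the hostnames and states below are illustrative, not output from a real deployment:

> rs.status().members.map(function (m) { return m.name + " - " + m.stateStr; })
[
  "ds012345-a0.mlab.com:27017 - PRIMARY",
  "ds012345-a1.mlab.com:27017 - SECONDARY",
  "ds012345-arb0.mlab.com:27017 - ARBITER"
]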

Frequently asked questions

Q. Are for-pay deployments automatically upgraded when mLab supports a new MongoDB version?

Maintenance (minor) versions

We do not automatically patch any for-pay deployments to the latest maintenance/minor version (e.g., 3.0.7 to 3.0.8). Instead, we send out email notifications if maintenance releases have very important bug fixes in them.

That being said, if there were any truly critical issues (e.g., one that would result in data loss), it’s likely that we would automatically patch and send a notification.

Release (major) versions

The only time we will automatically upgrade the MongoDB version on a for-pay deployment is when we de-support the currently-running version.

We typically support at least two release (major) versions on our for-pay Shared plans and three release versions on our Dedicated plans (listed here). Eventually, as release versions are de-supported, an upgrade will be necessary. In those cases, we send multiple notifications well in advance of a mandatory upgrade such as this example notice. If the user doesn’t perform the upgrade at their convenience by the stated deadline, we will automatically perform the version upgrade to our minimum supported version.

Q. Why can’t I change the MongoDB version that’s running on my Sandbox database?

Because our Sandbox databases run on server processes shared by multiple users, version changes are not possible. All Sandbox databases are automatically upgraded to the latest MongoDB version we support. To run a specific version of MongoDB, you will need to upgrade to one of our for-pay plans, which provide a dedicated mongod server process and the flexibility to change versions at your convenience.

Q. How do I test a specific maintenance (minor) version?

We do not offer the ability to change to a specific maintenance (minor) version. Maintenance versions are intended to contain only bug fixes and patches; as a result, we don’t consider it necessary to treat these versions (e.g., 3.2.7 vs. 3.2.8) differently. At any given time, we offer only the latest maintenance version of each release.

That being said, if you are upgrading to a different release (major) version (e.g., from 2.6.x to 3.0.x), we highly recommend thorough testing in a Staging environment.

Viewing and killing current operations

Not available for Sandbox databases

Although there can be many reasons for unresponsiveness, we sometimes find that particularly long-running and/or blocking operations (initiated either by a human or an application) are the culprit. Common examples of operations that can bog down the database include foreground index builds, queries on unindexed fields, and expensive aggregation or map-reduce jobs.

To quickly see if one or more operations are particularly long-running, use the tool in the mLab management portal.

If you have a Dedicated plan, you can get a report on current operations directly by running the db.currentOp() method in the mongo shell. In addition, you can use the db.killOp() helper method in the mongo shell to terminate a currently running operation. To do this, pass the value of the opid field as an argument to db.killOp().
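For example, to find operations that have been active for more than 60 seconds and then kill one of them (the opid and the abridged output below are illustrative):

> db.currentOp({ "active": true, "secs_running": { "$gt": 60 } })
{
  "inprog": [
    { "opid": 12345, "active": true, "secs_running": 312, "op": "query", ... }
  ]
}
> db.killOp(12345)
{ "info": "attempting to kill op", "ok": 1 }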

Don’t hesitate to contact support@mlab.com for help. If you have a Dedicated plan and are in an emergency situation, use the emergency email that we provided to you.

Additional reading from the mLab Blog: Finding and terminating long-running operations in MongoDB

Restarting your MongoDB processes

Not available for Sandbox databases

On rare occasions, you may need to restart your server processes. If you have a for-pay database, you can do this directly from the mLab management portal.

  1. Log in to the mLab management portal
  2. From your account’s Home page, navigate to the deployment that needs to be restarted
  3. Click the “Tools” tab
  4. Click the “Restart server” or “Restart cluster” (the wording is different depending on your plan type) button just beneath the tab headers
  5. Follow the instructions in the “Warning” window to confirm the restart, then click “Restart”

If you have a replica set cluster with auto-failover, we will use MongoDB’s replSetFreeze command to ensure that your current primary remains primary during the restart. Then we will restart each of your nodes in turn. The entire process could take a few minutes, but you should only lose access to your primary for about 20 seconds.
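For context, replSetFreeze is the same mechanism that the mongo shell exposes as rs.freeze(). Running it against a secondary prevents that member from seeking election for the specified number of seconds (the duration below is illustrative):

> rs.freeze(120)
{ "ok" : 1 }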

If you are on a single-node plan, your database server will be restarted which typically involves approximately 20 seconds of downtime.

Sandbox database limitations
Because our Sandbox databases are shared by multiple users, restarting MongoDB on-demand is not possible. If you suspect a restart is required, contact support@mlab.com.

Compacting your database

Sometimes it’s necessary to compact your database in order to reclaim disk space (e.g., if you are quickly approaching your storage limit) and/or to reduce fragmentation. When you compact your database, you are effectively reducing its file size.

Understanding file size vs. data size

mLab’s Sandbox and Shared plans use the fileSize (as opposed to dataSize) value from the output of the dbStats command as the basis for determining whether you are nearing your storage quota. However, when you compare the two metrics, you’ll notice that fileSize is often a much larger value. This is because when MongoDB deletes objects, deletes collections, or moves objects due to a change in size, it leaves “holes” in the data files. MongoDB does try to re-use these holes but they are not freed to the OS.
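You can compare the two metrics yourself by running db.stats() from the mongo shell. In this illustrative, abridged example, the data itself occupies about 100 MB while the files on disk occupy about 496 MB; the difference is largely the “holes” described above:

> db.stats()
{
  "db" : "exampledb",
  "dataSize" : 104857600,
  "fileSize" : 520093696,
  ...
}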

For a more detailed explanation of how this works, read our discussion about how MongoDB’s size metrics are calculated.

How to compact your database(s)

Sandbox and single-node plans

If you are on a Sandbox or single-node plan and would like to try to reclaim disk space, you can use MongoDB’s repairDatabase command.

If your fileSize is under 496 MB, you can run this repair command directly through our UI by visiting the page for your database, clicking the “Tools” tab, and selecting “repairDatabase” from the drop-down list. Otherwise, you can run the db.repairDatabase() method after connecting to your database with the mongo shell.

We would also be happy to run this command for you - send your request to support@mlab.com.

The repairDatabase command is a blocking operation. Your database will be unavailable until the repair is complete.
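Here is what a manual repair looks like from the mongo shell (the database name is illustrative):

> use exampledb
switched to db exampledb
> db.repairDatabase()
{ "ok" : 1 }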

Replica set cluster plans

If you are on a multi-node, highly available replica set cluster plan (Shared or Dedicated) and would like to try to reclaim disk space, you can do so while still using your database.

Option 1: Use our rolling node replacement process

Available for Dedicated Cluster plans only

This first option is the preferred method: not only is the process seamless, but your deployment also maintains the same level of availability throughout. Read about mLab’s rolling node replacement process below.

Option 2: Resync each node from scratch

Available for any Cluster plan, but best for Shared Cluster plans

When resyncing a secondary member of your replica set, you can continue to use the primary member of your replica set.

However, note that while a node is resyncing, your cluster is temporarily running with reduced redundancy, and an initial sync can take a significant amount of time if your deployment holds a large amount of data.

High-level process:

  1. Resync the current secondary node using an initial sync
  2. Initiate a failover
  3. Resync the original primary, which is now a secondary

An application will gracefully handle failover events if it has been properly configured to use a replica set connection.
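For example, a replica set connection URI lists every member host and includes the replicaSet parameter, which lets most drivers discover the new primary automatically after a failover. The hosts, credentials, database, and replica set name below are illustrative:

mongodb://dbuser:dbpassword@ds012345-a0.mlab.com:27017,ds012345-a1.mlab.com:27017/exampledb?replicaSet=rs-ds012345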

Steps to compact a Shared Cluster plan deployment:

  1. Log in to the mLab management portal
  2. Navigate to the Shared Cluster deployment that you want to compact
  3. On the “Databases” tab, note the values of the “Size” and “File Size” columns
    • If your database’s “File Size” is only a little larger than its “Size”, a compaction will have little or no effect. A good rule of thumb is that a compaction is only likely to be effective if the “File Size” is more than 20% larger than the “Size” (e.g., a “Size” of 2.0 GB with a “File Size” of 3.0 GB, which is 50% larger).
  4. Navigate to the “Servers” tab
    • First click “initiate resync” on the node that’s currently in the state of SECONDARY
    • Once the sync is complete, then click “step down (fail over)” on the node that’s currently in the state of PRIMARY
    • Finally click “initiate resync” on the node that was primary but is now in the state of SECONDARY

If your for-pay plan is hosted on Azure and you want a compaction, you will need to send a request to support@mlab.com for help.

Initiating a failover for your cluster

If you would like to force your current primary to step down, you can do so through the mLab management portal. The following instructions are the equivalent of running the rs.stepDown() function in the mongo shell (shown below the steps):

  1. Log in to the mLab management portal
  2. From your account’s Home page, navigate to the deployment that needs a failover
  3. Click the “Servers” tab
  4. Click the “step down (fail over)” link that appears under the “Manage” column in the row for your current primary
  5. In the dialog box that appears, click the “Step down” button
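For reference, the shell equivalent is a single command run while connected to the current primary. The shell may report a network error immediately afterward because stepping down closes existing connections:

> rs.stepDown()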

mLab’s rolling node replacement process

If you are on a replica set cluster plan with auto-failover, mLab’s rolling node replacement process will allow you to maintain high availability and keep your existing connection string during scheduled maintenance. If your application/driver is configured properly for replica set connections, you should experience no downtime during this process except during failover.

A Dedicated Cluster plan cannot be downgraded to a Shared Cluster plan using the rolling node replacement process. However, a downgrade from one Dedicated Cluster plan to another Dedicated Cluster plan using this process is both possible and recommended.

What is this process used for?

The rolling node replacement process is most commonly used for:

  • Plan changes (e.g., moving to a Dedicated Cluster plan with more RAM or disk)
  • Compacting databases to reclaim disk space
  • Scheduled maintenance on the underlying virtual machines

Steps

Overall steps:

  1. mLab replaces each secondary in turn (see the steps under “Steps to replace a secondary”)
  2. Either you or mLab intentionally initiates a failover so that your current primary is now secondary
  3. mLab replaces your original primary (now secondary)

Steps to replace a secondary:

  1. mLab adds a new, hidden node to your existing replica set
  2. Wait for the new node to complete its initial sync [1]
  3. mLab swaps out your existing node with the new node, updating DNS records

Expected impact on running applications

The rolling node replacement process is mostly seamless, but note that a failover will be necessary and that it may take 5-30 seconds for a new member to be elected primary. mLab will coordinate with you for the required failover unless you explicitly tell us it’s not necessary (see next section).

In addition, MongoDB’s replica set reconfiguration command, replSetReconfig, will be run several times during this process. While this command can sever existing connections and temporarily cause errors in driver logs, these disconnects usually have minimal effect on applications and drivers that have been configured properly.

Notification and coordination

Swapping out a current secondary: Replacing a secondary does not require a failover, so we will notify you about this maintenance but do not typically need to coordinate a specific time with you.

Swapping out your current primary: Replacing your primary requires a failover, so we will coordinate its timing with you in advance unless you have explicitly told us that coordination is not necessary.

Additional charges

The extra virtual machines used during a rolling node replacement process to maintain the same level of availability may incur additional charges.



  [1] When it makes sense, we will use a recent block storage snapshot as the basis for the new node instead of waiting for the new node to complete its initial sync.