Guide to Migrating to Atlas

This page provides a list of the prerequisites for migrating to Atlas and a step-by-step guide to the migration process.

For details on migration timing and our FAQ please visit https://docs.mlab.com/mlab-to-atlas/.

Migration Prerequisites

Ensure minimum required versions:

Review key differences between mLab and Atlas:

Ensure that all recurring query patterns are well-indexed and that your deployment is running healthy on mLab:

Consider enabling SSL on your mLab deployment before migrating:

Immediately before starting to migrate a specific mLab deployment to Atlas:

Pre-Migration Setup

A. Create your Atlas organization and project(s)

If you will be using Atlas on behalf of a company, note that you will only need a single Atlas organization. Unlike mLab, MongoDB Atlas provides the ability to have multiple Organization Owners and to assign fine-grained privileges to users.

  1. Visit https://mongodb.com/cloud/atlas and register for an Atlas account.
    • The email address that you sign up with will become the username that you use to log in.
  2. Ensure that you are on the Project view by visiting https://cloud.mongodb.com.
  3. (Optional) Rename the default organization.
    • If you will be using this account on behalf of a company, rename it to the name of the company.
  4. (Optional) Rename the default project.
    • We recommend creating separate projects for your various environments (e.g., production, test, development). We advise against mixing development and production clusters within the same project since they share security-related resources such as database users and IP whitelists.
    • Common project names are “<your app’s name> Production” and “<your app’s name> Development”.
    • You’ll later be able to manage access to your project(s).
  5. (Optional) Create multiple projects within your Atlas organization.

B. Connect your Atlas organization to the source mLab account

In order to use the migration tool that was custom-built for migrations from mLab to Atlas, you’ll need to create a connection between the source mLab account (the account from which you want to migrate deployment(s)) and the target Atlas organization.

If you have multiple mLab accounts that belong to the same company, note that you can first connect the target Atlas organization to one source mLab account. Then at any point you can disconnect and connect the same target Atlas organization to a different source mLab account. There are no restrictions on the number of times you can disconnect and connect to different source mLab accounts.

Steps:

  1. Log in to the target Atlas organization as an Atlas Organization Owner.
    • Only an Organization Owner can establish a connection to the source mLab account. However, this root-level access is not required for migrating; any Atlas user who is a Project Owner of the target Atlas project can migrate a specific deployment from mLab.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. Click on the Organization Settings icon (the gears icon) next to the Organizations menu. (img-atlas-org-settings)
  4. Click on the green “Connect to mLab” button.

  5. Log in to the source mLab account from the same browser as the mLab Admin User.
  6. In mLab’s UI review the text presented by the “Authorize MongoDB Atlas” form and click “Authorize” to complete the connecting of your mLab account with your Atlas organization.

The “mLab Account” link in the left navigation pane should now be highlighted, and you should see the “mLab Account” view. This view lists your mLab deployments on the “Deployments” tab and lists your mLab account users on the Account Users tab. This view makes it easier to invite your mLab account users to your Atlas organization and to migrate your mLab deployment(s) to Atlas.

If you’ve navigated away from this “mLab Account” view you can return back at any time by navigating to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner) and then clicking on the “mLab Account” link.

C. Configure payment method (not required for free databases)

In order to create a for-pay Atlas cluster (M2 or above), you will need to first configure a payment method.

mLab’s service and MongoDB Atlas’ service are completely separate, which means that even if you already have a credit card on file with mLab, you’ll need to supply it to MongoDB Atlas as well.

Configure Credit Card or PayPal

This is required for most for-pay customers.

Steps:

  1. Log in to the target Atlas organization.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. From the top navigation select “Billing”.
  4. Click the “Edit” button in the “Payment method” panel.

Enter Activation Code

This is only applicable to customers who have executed a MongoDB Order Form (i.e., entered into a contract with MongoDB).

Steps:

  1. Log in to the target Atlas organization.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. From the left navigation select “Billing”.
  4. Click the “Apply Credit” button and enter the activation code that you received via email.

D. (Optional) Invite the source mLab account’s users to your Atlas organization and grant project access

After the connection between the source mLab account and the target Atlas organization has been established, you will be able to see a list of the account users that exist on the source mLab account and to invite a given mLab user’s email address to your Atlas organization.

Unlike with mLab, a given email address (and username) on Atlas can be associated with many Atlas organizations.

Steps:

  1. Navigate to the “mLab Account” view.
    • From the left navigation click “mLab Account”.
  2. Select the “Account Users” tab.

  3. Open the “Invite mLab User to Organization” dialog.
    • Locate the user that you want to invite to the target Atlas organization.
    • Click the ellipses (…) button for that user.
    • Click “Invite User”.
  4. In the dialog, click “Send Invitation”.
    • The dialog will close and Atlas will send an email to that user’s email address inviting them to the Atlas organization. The subject of this email will start with “Invitation to MongoDB Cloud”.
    • If this email address is new to Atlas (i.e., isn’t associated with an existing Atlas user), when the invitation is accepted, this email address will become the username for the new Atlas user. This new Atlas user will only be able to log into Atlas and not mLab.
    • Otherwise when the invitation is accepted, this user will now see your Atlas organization associated with their profile.
  5. Ensure that the correct target project is selected.

  6. Grant access to the target project.
    • View Atlas documentation on managing project access and project roles.
    • The Atlas user that initiates a migration into a target Atlas cluster needs to have “Project Owner” access.

Migrating a Specific Deployment

After reviewing the migration prerequisites detailed above and completing the pre-migration setup steps, perform the following steps to migrate a specific mLab deployment.

E. Initiate the migration process for a specific deployment

The steps in the migration wizard are different depending on whether you are migrating to the Atlas shared tier (M0, M2, or M5) or the Atlas dedicated tier (M10 or above). This is because there are two different migration processes.

For example, you will only see the “Test Migration” step if you are migrating to an Atlas shared-tier cluster (M0, M2, or M5) which uses a mongodump/mongorestore process. However, you’ll still be able to test the Live Migration process when you migrate to an Atlas dedicated-tier cluster (M10 or above).

Steps:

  1. Log in to the target Atlas organization.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. Navigate to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner).
  4. From the left navigation select “mLab Account”.
  5. Locate the mLab deployment that you want to migrate to Atlas.
    • Click the ellipses (…) button for that deployment.
    • Click “Configure Migration” to open the migration wizard for the given deployment.

F. Complete the tasks in the migration wizard

Migration wizard tasks include:

Below is an example of what the migration wizard looks like. Note that the order and tasks will vary depending on the characteristics of the mLab deployment being migrated.

img-atlas-migration-checklist

Post-Migration Atlas Configuration Review

G. Select a support plan for your Atlas organization

By default Atlas organizations include only a Basic Support plan which does not include database support or a response time SLA. Unlike on mLab, in order to get support for your database you need to purchase an Atlas support plan separately. You will not be able to open a ticket with Atlas Support unless you have a support plan.

When you migrate from mLab to any for-pay Atlas cluster, you will be presented with the option to select a support plan. The price of an Atlas support plan is determined by your monthly usage costs, which are the sum of cluster (VM/disk), data transfer, and backup costs.

Our intent is that your all-in Atlas costs do not exceed what you’re paying at mLab. As such, when you migrate your source mLab deployment to Atlas, the migration tool will present you with an option to select a support plan.

As part of the migration process many customers will be offered an Atlas support plan at a significant discount. If you select a support plan through this process, note that as long as you stay on this support plan the pricing (a % uplift with a minimum) will never change. However, if you change your support plan or change the way you pay Atlas (e.g., enter into an annual contract), the pricing could change.

Developer Plan: Atlas Developer

If you are in development or are running a non-critical application, the Atlas Developer plan is a great choice. This plan has been designed to provide in-depth technical support but with slower response times.

Premium Plan: Atlas Pro

If your application(s) have demanding uptime and/or performance requirements, we highly recommend the Atlas Pro plan. This plan has been designed to provide high-touch, in-depth technical support for advanced issues such as performance tuning, as well as rapid response times for emergencies, 24x7.

img-atlas-support-plans

If you have questions please email support@mlab.com for help.

H. Customize the backup policy for your Atlas cluster (M10+)

We recommend reviewing the backup schedule and retention policy for your target Atlas cluster.

Be aware that:

View Atlas documentation on Snapshot Scheduling and Retention Policy.

I. Test failover and ensure resilience (M10+)

Atlas performs maintenance periodically. This maintenance can happen at any time unless you’ve configured a maintenance window. To ensure that maintenance on Atlas is seamless for your application(s):

  1. Configure your application(s) to use the exact connection string published in Atlas’ UI. In particular, retryWrites=true will provide your application with more resilience when replica set failovers occur (see the example after this list).
  2. Test failover on Atlas.
  3. (Optional) Configure a maintenance window for your Atlas project.
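
For illustration only, here is a minimal sketch of what using that connection string from a Node.js application might look like. The cluster hostname, username, password, and database name below are hypothetical placeholders; always copy the exact connection string (which already includes retryWrites=true) from Atlas’ UI.

// A minimal sketch, assuming the Node.js MongoDB driver; the URI below is a
// hypothetical placeholder for the string published in Atlas' UI.
const { MongoClient } = require("mongodb");

const uri = "mongodb+srv://myuser:mypassword@mycluster.abcde.mongodb.net/mydb?retryWrites=true&w=majority";
const client = new MongoClient(uri);

async function ping() {
  await client.connect();                                    // retryable writes come from the URI parameter
  console.log(await client.db("mydb").command({ ping: 1 })); // { ok: 1 } on success
  await client.close();
}

ping().catch(console.error);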

Once you’ve configured a preferred cluster maintenance start time, any users with the Project Owner role will receive an email notification 72 hours before the scheduled maintenance. At that point you have the option to begin the maintenance immediately or defer the maintenance for one week. You can defer a single project maintenance event up to two times.

All nodes in the Atlas cluster could be restarted over a short time window during maintenance. Also some urgent maintenance activities (e.g., urgent security patches) will not wait for your chosen window, and Atlas will start those activities when needed.

Reference

Configuring Database Users

Unlike with mLab, different Atlas clusters can share the same database users (and network configuration). Specifically, be aware that Atlas clusters within a given Atlas project share the same database users (and whitelisted IP addresses).

To access the Atlas cluster, you must authenticate using a MongoDB (database) user that has access to the desired database(s) on your Atlas cluster. You can configure database users by selecting “Database Access” in the left navigation for your project. On Atlas, database users are separate from Atlas users, just as on mLab database users are separate from mLab account users.

View Atlas documentation on configuring MongoDB users.

Database users for Atlas clusters cannot be managed via database commands sent to the database directly (e.g., via the mongo shell or a MongoDB driver). Instead, database users are managed indirectly via the Atlas cloud service, either via the Atlas management UI or via the Atlas management API.
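
For example, attempting to create a database user directly from the mongo shell against an Atlas cluster will fail with an authorization error (the exact message varies by version). The names below are hypothetical; create the user in the Atlas UI under “Database Access” instead.

// This is what NOT to do on Atlas; expect an authorization error rather than a new user.
use admin
db.createUser({ user: "reportinguser", pwd: "a-strong-password", roles: [ { role: "readWrite", db: "mydb" } ] })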

Database user configuration changes can take up to a couple of minutes to complete on Atlas shared-tier clusters (M0/M2/M5).

Differences in Atlas database user privileges

mLab Role/Privileges → Mapped Atlas Privileges, with a description of the differences:

  • root → Atlas admin
    mLab Dedicated plan deployments support the root role. The Atlas privilege that is most equivalent is Atlas admin, but be aware that this privilege does not have all the permissions that the root role has. In particular, see the commands that Atlas does not support on its dedicated-tier clusters (M10 and above).
  • dbOwner@mydb → dbAdmin@mydb, readWrite@mydb
    mLab deployments support the dbOwner role, which combines the privileges granted by the readWrite, dbAdmin, and userAdmin roles. The set of Atlas privileges that is most equivalent does not include the userAdmin role.
  • dbOwner@mydb → readWriteAnyDatabase
    The default mapping that the migration tool presents is “dbAdmin@mydb, readWrite@mydb” (see above). However, you may choose to instead map this to the readWriteAnyDatabase role, which allows read and write operations on all databases in the Atlas cluster (except for “local” and “config”) and provides the ability to run the “listDatabases” command.
  • readOplog → read@local
    mLab’s custom “readOplog” role allows read operations on the “oplog.rs” and “system.replset” collections in the “local” database only. This difference should be transparent to your application.
  • read@mydb → readAnyDatabase
    The default mapping that the migration tool presents is “read@mydb”, which is an exact match. However, you may choose to instead map this to the readAnyDatabase role, which allows read operations on all databases in the Atlas cluster (except for “local” and “config”).
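
After migrating, you can confirm which roles and privileges a given database user actually received by authenticating as that user from the mongo shell and running the standard connectionStatus command, for example:

// Shows the roles and, with showPrivileges, the specific actions granted to the
// currently authenticated user.
db.runCommand({ connectionStatus: 1, showPrivileges: true })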

Import limitations

The migration tool makes it very easy to import the database user(s) that exist on the source mLab deployment. However, there are some situations where you’ll need to manually configure database users.

If the target Atlas project already has database user(s) configured, you will not be able to use the migration tool’s database user importer.

Conflicting Username

Database usernames must be unique across an Atlas project. As such if your source mLab deployment has two database users with the same name (e.g., “myuser” with the “dbOwner@db1” privilege and “myuser” with the “dbOwner@db2” privilege), the database user importer will not be able to import them.

A couple of options in Atlas for this example:

Unsupported on Atlas

Atlas provides a curated list of database user privileges. These privileges provide access to a subset of MongoDB commands only and do not support all MongoDB commands.

Configuring the IP Whitelist

Unlike with mLab, different Atlas clusters can share the same network configuration (and database users). Specifically, be aware that Atlas clusters within a given Atlas project share the same whitelisted IP addresses (and database users).

To access your target Atlas cluster, you’ll need to ensure that any necessary whitelist entries have been created.

Atlas differs from mLab in that you must configure IP whitelist entries in order to connect to an Atlas cluster. To access the Atlas cluster, you must connect from an IP address on the Atlas project’s IP whitelist. You can configure IP whitelist entries by selecting “Network Access” from the left navigation for your Atlas project.

Note that mLab’s Sandbox and Shared plan deployments are always accessible from all IP addresses. To match the firewall settings of your mLab Sandbox or Shared plan deployment, you can whitelist all IP addresses (0.0.0.0/0) on your Atlas cluster; however, we recommend whitelisting only the addresses that require access. To match the firewall settings of your mLab Dedicated plan deployment, review your current mLab firewall settings on the “Networking” tab in mLab’s UI and recreate them on Atlas.

View Atlas documentation on configuring whitelist entries.

Sizing the Target Atlas Cluster

The migration tool detailed in this guide will allow you to automatically create the most equivalent Atlas cluster and provide estimated pricing. As such we highly recommend letting the migration tool build the target Atlas cluster instead of creating it yourself. You can do so by selecting “Create most equivalent new cluster” from the drop-down menu when you are at the “Target Cluster” step.

The Atlas M0, M2, and M5 tiers run on shared resources (VM/disk) while the Atlas M10 tiers and above run on dedicated resources.

mLab Plan      Suggested Atlas Tier
Sandbox        M0 (free) or M2
Shared         M2, M5, or M10
Dedicated      M10 and above

mLab Sandbox

If your deployment is running on an mLab Sandbox (free) plan, you may migrate to the Atlas M0 (free tier).

However note that:

mLab Shared

If your deployment is currently running on an mLab Shared plan, the tier that we suggest on Atlas depends on the size of your database.

mLab Size (see footnote 1)    Suggested Atlas Tier
Less than 1.6 GB              M2 (review the M2/M5 warnings below)
Between 1.6 GB and 4 GB       M5 (review the M2/M5 warnings below)
More than 4 GB                M10 (ensure CPU usage will not be too high)

Atlas M2/M5 Warnings:

Atlas M10 Warning:

The Atlas M10 tier (which is similar to mLab’s M1 tier) has just 1 CPU core.

As such before migrating from an mLab Shared plan to the Atlas M10 make sure that you visit the mLab “Slow Queries” tab to view and build the indexes recommended by mLab’s Slow Queries Analyzer. If the “Slow Queries” tab continues to show a high rate of slow operations, please email support@mlab.com for help before attempting to migrate to Atlas.
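
For reference, building a recommended index from the mongo shell on the source mLab deployment might look like the sketch below. The collection and field names are hypothetical; use the specific index recommendations shown by the Slow Queries Analyzer.

// Build the index in the background so it does not block other operations
// while it builds (MongoDB 3.6 syntax).
db.mycollection.createIndex({ "userId": 1, "createdAt": -1 }, { background: true })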

mLab Dedicated

If your deployment is currently running on an mLab Dedicated plan, the following table shows the equivalent Atlas tier.

Note that Atlas sets limits for concurrent incoming connections based on instance size.

mLab Plan                       Equivalent Atlas Tier
M1 Standard or High Storage     M10
M2 Standard or High Storage     M20
M3 Standard or High Storage     M30
M4 Standard or High Storage     M40 (General class)
M5 Standard or High Storage     M50 (General class)
M6 Standard or High Storage     M60 (General class)
M7 Standard or High Storage     M80 (Low-CPU)
M8 Standard or High Storage     M200 (Low-CPU)

M3 High Performance (legacy)    M40 (Local NVMe SSD)
M4 High Performance (legacy)    M40 (Local NVMe SSD)
M5 High Performance             M50 (Local NVMe SSD)
M6 High Performance             M60 (Local NVMe SSD)
M7 High Performance             M80 (Local NVMe SSD)

Choosing the Atlas disk type and size (M10 and above)

AWS

mLab’s Dedicated Standard and High Storage plans use AWS’s General Purpose SSD (gp2) EBS volumes. By default Atlas uses the same volume type.

The performance of this volume type is tied to volume size. As such when you create your cluster on Atlas:

Google Cloud Platform (GCP) and Azure 2 (AZR2)

mLab’s Dedicated plans on GCP and Azure 2 use the same disk type as Atlas. On GCP both use GCP’s SSD Persistent Disks. On Azure both use Premium SSD Managed Disks.

The performance of these disk types is tied to disk size. As such when you create your cluster on Atlas:

Azure Classic (AZR)

mLab’s Azure Classic Dedicated plans use magnetic disks (Azure’s page blobs and disks) while Atlas uses Premium SSD Managed Disks.

Disk I/O will be significantly improved on your target Atlas cluster. As such when you create your cluster on Atlas:

Operational Limitations of the Atlas M0, M2, and M5 tiers

Although the Atlas free-tier clusters (M0) offer more storage than mLab Sandbox databases and although Atlas shared-tier clusters (M2/M5) offer more storage per $ than mLab’s Shared plans, the Atlas M0, M2 and M5 tiers have operational limitations that you should be aware of. Most importantly:

Note that the maximum number of operations per second and the amount of data that can be transferred have been raised specifically for Atlas clusters migrating from mLab using the migration tool detailed in this guide. Please email support@mlab.com for more details.

We recommend reviewing all of the Atlas operational limitations:

If you are concerned about these limitations please email support@mlab.com with the deployment identifier for the mLab deployment you want to migrate so that we can provide advice. Again note that some limits have been raised specifically for Atlas clusters migrating from mLab.

Manually Creating the Target Atlas Cluster

The migration tool will enable you to automatically create the target Atlas cluster, but if you’d like to manually create it (not recommended), here are the steps:

  1. Ensure that the correct target project (within the correct organization) is selected.
  2. Click on the “Build a Cluster” or “Build a New Cluster” button to create an Atlas cluster, noting our recommendations in the table below.
IMPORTANT:

  • Cloud Provider & Region: Select the same region as the source mLab deployment.
  • Cluster Tier: Size the target Atlas cluster to be at least as performant as the source mLab deployment in order to make the migration process as smooth as possible. You’ll be able to seamlessly downgrade within the Atlas dedicated tiers (M10+) once the migration is complete. Note that the Atlas Live Migration process is only available for the Atlas M10+ tiers.
  • Additional Settings > Version: On the M10+ tiers select the same MongoDB release version as the source mLab deployment (3.6).
  • Additional Settings > Backup: On the M10+ tiers disable “Continuous Cloud Backup” unless you want it. It is not a feature that mLab had and is more expensive.
  • Cluster Name: If you want to customize the name of your Atlas cluster, now is the time. This name will be part of the cluster’s connection string, and you will not be able to change it later.

Restrictions on Target Atlas Cluster Modifications

Once an Atlas cluster has been set as the target of a migration from mLab, you will not be able to:

These restrictions are in place to help ensure a smooth migration from mLab to Atlas. We recommend waiting 1-2 days or at least through a period of peak traffic before making these types of changes to the target Atlas cluster. This way if there’s an unexpected issue after migrating to Atlas, it will be much easier to determine the root cause.

You can lift the restriction on these types of cluster modifications if you cancel the migration. However once you’ve made these kinds of cluster modifications, the migration tool will no longer allow you to migrate to that cluster.

To cancel a migration:

  1. Log in to the target Atlas organization.
  2. Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
  3. Navigate to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner).
  4. From the left navigation select “mLab Account”.
  5. Locate the mLab deployment that you want to migrate to Atlas.
    • Click the ellipses (…) button for that deployment.
    • Click “Cancel Migration” to cancel the migration process for the given deployment.

Testing Connectivity

A critical step to ensuring a migration to Atlas with very minimal downtime is testing connectivity to the target Atlas cluster from all applications before cutting over from the source mLab deployment.

For each application that depends on this deployment:

  1. Log into a representative client machine.
    • If you can’t do this, ensure that you have the same network configuration on Atlas that you did on mLab.
  2. From that machine, download and install the mongo shell.
  3. From that machine, connect to your cluster using the exact connection string that has been published in Atlas’ UI.
  4. After connecting with the mongo shell, switch to one of the databases that this application will be using (e.g., use mydb) and then run a simple query to test your database credentials (e.g., db.mycollection.findOne()). Ensure the mongo shell does not generate an “unauthorized” error. Note that before starting the migration process it’s expected that your target Atlas cluster might be empty; the database does not need to exist in order to test your database credentials. A null result when querying a nonexistent database and collection is expected if database credentials are working (see the example after this list).
  5. To also test application driver compatibility, connect to the target Atlas cluster using your application code and driver with the same connection string.
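
For example, a connectivity test against an empty target Atlas cluster might look like the following transcript. The hostname, username, database, and collection names are hypothetical placeholders; use the exact connection string from Atlas’ UI.

$ mongo "mongodb+srv://mycluster.abcde.mongodb.net/mydb" --username myuser
(enter the database user's password when prompted)
> use mydb
switched to db mydb
> db.mycollection.findOne()
null

An “unauthorized” error here would indicate a credentials problem rather than an empty database.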

Performing these tests will help ensure that each of your applications has network connectivity and working database credentials.

If you are having trouble connecting to the target Atlas cluster please visit our troubleshooting connection issues guide for help.

Ensuring independence from the Heroku add-on’s config var

This step is only relevant when migrating an mLab deployment that is being managed via mLab’s Heroku add-on.

When you are done migrating your mLab Heroku add-on to Atlas, you will be deleting the Heroku add-on. Deleting the Heroku add-on will not only delete your mLab deployment but it will also delete the Heroku config var that was automatically created when you provisioned the Heroku add-on.

As such, it’s critical that you switch to a new Heroku config var before starting the migration process.

  1. Copy and paste the value of the existing config var into a brand-new config var (e.g., called “DB_URI”).
    • The existing config var that was automatically provisioned should look like “MONGODB_URI” or “MONGOLAB_URI” or “MONGOLAB_<color>_URI”.
  2. Change your application code such that it no longer uses the original config var and instead uses the new config var (see the sketch after this list).
  3. Redeploy your Heroku app with the code change.
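
For a Node.js application, the code change in step 2 might be as small as the sketch below. It assumes the new config var is named DB_URI (the example name used above) and that the old one was MONGOLAB_URI; adjust to the names you actually use.

// Before: the app read the config var that the Heroku add-on provisioned.
// const uri = process.env.MONGOLAB_URI;

// After: the app reads the new, independently managed config var.
const uri = process.env.DB_URI;
if (!uri) {
  throw new Error("DB_URI config var is not set");
}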

mLab Heroku Add-on FAQs

How do I know whether my Heroku app is dependent on a Heroku add-on config var?

  1. Log in to Heroku
  2. Select your app
  3. On the “Resources” tab, you’ll see a list of the Heroku add-ons for this app, including the “mLab MongoDB” ones.
  4. For each “mLab MongoDB” add-on, you’ll see “Attached as MONGODB”, “Attached as MONGOLAB”, or “Attached as MONGOLAB_<color>”. Append “_URI” to that string to get the corresponding Heroku config var name (see screenshots below).
  5. Search in your application code for usage of an environment variable by the config var’s name.

Screenshot from Heroku app’s “Resources” tab:

img-heroku-resources-addons

Screenshot from Heroku app’s “Settings” tab:

img-heroku-config-vars

What if I am using Nightscout with an mLab Heroku add-on?

You only need to perform Step 1 above, and you can skip Steps 2 and 3:

  1. Copy and paste the value of the existing config var into a brand-new config var called “MONGO_CONNECTION”
    • The existing config var that was automatically provisioned should look like “MONGODB_URI” or “MONGOLAB_URI” or “MONGOLAB_<color>_URI”.

This is because the Nightscout app already looks for a config var called “MONGO_CONNECTION”.

Migration Process

The mLab->Atlas migration tool uses two different import processes, depending on whether the target Atlas cluster is a shared-tier cluster or a dedicated-tier cluster.

mongodump/mongorestore process

The migration process used when migrating into the Atlas M0, M2, or M5 tiers

To migrate an mLab Sandbox (free) database to the Atlas M10 tier, you’ll need to first migrate to the Atlas M0, M2, or M5 tier. From there you’ll be able to upgrade to the Atlas M10 or above without needing to change your connection string again (7-9 minutes of downtime).

  1. Navigate to the “mLab Account” view.
    • Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
    • Navigate to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner).
    • From the left navigation select “mLab Account”.
  2. Start the migration process.
    • Locate the mLab deployment that you want to start migrating to Atlas.
    • Click the ellipses (…) button for that deployment.
    • Click “Configure Migration” to start the migration wizard.
    • Follow the steps in the migration wizard to start the import of your data and indexes.
  3. Perform a test migration.
    • The “Test Migration” step of the migration wizard will delete any data existing on the target Atlas cluster and then run a mongodump on your source mLab deployment followed by a mongorestore into the target Atlas cluster.
    • We strongly recommend performing a test run of the migration. The test run will not only give you an estimate for how long the process will take, but it will validate that the process will work for your particular set of data and indexes.
  4. Perform the real migration.
    • Do NOT proceed until you’ve tested connectivity from all applications (see “Testing Connectivity” section above).
    • Stop writes to the source mLab deployment by stopping all application clients.
    • Click “Begin Migration” which will perform the same actions as the test migration but it will also change the role of your database user(s) on your source mLab deployment to be read-only (as a safeguard for ensuring that writes have stopped).
    • Wait for the import to complete with success.
    • Restart application clients with the target Atlas connection string.

Clicking “Begin Migration” is the real migration! It will change the role of the database user(s) which exist on the source mLab deployment to be read-only.

If an unexpected error occurs, and you want to abort the migration process and start using your source mLab deployment again, you will need to recreate the database user(s) on your source mLab deployment through mLab’s management portal. Performing a test migration and testing connectivity ahead of time will help avoid the need to do this.

If you do roll back to mLab, note that any writes made to the target Atlas cluster will not exist on the source mLab deployment and will be deleted if you attempt another migration later.

Below is a screenshot of what you’ll see when you reach Step 4 of the mongodump/mongorestore process.

img-atlas-migration-review.

Live Migration process

The migration process used when migrating into the Atlas M10 and above

Atlas can perform a live migration of your deployment, keeping the target Atlas cluster in sync with the source mLab deployment until you cut your applications over to the target Atlas cluster.

This seamless process is available for mLab Dedicated plan deployments as well as Shared plan deployments migrating to Atlas dedicated-tier clusters (M10 and above).

Because the target Atlas cluster will stay in sync with the source mLab deployment until you’re ready to cut over your application’s reads and writes to Atlas, the only downtime necessary is when you stop your application and then restart it again with a new connection string that points to Atlas.

Do not cut over to the target Atlas cluster until you have tested connectivity from all applications (see the “Testing Connectivity” section above) and ensured that all writes have stopped on the source mLab deployment.

Only by executing the cutover to Atlas as documented in Steps 4 and 5 below can we guarantee that the migration will neither lose data nor introduce data consistency issues.

You cannot use the renameCollection command during the initial phase of the Live Migration process. Note that executing an aggregation pipeline that includes the $out stage, or a map-reduce job with an out action, will use renameCollection under the hood.
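
For example, an aggregation like the sketch below (the collection names are hypothetical) writes its results with $out, which relies on renameCollection internally, so avoid running it while the initial phase of the Live Migration is in progress:

// $out writes to a temporary collection and then renames it into place,
// so this should not run during the initial phase of a Live Migration.
db.orders.aggregate([
  { $match: { status: "complete" } },
  { $out: "completed_orders" }
])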

  1. Navigate to the “mLab Account” view.
    • Ensure that the target Atlas organization has been selected from the Organizations menu (the drop-down menu in the top-left corner next to the MongoDB green leaf logo).
    • Navigate to the Organization Home view (click on the green MongoDB leaf icon in the upper-left corner).
    • From the left navigation select “mLab Account”.
  2. Start the migration process.
    • Locate the mLab deployment that you want to start migrating to Atlas.
    • Click the ellipses (…) button for that deployment.
    • Click “Configure Migration” to start the migration wizard.
    • Follow the steps in the migration wizard to start the Live Migration process. Note that this process will delete any existing data on the target Atlas cluster before it starts syncing.
  3. Wait until the migration process is ready for cutover (how long will it take?).
    • When Atlas detects that the source and target clusters are in sync, it starts an extendable 72-hour timer to begin the cutover procedure. If the 72-hour period passes, Atlas stops synchronizing with the source cluster. You can extend the time remaining by 24 hours by clicking the “Extend time” hyperlink any number of times.
    • The “Prepare to Cutover” button will now be clickable. It’s normal during this time for the optime difference counter to show 1-2 seconds of lag.
    • Atlas users with the “Project Owner” role will receive a notification when the Live Migration is ready for cutover.
  4. Prepare to cutover.
    • Click the “Prepare to Cutover” button to open a walk-through screen with instructions on how to proceed.
    • Copy and paste the connection string for your target Atlas cluster and prepare to change your connection string.
    • Do NOT proceed until you’ve tested connectivity from all applications (see “Testing Connectivity” section above).
  5. Cutover.
    • Stop writes to the source mLab deployment by stopping all application clients.
    • Wait for the optime gap to reach 0 (this should be extremely quick/instantaneous).
    • Click the “Cut over” button to stop the target Atlas cluster from syncing from the source mLab deployment.
    • Restart application clients with the target Atlas connection string.

See Frequently Asked Questions (FAQ) about the Live Migration process below.

Below is a screenshot of what you’ll see when you reach Step 4 of the Live Migration process and click the “Prepare to Cutover” button.

img-atlas-prepare-to-cutover.

Live Migration Process FAQs

Q. Is the sync one-way or multi-directional?

The Live Migration process will read from the source mLab deployment and write to the target Atlas cluster. This process is one-way and not multi-directional.

Q. Will the source mLab deployment be available during the sync?

Yes. After you start the Live Migration process, your application can continue to read from and write to the source mLab deployment as it always has. Even after you have cut over your application to Atlas, your source mLab deployment will remain until you have manually deleted it.

Q. Which mLab node will the Live Migration process sync from?

The Live Migration process will sync from the Primary node of the source mLab deployment and add additional load as it copies your data to Atlas (this is essentially a collection scan on every collection).

A deployment which is well-indexed and performing well on mLab should be able to handle this without any problems. By ensuring a healthy, well-sized deployment prior to migrating, you can dramatically reduce the risk of migration failures and delays. You’ll also be in a much better position to handle future growth on Atlas and to more efficiently use your database resources (which in the long run will lower your costs).

If you do face an unexpected issue after you start the Live Migration process, you can cancel it immediately to stop the syncing. Once you have stopped the process, please email mLab Support (support@mlab.com) for help so that we can advise you on next steps.

Q. How long does the Live Migration process take?

The Live Migration process must complete the following four phases before it’s ready for cutover:

During these four phases, your application can continue to read and write from the source mLab deployment as it normally does. The only downtime that the Live Migration process requires is the time it takes for you to complete the cutover steps - for most customers this is just a few minutes.

Once Phase 4 is complete, you’ll see that the “Prepare to Cutover” button is enabled. Users with the “Project Owner” role will also receive an email notification. By default you have just 72 hours to cut over your application, but this window can be extended any number of times in 24-hour increments using the “Extend time” link.

The total time that these four phases take depends greatly on the size of the data and indexes as well as the resources available on the source mLab deployment and the target Atlas cluster.

That said, even for a large dataset (hundreds of gigabytes), Phase 1 (copy data) generally completes within 6 hours. If the source mLab deployment is running on Azure Classic (magnetic, non-SSD disks), this can be much longer.

Phase 3 (build indexes) is the most difficult to estimate and depends on the RAM available on the target Atlas cluster as well as the number and sizes of the indexes. For a deployment with a total index size over 15 GB, it would not be surprising if this phase takes on the order of days. If you’re concerned about timing, we recommend ensuring that the target Atlas cluster has at least enough RAM to hold the total size of indexes. Atlas pro-rates charges by the day, and you can downgrade seamlessly after you have successfully cut over your application to Atlas.
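
One way to estimate your total index size ahead of time is the mongo shell sketch below, run against the source mLab deployment. It assumes a database user that is allowed to run listDatabases and dbStats (for example, an admin user on an mLab Dedicated plan); for a single database you can simply check db.stats().indexSize.

// Sums the indexSize (in bytes) reported by dbStats for every database,
// then prints the total in GB.
var totalBytes = 0;
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
  totalBytes += db.getSiblingDB(d.name).stats().indexSize;
});
print("Total index size: " + (totalBytes / 1024 / 1024 / 1024).toFixed(2) + " GB");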

During Phase 3 (build indexes) the Secondary node(s) may appear to be down in the Atlas UI - this is normal. The Live Migration process builds indexes on the Secondary nodes in the foreground during which time Atlas’ infrastructure is unable to connect. However, if you download the database server log for one of the Secondary nodes you can view index build progress.

Please email support@mlab.com if you would like us to help you monitor the progress of a Live Migration.

Q. Can I perform a trial/test Live Migration?

Yes, although when you migrate into an Atlas dedicated-tier cluster (M10 and above) there is no explicit step in the migration wizard to test the migration.

However, once the Live Migration process is ready for cutover, you can cancel it at any time in order to stop the syncing process. Cancelling is exactly the same as clicking the “Cut over” button, except that there is no validation to help ensure that writes have stopped on the source mLab deployment.

If you click the “Restart Migration” button, the Live Migration process will start again (from scratch). You can do this any number of times in order to test the process.

Q. I missed the cutover window and now the Live Migration process has errored out. What do I do?

Assuming you have not started reading or writing from the target Atlas cluster, you would click on the “Restart Migration” button from the migration checklist. Restarting the migration process will delete all data on the target Atlas cluster before starting the sync again.

Going forward note that by default you have 72 hours to cut over your application but this cutover window can be extended by 24 hours any number of times using the “Extend time” link.

When the Live Migration process is ready for cutover again, click on the “Prepare to Cutover” button and follow the instructions on that screen.

Q. When cutting over to Atlas during the Live Migration process, do I need to stop my application?

We strongly recommend that you perform the cutover step of the Live Migration process by stopping all writes to the source mLab deployment before clicking on the “Cut over” button and directing traffic to the target Atlas cluster. Only by executing the cutover using this strategy can we guarantee that the migration will neither lose data nor introduce data consistency issues.

However, if your application cannot withstand even a few minutes of downtime, you may be seeking a method by which you can cut over to Atlas without stopping your application.

A custom cutover strategy is possible, but you will be responsible for reasoning through all of the possible issues that could arise from that strategy. Such analysis is very specific to the way your application works, and MongoDB can make no guarantees about the correctness of the migration in these cases.

Important things to know about the Live Migration process for a Replica Set:

If you decide to allow your application to read and/or write to both the source and the target during the cutover phase you will be responsible for ensuring that your application can handle any consequence of this strategy. For example, your application may lose data, unintentionally duplicate data, generate inconsistent data, or behave in unexpected ways.

This FAQ is applicable only if the target Atlas cluster is a Replica Set. When migrating a Sharded Cluster the target cluster is not available on the network for 3-5 minutes after the cutover button is pressed and replication from the source to the destination has stopped.

Q. How can I be sure that my app has stopped writing to the source mLab deployment before I cut over to Atlas?

During the Live Migration process, the “Cut over” button will not be enabled unless the last optime of the source mLab deployment and the target Atlas cluster are the same (i.e., until the optime difference is 0). This validation is in place to help you ensure that writes have stopped against the source mLab deployment before you cut over to Atlas.

If you would also like to check yourself, you can authenticate as an admin database user and run an oplog query against the Primary of the source mLab deployment to see the timestamp of the most recent write. An example follows (assuming you’ve connected to the Primary node using the mongo shell).

Note: This query will not work if the source mLab deployment has a TTL index that is actively deleting.

rs-ds123456:PRIMARY> use local
switched to db local

rs-ds123456:PRIMARY> db.oplog.rs.find( {"op":{"$ne":"n"}, "ns": {"$nin": ["config.system.sessions", "config.transactions"]}}, {"ts":1, "op": 1, "ns": 1} ).sort( {"$natural": -1} ).limit(1)
{ "ts" : Timestamp(1579134905, 31), "op" : "u", "ns" : "mydb.mycoll" }

rs-ds123456:PRIMARY> new Date(1579134905 * 1000)
ISODate("2020-01-16T00:35:05Z")

Q. I have scheduled the migration for a certain time. Will I have support?

The Live Migration process can take a substantial amount of time prior to it being ready for cutover, so we recommend starting the migration process well in advance of your scheduled maintenance so that you will have confidence that it will be ready for cutover when you need it to be.

When the process is ready for cutover, you can perform Steps 4 and 5 of the official Live Migration process at the best time for your team and application.

Note that it’s critical that you test connectivity from all of your application clients before you start the cutover process to Atlas.

If you experience an emergency issue when you actually perform the cutover steps, the source mLab deployment will still be available, fully intact. As such in case of a true emergency you can either:

Deleting the source mLab deployment

To stop incurring charges on mLab for a for-pay deployment, you must manually ensure that the mLab deployment has been deleted. We recommend deleting the source mLab deployment (via mLab’s UI) as soon as you are confident that it has been successfully migrated to Atlas, and you no longer need it.

Note that mLab bills at the start of each month for all chargeable services provided in the prior month, so you will still be charged one additional time even after your mLab account stops incurring charges.

  1. Size of data plus indexes of mLab deployment. This is the value under the “SIZE” heading in mLab’s UI. 

  2. In addition to stopping replication, the “Cut over” button during a Live Migration process will also perform a rolling restart of the target Replica Set in order to re-enable a setting (ttlMonitorEnabled) that is temporarily disabled during a Live Migration.