Percona Operator for MongoDB Goes Cluster-Wide

Percona Operator for MongoDB version 1.13 was recently released, and it comes with several exciting features. In this blog post, we are going to look under the hood and see what the practical use cases for these improvements are.

Cluster-wide deployment

There are two modes that Percona Operators support:

  1. Namespace scope
  2. Cluster-wide

Namespace scope limits the Operator's operations to a single namespace, whereas in cluster-wide mode the Operator can deploy and manage databases in multiple namespaces of a Kubernetes cluster. Our Operators for PostgreSQL and MySQL already support cluster-wide mode. With the 1.13 release, we are closing this gap for Percona Operator for MongoDB.

Multi-tenant clusters are the most common use case for cluster-wide mode. As a cluster administrator, you manage a single deployment of the Operator and give your teams a way to deploy and manage MongoDB in their isolated namespaces. Read more about multi-tenancy and best practices in our Multi-Tenant Kubernetes Cluster with Percona Operators blog post.

How does it work?

To deploy in cluster-wide mode, we introduce cw-*.yaml manifests. The quickest way is to use cw-bundle.yaml, which deploys the following:

  • Custom Resource Definition
  • Service Account and Cluster Role that allow the Operator to create and manage Kubernetes objects in various namespaces
  • Operator Deployment itself

By default, the Operator monitors all the namespaces in the cluster. The WATCH_NAMESPACE environment variable in the Operator Deployment limits the scope. It can be a comma-separated list that instructs the Operator on which namespaces to monitor for Custom Resource objects:
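A minimal sketch of how this might look in the Operator Deployment spec (the namespace names here are examples):

    # excerpt from the Operator Deployment manifest
    spec:
      template:
        spec:
          containers:
            - name: percona-server-mongodb-operator
              env:
                # comma-separated list of namespaces to watch;
                # leave empty to watch all namespaces
                - name: WATCH_NAMESPACE
                  value: "mongodb-prod,mongodb-staging"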

This is useful if you want to limit the blast radius while still running multiple Operators, each monitoring its own set of namespaces. For example, you can run one Operator per environment: development, staging, and production.

Deploy the bundle:
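Assuming you have the percona-server-mongodb-operator repository checked out and want the Operator to live in its own namespace (the namespace name is an example):

    kubectl create namespace psmdb-operator
    # server-side apply avoids issues with the large Custom Resource Definition
    kubectl apply --server-side -f deploy/cw-bundle.yaml -n psmdb-operator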

Now you can start deploying databases in the namespaces you need:
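For example, to spin up two clusters for two different teams (namespace names are illustrative):

    kubectl create namespace team-a
    kubectl apply -f deploy/cr.yaml -n team-a

    kubectl create namespace team-b
    kubectl apply -f deploy/cr.yaml -n team-b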

See the demo below where I deploy two clusters in different namespaces with a single Operator.

HashiCorp Vault integration for encryption-at-rest

We take security seriously at Percona. Data-at-rest encryption prevents data visibility in the event of unauthorized access or theft. It is supported by all our Operators. With this release, we introduce support for integration with HashiCorp Vault, where users can keep the keys in Vault and instruct Percona Operator for MongoDB to use them. This feature is in a technical preview stage.

There is a good blog post that describes how Percona Server for MongoDB works with Vault. In the Operator, we implement the same functionality and follow the same parameter structure.

How does it work?

We are going to assume that you already have HashiCorp Vault installed – either on the HashiCorp Cloud Platform or as a self-hosted version. We will focus on the configuration of the Operator.

To instruct the Operator to use Vault you need to specify two things in the Custom Resource:

  1. secrets.vault – Secret resource with a Vault token in it
  2. Custom configuration for mongod for a replica set and config servers

secrets.vault

Example of cr.yaml:
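A minimal sketch follows; the cluster name and the Secret name are examples:

    apiVersion: psmdb.percona.com/v1
    kind: PerconaServerMongoDB
    metadata:
      name: cluster1
    spec:
      secrets:
        # Kubernetes Secret that holds the Vault token
        vault: cluster1-vault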

The secret object itself should contain the token that has access to create, read, update and delete the secrets in the desired path in the Vault. Please refer to the Vault documentation to understand policies better.

Example of a Secret:
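A sketch of such a Secret, assuming the Operator reads the token from the token key (the Secret name and token value are placeholders):

    apiVersion: v1
    kind: Secret
    metadata:
      name: cluster1-vault
    type: Opaque
    stringData:
      # Vault token with a policy allowing create, read, update, and delete
      # on the chosen Vault path
      token: <your-vault-token>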

Custom configuration

The Operator allows users to fine-tune mongod and mongos configurations. For encryption to work, you must specify the vault configuration for replica sets – both data and config servers.

Example of cr.yaml:
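A sketch of the relevant configuration blocks; the Vault server address and port are examples, while the tokenFile and secret paths match the ones discussed below:

    spec:
      replsets:
        - name: rs0
          configuration: |
            security:
              vault:
                serverName: vault-service.vault.svc.cluster.local
                port: 8200
                tokenFile: /etc/mongodb-vault/token
                secret: secret/data/dc/cluster1/rs0
      sharding:
        configsvrReplSet:
          configuration: |
            security:
              vault:
                serverName: vault-service.vault.svc.cluster.local
                port: 8200
                tokenFile: /etc/mongodb-vault/token
                secret: secret/data/dc/cluster1/cfg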

What to note here:

  • tokenFile: /etc/mongodb-vault/token
    • It is where the Operator is going to mount the Secret with the Vault token you created before. This is the default path and in most cases should not be changed.
  • secret: secret/data/dc/cluster1/rs0
    • It is the path where keys are going to be stored in the Vault. 

You can read more about Percona Server for MongoDB and Hashicorp Vault parameters in our documentation.

Once you are done with the configuration, apply the Custom Resource as usual. If everything is set up correctly, the mongod log will contain a message confirming that encryption is enabled with the key stored in Vault.
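One way to check, assuming the default pod naming for a cluster called cluster1:

    # inspect the mongod container log of the first replica set member
    kubectl logs cluster1-rs0-0 -c mongod | grep -i encryption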

Azure Kubernetes Service support

All Percona Operators are going through rigorous QA testing throughout the development lifecycle. Hours of QA engineers’ work are put into automating the test suites for specific Kubernetes flavors. 

AKS, or Azure Kubernetes Service, is the second most popular managed Kubernetes offering according to the Flexera 2022 State of the Cloud report. After adding support for Azure Blob Storage in version 1.11.0, it was just a matter of time before we started supporting AKS in full.

Starting with the 1.13.0 release, Percona Operator for MongoDB supports AKS in Technical Preview. You can see more details in our documentation.

The installation process of the Operator is no different from any other Kubernetes flavor. You can use a helm chart or apply YAML manifests with kubectl. I ran the cluster-wide demo above with AKS.
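For example, with the Percona Helm charts (the release names are examples):

    # add the Percona Helm repository
    helm repo add percona https://percona.github.io/percona-helm-charts/
    helm repo update

    # install the Operator, then a database cluster
    helm install my-operator percona/psmdb-operator
    helm install my-db percona/psmdb-db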

Admin user

This is a minor change, but frankly, it is my favorite, as it greatly improves the user experience. Our Operator comes with system users that are used to manage and track the health of the database. There are also userAdmin and clusterAdmin users for controlling the database, creating users, and so on.

The problem is that neither userAdmin nor clusterAdmin allows you to start working with the database right away. First, you need to create a user that has permissions to create databases and collections, and only then can you start using your fresh MongoDB cluster.

With release 1.13, we say no more to this. We added the databaseAdmin user, which acts as a database administrator, enabling users to start innovating right away.

The databaseAdmin credentials are added to the same Secret object as the other users:
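Roughly, it looks like this, assuming the keys follow the MONGODB_DATABASE_ADMIN_* naming; the Secret name depends on your cluster, and the values are base64-encoded placeholders here:

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-cluster-name-secrets
    type: Opaque
    data:
      MONGODB_DATABASE_ADMIN_USER: <base64-encoded user>
      MONGODB_DATABASE_ADMIN_PASSWORD: <base64-encoded password>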

Get your password like this:
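Assuming the default Secret name from deploy/cr.yaml:

    kubectl get secret my-cluster-name-secrets \
      -o jsonpath='{.data.MONGODB_DATABASE_ADMIN_PASSWORD}' | base64 --decode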

Connect to the database as usual and start innovating:
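A sketch of connecting from a throwaway client pod; the image tag, service name, and namespace are examples:

    kubectl run -i --rm --tty percona-client \
      --image=percona/percona-server-mongodb:5.0 --restart=Never -- \
      mongo "mongodb://databaseAdmin:<password>@my-cluster-name-rs0.default.svc.cluster.local/admin?replicaSet=rs0&ssl=false"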

What’s next

Percona is committed to running databases anywhere. Kubernetes adoption grows year over year, turning from a container orchestrator into a cloud operating system. Our Operators are supporting the community and our customers’ journey in infrastructure transformations by automating the deployment and management of the databases in Kubernetes.

The following links will help you get familiar with Percona Operator for MongoDB:

  1. Quickstart guides
  2. Free Kubernetes cluster for easier and quicker testing
  3. Percona Operator for MongoDB community forum if you have general questions or need assistance

Read about Percona Monitoring and Management DBaaS, an open source solution that simplifies the deployment and management of MongoDB even more.
