Why expose the Couchbase database on the public network?

Below are some examples:

  • Cross-Data Center Replication (XDCR) for High Availability and Disaster Recovery

  • Client SDK access to Couchbase cluster

  • Database-as-a-Service (DBaaS) platforms

Note: All of these use cases share a common goal: they allow clients to access the database instance without having to establish a VPN to a Kubernetes instance. They also require TLS-protected secure communication, which is sometimes difficult to achieve with a typical Kubernetes architecture.

How we solved public networking using Kubernetes External DNS

When deploying apps on Kubernetes, you usually use Kubernetes resources like Service and Ingress to expose apps outside the Kubernetes cluster at your desired domain. This involves a lot of manual configuration of both the Kubernetes resources and the DNS records at your provider, which can be a time-consuming and error-prone process. It can quickly become a snag as your application grows in complexity, and whenever an external IP changes, the DNS records must be updated accordingly.

To address this, the Kubernetes sig-network team created External DNS, a solution that manages external DNS records autonomously from within a Kubernetes cluster. Once deployed, External DNS works in the background and requires almost no additional configuration. It creates DNS records at providers external to Kubernetes, making Kubernetes resources discoverable through those providers, and lets you control DNS records dynamically in a provider-agnostic way. Whenever it discovers that a Service or Ingress has been created or updated, the External DNS controller updates the records immediately.

When the Couchbase database is deployed using the public networking with External DNS strategy, Couchbase cluster nodes are exposed through LoadBalancer services that have public IP addresses allocated to them. The External DNS controller is then responsible for managing dynamic DNS (DDNS) records at a cloud-based provider, giving the cluster stable addressing and a basis for TLS.

Now, let’s see it in action!

We will now go through the steps to deploy the Couchbase cluster using Autonomous Operator 2.0 in EKS and access the Couchbase cluster through public networking that is managed through External DNS. Below is a quick overview of the architecture of our deployment.

Public Networking with Couchbase Autonomous Operator using Kubernetes External DNS

Prerequisites

Before we begin, here are a few important prerequisites.

  1. Install and setup kubectl on your local machine – kubectl is a command-line interface for running commands against Kubernetes clusters. 
  2. Install the latest AWS CLI – The AWS CLI is a unified tool that enables you to interact with AWS services using commands in your command-line shell. In this case, we will be using AWS CLI to communicate securely with the Kubernetes cluster running on AWS.
  3. Deploy the EKS cluster. The EKS cluster can be deployed using the AWS console or eksctl. In this article, we will be deploying the EKS cluster in the us-east-1 region with 3 worker nodes across three availability zones, as shown below.
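
Here is a sketch of an eksctl command that would create such a cluster; the cluster name and instance type are illustrative placeholders, not values from the original setup:

    # Illustrative only: adjust the name, instance type, and sizes to your needs.
    eksctl create cluster \
      --name cbopedns-eks \
      --region us-east-1 \
      --zones us-east-1a,us-east-1b,us-east-1c \
      --nodes 3 \
      --node-type m5.large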

  4. You will need a public DNS domain. The domain can be purchased from a registrar such as GoDaddy, AWS Route 53, Namecheap, etc. For this article, I’m using my own GoDaddy-registered domain balajiacloud.guru, and I would suggest getting yours before continuing further.

  5. Finally, you will need an External DNS provider. During the life cycle of a Couchbase cluster, nodes may be added and removed for cluster scaling, upgrades, or fault recovery. In each case, new DNS names need to be created for any new Couchbase pods, and stale DNS names removed for pods that are deleted. The DDNS provider exposes a REST API that allows the External DNS controller in Kubernetes to keep public DNS in sync with what the Couchbase cluster looks like.

Here is the list of all documented and known External DNS solutions for the Kubernetes platform. In this article, we will be using Cloudflare as our External DNS provider. If you plan to use Cloudflare as your External DNS provider, then you will need to create a Cloudflare account and add the DNS domain to the account.

Couchbase Autonomous Operator using Kubernetes External DNS

Creating TLS Certificates

The Operator ensures you configure your Couchbase clusters securely. If the Operator detects a cluster is being exposed on the public internet, it will enforce TLS encryption. 

Before we generate TLS certificates, we need to determine which DNS domain the Couchbase cluster will live in. We could use balajiacloud.guru directly, but then it could only ever be used by a single Couchbase cluster. Therefore we shall use a subdomain, cbdemo.balajiacloud.guru, as a unique namespace for our cluster. A wildcard DNS name (*.cbdemo.balajiacloud.guru) will handle all public DNS names generated by the Operator, and it needs to be added to the Couchbase cluster certificate.

We will use EasyRSA to create the TLS certificates. EasyRSA, by OpenVPN, makes operating a public key infrastructure (PKI) relatively simple and is the recommended way to get up and running quickly.

1. Let’s create a directory called tls and clone the EasyRSA repository.
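
For example (assuming git is available; the EasyRSA CLI lives under easyrsa3 in the repository):

    mkdir tls && cd tls
    git clone https://github.com/OpenVPN/easy-rsa
    cd easy-rsa/easyrsa3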

2. Initialize and create the CA certificate/key. You will be prompted for a private key password and the CA common name (CN); something like Couchbase CA is sufficient. The CA certificate will be available as pki/ca.crt.
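
A minimal sketch of those two steps:

    # Initialize the PKI directory structure.
    ./easyrsa init-pki
    # Build the CA; you will be prompted for a password and a common name.
    ./easyrsa build-ca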

3. Create the Couchbase Cluster Server Certificate.

You need to create a wildcard server certificate and key to be used on the Couchbase Server pods. In this article, we will use the below command to generate a certificate for the Couchbase cluster cbopedns in the demo namespace, using the cbdemo.balajiacloud.guru subdomain.

Note: Password-protected keys are not supported by Couchbase Server or the Operator.
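
A sketch of the certificate generation, assuming the cluster name cbopedns in the demo namespace; nopass avoids the password limitation noted above, and the exact list of subject alternative names should be checked against the Operator’s TLS documentation for your topology:

    # The public wildcard SAN covers the names External DNS will create;
    # the in-cluster SANs cover pod-to-pod and Operator traffic.
    ./easyrsa --subject-alt-name="DNS:*.cbopedns,DNS:*.cbopedns.demo,DNS:*.cbopedns.demo.svc,DNS:*.cbopedns.demo.svc.cluster.local,DNS:*.cbdemo.balajiacloud.guru" build-server-full couchbase-server nopass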

The key/certificate pair can be found in pki/private/couchbase-server.key and pki/issued/couchbase-server.crt and used as pkey.key and chain.pem, respectively, in the spec.networking.tls.static.serverSecret cluster parameter.

4. Private Key Formatting – Due to a limitation with Couchbase Server’s private key handling, server keys need to be PKCS#1 formatted.

First, let’s copy the key and certificate files (along with the CA certificate) to the tls directory for easy access.
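
One way to do this (paths are relative to easy-rsa/easyrsa3; the target file names match what the secrets below expect, with pkey.key.org as a temporary name for the not-yet-converted key):

    cp pki/issued/couchbase-server.crt ../../chain.pem
    cp pki/private/couchbase-server.key ../../pkey.key.org
    cp pki/ca.crt ../../ca.crt
    cd ../..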

Now, let’s format the server key.
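
The conversion can be done with openssl; a PKCS#1 key begins with "BEGIN RSA PRIVATE KEY":

    openssl rsa -in pkey.key.org -out pkey.key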

We will use these keys to create the Couchbase cluster server secret.

Deploy Couchbase Autonomous Operator 2.0 (Latest)

The Couchbase Autonomous Operator for Kubernetes enables cloud portability and automates operational best practices for deploying and managing Couchbase.

The operator is composed of two components: a per-cluster dynamic admission controller (DAC) and a per-namespace Operator. Refer to the operator architecture for additional information on what is required and security considerations.

1. Download the Operator package

You can download the latest Couchbase Autonomous Operator package and unzip it to the local machine. The Operator package contains YAML configuration files and command-line tools that you will use to install the Operator.

2. Install the Custom Resource Definition (CRD)

The first step in installing the Operator is to install the custom resource definitions (CRD) that describe the Couchbase resource types. This can be achieved by running the below command from the Operator package directory:
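
Assuming the 2.0 package layout, where the definitions are bundled in crd.yaml:

    kubectl create -f crd.yaml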

3. Install the Dynamic Admission Controller (DAC)

The DAC allows custom resources to be modified and interrogated before a resource is accepted and committed to etcd. Running the DAC allows us to add sensible defaults to Couchbase cluster configurations thus minimizing the size of specifications. It also allows us to maintain backward compatibility when new attributes are added and must be populated. This makes the experience of using Couchbase resources similar to that of native resource types.

Now, let’s install the Dynamic Admission Controller.

Open a Terminal window, go to the directory where you unpacked the Operator package, and cd to the bin folder. Run the following command to install the DAC into the default namespace.
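
A sketch using the bundled cbopcfg tool; subcommands can vary between Operator releases, so confirm against the documentation shipped with your package:

    ./cbopcfg create admission --namespace default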

Confirm the admission controller has deployed successfully.
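
For example (the deployment name below reflects the 2.0 defaults):

    kubectl get deployments --namespace default
    # Expect couchbase-operator-admission to eventually report 1/1 AVAILABLE.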

4. Create a Namespace

Namespaces are a way to allocate cluster resources and to set network and security policies between multiple applications. We will create a namespace called demo to deploy the Operator, and later we will use the same namespace to deploy the Couchbase cluster.

Run the following command to create the namespace.
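
The standard invocation:

    kubectl create namespace demo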

Confirm the Namespace is created successfully.
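
For example:

    kubectl get namespaces
    # The demo namespace should be listed with STATUS Active.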

5. Configure TLS

Secrets are specified in the CouchbaseCluster resource, and you will notice them in the cluster definition YAML when we deploy the Couchbase cluster.

Server Secret

Server secrets are mounted as a volume within the Couchbase Server pods with specific names: the certificate chain must be named chain.pem and the private key pkey.key. Run the below command to create the Couchbase server secret.
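
A sketch, run from the tls directory; the secret name couchbase-server-tls is a choice that must match spec.networking.tls.static.serverSecret in the cluster definition later:

    kubectl create secret generic couchbase-server-tls \
      --from-file chain.pem \
      --from-file pkey.key \
      --namespace demo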

Operator Secret

The Operator client secret is read directly from the API. It expects only a single value to be present: ca.crt, the top-level CA used to authenticate all TLS server certificate chains. Run the below command to create the Operator secret.
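
A sketch using the CA certificate we copied earlier; the secret name couchbase-operator-tls must likewise match spec.networking.tls.static.operatorSecret in the cluster definition:

    kubectl create secret generic couchbase-operator-tls \
      --from-file ca.crt \
      --namespace demo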

6. Install the Couchbase Operator

Now let’s deploy the Operator in the demo namespace by running the following command from the bin folder in the Operator package directory.
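
As with the DAC, a sketch using cbopcfg (verify the subcommand against your package’s documentation):

    ./cbopcfg create operator --namespace demo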

Running the above command downloads the Operator Docker image and creates a deployment that manages a single instance of the Operator. The Operator runs as a deployment so that Kubernetes can restart it upon failure.

After you run the kubectl create command, it generally takes less than a minute for Kubernetes to deploy the Operator and for the Operator to be ready to run.

Check the status of the Operator Deployment

You can use the following command to check on the status of the deployment:
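
For example:

    kubectl get deployments --namespace demo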

If you run this command immediately after the Operator is deployed, the AVAILABLE column will show 0, indicating that the Operator pod is not yet ready; once it is ready, the value changes to 1.

Run the following command to verify that the Operator pod has started successfully. If the Operator is up and running, the command returns output where the READY field shows 1/1.
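
The command, with a note on what to look for:

    kubectl get pods --namespace demo
    # Look for the couchbase-operator pod with READY 1/1 and STATUS Running.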

You can also check the logs to confirm that the Operator is up and running by running the below command.
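
For example (the deployment name reflects the 2.0 defaults):

    kubectl logs deployment/couchbase-operator --namespace demo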

Deploy the External DNS

Assuming you have already completed the above steps to deploy the Operator in the demo namespace, the next thing to install is the External DNS controller. It must be installed before the Couchbase cluster, as the Operator waits for DNS propagation when creating Couchbase Server pods. This is because clients must be able to reach the Couchbase Server pods in order for them to serve traffic without application errors.

1. Create a service account for the External DNS controller in the namespace where you are installing the Operator.
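
A sketch; the account name external-dns is a conventional choice and is referenced by the role binding and deployment below:

    kubectl create serviceaccount external-dns --namespace demo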

2. The External DNS controller requires a role so that it can poll resources and find DNS records to replicate into the DDNS provider, as sketched below.
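
A minimal Role sketch (save it to a file and apply it with kubectl create -f; restricting External DNS to one namespace lets us use a namespaced Role rather than a ClusterRole):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: external-dns
      namespace: demo
    rules:
    - apiGroups: [""]
      resources: ["services", "endpoints", "pods"]
      verbs: ["get", "watch", "list"]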

3. Now, link the External DNS role to the service account with a role binding, as shown below.
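
A matching RoleBinding sketch:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: external-dns
      namespace: demo
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: external-dns
    subjects:
    - kind: ServiceAccount
      name: external-dns
      namespace: demo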

4. The last step is to deploy the External DNS controller itself. Don’t forget to update the values below to match your deployment (see the manifest sketch after this list).

    • The spec.template.spec.serviceAccountName attribute ensures External DNS pods are running as the service account we set up. This grants the controller permission to poll resources and look for DDNS requests.
    • The --domain-filter argument tells External DNS to only consider DDNS entries that are associated with DNS entries related to our balajiacloud.guru domain.
    • The --txt-owner-id argument tells External DNS to label TXT management records with a string unique to this External DNS instance. External DNS uses TXT records to record metadata, especially ownership information, associated with the DNS records it manages. If the balajiacloud.guru domain were used by multiple instances of External DNS without any ownership labels, they would conflict with one another.
    • The CF_API_KEY environment variable is used by the Cloudflare provider to authenticate against the Cloudflare API.
    • The CF_API_EMAIL environment variable is used by the Cloudflare provider to identify what account to use against the Cloudflare API.
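
Putting those values together, here is a sketch of the manifest (saved as external-dns.yaml); the image tag, the txt-owner-id string, and the Cloudflare credential placeholders are assumptions you must replace with your own values:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: external-dns
      namespace: demo
    spec:
      selector:
        matchLabels:
          app: external-dns
      template:
        metadata:
          labels:
            app: external-dns
        spec:
          serviceAccountName: external-dns
          containers:
          - name: external-dns
            # Pin a current external-dns release of your choice.
            image: registry.opensource.zalan.do/teapot/external-dns:v0.7.2
            args:
            - --source=service
            - --namespace=demo
            - --domain-filter=balajiacloud.guru
            - --provider=cloudflare
            - --txt-owner-id=cbopedns
            env:
            - name: CF_API_KEY
              value: "<your-cloudflare-global-api-key>"
            - name: CF_API_EMAIL
              value: "<your-cloudflare-account-email>"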

You can get the CF_API_KEY from the Cloudflare account’s overview page. Click the “Get your API token” link as shown below and view the Global API Key.

External DNS provider

Deploy the External DNS

Finally, install the External DNS deployment by running the below command.
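
Assuming the manifest above was saved as external-dns.yaml:

    kubectl create -f external-dns.yaml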

Check the status of the External DNS Deployment

You can use the following command to check on the status of the deployment:
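
For example:

    kubectl get deployments external-dns --namespace demo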

Run the following command to verify that external-dns has started successfully. If external-dns is up and running, the command returns output where the READY field shows 1/1 and the STATUS is Running.
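
The command, with a note on what to look for:

    kubectl get pods --namespace demo
    # Look for the external-dns pod with READY 1/1 and STATUS Running.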

You can also check the logs to confirm that the external-dns is up and running.
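
For example:

    kubectl logs deployment/external-dns --namespace demo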

We have now successfully deployed the External DNS.

Deploy the Couchbase Cluster

Now that we have deployed the Couchbase Autonomous Operator and External DNS in EKS, let’s deploy the Couchbase cluster.

We will deploy the Couchbase cluster with 3 data nodes across 3 availability zones using the minimum required configuration parameters. Please refer to the Configure Public Networking documentation for the required configuration options.

Create the Secret for Couchbase Admin Console

Let’s create a secret with the credentials that the administrative web console will use during login. When you create the below secret in your Kubernetes cluster, it sets the username to Administrator and the password to password.
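
A sketch of such a secret; the name cb-example-auth is a choice that must match spec.security.adminSecret in the cluster definition, and the data values are base64 encodings of the plain-text credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cb-example-auth
    type: Opaque
    data:
      username: QWRtaW5pc3RyYXRvcg==   # base64 of "Administrator"
      password: cGFzc3dvcmQ=           # base64 of "password"

Save it to a file and apply it with kubectl create -f <file> --namespace demo.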

Deploy the Couchbase cluster definition

We will use the default StorageClass that comes with EKS; let’s check it by running the following command. You can instead create a storage class that meets your requirements.
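
For example:

    kubectl get storageclass
    # On EKS the default is typically gp2, shown with "(default)" next to its name.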

To deploy a Couchbase Server cluster using the Operator, all you have to do is create a Couchbase cluster definition that describes what you want the cluster to look like (e.g. the number of nodes, types of services, system resources, etc), and then push that cluster definition into Kubernetes. 

The Operator package contains an example CouchbaseCluster definition file (couchbase-cluster.yaml).

The below cluster definition will deploy a Couchbase cluster with 3 data pods across 3 different zones using persistent volumes. Please check the Couchbase Cluster Resource documentation for the complete list of cluster configuration options.
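
Here is a sketch of such a definition (couchbase-cluster.yaml), written from memory of the 2.0 resource schema and using the secret names created earlier; treat it as a starting point and validate every field against the Couchbase Cluster Resource documentation:

    apiVersion: couchbase.com/v2
    kind: CouchbaseCluster
    metadata:
      name: cbopedns
    spec:
      image: couchbase/server:6.5.0
      security:
        adminSecret: cb-example-auth
      networking:
        exposeAdminConsole: true
        adminConsoleServiceType: LoadBalancer
        exposedFeatures:
        - client
        - xdcr
        exposedFeatureServiceType: LoadBalancer
        dns:
          domain: cbdemo.balajiacloud.guru
        tls:
          static:
            serverSecret: couchbase-server-tls
            operatorSecret: couchbase-operator-tls
      serverGroups:
      - us-east-1a
      - us-east-1b
      - us-east-1c
      servers:
      - name: data
        size: 3
        services:
        - data
        volumeMounts:
          default: couchbase
      volumeClaimTemplates:
      - metadata:
          name: couchbase
        spec:
          storageClassName: gp2
          resources:
            requests:
              storage: 10Gi

You can then push it into Kubernetes with kubectl create -f couchbase-cluster.yaml --namespace demo.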

After receiving the configuration, the Operator automatically begins creating the cluster. The amount of time it takes to create the cluster depends on the configuration. You can track the progress of cluster creation using the cluster status.

Verifying the Deployment

To check the progress, run the below command, which will watch (-w argument) the pods as they are created. If all goes well, we will have three Couchbase cluster pods hosting the services defined in the Couchbase cluster definition.
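
For example:

    kubectl get pods --namespace demo -w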

If for any reason there is an exception, you can find its details in the couchbase-operator log. To display the last 20 lines of the log, copy the name of your Operator pod and run the below command, replacing the Operator pod name with the one in your environment.
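
A sketch (the pod name is a placeholder):

    kubectl logs <couchbase-operator-pod-name> --namespace demo --tail 20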

Let’s also check the external-dns logs to confirm that the DNS records for the Couchbase pods are being created.
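
For example:

    kubectl logs deployment/external-dns --namespace demo
    # Look for messages about CNAME and TXT records being created for the pods.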

At this point, you can also check the DNS page by logging in to your Cloudflare account. You can see the CNAME and TXT records added by the External DNS controller.

External DNS provider

Accessing the Couchbase Web Console

Now you have a publicly addressable cluster that you can start using. In the EKS environment, the Couchbase web console can be accessed through the exposed LoadBalancer service of a specific pod. You should be able to connect to the Couchbase console at https://cbopedns-0000.cbdemo.balajiacloud.guru:18091/ (replace the pod name and DNS domain to match your environment).

Please refer to the Access the Couchbase Server User Interface documentation for more details on how to connect to the Couchbase console. Also, see Configure Client SDKs for details on how to connect client SDKs to the Couchbase cluster while using DNS-based addressing with External DNS.

Publicly addressable Couchbase cluster

Conclusion

In this blog, we saw how a Couchbase cluster can be made publicly addressable using the Couchbase Operator with Kubernetes External DNS. We discussed how the External DNS solution helps dynamically manage external DNS records from within a Kubernetes cluster. We used Amazon EKS as our Kubernetes environment, but the same steps also apply to other Kubernetes environments such as AKS, GKE, and OpenShift.


Author

Posted by Balaji Narayanan, Solutions Architect, Couchbase

Balaji Narayanan is a Solutions Architect in the CoE team at Couchbase. He has deep expertise in enterprise application design, development, and implementation using Java/Java EE technologies and cloud platforms, and extensive experience designing and evaluating solution architectures for private, public, and hybrid cloud models on AWS, Azure, and GCP. He is a certified professional in AWS and Kubernetes. Prior to joining Couchbase, Balaji was engaged with Microsoft building IaaS and PaaS platforms for Azure cloud-native services. He holds a Bachelor’s degree in Information Technology from Anna University (India).
