Two weeks ago, I created a WordPress installation on an OpenShift test system. It is very convenient that OpenShift provided me with persistent volumes for WordPress and the database. However, those volumes are not easily accessible on the OpenShift host. Therefore the question arises: how can I create a MySQL backup on OpenShift without prior knowledge of the database credentials?

Below, we will explore which command-line tools OpenShift gives us for accomplishing backup tasks.

Here, I am closely following the instructions found in the official v3.10 OpenShift documentation, chapter 30.

MySQL Backup & Restore

Step 1: Retrieve POD Information

Step 1.1: Log in and enter the correct Project

For retrieving the POD Information, you need to log in to OpenShift and enter the correct project (the project name might be different in your case):

oc login
oc project wordpress-standalone

Full log:

# oc login
Authentication required for https://console.159.69.198.32.nip.io:8443 (openshift)
Username: admin
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-service-catalog
    kube-system
    management-infra
    openshift
    openshift-infra
    openshift-logging
    openshift-node
    openshift-sdn
    openshift-template-service-broker
    openshift-web-console
    wordpress
    wordpress-standalone
# oc project wordpress-standalone
Now using project "wordpress-standalone" on server "https://console.159.69.198.32.nip.io:8443".

Step 1.2: Retrieve POD Name

Let us list the PODs of the project:

# oc get pods
NAME                           READY     STATUS      RESTARTS   AGE
my-wordpress-site-1-build      0/1       Completed   0          17d
my-wordpress-site-1-deploy     0/1       Error       0          17d
my-wordpress-site-2-build      0/1       Completed   0          17d
my-wordpress-site-2-deploy     0/1       Error       0          17d
my-wordpress-site-3-hh6gn      1/1       Running     0          17d
my-wordpress-site-db-1-jghrf   1/1       Running     2          17d

We can also write the POD name into a POD variable as follows. If the result is empty or ambiguous, you might need to select the correct name from the list above.

# POD=pod/$(oc get pods | grep Running | grep '\-db\-' | awk '{print $1}'); echo $POD
pod/my-wordpress-site-db-1-jghrf
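
If the grep pipeline above gives an ambiguous result, a somewhat more robust sketch is to let oc filter the running PODs itself via jsonpath (assuming your oc client supports jsonpath filter expressions) and only grep for the '-db-' naming convention of this project:

POD=pod/$(oc get pods -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' | tr ' ' '\n' | grep '\-db\-')
echo $POD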

Step 2: Read MySQL Data from the POD

The MySQL data can be retrieved from the POD as follows:

# oc describe $POD | grep MYSQL | grep -v Readiness
      MYSQL_DATABASE:  wordpress
      MYSQL_USER:      myuser
      MYSQL_PASSWORD:  mypass

However, the user and password will be random, auto-generated values in your case. We can now use these values to dump the database.

Programmatically, we can feed the values into corresponding environment variables as follows:

POD=pod/$(oc get pods | grep Running | grep '\-db\-' | awk '{print $1}')
source <(oc describe $POD | grep MYSQL | grep -v Readiness | sed 's/:[ ]*/=/')
echo $MYSQL_DATABASE
echo $MYSQL_USER
echo $MYSQL_PASSWORD

The output will be something like:

wordpress
myuser
mypass
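
As an alternative to parsing the oc describe output, oc set env can list a pod's environment variables directly. Here is a sketch; double-check that the --list flag behaves like this on your client version:

oc set env $POD --list | grep ^MYSQL_
source <(oc set env $POD --list | grep ^MYSQL_)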

Step 3: Dump and Retrieve Data

Step 3.1 (Simple Solution): Remote MySQL Dump to local File

First, try to directly dump the MySQL database to a temp file on the OpenShift node as follows:

POD=pod/$(oc get pods | grep Running | grep '\-db\-' | awk '{print $1}')
source <(oc describe $POD | grep MYSQL | grep -v Readiness | sed 's/:[ ]*/=/')
oc rsh $POD /opt/rh/rh-mysql57/root/usr/bin/mysqldump --skip-lock-tables -h 127.0.0.1 -P 3306 -u $MYSQL_USER --password=$MYSQL_PASSWORD --all-databases > /tmp/all.sql

If it works, the dump file will be located at /tmp/all.sql:

# ls -l /tmp/all.sql
-rw-r--r--. 1 root root 88303 Nov 21 23:46 /tmp/all.sql
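
Note that the shell redirect creates /tmp/all.sql even if mysqldump fails inside the POD, so it is worth peeking at the first lines to make sure you got a real SQL dump and not an error message:

head -n 5 /tmp/all.sql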

If this does not work, the path to 'mysqldump' might be different in your case. In that case, you need to explicitly log in to the container, dump the database, and rsync the dump file to the OpenShift host, as shown in Step 3.2 below.
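
Before falling back to the interactive method, you can first ask the container where its mysqldump lives. A sketch: in the rh-mysql57 image I used, the bash -c wrapper makes sure the software collection environment, and with it the correct PATH, is loaded:

oc rsh $POD bash -c 'which mysqldump'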

Step 3.2 (Reliable Solution): Interactive Dump with RSYNC to local Directory

If Step 3.1 does not work, try the following:

POD=pod/$(oc get pods | grep Running | grep '\-db\-' | awk '{print $1}')
oc rsh $POD
mkdir /var/lib/mysql/data/db_archive_dir
mysqldump --skip-lock-tables -h 127.0.0.1 -P 3306 \
  --user=$MYSQL_USER --password=$MYSQL_PASSWORD --all-databases > /var/lib/mysql/data/db_archive_dir/all.sql
exit
oc rsync $POD:/var/lib/mysql/data/db_archive_dir /tmp/

Note: the environment variables MYSQL_USER and MYSQL_PASSWORD are defined within the MySQL container, so they can be used directly inside the container.
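
You can quickly check whether those variables are actually set in your container before relying on them (a sketch):

oc rsh $POD env | grep ^MYSQL_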

If those variables are not set in your container, replace them with the user and password credentials you found in step 2.

If successful, the file all.sql will now be available in '/tmp/db_archive_dir':

# ls -l /tmp/db_archive_dir/
total 88
-rw-r--r--. 1 1000120000 root 87800 Nov 21 23:55 all.sql
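
For larger databases, it can be worth compressing the dump inside the POD before transferring it. A sketch, assuming gzip is available in the image (it was in mine):

oc rsh $POD gzip /var/lib/mysql/data/db_archive_dir/all.sql
oc rsync $POD:/var/lib/mysql/data/db_archive_dir /tmp/
gunzip /tmp/db_archive_dir/all.sql.gz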

Step 4: Testing the Restore Process on OpenShift

You should never perform a backup and rely on it without having tested that a restore will work correctly. We will do that now. For a simple test, we create a non-persistent MySQL POD (a POD without volumes), copy and import the backup file into the database, and run MySQL commands inside the POD to verify that the correct data is retrieved.

oc login
oc new-project my-wordpress-site-backup
source <(oc describe $POD -n wordpress-standalone | grep MYSQL | grep -v Readiness | sed 's/:[ ]*/=/')
oc new-app mysql-ephemeral \
  -p MYSQL_USER=$MYSQL_USER \
  -p MYSQL_PASSWORD=$MYSQL_PASSWORD \
  -p MYSQL_DATABASE=$MYSQL_DATABASE \
  -p DATABASE_SERVICE_NAME='mysql2'

Note: to create a persistent MySQL installation with a volume attached to it, use the mysql-persistent template instead of the mysql-ephemeral template. For our purpose of testing the restore, an ephemeral installation is sufficient.

Note: according to the documentation, the DATABASE_SERVICE_NAME must be unique (within the project?). 'mysql' is the default value, so we are choosing 'mysql2' here.

Note: to review the content of the mysql-ephemeral template, just type:

oc get template -n openshift mysql-ephemeral -o yaml
# or
oc describe template -n openshift mysql-ephemeral
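
If you are only interested in the parameters the template accepts (and their default values), oc process can list them:

oc process --parameters -n openshift mysql-ephemeral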

Now let us wait until the POD is up and running:

watch oc get pod

With this command, we can wait until the POD is in "Running" state. After some time we see:

Every 2.0s: oc get pod                                                                                                            Tue Nov 27 23:10:16 2018

NAME              READY     STATUS    RESTARTS   AGE
mysql2-1-deploy   1/1       Running   0          45s
mysql2-1-rf99h    0/1       Running   0          22s
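
If you prefer a command that simply blocks until the deployment has finished (handy in scripts), a sketch using oc rollout status should work, assuming the template created a DeploymentConfig named after our DATABASE_SERVICE_NAME:

oc rollout status dc/mysql2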

The POD can be retrieved programmatically with:

# POD=$(oc get pod | grep mysql2 | grep -v deploy | awk '{print $1}')
# echo $POD
mysql2-1-rf99h

We now copy the database backup that I had placed in the 'db_archive_dir' folder:

oc rsync /tmp/db_archive_dir $POD:/var/lib/mysql/data

Let us enter the POD:

oc rsh $POD

Inside the container, we enter $HOME and verify that the backup file is present:

$ cd $HOME
$ ls data/db_archive_dir
all.sql

Let us import the backup file we have just transferred:

$ mysql -u root
mysql> source data/db_archive_dir/all.sql

Subsequently, you should see many OK messages.
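
As a quick sanity check that the tables actually arrived, still at the mysql prompt ('wordpress' is the database name from my example; adjust it to yours):

mysql> use wordpress;
mysql> show tables;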

At this point, you might need to mess around with GRANTs as described in the OpenShift documentation. However, I was working as root, and it was not necessary in my case.

Now let us check that the database has the correct content on both the source MySQL POD and the backup MySQL POD:

# on source POD:
mysql> select user_login from wp_users;
+----------------+
| user_login     |
+----------------+
...
| oveits         |
...
+----------------+

# on destination POD:
mysql> select user_login from wp_users;
+----------------+
| user_login     |
+----------------+
...
| oveits         |
...
+----------------+

Yes, the content is the same on the source system and on the destination system, so it seems the backup and restore were successful.

Excellent!
