
Automated Schema Migrations

The first part of this series, “Evolutionary Database Design, GraphQL, APIs, and Database Schema Migrations”, is already available, and the second, "Schema Migrations with Hasura Console", is published now as well.

There are two primary paths to follow: the hand-crafted, scripted way, which I’ll detail in this blog post, and the graphical way using the Hasura Console interface, which will come in a subsequent post. Both methods start with the Hasura CLI tool. Check out the documentation, and for some extra information I wrote a post, Hasura CLI Installation & Notes. Once that prerequisite is taken care of, we can start on the data model schema and what we want to build.
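
If you just need the CLI quickly, one common way to install it, per the Hasura documentation (check the docs for the current recommended approach on your platform), is the install script:

# Install the Hasura CLI using the install script from the Hasura docs (Linux/macOS)
curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash

# Verify the installation
hasura version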

Why

In the first post of this series, “Evolutionary Database Design, GraphQL, APIs, and Database Schema Migrations: Part 1”, I outlined why we want to establish a good practice around schema migrations. To get into a real-world example of setup and usage, I’ve created the following database schema.

In this schema there are a number of tables and relationships between them: one-to-many, many-to-many, and even some recursive (elephant ear) relationships.
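
To make the recursive case concrete, here is a hypothetical sketch of what a self-referencing relationship looks like in SQL. This is just my guess based on the column names ("NoteJot"."NoteId" pointing back at "NoteJot"."Id"); the actual relationships get added in a later migration, not here.

-- A recursive (self-referencing) relationship: "NoteId" points at another row in the same table.
ALTER TABLE "NoteJot"
  ADD CONSTRAINT "NoteJot_NoteId_fkey"
  FOREIGN KEY ("NoteId") REFERENCES "NoteJot" ("Id");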

Inception Migration

First, we’ll need to get the basic tables created and then we’ll add the relationships. At least, that will be the easiest way. For this first schema migration I’ve put together the SQL needed to create these tables. We’ll add the relationships and additional elements after the tables are created. This will make up our first migration.

CREATE TABLE "Source" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "Name" text,
  "Uri" text,
  "Details" text
);
 
CREATE TABLE "SourceNotes" (
  "SourceId" uuid,
  "NotesId" uuid,
  "Details" text,
  "Stamp" timestamp
);
 
CREATE TABLE "NoteJot" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "NoteId" uuid,
  "Details" text
);
 
CREATE TABLE "Activity" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "Activity" json
);
 
CREATE TABLE "Connection" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ActivityId" uuid,
  "SourceId" uuid
);
 
CREATE TABLE "Formatter" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ConnectionId" uuid,
  "FormatterMap" json
);
 
CREATE TABLE "Schema" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ConnectionId" uuid,
  "SchemaMap" json
);

An important detail to note here: this inception migration breaks a few rules that, moving forward, we won’t want to break. Here are a few of the issues with this first migration:

  1. The migration has many parts to it. Generally, the goal is to keep each migration to a single step, such as creating a table, adding a column, or a similarly scoped addition.
  2. When creating new tables after this inception migration, we’d want to write the migration so that any table creation, or column addition, has its respective relationships added with it in that same step (a sketch of what that looks like follows below).
  3. There is no ordering of the tables. In subsequent steps we’ll need to pay close attention to ordering, since a table with relationships needs every table it references to exist before the relationship can be drawn, and, respectively, drops of tables and other deletions need to happen in the reverse order of their creation.

The reason for bending those rules is that this is an inception schema migration: the goal is to quickly get the initial table schema added and then start making more iterative changes once these core tables exist.
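
To illustrate what a later, better-scoped migration might look like, here is a hypothetical single-step migration that creates one new table together with its relationship. The "ActivityLog" table is made up purely for illustration and isn’t part of the schema above.

-- Hypothetical follow-up migration: one table, created together with its relationship.
CREATE TABLE "ActivityLog" (
  "Id" uuid PRIMARY KEY,
  "Stamp" timestamp,
  "ActivityId" uuid REFERENCES "Activity" ("Id"),
  "Entry" text
);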

Getting Started

Before creating the migration we’ll need to initialize the project and point our configuration file to our respective Hasura API and database. For this example, I’ll just execute the initialization command without any switches and accept the default options.

hasura init
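
If I’ve got the layout right (it varies a bit by CLI and config version), the command scaffolds a project directory roughly like this, assuming the directory ends up named hasura:

hasura/
  config.yaml
  migrations/
  metadata/    # present with config v2 and later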


To create our first schema migration, use the Hasura CLI command. Navigate into the hasura directory just created and execute the following.

hasura migrate create "inception"

The command returns a result displaying the name of the migration and its version.
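
The version is a timestamp, so the generated folder will look something like the following, though the exact number will differ on your machine:

migrations/
  1611269859025_inception/
    up.sql
    down.sql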

Now open up this folder, ideally with something like Visual Studio Code or your editor of choice. In the following image I’ve opened the folder in Visual Studio Code and opened the up.sql file for the migration we just created.


At this point the file is empty, just like the down.sql file. What are these up.sql and down.sql files? Let’s discuss.

Migrations Go Up & Down

In every migration, there is a path forward to bring the database schema up to the latest version, and a path backwards to take the database schema back to the previous version. Every migration requires these two steps. From the Hasura CLI, that involves two specific commands to move back and forth.

To go forward, or execute the migration that we’re about to create, we would run the following command.

hasura migrate apply

This executes all up migrations and brings the database schema up to the latest version. If we needed to roll back these changes and move to the previous schema migration version, the following command would execute the down migration one version back.

hasura migrate apply --down 1

The Up Migration

Ok, so those are the steps we’ll need to take, and the steps we’ll need to implement in SQL, to finish the first migration. Copy the SQL from above into the up.sql file.

We could execute the hasura migrate apply command right now, but instead let’s build out the drop commands for the down.sql first. That way, if any errors occur, we can roll back immediately. Even though that isn’t super important in this first step, it’s good practice to always build out both the up and down steps before executing the migration.

Since none of these tables are created with relationships at this point, the drop commands can be in any order. If we had drawn any relationships, the down migration would need to list the table drops in an order that allows the relationships to be removed first. Usually we wouldn’t run into this problem with schema migrations, since each migration is a small enough unit of work that complexity issues don’t arise; only the relationship, or only the table, would be created at any particular point in time.
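
As a purely hypothetical sketch of that ordering, suppose "SourceNotes" had a foreign key referencing "Source". The down migration would then need to drop the referencing table first, or remove the constraint before dropping in any order (the constraint name below is just Postgres’ default naming, used for illustration):

-- Drop the referencing table before the table it references.
DROP TABLE "SourceNotes";
DROP TABLE "Source";

-- Or remove the constraint first, then drop the tables in any order.
-- ALTER TABLE "SourceNotes" DROP CONSTRAINT "SourceNotes_SourceId_fkey";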

The Down Migration

The drop commands for each of the tables are as follows.

DROP TABLE "Source";

DROP TABLE "SourceNotes";

DROP TABLE "NoteJot" ;

DROP TABLE "Activity";

DROP TABLE "Connection";

DROP TABLE "Formatter";

DROP TABLE "Schema";

Add each of those to the down.sql file.

Point & Execute

The next step is to execute the migration against the Hasura instance. To get your own instance and try this out yourself, I’ve got a quick 5 minute and 37 second video on setting up a free tier Hasura instance. I’ll presume at this point you’ve got your own instance. To execute the migration, the URI for the instance is needed. Navigate to the console of the instance and get the URI as shown in this image.

Now open up the config.yaml file located at the root of our project folder structure and swap the localhost URI out for our new URI. Remove the path portion of the URI, `/v1/graphql`, so that only the base URI remains in the config.yaml file. It’ll look something like this.
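
As a rough sketch, with a placeholder endpoint standing in for your actual instance URI (the file may contain other settings depending on your CLI config version):

version: 2
endpoint: https://your-instance-name.hasura.app
admin_secret: your-admin-secret    # only if your instance has an admin secret set
metadata_directory: metadata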

Save that, and then execute the migration with the Hasura CLI.

hasura migrate apply

Navigate over to the console, click on the data tab, and we will now see the following tables.

Now if we wanted to roll back, issue the following command to migrate back one step.

hasura migrate apply --down 1

Once that is done, take a look back at the data tab (click refresh in the browser to make sure it’s updated) and the tables will have been dropped.
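
If you want to confirm where things stand at any point, the CLI also has a status command that lists each migration version and whether it’s applied; the exact output columns vary a bit across CLI versions.

hasura migrate status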

Go ahead and apply those changes, and we’ll use this as a base for the next part of this series. In part 3, we’ll get into how to use the console itself to build out our migrations. We’ll add some relationships between tables and other schema changes, cover what is and is not done through the migration process, and touch on how the metadata pairs with the Hasura migrations to provide full iterative versioning and movement back and forth between changes in the database and changes on the Hasura API server itself.

Summary

With these steps done, the first migration is now complete. The steps covered here, even if they seem simple at first, open up a wide expanse of what can be done from here. At the start I provided a schema to implement, then the SQL that needs to be executed to create each of the tables in that schema. Next I covered creating our schema migrations project folder structure using hasura init, and creating the first migration folder with its up.sql and down.sql files. From there, we added the SQL needed for the up and down migration steps, repointed the config.yaml URI to our respective Hasura API servers, and executed our first up migration. We then ran a down migration just to compare before and after, and ended with an up migration, leaving us prepared for the next post!
