How and When to Use Multiple Prisma Schemas

Article summary

On my current software development project, we’re leveraging SQLite as an application file format for performance and scalability reasons. We decided to go with Prisma to interact with our application file, both on the write side and the read side. However, the read and write sides had different, competing constraints: the upstream data is asynchronously generated and persisted using a clever application of content-addressable storage.

To take full advantage of the asynchronous, content-addressed nature of the data being fed into our application file, it was helpful to have a very permissive schema that allows null values in our records. After the entire file has been written, though, we still want accurate types coming from Prisma.

Background

It turns out Prisma can handle having a write-time and a read-time schema pretty easily! Consider the following Prisma schema:


model Info {
    id String @id
    foo String
    bar String
    baz String
}

This is where we want our schema to end up, but while the database is being populated, foo, bar, and baz will be inserted at separate times. That will look something like the following:


await db.info.upsert({
    where: { id: "info1" },
    // Only foo has arrived so far; bar and baz will come in later upserts.
    create: { id: "info1", foo: "foo1" },
    update: { foo: "foo1" },
});

But Prisma will blow up, complaining that we didn’t specify the entire record in the create block: under the schema above, bar and baz are required. In order to support our async upsert write approach, we need this schema instead:


model Info {
    id String @id
    foo String?
    bar String?
    baz String?
}

Now our upsert operation succeeds at both compile time and runtime. However, our downstream use of our application format has been polluted with spurious nulls that need handling everywhere. We could just write a stricter read schema, point its client at our database, and carry on with our lives (Prisma will let you do that), but we can do better.
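To make the cost of those spurious nulls concrete: under the permissive schema, every consumer ends up writing guards like the following. This is a minimal sketch, where db is the same generated client used in the upsert above.

const info = await db.info.findUnique({ where: { id: "info1" } });

// Even after the file is fully written, foo, bar, and baz are all typed
// `string | null` under the permissive schema, so every read path needs
// a guard like this before TypeScript will treat them as `string`.
if (!info || info.foo === null || info.bar === null || info.baz === null) {
    throw new Error("Incomplete Info record: info1");
}

console.log(info.foo.toUpperCase()); // narrowed to `string` only after the guard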

Solution

Prisma is able to compare two schema files and generate a script that migrates a database from one to the other. This gives us all the building blocks we need: a permissive, flexible write-time schema and a verified, accurate read-time schema.

First, we need to have two schemas generating two separate clients that we can use from our project code. The secret sauce is to create two Prisma schemas and set their generator blocks up like so:

For the write schema:


generator client {
  provider = "prisma-client-js"
  output   = "node_modules/nif-prisma-generated/write"
}

And for the read schema:


generator client {
  provider = "prisma-client-js"
  output   = "node_modules/nif-prisma-generated/read"
}
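One detail the generator blocks don’t show: each schema file also needs a datasource block, and both files must point at the same SQLite database. A minimal sketch, assuming the conventional DATABASE_URL environment variable:

datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}

With both files in place, running prisma generate once per schema produces the two clients:

yarn prisma generate --schema path/to/write.schema
yarn prisma generate --schema path/to/read.schema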

Then, our write module can import the write-side Prisma client as follows:


import { Prisma, PrismaClient } from "nif-prisma-generated/write";

And the read side can use this:


import { Prisma, PrismaClient } from "nif-prisma-generated/read";
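Because the two clients are generated from different schemas, TypeScript now sees different types for the same table. Here is a minimal sketch of the payoff; the aliased imports are just for illustration, and findUniqueOrThrow is standard Prisma client API:

import { PrismaClient as WriteClient } from "nif-prisma-generated/write";
import { PrismaClient as ReadClient } from "nif-prisma-generated/read";

const writeDb = new WriteClient();
const readDb = new ReadClient();

// Write side: foo, bar, and baz are `string | null`, so partial upserts type-check.
await writeDb.info.upsert({
    where: { id: "info1" },
    create: { id: "info1", foo: "foo1" },
    update: { foo: "foo1" },
});

// Read side: the same columns are plain `string`, so no null checks leak downstream.
const info = await readDb.info.findUniqueOrThrow({ where: { id: "info1" } });
const foo: string = info.foo;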

Finally, we need to migrate the database from write mode to read mode. For that, we can combine a pair of Prisma CLI commands:


yarn --silent prisma migrate diff \
      --from-schema-datamodel path/to/write.schema \
      --to-schema-datamodel path/to/read.schema --script \
    | yarn prisma db execute --stdin \
      --schema path/to/read.schema

This command will correctly error if the actual database doesn’t match the more restrictive read schema: if any row still contains a null in a now-required column, executing the generated script fails rather than silently producing a database that violates the read schema.
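For SQLite, which can’t alter a column to NOT NULL in place, the generated script takes the classic rebuild shape, roughly like the following (an illustrative sketch, not Prisma’s exact output):

CREATE TABLE "new_Info" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "foo" TEXT NOT NULL,
    "bar" TEXT NOT NULL,
    "baz" TEXT NOT NULL
);

-- This INSERT is where the safety check happens: it fails with a NOT NULL
-- constraint violation if any row still has a null in foo, bar, or baz.
INSERT INTO "new_Info" ("id", "foo", "bar", "baz")
    SELECT "id", "foo", "bar", "baz" FROM "Info";

DROP TABLE "Info";
ALTER TABLE "new_Info" RENAME TO "Info";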


