Brian Haller

Caching GraphQL Responses with TrunQ

TrunQ

A lightweight NPM library for GraphQL response caching, now in beta. Landing Page and GitHub

A Universal Problem

In a world where over 50% of web traffic is from mobile devices, load time optimization becomes more important than ever. In a world where developers rent compute power, client-server data optimization is king. In a world where...well, you get it. Every millisecond of load time, every byte of data saved translates to sales and savings for your project.

GraphQL Gets You Halfway There...

GraphQL boasts improvements over RESTful APIs in one of those optimization categories. Properly implemented, GraphQL fetches exactly the data a data-hungry client needs, no more and no less. However, GraphQL falls short in other areas of optimization. The fetching still takes time, especially to remote or external servers and APIs. And because GraphQL relies exclusively on the POST method, it forfeits native HTTP caching, reintroducing over-fetching as identical queries are re-run and bogging down load times.

...But It Falls Short

One of the easiest ways to reduce load times and minimize data fetching is caching. However, because GraphQL's architecture does not allow for HTTP caching via GET, there is no easy, native way to implement this basic optimization. Developers are left to build their own caching solutions if they want performant responses. This has produced many unique, custom-tailored systems but no widely adopted catch-all. The leading option on the market right now is Apollo, but it requires the developer to use Apollo as their GraphQL server and takes a fair amount of code and effort to implement. Small applications running GraphQL often don't need the heavy and complex Apollo InMemoryCache but would greatly benefit from a small, easy way of stashing query responses.
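To see why this gets painful, here is a minimal sketch of the kind of hand-rolled cache a developer might write on their own: responses keyed by the raw query string, held in a Map. All names here (`makeQueryCache`, the injected `fetchFn`) are illustrative, not part of any library.

```javascript
// A minimal hand-rolled query cache: responses keyed by endpoint + query
// string. The fetch function is injected so the cache logic stays testable.
function makeQueryCache (fetchFn) {
  const cache = new Map()
  return async function cachedQuery (endpoint, query) {
    const key = endpoint + '::' + query
    if (cache.has(key)) return cache.get(key)     // cache hit: skip the network
    const result = await fetchFn(endpoint, query) // cache miss: fetch and store
    cache.set(key, result)
    return result
  }
}
```

Every repeated query now skips the network, but this naive version knows nothing about overlapping fields, expiration, or query variables, and every developer ends up solving those asynchronous coordination problems by hand.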

Our Solution

Enter TrunQ, our lightweight solution. Our team just released a beta version on NPM broken into two packages: client-side caching in local memory and server-side caching leveraging a Redis database (npm i trunq and npm i trunq-server, respectively). Perfect for small applications that use queries to access a third-party or remotely hosted GraphQL server, we simplify caching down to a couple of parameters and then do all fetching, caching, and parsing on behalf of the developer.

A Very Simple Implementation

Take a simple GraphQL request:

const myGraphQLQuery = `query {
  artist (id: "mark-rothko") {
    name
    artworks (paintingId: "chapel", size: 1) {
      name
      imgUrl
    }
  }
}`

After requiring it in, all the client side implementation takes is turning your code from:

function fetchThis (myGraphQLQuery) {
  let results
  fetch('/graphQL', {
    method: 'POST',
    body: JSON.stringify(myGraphQLQuery)
  })
    .then(res => res.json())
    .then(parsedRes => results = parsedRes)
  // ...(rest of code)
}

Into:

async function fetchThis (myGraphQLQuery) {
  let results = await trunq.trunQify(myGraphQLQuery, ['id', 'paintingId'], '/graphQL', 'client')
  // ...(rest of code)
}

And voilà, you can now cache responses client-side! We've saved you lines of code and the heartache of coordinating asynchronous operations between a cache and a fetch, and instead present the response in the same format you would expect from a regular GraphQL response.

Our server-side implementation is just as easy and out-of-the-box.
After requiring it in and spinning up a Redis database, all that's needed is an endpoint for your third-party or remote API and an optional Redis port. Then just add our middleware, and responses are returned right back to our client-side package!

const trunQ = new TrunQServer(graphQL_API_URL, [redisPort], [cacheExpire]);

app.use('/graphql', trunQ.getAllData, (req, res, next) => {
    res.status(200).json(trunQ.data);
})

Fun Features

  • Server and client side caching specified by developer
  • No need to tag queries with keys, we do that for you
  • Partial field matching means only missing fields will be fetched
  • Superset and subset matching prevents refetching any data previously fetched
  • Nested queries in one body of a fetch request will be parsed and cached separately; subsequent independent calls to each query will return from cache.
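As an illustration of the partial-matching idea (a conceptual sketch, not TrunQ's actual internals), matching boils down to diffing the fields a query requests against the fields already sitting in the cache and fetching only the difference. The `missingFields` helper below is hypothetical.

```javascript
// Illustrative only: given the fields already cached for a query and the
// fields now requested, only the missing fields need a network trip.
function missingFields (cachedFields, requestedFields) {
  return requestedFields.filter(field => !cachedFields.includes(field))
}
```

Under this model, a superset request like `missingFields(['name'], ['name', 'imgUrl'])` yields `['imgUrl']`, so only the new field is fetched, while a subset request like `missingFields(['name', 'imgUrl'], ['name'])` yields `[]` and is served entirely from cache.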

Visit our GitHub for detailed step-by-step instructions for our packages and Redis, and leave stars or feedback for improvement!
