Testing Node serverless applications — AWS Lambda functions

Written by Darko Milosevic

We all know that tests are critical to writing maintainable, high-quality code. It’s never easy to implement them, but it’s an important part of the development process.

The rise of serverless architecture has introduced new challenges. We now have functions that run in an environment we don’t control. There are ways to simulate that cloud environment, but is that reliable enough?

In this article, we’ll discuss several ways to simplify and streamline the task of testing serverless applications. We’ll focus on AWS, one of the most popular cloud providers, and we’ll write the code in Node.js since it is one of the most commonly used languages for serverless apps. Having said that, everything we’ll discuss here can be applied to other cloud providers and programming languages.

What are unit, integration, and end-to-end testing?

Generally speaking, there are three types of tests:

  1. Unit — Testing single, isolated pieces of logic
  2. Integration — Testing contracts between two or more units
  3. End-to-end — Running a complete test that covers everything

There are many differences between these three tests, including the learning curve, required resources, and effectiveness in reducing bugs. Unit tests are the cheapest option, for example, but you usually get what you pay for. On the other hand, end-to-end testing, while more expensive, is typically the most effective method.

In theory, you should have many, many unit tests, several integration tests, and a few end-to-end tests — at least, that’s the generally accepted best practice for standard applications. For serverless apps, however, we tend to write more end-to-end tests and eschew unit and integration testing, since the execution environment is outside our control.

In my opinion, with the right code structure and design, it’s possible to achieve solid code quality and a high level of confidence while maintaining a proper proportion of test types. To demonstrate, I’ll use a small but handy Lambda function as an example.

Now let’s dive into the code!


Testing serverless applications

Let’s say we have an assignment to implement a Lambda function that will:

  • Receive certain parameters from an SQS queue (Amazon’s Simple Queue Service)
  • Fetch an image from an S3 bucket (Amazon’s file storage service) according to those parameters
  • Reduce the size of the image and change it to a different format if desired
  • Upload the resulting image to the same S3 bucket

This is a fairly common use case for a Lambda function. Remember, to write good tests, you must first write testable code and functions. For that reason, I’ll show you both the implementation and the tests.

The trick when writing serverless functions is to detect all the places where the function communicates with the rest of the world and abstract that away so you can test those occurrences in isolation with some cheap unit tests. We’ll call these abstractions adapters.

Let’s go over some basics to help determine what adapters we’ll need for our project:

  • The function receives data/event in the form of a function parameter — let’s call it the EventParser
  • The function needs to fetch and upload files to S3 — we’ll call that adapter FileService

Adapters, then, are essentially for I/O. Beyond sending and receiving data to and from the outside world, our function has some logic of its own to implement. The core logic, reducing and reformatting images, will live inside image-reducer.js.

Adapters and image-reducer.js are logically isolated and, therefore, suitable for unit testing. When we’re done with that, we’ll need to connect them according to our business needs. We’ll do that inside the main.js file, which is suitable for integration testing (we’ll demonstrate that a bit later).

The folder structure would look like this:

image-reducer-service/
  adapters/          - abstractions for sockets/file system etc. 
    event-parser.js
    file-service.js
  utils/             - regular utils functions based on our needs 
  tests/             - all of the tests
  image-reducer.js   - core lambda logic
  main.js            - connects adapters and core logic, good for integration test
  index.js           - entry file for serverless app
  serverless.yml
  package.json
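The utils folder referenced above isn’t spelled out in the article. Based on how main.js uses it later, a minimal sketch might look like this (the exact implementations, including the "-small" suffix convention, are my assumptions):

// utils/index.js (hypothetical sketch; the article doesn't show this file)
const Crypto = require('crypto');

// a short random identifier for correlating the log lines of one execution
exports.generateRandomId = () => Crypto.randomBytes(8).toString('hex');

// append a suffix and swap the extension, e.g.
// appendSuffix('images/photo.png', 'webp') => 'images/photo-small.webp'
exports.appendSuffix = (key, format) => {
    const dotIndex = key.lastIndexOf('.');
    const base = dotIndex === -1 ? key : key.slice(0, dotIndex);
    return `${base}-small.${format}`;
};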

The main.js file will export a wrapper function that will receive, by dependency injection, every adapter and core-logic function it needs. This way, integration tests are easy to implement.

Here’s how that looks at the beginning:

// main.js
const { generateRandomId } = require('./utils');

exports.imageReducerService = async (event, FileService, ImageReducer) => {
    const executionId = generateRandomId();
    try {
        console.log(`Started imageReducerService id: ${executionId}`);
        /*----------------
        Connect everything here
        -----------------*/
        console.log(`Finished imageReducerService id: ${executionId}`);
    }
    catch (error) {
        console.error(`Thrown imageReducerService id: ${executionId}`);
        throw error;
    }
};

This main function is required in the index.js file, which contains the actual Lambda function that will be run on AWS and injects everything into our main function.

// index.js
const { EventParser, FileService } = require('./adapters');
const ImageReducer = require('./image-reducer.js');
const { imageReducerService } = require('./main.js');

exports.handler = (sqsMessage) =>
    imageReducerService(EventParser.parse(sqsMessage), FileService, ImageReducer);
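Note that index.js pulls both adapters from a single ./adapters module. The article doesn’t show that file, but a conventional barrel file re-exporting both adapters would do:

// adapters/index.js (assumed barrel file)
exports.EventParser = require('./event-parser.js');
exports.FileService = require('./file-service.js');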

Unit testing

Let’s write code and tests for the first adapter, EventParser. The purpose of this adapter is to receive an event and sanitize it so that our main function always gets a standard set of properties. This can be particularly interesting on AWS because Lambda functions can be connected to many sources (SQS, SNS, S3, etc.), and every source has its own event schema.

EventParser can be used to process every one of these and output a standardized event. For now, we only receive events via an SQS queue. This is how it would look:

// adapters/event-parser.js
const Joi = require('@hapi/joi');

const eventSchema = Joi.object({
    bucket: Joi.string().required(),
    key: Joi.string().required(),
    format: Joi.string().valid('png', 'webp', 'jpeg').default('png')
});
// SQS delivers the message body as a JSON string, so parse it when needed
const extractEvent = (sqsMessage) => {
    const { body } = sqsMessage.Records[0];
    return typeof body === 'string' ? JSON.parse(body) : body;
};

exports.parse = (sqsMessage) => {
    const eventObject = extractEvent(sqsMessage);
    const { value: payload, error } = eventSchema.validate(eventObject);
    if (error) {
        throw Error(`Payload error => ${error}.`);
    }
    return payload;
};

This function extracts the nested event from the SQS payload and ensures that the event has every required property via the Joi validation library. For SQS, the outer payload structure is always the same, so unit tests are more than enough to ensure everything works properly.

In this article, I’ll write tests using the Jest library. Here are the tests for the EventParser:

const EventParser = require('../../adapters/event-parser.js');
const createStubbedSqsMessage = (payload) => ({ Records: [{ body: payload }] });

describe('EventParser.parse() ', () => {
    test('returns parsed params if event has required params', async () => {
        const payload = {
            bucket: 'bucket',
            key: 'key',
            format: 'jpeg'
        };
        const stubbedSqsMessage = createStubbedSqsMessage(payload);
        const result = EventParser.parse(stubbedSqsMessage);
        expect(result).toBeDefined();
        expect(result.bucket).toBe(payload.bucket);
        expect(result.key).toBe(payload.key);
        expect(result.format).toBe(payload.format);
    });
    test('throws when event object has missing required params', async () => {
        const payload = {
            bucket: 'bucket'
        };
        const stubbedSqsMessage = createStubbedSqsMessage(payload);
        expect(() => EventParser.parse(stubbedSqsMessage)).toThrow();
    });
    test('throws when event has required params with incorrect type', async () => {
        const payload = {
            bucket: ['bucket'],
            key: 'key'
        };
        const stubbedSqsMessage = createStubbedSqsMessage(payload);
        expect(() => EventParser.parse(stubbedSqsMessage)).toThrow();
    });
});

The second adapter, FileService, should have the functionality to fetch and upload an image. Let’s implement that with streams using Amazon’s SDK.

// adapters/file-service.js
const Assert = require('assert');
const { Writable } = require('stream');
const Aws = require('aws-sdk');

exports.S3 = new Aws.S3();
exports.fetchFileAsReadable = (bucket, key) => {
    Assert(bucket && key, '"bucket" and "key" parameters must be defined');
    return exports.S3.getObject({ Bucket: bucket, Key: key }).createReadStream();
};
exports.uploadFileAsWritable = (bucket, key, writable) => {
    Assert(bucket && key, '"bucket" and "key" parameters must be defined');
    Assert(
      writable instanceof Writable,
      '"writable" must be an instance of stream.Writable class'
    );
    return exports.S3.upload({
        Bucket: bucket, Key: key, Body: writable, ACL: 'private'
    }).promise();
};

There aren’t any benefits to testing the Aws.S3 library since it is well-maintained. Problems will only arise if Lambda doesn’t have internet access — we’ll cover that in the end-to-end test. Here we’ll test for invalid parameters and/or proper passing of function parameters to the SDK.

Since the functions are very small in this case, we’ll only test the first case.

const FileService = require('../../adapters/file-service.js');

describe('FileService', () => {
    describe('fetchFileAsReadable()', () => {
        test('throws if parameters are undefined', async () => {
            expect(() => FileService.fetchFileAsReadable())
                .toThrow('"bucket" and "key" parameters must be defined');
        });
    });
    describe('uploadFileAsWritable()', () => {
        test('throws if last argument is not a writable stream', async () => {
            expect(() => FileService.uploadFileAsWritable('bucket', 'key', {}))
                .toThrow('"writable" must be an instance of stream.Writable class');
        });
    });
});
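If you also wanted to cover the second case — verifying that parameters are passed through to the SDK correctly — you could stub the exported S3 instance with Jest. A rough sketch (this stubbing approach is my suggestion, not from the article):

const FileService = require('../../adapters/file-service.js');

describe('FileService.fetchFileAsReadable()', () => {
    test('forwards bucket and key to the SDK', () => {
        // replace the real S3 client with a stub that records its calls
        const createReadStream = jest.fn();
        FileService.S3 = { getObject: jest.fn(() => ({ createReadStream })) };

        FileService.fetchFileAsReadable('bucket', 'key');

        expect(FileService.S3.getObject)
            .toHaveBeenCalledWith({ Bucket: 'bucket', Key: 'key' });
        expect(createReadStream).toHaveBeenCalled();
    });
});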

The next thing to implement and test is the core Lambda logic — i.e., the reducing and reformatting of images. We’ll keep it short and simple using the Sharp library for Node.js.

// image-reducer.js
const Sharp = require('sharp');
const WIDTH = 320;
const HEIGHT = 240;

exports.createTransformable = (format = 'png', width = WIDTH, height = HEIGHT) =>
    format === 'jpeg' ? Sharp().resize(width, height).jpeg() :
    format === 'webp' ? Sharp().resize(width, height).webp() :
    Sharp().resize(width, height).png();

This function takes certain parameters and creates a transform stream that can receive a readable stream of image binary data and transform it into a smaller image in a different format. Using a bit of Node’s stream magic, we can test all of this pretty easily by creating readable and writable stream stubs.

const Path = require('path');
const Fs = require('fs');
const Sharp = require('sharp');
const ImageReducer = require('../image-reducer.js');

const BIG_IMAGE_PATH = Path.join(__dirname, '/big-lambda.png');
const SMALL_IMAGE_PATH_PNG = Path.join(__dirname, '/small-lambda.png');
const SMALL_IMAGE_PATH_WEBP = Path.join(__dirname, '/small-lambda.webp');
const SMALL_IMAGE_PATH_JPEG = Path.join(__dirname, '/small-lambda.jpeg');

describe('ImageReducer.createTransformable()', () => {
    describe('reducing size and transforming image in .png format', () => {
        test('reducing image', async () => {
            const readable = Fs.createReadStream(BIG_IMAGE_PATH);
            const imageReductionTransformable = ImageReducer.createTransformable();
            const writable = Fs.createWriteStream(SMALL_IMAGE_PATH_PNG);

            readable.pipe(imageReductionTransformable).pipe(writable);
            await new Promise(resolve => writable.on('finish', resolve));

            const newImageMetadata = await Sharp(SMALL_IMAGE_PATH_PNG).metadata();
            expect(newImageMetadata.format).toBe('png');
            expect(newImageMetadata.width).toBe(320);
            expect(newImageMetadata.height).toBe(240);
        });
    });
    describe('reducing size and transforming image in .webp format', () => {
        test('reducing image', async () => {
            const readable = Fs.createReadStream(BIG_IMAGE_PATH);
            const imageReductionTransformable = ImageReducer
              .createTransformable('webp', 200, 100);
            const writable = Fs.createWriteStream(SMALL_IMAGE_PATH_WEBP);

            readable.pipe(imageReductionTransformable).pipe(writable);
            await new Promise(resolve => writable.on('finish', resolve));

            const newImageMetadata = await Sharp(SMALL_IMAGE_PATH_WEBP).metadata();
            expect(newImageMetadata.format).toBe('webp');
            expect(newImageMetadata.width).toBe(200);
            expect(newImageMetadata.height).toBe(100);
        });
    });
    describe('reducing size and transforming image in .jpeg format', () => {
        test('reducing image', async () => {
            const readable = Fs.createReadStream(BIG_IMAGE_PATH);
            const imageReductionTransformable = ImageReducer
              .createTransformable('jpeg', 200, 200);
            const writable = Fs.createWriteStream(SMALL_IMAGE_PATH_JPEG);

            readable.pipe(imageReductionTransformable).pipe(writable);
            await new Promise(resolve => writable.on('finish', resolve));

            const newImageMetadata = await Sharp(SMALL_IMAGE_PATH_JPEG).metadata();
            expect(newImageMetadata.format).toBe('jpeg');
            expect(newImageMetadata.width).toBe(200);
            expect(newImageMetadata.height).toBe(200);
        });
    });
});

Integration testing

The purpose of integration testing is to test contracts and integrations between two or more code components that are already unit tested. Since we didn’t integrate all the code above, let’s do that now.

// main.js
const { promisify } = require('util');
const { PassThrough, pipeline } = require('stream');
const { generateRandomId, appendSuffix } = require('./utils');
const pipelineAsync = promisify(pipeline);

exports.imageReducerService = async (event, FileService, ImageReducer) => {
    const executionId = generateRandomId();
    try {
        console.log(`Started imageReducerService id: ${executionId}`);

        const { bucket, key, format } = event;
        const readable = FileService.fetchFileAsReadable(bucket, key);
        const imageReductionTransformable = ImageReducer.createTransformable(format);
        const writable = new PassThrough();

        const newKey = appendSuffix(key, format);
        const pipelineProcess = pipelineAsync(
          readable,
          imageReductionTransformable,
          writable
        );
        const uploadProcess = FileService
          .uploadFileAsWritable(bucket, newKey, writable);
        await Promise.all([pipelineProcess, uploadProcess]);

        console.log(`Finished imageReducerService id: ${executionId}`);
    }
    catch (error) {
        console.error(`Thrown imageReducerService id: ${executionId}`);
        throw error;
    }
}; 

This code takes the parsed event that our EventParser has sanitized and, based on it, fetches the image from S3 as a readable stream. It then creates an image reduction transform stream and a writable PassThrough stream, wires the readable, transform, and writable streams into a pipeline, and finally uploads the writable end to the S3 bucket under a new key. In other words, all this code does is fetch, resize, and upload images in stream form.

Since this example Lambda function is not so big, all the wiring was done in a single file and we can cover it with a single test. In other situations, it may be necessary to split it into several tests.

Here’s our test:

require('dotenv').config();
const { EventParser, FileService, ImageReducer } = require('../adapters');
const { imageReducerService } = require('../main.js');
const { appendSuffix } = require('../utils');
const createFakeSqsMessage = (payload) => ({ Records: [{ body: payload }] });

describe('ImageReducerService', () => {
    test('integration', async () => {
        const realBucket = process.env.BUCKET;
        const existingFileKey = process.env.KEY;
        const sqsMessage = createFakeSqsMessage({
            bucket: realBucket,
            key: existingFileKey
        });
        await imageReducerService(
          EventParser.parse(sqsMessage),
          FileService,
          ImageReducer
        );
        // check if the new reduced image exists on the S3 bucket
        const reducedImageMetadata = await FileService.S3
            .headObject({
              Bucket: realBucket,
              Key: appendSuffix(existingFileKey, 'png')
            })
            .promise();
        expect(reducedImageMetadata).toBeDefined();
   });
});

This test is actually targeting a real S3 bucket using environment variables. There are upsides and downsides to this approach. On one hand, it is more realistic, almost like an end-to-end test (if we don’t consider that the payload doesn’t actually originate from a real SQS queue). The downside is that it is fragile and flaky since the connection could always go down.
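For reference, the environment variables that dotenv loads for these tests might look like this (all values are placeholders):

# .env (placeholder values)
BUCKET=my-staging-bucket
KEY=images/big-lambda.png
SQS_QUEUE=https://sqs.us-east-1.amazonaws.com/123456789012/image-reducer-queue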

An alternative is to use one of several plugins that can simulate a Lambda environment — and, in fact, almost all of the AWS services — using Docker images. One of them is Serverless Offline, which has a vast list of extensions. This can be really useful, but it comes with the opposite trade-offs: it is less realistic and provides less confidence, but it is easier to set up and faster to execute.

For this Lambda, I would just go with the first path since it is fairly simple. For more complex code, I would reconsider and go with the second option, since we are going to test the code again using real cloud infrastructure as part of the end-to-end testing.

End-to-end testing

If you recall, everything we wrote is integrated into a single line of code — actually, two lines, but only because of the formatting. It looks like this:

const { EventParser, FileService } = require('./adapters');
const ImageReducer = require('./image-reducer.js');
const { imageReducerService } = require('./main.js');

exports.handler = (sqsMessage) =>
    imageReducerService(EventParser.parse(sqsMessage), FileService, ImageReducer);

Now that we’ve finished all the unit and integration tests we need to conduct, it’s time to test our function in real-life conditions using real AWS infrastructure. Since our Lambda function receives events from an SQS queue, we need to insert a message into the queue that is connected to the function and determine whether a new image exists on a given S3 bucket after the function has finished executing.

require('dotenv').config();
const Aws = require('aws-sdk');
const { appendSuffix } = require('../utils');

Aws.config.update({region: 'us-east-1'});
const Sqs = new Aws.SQS({ apiVersion: '2012-11-05' });
const S3 = new Aws.S3();

// the queue-to-S3 round trip takes a while, so extend Jest's default timeout
jest.setTimeout(30000);

describe('imageReducerService', () => {
    test('end-to-end functionality', async () => {
        const realBucket = process.env.BUCKET;
        const existingFileKey = process.env.KEY;
        const event = { bucket: realBucket, key: existingFileKey };
        const params = {
          MessageBody: JSON.stringify(event),
          QueueUrl: process.env.SQS_QUEUE
        };
        await Sqs.sendMessage(params).promise();

        // give the deployed Lambda time to pick up and process the message
        await new Promise((resolve) => setTimeout(resolve, 10000));

        const reducedImageMetadata = await S3
            .headObject({
              Bucket: realBucket,
              Key: appendSuffix(existingFileKey, 'png')
            })
            .promise();
        expect(reducedImageMetadata).toBeDefined();
    });
});

This test encompasses every piece of the infrastructure that our Lambda will use and helps ensure that everything is connected properly. It creates an action flow that is exactly like it would be in real time. Therefore, it requires that everything is already up and running on AWS.
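The article doesn’t show the serverless.yml listed in the folder structure, but a minimal configuration wiring the handler to an SQS event source might look like this (the service name, runtime, region, and queue ARN are all assumptions):

# serverless.yml (a minimal, assumed configuration)
service: image-reducer-service

provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1

functions:
  imageReducer:
    handler: index.handler
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:123456789012:image-reducer-queue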

We can run this test in a staging/QA environment first, and then again on the actual production environment to ensure that everything is connected. Optionally, we can use Lambda aliases to automate the flow. We would first deploy the new version of the function, then run an end-to-end test, and, if all goes well, switch aliases between the currently active function and the newer version.
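A rough sketch of that alias switch using the Node SDK (the function name, alias name, and version here are hypothetical):

// switch-alias.js (hypothetical sketch of the alias flip)
const Aws = require('aws-sdk');
const Lambda = new Aws.Lambda({ region: 'us-east-1' });

// once the end-to-end test passes, point the "live" alias at the new version
const switchAlias = (newVersion) =>
    Lambda.updateAlias({
        FunctionName: 'image-reducer-service',
        Name: 'live',
        FunctionVersion: newVersion
    }).promise();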

Conclusion

If you’d like to see everything in one place, you can find the complete code from this article in this GitHub repo.

Writing tests for Lambda is not a simple task. For a Lambda function to be testable, we have to be mindful from the very beginning of the implementation and plan the design accordingly.

