DEV Community

miguel-penaloza

The What, Why and How of React (Testing with Hooks)

This is the second part of a series of posts focused on testing a React app. If you want to check out the first one, where we discuss basic concepts, you can find it here.

Today, we're going to delve deeper into testing. I will explain how to test a React application, the best patterns for making tests, useful utilities, and some tips to make the TDD/BDD technique easier while you code your application to ensure your tests are more SOLID.

NOTE: In this post, I will replace Enzyme with 'React Testing Library,' which, in my humble opinion, is more restrictive than Enzyme but at the same time forces you to write better tests.

What else should I learn about testing?

In the previous post, we talked about what testing is, introduced basic concepts, and discussed some of the libraries we can use. However, that is just the tip of the iceberg. The culture of testing is not merely about learning acronyms and then applying them to your code; it is an integral part of development.

One thing that happened since my last post was the release of React v16.8. When they introduced the hooks concept into our lives, I saw many posts from people trying to explain why you should migrate to this new feature. They provided examples of how you can replace each internal lifecycle method (such as componentDidMount, etc.) with a hook (useEffect, useState, etc.). However, when I tried to find information on testing hooks, there wasn't much available.

In my projects, I try to migrate to the latest version of React so that I can use hooks, attracted by the promise of a more functional world where our React components simply receive data and actions. Their only responsibility is to trigger these actions and display the data, a concept I find very appealing.

When I first tried using a hook, I relied on my code and tests, trusting that my tests would identify failures when something broke. I expected my tests to fail if something was amiss during the migration from a Class component to a Component with hooks, yet my tests should not break. The UI should remain the same, the data received should be unchanged, and the same actions should be called; I'm merely shifting implementation details within my component.

In my team, we adhere to the rule of 'only create a Class React Component if you need it,' and this guideline works well for us. We only create Class Components when we need to handle state or manage lifecycle events (mount, update, or unmount). Otherwise, we use a function that returns the component, a common rule that I understand many people follow.

When I tried to migrate my first Class Component, it was easy because it only used state. I simply needed to replace the state of the Class Component with useState. My component looks like this: it's just a simple input that animates the title when you focus on the input. There's no business logic involved and nothing too complicated to handle.

export class Input extends Component {
    constructor(props){
        super(props);
        this.state = {
            focus: false
        };
    }
    render(){
        const { title, value, isFilter } = this.props;
        return(
            <div>
                {title && <LabelTitle value={value} isFocus={this.state.focus}>{title}</LabelTitle>}
                <InputForm 
                    onFocus={()=> this.setState({ focus: true })}
                    onBlur={()=> this.setState({ focus: false })}
                    {...this.props}
                />
                {isFilter && <IconInput><img src={iconEye} alt="icon-eye" /></IconInput> }
            </div>
        );
    }
}

Now that I've migrated my component, it looks like this:

export const Input = ({ title, value, isFilter, type, width, onChange }) => {
    const [focus, changeFocus] = useState(false);
    return (
        <div>
            {title && <LabelTitle value={value} isFocus={focus}>{title}</LabelTitle>}
            <InputForm
                onFocus={() => changeFocus(true)}
                onBlur={() => changeFocus(false)}
                type={type}
                width={width}
                onChange={onChange}
            />
            {isFilter && <IconInput><img src={iconEye} alt="icon-eye" /></IconInput>}
        </div>);
};

It's essentially the same component, with the same behavior but less code. However, my tests started to fail; all the unit tests related to the input behavior failed. When I tried to understand why, I realized that one of their assertions was verifying this:

expect(input.state('focus')).toBeFalsy();

I realized that I no longer have a .state method because it's no longer a class; it's just a component. Then, I also noticed that I had overused .update() and setProps() in my previous tests. My tests were fine when I wrote them, but now they are too closely tied to my implementation. If I try to migrate to the latest version of React, my tests will fail. This means that I need to refactor all my tests and my code to use hooks.

I was at a crossroads: I could leave the code as it was; it was working, and no one was asking me to migrate to hooks. I didn't need to refactor everything just to use something new. However, I realized something more significant than the need for hooks in my code: my tests were preventing me from writing good code. That's why I chose to refactor everything and make the code great again.

Before thinking about refactoring, I needed to understand why my tests were so tightly coupled to the implementation details. I reviewed my tests repeatedly and found instances where I used mount and shallow from Enzyme to render components and then check their state and props. I also used setProps to simulate received data, which was acceptable at the time. However, now that React has evolved (while maintaining backward compatibility), I can't upgrade because my code is too intertwined with its tests.

After extensive research, I discovered a new library called React Testing Library. I found that this library offers fewer features than Enzyme; you cannot check states, props, or manipulate lifecycles. Instead, you can only render components once, pass props, find elements by testid, and wait for elements to be displayed. Check this out:

test('Fetch makes an API call and displays the greeting when load-greeting is clicked', async () => {
  // Arrange
  axiosMock.get.mockResolvedValueOnce({data: {greeting: 'hello there'}})
  const url = '/greeting'
  const {getByText, getByTestId, container, asFragment} = render(
    <Fetch url={url} />,
  )

  // Act
  fireEvent.click(getByText(/load greeting/i))

  // Let's wait until our mocked `get` request promise resolves and
  // the component calls setState and re-renders.
  // getByTestId throws an error if it cannot find an element with the given ID
  // and waitForElement will wait until the callback doesn't throw an error
  const greetingTextNode = await waitForElement(() =>
    getByTestId('greeting-text'),
  )

  // Assert
  expect(axiosMock.get).toHaveBeenCalledTimes(1)
  expect(axiosMock.get).toHaveBeenCalledWith(url)
  expect(getByTestId('greeting-text')).toHaveTextContent('hello there')
  expect(getByTestId('ok-button')).toHaveAttribute('disabled')
  // snapshots work great with regular DOM nodes!
  expect(container.firstChild).toMatchSnapshot()
  // you can also get a `DocumentFragment`, which is useful if you want to compare nodes across renders
  expect(asFragment()).toMatchSnapshot()
})

In the example, you have three clear separations: prepare your component, perform the action, and wait to assert (Given, When, Then), and that's it. The test doesn't use anything that a normal user can't see, and the utility only returns you this:

const {getByText, getByTestId, container, asFragment} = render(
    <Fetch url={url} />,
  )

Within the rendered component, you can use functions like getByText and getByTestId to locate elements. The rendered HTML DOM is accessible via container, and the asFragment function helps you create snapshots. You can find the full API here.

NOTE: Today, I don't trust snapshots because they are hard to read, and most people, including myself, just use --update to fix problems. We're not machines capable of deciphering auto-generated code easily, so I don't see much value in what these snapshots produce. However, if you feel comfortable using them, feel free to do so.

As you can see, this library doesn't allow access to the implementation, unlike Enzyme, which does. I decided to migrate to this new library not because of the hooks; the main reason is that Enzyme allowed me to write incorrect tests. It's not Enzyme's fault; it was my mistake. I always say that libraries are just tools. The quality of the code depends entirely on the person who writes it, not on the language, framework, or library used.

Now, let's discuss another aspect of TDD: refactoring. Refactoring is a vital part of your job. You should refactor the code once you complete your development. Writing tests at the beginning helps you understand the requirements and ensures the code works as expected. Moreover, with reliable tests, you can be confident that your changes won't compromise the value your code provides. If your tests consistently pass (indicating 'green'), you're free to make improvements as needed. This is the beauty of good testing: it's not just about testing, but a safety net that protects my code from myself.

Why is refactoring related to TDD?

Refactoring is a vital phase in development. It’s during the refactoring stage that you make your code do more than just meet the requirements. Here, you can enhance the architecture, make it easier to extend, clarify the responsibilities within the code, and upgrade to new libraries or functionalities that improve your code, as we saw with hooks. However, you need to thoroughly understand some rules before you begin refactoring:

  • A refactor should not alter the interface of your software. If you need to change the behavior of your code, create tests, make them fail, then adjust your code to make the tests pass, and only then should you proceed to refactor.
  • Never refactor anything that you don't understand. Often, we find ourselves dealing with "black-box" code, which no one fully understands. You might want to improve this code, but how can you be sure that everything will continue to work after your changes if you don't fully understand what it’s supposed to do in the first place?
  • Only refactor on green. Ensure that your changes are correct by never attempting to improve code when your tests indicate an error. The key here is to code in baby steps; a small amount of code is easier to manage during refactoring. If you use Git, you can employ techniques like fixup and autosquash to easily manage your changes and then squash them when you’re satisfied.
  • If you don't trust your tests, don't refactor your code. This is crucial: if your tests don’t provide the confidence you need, create the necessary tests before you begin refactoring.
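The fixup and autosquash technique mentioned above can be sketched as a self-contained demo in a scratch repository (file names and commit messages here are made up for illustration):

```shell
set -e
# Throwaway repo just to demonstrate the flow.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email you@example.com && git config user.name you

echo 'v1' > input.js
git add input.js && git commit -qm "Add Input component"   # tests green here

# Baby step: record a refactor tweak as a fixup of the target commit.
echo 'v2' > input.js
git commit -qa --fixup=HEAD

# When you're satisfied, squash the fixups back into their target;
# --autosquash reorders the fixup! commits in the todo list for you.
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash --root

git log --oneline   # a single clean "Add Input component" commit remains
```

On a real branch you would rebase onto your base branch (for example `git rebase -i --autosquash main`) instead of `--root`.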

How to really make a good test?

Now we're going to try a real-world exercise. We will continue addressing our challenge of migrating to hooks and adapting tests initially made with Enzyme.

We have a rule of trusting our tests, but I don't trust my current ones. Therefore, we are going to create new tests focused on DOM interaction rather than React instances.

This post will describe creating tests for an old dummy project called Brastlewark. This app is a simple CRA application that fetches a list of gnomes and displays them on a dashboard. You can filter the gnomes, and when you click on one, you can view its details. The project uses Redux and Saga. Let's examine my first test, which validates that the dashboard displays no gnomes if no data is fetched.


describe("Dashboard", () => {
    let store;
    beforeEach(() => {
        const sagaMiddleware = createSagaMiddleware();
        store = createStore(rootReducer, applyMiddleware(sagaMiddleware));

        sagaMiddleware.run(function* fullSaga() {
            const rootWatcher = combineWatchers(rootSaga);
            const watchers = Object.keys(rootWatcher)
                .map(type => createActionWatcher(type, rootWatcher[type]));
            yield all(watchers);
        });
    });
    it("should render empty dashboard", () => {
        const { getByTestId } = render(
            <Provider store={store}>
                <Dashboard />
            </Provider>,
        );

        expect(getByTestId("empty-gnomes-container")).toBeDefined();
        expect(getByTestId("empty-gnomes-container").textContent).toEqual("No gnomes to display");
    });
});

NOTE: I added data attributes to my React components to simplify testing. To align with the new library I'm using, I'm employing data-testid to identify elements in the UI.

My test passed, but now you can see that it depends on more implementation details than before: it knows about Redux and Sagas, involves middleware and stores, and uses providers; it's not just rendering. This isn't entirely wrong, because my tests do depend on these elements, but they are external to the component I need to test; they are the minimal requirements needed to render, since my components are connected to Redux and dispatch actions. With React Testing Library, I just make sure to include the same basic elements that the real application has.

My test now doesn't verify what's inside the component. I don't test the current state or any internal props. In effect, I've inverted the direction of the dependencies in my tests.

Next, I should create a utility that provides these dependencies pre-loaded and ready for use in my tests to avoid duplication. I'm envisioning something like this:

const renderWithState = (Component, props = {}) => {
    const sagaMiddleware = createSagaMiddleware();
    const store = createStore(rootReducer, applyMiddleware(sagaMiddleware));
    sagaMiddleware.run(function* fullSaga() {
        const rootWatcher = combineWatchers(rootSaga);
        const watchers = Object.keys(rootWatcher)
            .map(type => createActionWatcher(type, rootWatcher[type]));
        yield all(watchers);
    });
    const renderedOptions = render(
        <Provider store={store}>
            <Component {...props} />
        </Provider>,
    );
    return renderedOptions;
}

describe("Dashboard", () => {
    afterEach(cleanup);

    it("should render empty dashboard", () => {
        const { getByTestId } = renderWithState(Dashboard);

        expect(getByTestId("empty-gnomes-container")).toBeDefined();
        expect(getByTestId("empty-gnomes-container").textContent).toEqual("No gnomes to display");
    });
});

Now, you can see that the responsibility of creating the store with Redux and its sagas lies with the renderWithState function, which I can extract to another file, such as a test-utility. My test now looks simpler: I provide the entire environment to the component I want to test and no longer need to worry about implementation details.

My app currently only implements Redux and Saga, but the renderWithState function can be expanded to start anything you need. You should include all your base startup logic there, such as Context Providers (e.g., i18n, Styled Components, custom HOCs, React Router), Portals, and everything else your application requires.

The real key here is defining the limitations or boundaries of your test. As you can see, my tests are not unit tests; they validate business requirements, aligning more closely with what BDD expects from our tests. However, you can use this approach with TDD. The important thing for us is that our tests become fast, easy to write, and easy to understand. Keeping that in mind is crucial because a test that is easy to understand is more valuable than hundreds of pages of documentation.

But well, we now need to test more aspects. How can we pass values to the component? The code dispatches a Redux action, our saga listens for it and then calls the endpoint to retrieve information. So what we need to do now is establish the 'yellow line': the point where our test stops.

For this test, the limit will be the endpoint call. We're going to get there and mock the fetch; the rest of the application should be tested under real conditions, calling actual actions and functioning like our real environment.

One thing we're going to do is create a new API that retrieves important information for our test, including the actions dispatched. I don't want my test to use or implement Redux directly. To avoid testing implementation details, I will create a Store Utils API, like this:

class StoreUtil {
    actions = [];

    clearActions = () => {
        this.actions = []
    }

    pushAction = (action) => {
        this.actions.push(action);
    }

    getActions = () => {
        return this.actions;
    };

    getAction = (action) => {
        return new Promise(resolve => {
            // Poll the registry instead of spinning in a synchronous `while`
            // loop, which would block the event loop and never see new actions.
            const interval = setInterval(() => {
                const actionFound = this.actions.find(({ type }) => type === action);
                if (actionFound) {
                    clearInterval(interval);
                    resolve(actionFound);
                }
            }, 10);
        });
    }
}

This class is very simple: it keeps a registry of actions, and with it we can:

  • Get all actions called.
  • Get one specific action.
  • Push one action to the registry.
  • Delete all actions.

The getAction method returns a promise because the action dispatch process is asynchronous. When we render our app, all the Redux magic operates under the hood, and the components are only updated when the reducers alter their previous state. Without the promise-based waiting, we would miss any action that arrives later than the first render.

NOTE: The Promise will wait indefinitely for the element to be displayed. If the component is never rendered, the Jest timeout will stop the test and result in a failure. You can improve this code to make it work better, but it suits the purposes of this post perfectly, so I will leave it as is. Feel free to adapt it to meet your needs.
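To see the utility's behavior in isolation, here is a stripped-down version of the class with a small demo (I've used a setInterval poll here, which is one way to implement the waiting; adapt it to your own version):

```javascript
class StoreUtil {
  actions = [];
  pushAction = (action) => { this.actions.push(action); };
  getAction = (type) =>
    new Promise((resolve) => {
      // Poll the registry until an action of the requested type arrives;
      // the test runner's timeout is the safety net if it never does.
      const poll = setInterval(() => {
        const found = this.actions.find((action) => action.type === type);
        if (found) {
          clearInterval(poll);
          resolve(found);
        }
      }, 10);
    });
}

const storeUtil = new StoreUtil();

// Simulate a saga dispatching an action some time after the first render.
setTimeout(() => {
  storeUtil.pushAction({ type: 'FETCH_GNOMES', payload: { params: {} } });
}, 50);

storeUtil.getAction('FETCH_GNOMES').then((action) => {
  // The late action is still caught.
  console.log(action.type);
});
```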

I have also created a new middleware that will listen to each action called and push each one to the StoreUtil. Now, our renderWithState includes that middleware and returns the StoreUtil along with the rest of the rendered options.

const loggerMiddleware = (storeUtil) => store => next => action => {
    storeUtil.pushAction(action);
    next(action);
};

export const renderWithState = (Component, props = {}) => {
    const storeUtil = new StoreUtil();
    storeUtil.clearActions();
    const sagaMiddleware = createSagaMiddleware();
    const store = createStore(rootReducer, applyMiddleware(loggerMiddleware(storeUtil), sagaMiddleware));
    sagaMiddleware.run(function* fullSaga() {
        const rootWatcher = combineWatchers(rootSaga);
        const watchers = Object.keys(rootWatcher)
            .map(type => createActionWatcher(type, rootWatcher[type]));
        yield all(watchers);
    });
    const renderedOptions = render(
        <Provider store={store}>
            <Component {...props} />
        </Provider>,
    );
    return { ...renderedOptions, storeUtil };
}

NOTE: If you feel lost with the middleware, Redux, and saga terms, check out these posts, which explain both the basics and the more complex parts very well.
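If it helps, the middleware shape above can be modeled without Redux at all; this toy wiring (all names invented) shows how the logger sits in front of whatever handles the action next:

```javascript
const actions = [];
const storeUtil = { pushAction: (action) => actions.push(action) };

// Same signature as above: storeUtil => store => next => action => ...
const loggerMiddleware = (util) => (store) => (next) => (action) => {
  util.pushAction(action);  // record it for the tests
  next(action);             // then pass it along the chain
};

// Hand-wire the chain: `next` stands in for the rest of the middleware/reducers.
let lastHandled = null;
const next = (action) => { lastHandled = action; };
const dispatch = loggerMiddleware(storeUtil)({})(next);

dispatch({ type: 'FETCH_GNOMES', payload: { params: {} } });
console.log(actions.length, lastHandled.type); // 1 FETCH_GNOMES
```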

And now in our test, we can verify that one action was called:

it("should dispatch the fetchGnomes Action", async () => {
        const { storeUtil } = renderWithState(Dashboard);

        const fetchGnomesAction = await storeUtil.getAction("FETCH_GNOMES");

        expect(fetchGnomesAction).toEqual({ "payload": { "params": {} }, "type": "FETCH_GNOMES" });
    });

The last assertion of our test compares the Redux action object, which looks like an implementation detail to me. What we can do is replace this assertion with a check that the payload carries the correct information, like this:

  it("should dispatch the fetchGnomes Action", async () => {
        const { storeUtil } = renderWithState(Dashboard);

        const fetchGnomesAction = await storeUtil.getAction("FETCH_GNOMES");

        expect(fetchGnomesAction.payload).toEqual({ "params": {} });
    });

Right now, our test knows less about internal actions and models and simply verifies the parameters used to call the endpoint. This means that our test is verifying the code interfaces, which adds value by making the test easier to extend and understand.

The next part of our test verifies the boundaries and our interfaces. What I need now is to retrieve information, so I need to mock the fetch API call to get what I want. I'm using the JavaScript Fetch native API, and obviously, I don't want my test to be concerned with that. I always want to obscure the specifics of what I'm using from my test because I could switch to Axios, Request, or any other library. My test should handle the mocks without knowing which dependency I use. To achieve this, I have created a wrapper called fetchApi that will make the call to the resource. This function is the only one that knows what I'm using to make my REST request:

export const fetchApi = (url, {
    method = 'GET',
    params,
    cache= 'no-cache',
    headers = {
        'content-type': 'application/json'
    },
    data
}) => {
    let paramText = queryString.stringify(params);
    paramText = paramText ? `?${paramText}` : '';

    return fetch(`${url}${paramText}`, {
        body: JSON.stringify(data),
        cache,
        headers,
        method, // *GET, POST, PUT, DELETE, etc.
    }).then(response => {
        return response.json();
    }).catch(error => { 
        return { error }; 
    });
};
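To illustrate what the wrapper does, here is a trimmed-down sketch with fetch stubbed out and URLSearchParams standing in for the query-string package the post imports (the stub and its recorded calls are purely illustrative):

```javascript
// Stand-in for queryString.stringify from the query-string package.
const stringifyParams = (params = {}) => new URLSearchParams(params).toString();

// Stubbed fetch that records its arguments so we can inspect the built URL.
const calls = [];
const fetch = (url, options) => {
  calls.push({ url, options });
  return Promise.resolve({ json: () => Promise.resolve({ ok: true }) });
};

const fetchApi = (url, { method = 'GET', params, data } = {}) => {
  let paramText = stringifyParams(params);
  paramText = paramText ? `?${paramText}` : '';
  return fetch(`${url}${paramText}`, { body: JSON.stringify(data), method })
    .then((response) => response.json())
    .catch((error) => ({ error }));
};

fetchApi('/gnomes', { params: { name: 'Tobus' } }).then(() => {
  console.log(calls[0].url); // /gnomes?name=Tobus
});
```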

I'm going to create a new fetchApi test util to be able to mock this and set mocked answers in my tests.

export class FetchUtilsMock {
    mockedFetch;
    constructor(fetchApi) {
        this.mockedFetch = fetchApi.mockReset();
    }

    setResponse = (payload) => {
        this.mockedFetch.mockReturnValue(payload)
    }
}

It's a simple class that stores the mock and lets us set the responses we want. The constructor resets the mock to avoid problems leaking between tests, and you can call setResponse every time you need; mockReturnValue is a function that Jest mocks provide.
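To make the pattern concrete outside Jest, here is a tiny hand-rolled mock function implementing just the two methods FetchUtilsMock relies on (a sketch of the idea, not Jest's real implementation):

```javascript
// Minimal stand-in for a Jest mock: records calls, returns a canned value.
const createMockFn = () => {
  let returnValue;
  const fn = (...args) => {
    fn.calls.push(args);
    return returnValue;
  };
  fn.calls = [];
  fn.mockReset = () => { fn.calls = []; returnValue = undefined; return fn; };
  fn.mockReturnValue = (value) => { returnValue = value; return fn; };
  return fn;
};

// The utility class from the post works against it unchanged.
class FetchUtilsMock {
  constructor(fetchApi) { this.mockedFetch = fetchApi.mockReset(); }
  setResponse = (payload) => { this.mockedFetch.mockReturnValue(payload); };
}

const fakeFetchApi = createMockFn();
const util = new FetchUtilsMock(fakeFetchApi);
util.setResponse({ Brastlewark: [] });
console.log(fakeFetchApi('/gnomes')); // { Brastlewark: [] }
```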

import { fetchApi } from '../../utils/api-utils';

jest.mock('../../utils/api-utils');

const emptyResponse = {
    "Brastlewark": []
}

describe("Dashboard", () => {
    let fetchUtil;

    afterEach(cleanup);

    beforeEach(() => {
        fetchUtil = new FetchUtilsMock(fetchApi);
    })

    it("should render empty dashboard", () => {
        fetchUtil.setResponse(emptyResponse);
        const { getByTestId } = renderWithState(Dashboard);

        expect(getByTestId("empty-gnomes-container")).toBeDefined();
        expect(getByTestId("empty-gnomes-container").textContent).toEqual("No gnomes to display");
    });
});

This is how the test looks now. I'm mocking my api-utils with jest.mock('../../utils/api-utils');. In the beforeEach, I instantiate the mock utility, and then each test defines its own response. Right now I'm mocking an empty response, but we can mock multiple scenarios; our tests now let us exercise the different possible (and real-life) responses our application can receive.

You can mock any other integration in your application like this: a REST request, a database, Redis, a queue, or whatever you need. The important thing here is to always wrap your integration boundaries to make them easy to test and develop. With this strategy, you can change your dependencies without refactoring your entire application.

The next logical step is to mock a happy-path scenario. I will set the response with valid data and then validate that the gnomes are displayed. I will use a utility from react-testing-library called waitForElement (there are other async/await DOM-related helpers too); it waits for the element to be displayed and returns the component that has data-testid="gnome-box-container":

const correctAnswer = {Brastlewark: [...]} // mock data with valid information

it("should dispatch the gnomes", async () => {
        fetchUtil.setResponse(correctAnswer);
        const { getByTestId } = renderWithState(Dashboard);

        const boxContainer = await waitForElement(() => getByTestId("gnome-box-container"));

        expect(boxContainer.children.length).toEqual(correctAnswer.Brastlewark.length);
    });

I will move the correctAnswer and emptyResponse constants to a file where I can isolate my mocked data. That way, if the model changes, I just need to update one file, and no test in my application has the responsibility of creating the data.
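Such a fixtures module could look like this (the file name and the gnome fields here are invented for illustration):

```javascript
// gnomes.fixtures.js: a single place that knows the API's response shape.
const emptyResponse = { Brastlewark: [] };

const correctAnswer = {
  Brastlewark: [
    { id: 0, name: 'Tobus Quickwhistle' },
    { id: 1, name: 'Fizkin Voidbuster' },
  ],
};

// In the real file these would be exported and imported by every test
// that needs them.
console.log(correctAnswer.Brastlewark.length); // 2
```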

Always test before refactoring

As you can see, I'm just creating tests for my existing code: I'm writing tests to verify that my code works as I expect, and then I will move to hooks. To my new tests, the details of which library I'm using are not relevant; they only care about whether something is displayed on the DOM. Next, we're going to test interactions, clicking and submitting data. But first I will check my coverage; I use the same reporter that CRA 3.0 provides for Jest. Let's check it:


NOTE: To be able to use CRA coverage report I create a script on my package.json like this: "test:ci": "npm test -- --coverage --watchAll=false",

As you can see, my coverage is very low, but I'm sure that my tests are good, and at least the things I test work as I expect. Coverage is an indicator of different values; the branches column tells us that we have a lot of switches, if statements, for loops, etc., and that we are not testing all the possible scenarios. Getting 100% coverage is, in most cases, not worth it. A good exercise for us as developers is to read these reports and verify whether you really need those conditions to be tested; in some cases, you will find that the code is protecting you from a condition that can't actually happen. Don't try to reach 100% just because it's the rule; try to cover the most realistic scenarios you can, understand the cases, and then refactor or test them if you feel that you must.

Let's go with interactions

A UI is more than just display; we have interactions. But how can we test them? One common approach for me in the past was to use Enzyme to mount components and call methods on their instances, which looks something like this:

const wrapper = mount(<Stateful />);
const instance = wrapper.instance();

instance.clickButton(); // Internal method

expect(...).toEqual(...);


This gives me the coverage and also lets me test the button click. What's wrong with this approach? Well, I'm calling the clickButton method directly, so my test never really clicks anything. I was wrong to marry my tests to internal methods, because now I want to migrate to a functional component and this test doesn't support that; my test is blocking me from improving my code.

Another thing that was very common in my Enzyme tests is this:

const wrapper = mount(<Foo />);

expect(wrapper.find(Clicks).children().length).to.equal(0);
wrapper.find('a').simulate('click');
expect(wrapper.find(Clicks).children().length).to.equal(1);

This approach is closer to being a good one. I'm looking for a component inside Foo and then verifying its children in the DOM. I simulate a real click on the wrapper, and I don't care about internal methods. It's a good step toward better testing, but one thing is wrong: I'm assuming that Clicks will be rendered inside Foo. If I change the component, I will have to update all the tests that use it. Also, I'm assuming that the a element exists; if in the future it becomes a button or any other HTML element, my tests will break. I shouldn't be concerned about which HTML element I'm clicking on. Even in this better test, I'm depending on internal implementation to make my tests pass.

To improve these tests you can do something like this:

const wrapper = mount(<Foo />);

expect(wrapper.find('[data-testid="clicks-container"]').children().length).to.equal(0);
wrapper.find('[data-testid="clicks-action"]').simulate('click');
expect(wrapper.find('[data-testid="clicks-container"]').children().length).to.equal(1);

Now I've based my test on data-testid. In this abstraction, clicks-container represents where the information is located, and clicks-action represents a clickable element: the thing that will handle how many clicks I've made. I don't care about the type, just that something contains the information and that that something is clickable; that's what matters in my tests.

You can see how I improved my test using Enzyme, making it clear that you don't have to migrate to a new library to write better tests. The real importance here is how you write your tests: how clear they are, how isolated the runs are, not the library used.

With React Testing Library, you have fireEvent, which simulates events on the DOM: a very powerful utility. You can check its documentation here. My test is going to find the input, change its value to the first gnome's name, and finally verify that only the correct gnome is displayed:

 it('should filter the gnomes', async () => {
    fetchUtil.setResponse(correctAnswer);
    const { storeUtil, getByTestId } = renderWithState(Dashboard);
    const gnomeName = correctAnswer.Brastlewark[0].name;
    const gnomeId = correctAnswer.Brastlewark[0].id;
    const filter = await waitForElement(() =>
      getByTestId('gnomes-filter-input')
    );

    fireEvent.change(filter, { target: { value: gnomeName } });

    await storeUtil.getAction('GNOMES_FILTERED');
    const boxContainer = await waitForElement(() =>
      getByTestId('gnome-box-container')
    );
    expect(boxContainer.children.length).toEqual(1);
    const gnomeDetails = await waitForElement(() =>
      getByTestId(`gnome-box-item-${gnomeId}`)
    );
    expect(gnomeDetails.textContent).toEqual(gnomeName);
  });
  • Given I receive the correct information, and I have the input to filter the gnomes.
  • When I search for my gnome
  • Then I see only that gnome

As you can see, my test follows the Given-When-Then pattern, and I verify that the business requirements are delivered by my code. Now I can start migrating my code to hooks, and the tests should not break.

Mutants on the code and the corner cases

Let's assume we're in a normal workflow, and you need to code a requirement. The requirement comprises three acceptance criteria that you must fulfill. After testing and coding, the original three requirements are already developed. However, it's common to find that there are more aspects to consider beyond these three requirements. You encounter peculiar cases that must be validated to prevent future bugs.

As a developer, one crucial task is to ensure that your code supports these unusual corner cases. If you're uncertain about the expected behavior in these new scenarios, you should consult with the recipient of the development (such as the Product Owner, Proxy PO, Stakeholder, client, etc.). They, as the owner, should guide you on the path to follow. However, regardless of guidance, if you believe that the code requires a test to validate a corner case, you must create the test and integrate it into the code. Neglecting to do so will lead to more difficulties in the future when you or others struggle to understand the rationale behind these corner cases.

TDD (Test-Driven Development) helps you develop with control, while BDD (Behavior-Driven Development) aids in understanding the business. However, there are times when you simply need to create tests to ensure that the code functions even when things don't go as expected. Always remember Murphy's Law: 'Things will go wrong in any given situation, if you give them a chance.'

Mutants are a different topic altogether. Mutant generation is a testing strategy where you intentionally modify your code and check whether the tests still pass. For instance, you might remove a line, change a > to a >=, or put a ! in front of a condition. If your tests still indicate that everything is fine, it means your code (or your test suite) might be flawed.

Testing mutants in your code is a healthy process to assess how robust your test suite is. Several libraries can assist you with this, with Stryker JS being one of the most popular options. It's essential to consider all these factors when testing your application, as each type of test provides different insights and contributes to your development as a better developer.
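The idea can be illustrated without any tooling. Take a tiny function, apply a typical mutation by hand (here, > flipped to >=), and see whether the suite notices (a toy sketch; Stryker automates and scales this, and all names are invented):

```javascript
// Original implementation.
const isAdult = (age) => age > 18;

// Hand-made mutant: the boundary operator flipped.
const isAdultMutant = (age) => age >= 18;

// A weak suite that never probes the boundary lets the mutant survive.
const weakSuite = (fn) => fn(30) === true && fn(10) === false;

// Adding the boundary case (age === 18) kills the mutant.
const strongSuite = (fn) =>
  fn(30) === true && fn(10) === false && fn(18) === false;

console.log(weakSuite(isAdult), weakSuite(isAdultMutant));     // true true -> mutant survives
console.log(strongSuite(isAdult), strongSuite(isAdultMutant)); // true false -> mutant killed
```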

Conclusions

Today, we tested a React application with React Testing Library, simulating a real-life environment. We emphasized the importance of good tests in creating maintainable, extensible, and comprehensible code. It's crucial to keep implementation details outside the tests and learn how to mock our boundaries to ensure our app behaves like a normal application. By continuously improving our tests, we establish a safety net that allows us to implement, experiment, and enjoy building amazing applications.

It's worth noting that terms like scenarios, responsibilities, no implementation details in tests, mocking, and utils for creating mocks are essential vocabulary that every member of the development team should know and understand. When a team doesn't grasp the significance of these concepts, it indicates a lack of a Culture of Testing; changing that is the first step toward a better team, a better product, and better code.

#InTheTestsWeTrust

Check my previous posts
