Improve React App Performance Through Memoization

Chidume Nnamdi 🔥💻🎵🎮
Published in Bits and Pieces
10 min read · Apr 22, 2019

I have spent a lot of time learning how to improve the workflow around building JavaScript applications. I’ve shared my experience on how different techniques like Memoization can vastly boost performance, how to adopt great engineering practices and design patterns, and how to leverage tools like Bit to build modular and maintainable apps with components.

In this post, I’ll take another step in my performance series, and we will look into an optimization trick in React using memoization to boost performance!

Tip: Use Bit to organize, share, and reuse your JS/React components between your team and/or across your projects. Also, check out Bit’s UI, which makes it really easy to discover shared components.

Memoization

For starters who don’t know what memoization is: memoization is the process of caching the result of a computation against its input, so that when the same input is requested again, the output is returned from the cache without any recomputation.

Memoization is an optimization technique used primarily to speed up programs by storing the results of expensive function calls and returning the cached results when the same inputs occur again.

Mostly, memoization is applied in expensive computations like:

  • Recursive functions (e.g. Fibonacci series, factorial, …)
  • Game Engine object computations
  • Object modeling
  • Machine Learning algorithms
  • Game Engine physics
  • GUI (Graphical User Interface) rendering
  • etc
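To make the idea concrete before the demo, here is a minimal sketch of memoizing the classic recursive Fibonacci computation (the function names are just for illustration):

```javascript
// Naive recursive Fibonacci: exponential time, because the same
// subproblems are recomputed over and over.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Memoized version: each fib(n) is computed once per top-level call and
// read from the cache afterwards, making the whole computation linear.
function memoFib(n, cache = {}) {
  if (n in cache) return cache[n];
  return (cache[n] = n < 2 ? n : memoFib(n - 1, cache) + memoFib(n - 2, cache));
}

console.log(memoFib(40)); // 102334155, returned almost instantly
```

The naive version would take noticeable time for `fib(40)`; the memoized version finishes immediately because each subresult is computed exactly once.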

Let’s demo how memoization works. Let’s say we have an expensive function like this:

function longOp(input) {
  // busy-wait for 3 seconds to simulate an expensive computation
  var now = Date.now()
  var end = now + 3000
  while (now < end) {
    now = Date.now()
  }
  return input * 90
}

It takes 3 seconds for the function to execute and return a result. Let’s pretend that 3 seconds is a very long time. Now, if we run the longOp function, say, 10 times in our app, it will take our app a whole 30 seconds to run!

longOp(9) // 3s
longOp(5) // 3s
longOp(9) // 3s
longOp(7) // 3s
longOp(9) // 3s
longOp(9) // 3s
longOp(9) // 3s
longOp(5) // 3s
longOp(7) // 3s
longOp(5) // 3s
// Total: 30s

See that we called the longOp function five times with input 9, twice with 7, and three times with 5. Since the function is expensive, it wouldn’t be wise to rerun it with 9 after the initial run with the same value. We should cache the first result and return the cached value when the input occurs again.

So now, the initial run with 9 will take 3s, but subsequent runs with 9 will return almost instantly.

Now, let’s re-write our longOp function to use caching.

function longOp(input) {
  // lazily create the cache on the function object itself
  if (!longOp.cache) {
    longOp.cache = {}
  }
  // use the `in` check rather than a truthiness check, so falsy
  // cached results (e.g. longOp(0) === 0) are not recomputed
  if (!(input in longOp.cache)) {
    var now = Date.now()
    var end = now + 3000
    while (now < end) {
      now = Date.now()
    }
    longOp.cache[input] = input * 90
  }
  return longOp.cache[input]
}

First, the function checks whether the cache object has been created; if not, it creates one. Next, it checks whether the input is already in the cache object; if not, it goes ahead to run the expensive operation and then stores the result in the cache object with the input as the key. If there is a hit in the cache, it simply returns the value stored under the input key.

If we call the function with input 9, the expensive computation will be run and the result will be stored in the cache object, so next time we pass in 9, it will skip the expensive computation and return from the cache object.

Let’s test it out:

// ...
// warm up the function
for (var i = 0; i < 10; i++) {
  longOp(0)
}
console.time('')
console.log(longOp(9))
console.timeEnd('')
console.time('')
console.log(longOp(9))
console.timeEnd('')

First, we warmed up the function. We are using Node.js, which is built on V8, Google’s JavaScript engine. One of the optimization tricks V8 employs to speed up our JS code execution is speculative optimization. Speculative optimization means the engine guesses or predicts the future types of the parameters passed to a function or method, since JavaScript is dynamically typed, i.e. data types are determined at execution time. (The JavaScript specification only defines the language’s behavior; engines are free to choose how they compile and interpret it.)

If we write this code:

function add(a, b) {
  return a + b
}

Here this function adds the inputs a and b. But the add operation can be of many types; we can’t just say it’s integer or number addition:

3 + 5 = 8
2 + 5 = 7
6 + 90 = 96

It can also be string concatenation:

"Nnamdi" + " " + "Chidume" = Nnamdi Chidume

Because of all this, generic code generated by the JS engine must be prepared to handle every possible type of addition in order to get the result, and with that comes a huge performance bottleneck.

V8 instruments our code with a feedback vector that collects information about the types of data passed to a function. With the feedback, it can generate code highly optimized for that type of data.

In our add function, if we call it with numbers types like this:

add(9,56)
add(45,56)
add(8,3)

V8 sees number types occur three times, so it can speculate or deduce that the next inputs will also be numbers, and it will generate machine code that deals with numbers only.

To understand it (Speculative Optimization in v8), watch out for my article on the topic.

So that’s the reason we warmed up our function by calling it 10 times: this tells V8 to generate code specialized for number arguments, without bothering about other data types, which speeds up the function’s execution. After the warm-up, we run our benchmark code.

console.time('')
console.log(longOp(9))
console.timeEnd('')
console.time('')
console.log(longOp(9))
console.timeEnd('')

The first call will not hit the cache, so it will take the function about 3 seconds to complete. The next call will hit the cache, because the previous call cached the result for 9.

$ node longop
810
: 3106.225ms
810
: 51.036ms

See!! From over 3 seconds down to about 51 ms. That’s a huge performance boost.

The initial call runs the expensive function, but subsequent runs with the same input only hit the cache, skipping the expensive calculation.

With this, I think we have fully understood what memoization entails. Now, let’s look at how we can apply it to a React app to improve its performance.

In React

Tip: When working with UI components, use Bit to easily reuse and sync them between your applications. You can write a component once, and share it across all your projects and apps while syncing changes between them! Learn more

Let’s look at this component:

class AppComponent extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      input: 0
    }
  }
  longOp = input => {
    // simulating an expensive operation: busy-wait for 1 second
    console.log('starting')
    var now = Date.now()
    var end = now + 1000
    while (now < end) {
      now = Date.now()
    }
    return input * 90
  }
  handleInput = evt => {
    console.log('handling input', evt)
    this.setState({ input: evt.target.value })
  }
  render() {
    return (
      <div>
        <input onChange={this.handleInput} />
        <h2> { this.longOp(this.state.input) } </h2>
      </div>
    )
  }
}

Whenever we enter a number in the input element, the input state is set using the setState function, which triggers a re-render of the component. The render method is called, and there longOp is run with the input property of the state passed to it. longOp busy-waits for 1 second, which causes our app to literally freeze and remain unresponsive for a whole second on every keystroke! This is a huge performance bottleneck. Note that this is not a case of our app doing unnecessary re-renders, which is officially termed wasted renders. A component can be made pure using React.PureComponent:

class AppComponent extends React.PureComponent {
  constructor(props) {
    super(props)
    this.state = {
      input: 0
    }
  }
  longOp = input => {
    // simulating expensive operation
    console.log('starting')
    var now = Date.now()
    var end = now + 1000
    while (now < end) {
      now = Date.now()
    }
    return input * 90
  }
  handleInput = evt => {
    console.log('handling input', evt)
    this.setState({ input: evt.target.value })
  }
  render() {
    return (
      <div>
        <input onChange={this.handleInput} />
        <h2> { this.longOp(this.state.input) } </h2>
      </div>
    )
  }
}

But for each necessary run/render that React.PureComponent triggers, longOp still takes 1 second.

If we enter 4, longOp runs for 1 second. Enter 5, longOp runs for 1 second. Enter 5 again, longOp does not run because the previous value was 5. Enter 4, longOp runs for 1 second (it shouldn’t, because the function has seen that input earlier). Enter 5, longOp runs for 1 second (again, it shouldn’t).

Purity doesn’t help much here, because every necessary render still takes a huge amount of time. The longOp operation should not re-run for an input it has already seen; a repeated 5, for example, shouldn’t trigger longOp after its original run.

If you run both components in your browser, you will experience a serious slowdown of your browser!! Beware when running them.

We can apply memoization to the longOp method call in the render method. Now, we will create a generic memoize function that can be used to memoize any method or function passed to it.

function memoize(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments)
    fn.cache = fn.cache || {}
    // the args array is coerced to a string to form the cache key;
    // the `in` check makes sure falsy results are also cached
    if (!(args in fn.cache)) {
      fn.cache[args] = fn.apply(this, args)
    }
    return fn.cache[args]
  }
}
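A quick standalone usage sketch (the `slowSquare` function and the call counter are just for illustration). Note that the arguments array is coerced to a string when used as an object key, so this helper is best suited to primitive arguments:

```javascript
// The memoize helper from above, repeated here so the snippet runs standalone.
function memoize(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    fn.cache = fn.cache || {};
    // the args array becomes a string key ([9] -> "9"), so this works
    // best with primitive arguments
    if (!(args in fn.cache)) {
      fn.cache[args] = fn.apply(this, args);
    }
    return fn.cache[args];
  };
}

var calls = 0;
var slowSquare = memoize(function (n) {
  calls++; // count real executions to show the cache working
  return n * n;
});

slowSquare(9);      // computed: calls === 1
slowSquare(9);      // served from cache: calls is still 1
slowSquare(5);      // new input, computed: calls === 2
console.log(calls); // 2
```

The underlying function only ran twice for three calls; every repeated input was answered from the cache.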

In our React components, we will call it, passing in the method we want to memoize:

class AppComponent extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      input: 0
    }
  }
  longOp = memoize((input) => {
    // simulating an expensive operation: busy-wait for 1 second
    console.log('starting')
    var now = Date.now()
    var end = now + 1000
    while (now < end) {
      now = Date.now()
    }
    return input * 90
  })
  handleInput = evt => {
    console.log('handling input', evt)
    this.setState({ input: evt.target.value })
  }
  render() {
    return (
      <div>
        <input onChange={this.handleInput} />
        <h2> { this.longOp(this.state.input) } </h2>
      </div>
    )
  }
}

We memoized the longOp method by passing it as a parameter to the memoize function. When a new input is passed to the function, the expensive operation is run and the result is stored in the cache; on subsequent calls with the same value, the expensive operation is skipped and the result is returned from the cache.

Now, our component will be slow the first time an input is entered, but on entering the same input again, the DOM will update almost instantly.

Re-writing the same component as a functional component, it will look like this:

import React, { useState } from 'react'

function longOp(input) {
  // simulating an expensive operation: busy-wait for 1 second
  console.log('starting')
  var now = Date.now()
  var end = now + 1000
  while (now < end) {
    now = Date.now()
  }
  return input * 90
}

function handleInput(evt) {
  return evt.target.value
}

function App() {
  let [state, setState] = useState(0)
  return (
    <div>
      <input onChange={(evt) => setState(handleInput(evt))} />
      <h2> { longOp(state) } </h2>
    </div>
  )
}

export default App

There will be great slowdowns in our app, just as happened in our class component. We can memoize the longOp function with the memoize function:

const longOp = memoize((input) => {
  // simulating expensive operation
  console.log('starting')
  var now = Date.now()
  var end = now + 1000
  while (now < end) {
    now = Date.now()
  }
  return input * 90
})
// ...
// ...

As usual, the initial input will be slow but will be faster on repeated inputs.

Using useMemo

React has a hook called useMemo; this hook is for memoizing an expensive function call in functional components.

const memoizedResult = useMemo(() => longOp(input), [input])

It takes a callback function and an array of dependencies as parameters. The callback calls the expensive function whenever the functional component renders, and useMemo returns the cached result (a value, not a function). When the input dependency changes, React re-computes memoizedResult to get the new value; with that, React smartly avoids executing the expensive function on every render when the input is the same as the previously seen one.

Adapting useMemo to our use case:

import React, { useState, useMemo } from 'react'

function longOp(input) {
  // simulating an expensive operation: busy-wait for 1 second
  console.log('starting')
  var now = Date.now()
  var end = now + 1000
  while (now < end) {
    now = Date.now()
  }
  return input * 90
}

function handleInput(evt) {
  return evt.target.value
}

function App() {
  let [state, setState] = useState(0)
  // useMemo returns the computed value, re-running longOp only when state changes
  let memoizedResult = useMemo(() => longOp(state), [state])
  return (
    <div>
      <input onChange={(evt) => setState(handleInput(evt))} />
      <h2> { memoizedResult } </h2>
    </div>
  )
}

export default App

BUT, there is still a problem with this approach, the same one we saw with React.PureComponent previously. The memoization that React provides is different from our own implementation.

What React does is compare the current dependency value with the previous one; only when they are the same does it skip re-computing. In other words, useMemo caches only the most recent result, not every result it has ever computed. So skipping the expensive function won’t hold if an input we have already seen reappears after one or two other values in between.

If we enter 8, longOp runs (1 second). If we enter 8 again, longOp does not run. Yay!! But if we enter 5 and then 8, longOp runs again, which it shouldn’t, because the function has already seen that input; it should have bypassed the expensive 1-second run and returned the earlier result from a cache.

You see, this is where the memoization functions and implementations React provides fall short.
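The difference can be sketched in plain JavaScript. Here `lastValueCache` is a simplified model of useMemo’s keep-only-the-previous-result behaviour (not React’s actual implementation), while `fullCache` keeps every result, like our memoize helper:

```javascript
// Full cache: every input ever seen is remembered (like our memoize helper).
function fullCache(fn) {
  var cache = {};
  return function (x) {
    if (x in cache) return cache[x];
    return (cache[x] = fn(x));
  };
}

// Last-value-only cache: a simplified model of useMemo's behaviour --
// only the most recent input/result pair is kept.
function lastValueCache(fn) {
  var lastInput, lastResult, hasRun = false;
  return function (x) {
    if (hasRun && x === lastInput) return lastResult;
    hasRun = true;
    lastInput = x;
    return (lastResult = fn(x));
  };
}

var fullCalls = 0, lastCalls = 0;
var full = fullCache(function (x) { fullCalls++; return x * 90; });
var last = lastValueCache(function (x) { lastCalls++; return x * 90; });

[8, 8, 5, 8].forEach(function (x) { full(x); last(x); });
console.log(fullCalls); // 2  (8 and 5 each computed once)
console.log(lastCalls); // 3  (the second 8, coming after 5, is recomputed)
```

For the input sequence 8, 8, 5, 8, the full cache computes only twice, while the last-value-only cache computes three times, which is exactly the recomputation we observed with useMemo above.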

But in the end, everything is up to the developer and their design and implementation.

Conclusion

In this post, we looked at memoization and speculative optimization. Later in the post, we saw how we can leverage the power of memoization to speed up the performance of our React apps.

Memoization may seem great, but it comes with a cost: it trades memory space for speed. The cost will go unnoticed for functions called with a small set of inputs, but for functions called with many distinct inputs, the cache can grow very large.
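One way to bound that memory cost is to cap the cache size and evict the oldest entry when the cap is reached. This is a minimal sketch, not a library API; the `maxSize` parameter and the simple FIFO eviction policy are illustrative choices (a real app might prefer LRU):

```javascript
// Memoize with a bounded cache: once maxSize entries are stored, the
// oldest entry is evicted before a new one is added (simple FIFO policy).
function memoizeBounded(fn, maxSize) {
  var cache = new Map(); // Map remembers insertion order
  return function (x) {
    if (cache.has(x)) return cache.get(x);
    if (cache.size >= maxSize) {
      // evict the oldest key
      cache.delete(cache.keys().next().value);
    }
    var result = fn(x);
    cache.set(x, result);
    return result;
  };
}

var computed = 0;
var f = memoizeBounded(function (x) { computed++; return x * 90; }, 2);
f(1); f(2); // both computed and cached
f(1);       // cache hit
f(3);       // cache full: evicts 1 (the oldest), caches 3
f(1);       // 1 was evicted, so it is recomputed
console.log(computed); // 4
```

Memory use now stays constant no matter how many distinct inputs the function sees, at the price of occasionally recomputing an evicted value.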

If you have any question regarding this or anything I should add, correct or remove, feel free to comment below and ask/share anything! Thanks 👍


JS | Blockchain dev | Author of “Understanding JavaScript” and “Array Methods in JavaScript” - https://app.gumroad.com/chidumennamdi 📕