time-machine versus freezegun, a benchmark

It’s a bench, mark.

I wrote my library time-machine last year as a way to speed up tests that need to accurately mock the current time.

The incumbent time mocking library, freezegun, mocks time by replacing every imported reference to the date and time functions. To do this, it has to scan all attributes of all imported modules. Its runtime is thus proportional to the number of module-level attributes in your project and its dependencies.
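To get a feel for the size of that scan, here is a rough sketch (not freezegun's actual code) of the kind of work it implies: walking every imported module and looking at each module-level attribute.

```python
import sys

# Rough sketch of the work a scanning approach implies on every mock/unmock:
# visit every imported module and inspect each module-level attribute to see
# whether it is a date/time reference that needs replacing.
def count_module_attributes():
    total = 0
    for module in list(sys.modules.values()):
        # Some sys.modules entries can be None; getattr handles that.
        total += len(getattr(module, "__dict__", {}))
    return total

print(count_module_attributes())  # grows with your project and dependencies
```

The count, and hence the scan time, only ever grows as you add dependencies.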

time-machine takes a different approach. It mocks the date and time functions at the C layer, changing their pointers once. Its runtime is therefore constant, no matter how large your project grows.
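A small pure-Python illustration of why patching underneath matters (this is an analogy, not time-machine's actual mechanism): patching a module attribute once catches calls made through the module, but misses any direct bindings, which is exactly what forces freezegun to scan.

```python
import time

# A direct binding, as created by "from time import time" in some module:
bound_time = time.time

real_time = time.time
time.time = lambda: 0.0  # swap the module attribute once

print(time.time())   # 0.0 — code calling time.time() sees the mock
print(bound_time())  # the real clock — the direct binding bypasses the patch

time.time = real_time  # restore
```

By patching at the C layer, beneath both kinds of reference, time-machine sidesteps this problem with a single swap.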

This post covers a quick benchmark to demonstrate this difference. Even in a minimal setup, time-machine is about 400 times faster than freezegun. In a small Django project, the gap grows to around 900 times.

I set up the benchmark in a fresh ipython session by creating test functions for each library. These mock and unmock time with the same target datetime:

In [1]: import datetime as dt

In [2]: import time_machine

In [3]: import freezegun

In [4]: target = dt.datetime(2020, 1, 1)

In [5]: def freezegun_test():
   ...:     with freezegun.freeze_time(target, tick=True):
   ...:         pass
   ...:

In [6]: def time_machine_test():
   ...:     with time_machine.travel(target):
   ...:         pass
   ...:

I then used the %timeit ipython “magic” command to invoke Python’s timeit module on each function:
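(%timeit is ipython sugar over the standard library’s timeit module. A rough stdlib equivalent, with parameters picked to mirror the “7 runs, 100 loops each” shape of the output, looks like this — noop stands in for the function under test:)

```python
from statistics import mean, stdev
import timeit

def noop():
    pass

# Roughly what %timeit does: several runs of many loops each,
# reporting mean ± std. dev. of the per-loop time.
runs = [t / 100 for t in timeit.repeat(noop, repeat=7, number=100)]
print(f"{mean(runs) * 1e6:.2f} µs ± {stdev(runs) * 1e9:.0f} ns per loop")
```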

In [11]: %timeit freezegun_test()
6.43 ms ± 135 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [12]: %timeit time_machine_test()
16 µs ± 201 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

freezegun took 6.4 milliseconds per call, whilst time-machine took 16 microseconds. So time-machine came out about 400 times faster.

This plain ipython session had 647 imported modules:

In [13]: import sys

In [14]: len(sys.modules)
Out[14]: 647

I repeated the benchmark in an ipython session on my new Django project, which has over twice the number of imported modules:

In [2]: len(sys.modules)
Out[2]: 1464

The new results:

In [9]: %timeit freezegun_test()
13.2 ms ± 158 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [10]: %timeit time_machine_test()
14.7 µs ± 1 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)

freezegun’s runtime more than doubled, whilst time-machine’s stayed constant. That puts time-machine around 900 times faster here.

This Django project is in its infancy. Longer-running projects can have ten or even a hundred times as many modules, which can make freezegun so slow that time mocking dominates the test run time.

Fin

May your test runtime grow linearly,

—Adam

