
Introduction to Parallel and Concurrent Programming in Python


Python is one of the most popular languages for data processing and data science in general. The ecosystem provides a lot of libraries and frameworks that facilitate high-performance computing. Doing parallel programming in Python can prove quite tricky, though.

In this tutorial, we're going to study why parallelism is hard, especially in the Python context. To do that, we'll go through the following:

  • Why is parallelism tricky in Python? (Hint: it's because of the GIL—the global interpreter lock.)
  • Threads vs processes: different ways of achieving parallelism. When to use one over the other?
  • Parallel vs concurrent: why in some cases we can settle for concurrency rather than parallelism.
  • Building a simple but practical example using the various techniques discussed.

Global Interpreter Lock

The Global Interpreter Lock (GIL) is one of the most controversial subjects in the Python world. In CPython, the most popular implementation of Python, the GIL is a mutex that protects access to Python objects, keeping the interpreter's internals thread-safe. The GIL makes it easy to integrate with external libraries that are not thread-safe, and it makes non-parallel code faster. This comes at a cost, though. Due to the GIL, we can't achieve true parallelism via multithreading. Basically, two different native threads of the same process can't run Python code at once.

Things are not that bad, though, and here's why: work that happens outside the GIL's realm is free to run in parallel. Into this category fall long-running tasks like I/O and, fortunately, C-backed libraries like numpy, which release the GIL while doing their heavy lifting.
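To see this in practice, here's a minimal sketch (an addition of ours, not part of the original examples, and it assumes numpy is installed via pip install numpy). Two threads each perform a large matrix multiplication; because numpy releases the GIL while the underlying BLAS routine runs, the two calls are allowed to overlap, which pure-Python loops in threads can never do:

import threading
import time

import numpy as np  # assumed installed: pip install numpy

matrix = np.random.rand(1500, 1500)

def multiply():
    # numpy hands this off to a C/BLAS routine and releases the GIL while it runs
    matrix.dot(matrix)

start = time.time()
threads = [threading.Thread(target=multiply) for _ in range(2)]
[t.start() for t in threads]
[t.join() for t in threads]
print("Two threaded multiplications:", time.time() - start)

How much overlap you actually observe depends on your BLAS build, which may already use several cores on its own, so treat the timing as an illustration rather than a benchmark.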

Threads vs. Processes

So Python threads are real operating system threads, yet they can't run Python code truly in parallel. But what is a thread in the first place? Let's take a step back and look at things in perspective.

A process is a basic operating system abstraction: a program in execution, in other words, code that is running. Multiple processes are always running on a computer, and on a multi-core machine they can execute in parallel.

A process can have multiple threads. They execute the same code belonging to the parent process. Ideally they run in parallel, but not necessarily. The reason processes alone aren't enough is that applications need to stay responsive, listening for user actions while updating the display and saving a file at the same time.

If that's still a bit unclear, here's a cheatsheet:

Processes | Threads
Processes don't share memory | Threads share memory
Spawning/switching processes is expensive | Spawning/switching threads is less expensive
Processes require more resources | Threads require fewer resources (they are sometimes called lightweight processes)
No memory synchronisation needed | You need to use synchronisation mechanisms to be sure you're correctly handling the data
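To make the "share memory" row concrete, here's a small sketch (an addition of ours, not required for the rest of the tutorial): a thread incrementing a module-level counter is visible to the main thread, while a child process only changes its own copy of it.

import threading
import multiprocessing

counter = 0

def increment():
    global counter
    counter += 100

if __name__ == "__main__":
    # The thread shares the process's memory, so the change is visible here
    thread = threading.Thread(target=increment)
    thread.start()
    thread.join()
    print("After the thread:", counter)    # 100

    # The child process gets its own copy of the interpreter state, so our counter is untouched
    process = multiprocessing.Process(target=increment)
    process.start()
    process.join()
    print("After the process:", counter)   # still 100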

There isn't one recipe that accommodates everything. Which one to choose depends greatly on the context and on the task you are trying to accomplish.

Parallel vs. Concurrent

Now we'll go one step further and dive into concurrency. Concurrency is often misunderstood and mistaken for parallelism, but the two are not the same. Concurrency implies scheduling independent pieces of code to execute in a cooperative manner: while one piece of code waits on an I/O operation, a different but independent part of the code gets to run.

In Python, we can achieve lightweight concurrent behaviour via greenlets. From a parallelization perspective, using threads or greenlets is equivalent because neither of them runs in parallel. Greenlets are even less expensive to create than threads. Because of that, greenlets are heavily used for performing a huge number of simple I/O tasks, like the ones usually found in networking and web servers.
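As a tiny illustration (an addition of ours, assuming the greenlet package is installed via pip install greenlet), here's how two greenlets explicitly hand control to each other. Nothing runs in parallel; control simply jumps cooperatively between them:

from greenlet import greenlet

def task_a():
    print("task_a: step 1")
    gr_b.switch()          # hand control over to task_b
    print("task_a: step 2")

def task_b():
    print("task_b: step 1")
    gr_a.switch()          # hand control back to task_a

gr_a = greenlet(task_a)
gr_b = greenlet(task_b)
gr_a.switch()              # prints: task_a step 1, task_b step 1, task_a step 2

Libraries like gevent build a scheduler (the "hub") on top of this switching so you rarely call switch() yourself.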

Now that we know the difference between threads and processes, and between parallel and concurrent, we can illustrate how different tasks are performed under the two paradigms. Here's what we're going to do: we will run, multiple times, a task outside the GIL and one inside it. We'll run them serially, using threads, and using processes. Let's define the tasks:

import os
import time
import threading
import multiprocessing

NUM_WORKERS = 4

def only_sleep():
    """ Do nothing, wait for a timer to expire """
    print("PID: %s, Process Name: %s, Thread Name: %s" % (
        os.getpid(),
        multiprocessing.current_process().name,
        threading.current_thread().name)
    )
    time.sleep(1)


def crunch_numbers():
    """ Do some computations """
    print("PID: %s, Process Name: %s, Thread Name: %s" % (
        os.getpid(),
        multiprocessing.current_process().name,
        threading.current_thread().name)
    )
    x = 0
    while x < 10000000:
        x += 1
We've created two tasks. Both of them are long-running, but only crunch_numbers actively performs computations. Let's run only_sleep serially, with threads, and with multiple processes, and compare the results:

# Run tasks serially
start_time = time.time()
for _ in range(NUM_WORKERS):
    only_sleep()
end_time = time.time()

print("Serial time=", end_time - start_time)

# Run tasks using threads
start_time = time.time()
threads = [threading.Thread(target=only_sleep) for _ in range(NUM_WORKERS)]
[thread.start() for thread in threads]
[thread.join() for thread in threads]
end_time = time.time()

print("Threads time=", end_time - start_time)

# Run tasks using processes
# Note: pass the function itself as the target, don't call it
start_time = time.time()
processes = [multiprocessing.Process(target=only_sleep) for _ in range(NUM_WORKERS)]
[process.start() for process in processes]
[process.join() for process in processes]
end_time = time.time()

print("Parallel time=", end_time - start_time)

Here's the output I've got (yours should be similar, although PIDs and times will vary a bit):

PID: 95726, Process Name: MainProcess, Thread Name: MainThread
PID: 95726, Process Name: MainProcess, Thread Name: MainThread
PID: 95726, Process Name: MainProcess, Thread Name: MainThread
PID: 95726, Process Name: MainProcess, Thread Name: MainThread
Serial time= 4.018089056015015

PID: 95726, Process Name: MainProcess, Thread Name: Thread-1
PID: 95726, Process Name: MainProcess, Thread Name: Thread-2
PID: 95726, Process Name: MainProcess, Thread Name: Thread-3
PID: 95726, Process Name: MainProcess, Thread Name: Thread-4
Threads time= 1.0047411918640137

PID: 95728, Process Name: Process-1, Thread Name: MainThread
PID: 95729, Process Name: Process-2, Thread Name: MainThread
PID: 95730, Process Name: Process-3, Thread Name: MainThread
PID: 95731, Process Name: Process-4, Thread Name: MainThread
Parallel time= 1.014023780822754

Here are some observations:

  • In the case of the serial approach, things are pretty obvious. We're running the tasks one after the other. All four runs are executed by the same thread of the same process.

  • Using processes, we cut the execution time down to a quarter of the original time, simply because the tasks are executed in parallel. Notice how each task is performed in a different process and on the MainThread of that process.

  • Using threads, we take advantage of the fact that the tasks can be executed concurrently. The execution time is also cut down to a quarter, even though nothing is running in parallel. Here's how that goes: we spawn the first thread, and while it waits for its timer to expire, we pause it and spawn the second thread, and so on for all of them. At some point the timer of the first thread expires, so we switch execution back to it and it finishes; the same happens for the second thread and all the others. The end result is as if everything ran in parallel. You'll also notice that the four different threads branch out from and live inside the same process: MainProcess.

You may even notice that the threaded approach is quicker than the truly parallel one. That's due to the overhead of spawning processes. As we noted previously, spawning and switching processes is an expensive operation.

Let's do the same routine, but this time running the crunch_numbers task:

start_time = time.time()
for _ in range(NUM_WORKERS):
    crunch_numbers()
end_time = time.time()

print("Serial time=", end_time - start_time)

start_time = time.time()
threads = [threading.Thread(target=crunch_numbers) for _ in range(NUM_WORKERS)]
[thread.start() for thread in threads]
[thread.join() for thread in threads]
end_time = time.time()

print("Threads time=", end_time - start_time)

start_time = time.time()
processes = [multiprocessing.Process(target=crunch_numbers) for _ in range(NUM_WORKERS)]
[process.start() for process in processes]
[process.join() for process in processes]
end_time = time.time()

print("Parallel time=", end_time - start_time)

Here's the output I've got:

PID: 96285, Process Name: MainProcess, Thread Name: MainThread
PID: 96285, Process Name: MainProcess, Thread Name: MainThread
PID: 96285, Process Name: MainProcess, Thread Name: MainThread
PID: 96285, Process Name: MainProcess, Thread Name: MainThread
Serial time= 2.705625057220459
PID: 96285, Process Name: MainProcess, Thread Name: Thread-1
PID: 96285, Process Name: MainProcess, Thread Name: Thread-2
PID: 96285, Process Name: MainProcess, Thread Name: Thread-3
PID: 96285, Process Name: MainProcess, Thread Name: Thread-4
Threads time= 2.6961309909820557
PID: 96289, Process Name: Process-1, Thread Name: MainThread
PID: 96290, Process Name: Process-2, Thread Name: MainThread
PID: 96291, Process Name: Process-3, Thread Name: MainThread
PID: 96292, Process Name: Process-4, Thread Name: MainThread
Parallel time= 0.8014059066772461

The main difference here is in the result of the multithreaded approach. This time it performs very similarly to the serial approach, and here's why: since crunch_numbers is pure Python computation and the GIL prevents threads from running Python code in parallel, the threads basically execute one after the other, yielding execution to one another until they all finish.

The Python Parallel and Concurrent Programming Ecosystem

Python has rich APIs for doing parallel and concurrent programming. In this tutorial we cover the most popular ones, but know that for almost any need you have in this domain, there's probably something already out there that can help you achieve your goal.

In the next section, we'll build a practical application in many forms, using all of the libraries presented. Without further ado, here are the modules/libraries we're going to cover:

  • threading: The standard way of working with threads in Python. It is a higher-level API wrapper over the functionality exposed by the _thread module, which is a low-level interface over the operating system's thread implementation.

  • concurrent.futures: A standard-library module that provides an even higher-level abstraction layer over threads. The threads are modelled as asynchronous tasks.

  • multiprocessing: Offers an interface very similar to the threading module, but uses processes instead of threads.

  • gevent and greenlets: Greenlets, also called micro-threads, are units of execution that can be scheduled collaboratively and can perform tasks concurrently without much overhead.

  • celery: A high-level distributed task queue. The tasks are queued and executed concurrently using various paradigms like multiprocessing or gevent.

Building a Practical Application

Knowing the theory is all well and good, but the best way to learn is to build something practical, right? In this section, we're going to build a classic type of application, going through all the different paradigms.

Let's build an application that checks the uptime of websites. There are a lot of such solutions out there, the most well-known ones being probably Jetpack Monitor and Uptime Robot. The purpose of these apps is to notify you when your website is down so that you can quickly take action. Here's how they work:

  • The application goes very frequently over a list of website URLs and checks if those websites are up.
  • Every website should be checked every 5–10 minutes so that the downtime is not significant.
  • Instead of performing a classic HTTP GET request, it performs a HEAD request so that it does not affect your traffic significantly.
  • If the HTTP status is in the danger ranges (400+, 500+), the owner is notified.
  • The owner is notified either by email, text message, or push notification.

Here's why it's essential to take a parallel/concurrent approach to the problem: as the list of websites grows, going through it serially can't guarantee that every website is checked every five minutes or so. With a 20-second request timeout, a handful of unresponsive sites is enough to stall a serial pass well past that budget, and a website could then be down for hours before its owner gets notified.

Let's start by writing some utilities:

# utils.py

import time
import logging
import requests


class WebsiteDownException(Exception):
    pass


def ping_website(address, timeout=20):
    """
    Check if a website is down. A website is considered down
    if either the status_code >= 400 or if the timeout expires.

    Throw a WebsiteDownException if any of the website-down conditions are met.
    """
    try:
        response = requests.head(address, timeout=timeout)
        if response.status_code >= 400:
            logging.warning("Website %s returned status_code=%s" % (address, response.status_code))
            raise WebsiteDownException()
    except requests.exceptions.RequestException:
        logging.warning("Timeout expired for website %s" % address)
        raise WebsiteDownException()


def notify_owner(address):
    """
    Send the owner of the address a notification that their website is down.

    For now, we're just going to sleep for 0.5 seconds, but this is where
    you would send an email, push notification or text message.
    """
    logging.info("Notifying the owner of %s website" % address)
    time.sleep(0.5)


def check_website(address):
    """
    Utility function: check if a website is down and, if so, notify the owner.
    """
    try:
        ping_website(address)
    except WebsiteDownException:
        notify_owner(address)

We'll actually need a website list to try our system out. Create your own list or use mine:

# websites.py

WEBSITE_LIST = [
    'https://envato.com',
    'http://amazon.co.uk',
    'http://amazon.com',
    'http://facebook.com',
    'http://google.com',
    'http://google.fr',
    'http://google.es',
    'http://google.co.uk',
    'http://internet.org',
    'http://gmail.com',
    'http://stackoverflow.com',
    'http://github.com',
    'http://heroku.com',
    'http://really-cool-available-domain.com',
    'http://djangoproject.com',
    'http://rubyonrails.org',
    'http://basecamp.com',
    'http://trello.com',
    'http://yiiframework.com',
    'http://shopify.com',
    'http://another-really-interesting-domain.co',
    'http://airbnb.com',
    'http://instagram.com',
    'http://snapchat.com',
    'http://youtube.com',
    'http://baidu.com',
    'http://yahoo.com',
    'http://live.com',
    'http://linkedin.com',
    'http://yandex.ru',
    'http://netflix.com',
    'http://wordpress.com',
    'http://bing.com',
]

Normally, you'd keep this list in a database along with owner contact information so that you can contact them. Since this is not the main topic of this tutorial, and for the sake of simplicity, we're just going to use this Python list.

If you paid close attention, you might have noticed two really long domains in the list that are not valid websites (I hope nobody has bought them by the time you're reading this, just to prove me wrong!). I added these two domains to be sure we have some websites down on every run. Also, let's name our app UptimeSquirrel.

Serial Approach

First, let's try the serial approach and see how badly it performs. We'll consider this the baseline.

# serial_squirrel.py

import time

from utils import check_website
from websites import WEBSITE_LIST


start_time = time.time()

for address in WEBSITE_LIST:
    check_website(address)

end_time = time.time()

print("Time for SerialSquirrel: %ssecs" % (end_time - start_time))

# WARNING:root:Timeout expired for website http://really-cool-available-domain.com
# WARNING:root:Timeout expired for website http://another-really-interesting-domain.co
# WARNING:root:Website http://bing.com returned status_code=405
# Time for SerialSquirrel: 15.881232261657715secs

Threading Approach

We're going to get a bit more creative with the implementation of the threaded approach. We put the addresses into a queue and create worker threads that take them off the queue and process them. We then wait for the queue to become empty, which means that all the addresses have been processed by our worker threads.

# threaded_squirrel.py

import time
from queue import Queue
from threading import Thread

from utils import check_website
from websites import WEBSITE_LIST

NUM_WORKERS = 4
task_queue = Queue()


def worker():
    # Constantly check the queue for addresses
    while True:
        address = task_queue.get()
        check_website(address)

        # Mark the processed task as done
        task_queue.task_done()


start_time = time.time()

# Create the worker threads (daemon threads, so they don't keep the program alive once the queue is drained)
threads = [Thread(target=worker, daemon=True) for _ in range(NUM_WORKERS)]

# Add the websites to the task queue
[task_queue.put(item) for item in WEBSITE_LIST]

# Start all the workers
[thread.start() for thread in threads]

# Wait for all the tasks in the queue to be processed
task_queue.join()

end_time = time.time()

print("Time for ThreadedSquirrel: %ssecs" % (end_time - start_time))

# WARNING:root:Timeout expired for website http://really-cool-available-domain.com
# WARNING:root:Timeout expired for website http://another-really-interesting-domain.co
# WARNING:root:Website http://bing.com returned status_code=405
# Time for ThreadedSquirrel: 3.110753059387207secs

The concurrent.futures API

As stated previously, concurrent.futures is a high-level API for using threads. The approach we're taking here implies using a ThreadPoolExecutor. We're going to submit tasks to the pool and get back futures, which are results that will be available to us in the future. Of course, we can wait for all futures to become actual results.

# future_squirrel.py

import time
import concurrent.futures

from utils import check_website
from websites import WEBSITE_LIST

NUM_WORKERS = 4

start_time = time.time()

with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_WORKERS) as executor:
    futures = {executor.submit(check_website, address) for address in WEBSITE_LIST}
    concurrent.futures.wait(futures)

end_time = time.time()

print("Time for FutureSquirrel: %ssecs" % (end_time - start_time))

# WARNING:root:Timeout expired for website http://really-cool-available-domain.com
# WARNING:root:Timeout expired for website http://another-really-interesting-domain.co
# WARNING:root:Website http://bing.com returned status_code=405
# Time for FutureSquirrel: 1.812899112701416secs

The Multiprocessing Approach

The multiprocessing library provides an almost drop-in replacement API for the threading library. In this case, we're going to take an approach more similar to the concurrent.futures one. We're setting up a multiprocessing.Pool and submitting tasks to it by mapping a function to the list of addresses (think of the classic Python map function).

# multiprocessing_squirrel.py

import time
import socket
import multiprocessing

from utils import check_website
from websites import WEBSITE_LIST

NUM_WORKERS = 4

start_time = time.time()

with multiprocessing.Pool(processes=NUM_WORKERS) as pool:
    results = pool.map_async(check_website, WEBSITE_LIST)
    results.wait()

end_time = time.time()

print("Time for MultiProcessingSquirrel: %ssecs" % (end_time - start_time))

# WARNING:root:Timeout expired for website http://really-cool-available-domain.com
# WARNING:root:Timeout expired for website http://another-really-interesting-domain.co
# WARNING:root:Website http://bing.com returned status_code=405
# Time for MultiProcessingSquirrel: 2.8224599361419678secs

Gevent

Gevent is a popular alternative for achieving massive concurrency. There are a few things you need to know before using it:

  • Code performed concurrently by greenlets is deterministic. As opposed to the other presented alternatives, this paradigm guarantees that for any two identical runs, you'll always get the same results in the same order.

  • You need to monkey patch standard functions so that they cooperate with gevent. Here's what I mean by that. Normally, a socket operation is blocking. We're waiting for the operation to finish. If we were in a multithreaded environment, the scheduler would simply switch to another thread while the other one is waiting for I/O. Since we're not in a multithreaded environment, gevent patches the standard functions so that they become non-blocking and return control to the gevent scheduler.

To install gevent, run pip install gevent.

Here's how to use gevent to perform our task using a gevent.pool.Pool:

# green_squirrel.py

import time

from gevent import monkey
from gevent.pool import Pool

# Monkey-patch the socket module before anything else (utils pulls in requests) imports it
monkey.patch_socket()

from utils import check_website
from websites import WEBSITE_LIST

# Note that you can spawn many workers with gevent since the cost of creating and switching is very low
NUM_WORKERS = 4

start_time = time.time()

pool = Pool(NUM_WORKERS)
for address in WEBSITE_LIST:
    pool.spawn(check_website, address)

# Wait for stuff to finish
pool.join()

end_time = time.time()

print("Time for GreenSquirrel: %ssecs" % (end_time - start_time))
# Time for GreenSquirrel: 3.8395519256591797secs
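One hedged note on the monkey patching above: patch_socket() covers plain sockets, but the list also contains an https:// URL, and secure connections go through the ssl module as well. If the green version appears to block on secure requests, the broader patch, applied before anything imports requests, is the usual remedy (this is an addition of ours, not part of the original script):

# Alternative patching for green_squirrel.py; call it before importing utils/requests
from gevent import monkey

monkey.patch_all()  # patches socket, ssl, time.sleep and friends so they yield to the gevent hub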

Celery

Celery takes an approach that differs quite a bit from everything we've seen so far. It is battle-tested in very complex, high-performance environments. Setting up Celery requires a bit more tinkering than all the solutions above.

First, we'll need to install Celery:

pip install celery

Tasks are the central concepts within the Celery project. Everything that you'll want to run inside Celery needs to be a task. Celery offers great flexibility for running tasks: you can run them synchronously or asynchronously, real-time or scheduled, on the same machine or on multiple machines, and using threads, processes, Eventlet, or gevent.

The arrangement will be slightly more complex. Celery uses other services for sending and receiving messages. These messages are usually tasks or results from tasks. We're going to use Redis in this tutorial for this purpose. Redis is a great choice because it's really easy to install and configure, and it's possible you already use it in your application for other purposes, such as caching and pub/sub. 

You can install Redis by following the instructions on the Redis Quick Start page. Don't forget to install the redis Python library, pip install redis, and the bundle necessary for using Redis and Celery: pip install celery[redis].

Start the Redis server like this: $ redis-server.

To get started building stuff with Celery, we'll first need to create a Celery application. After that, Celery needs to know what kind of tasks it might execute. To achieve that, we need to register tasks to the Celery application. We'll do this using the @app.task decorator:

# celery_squirrel.py

import time

from utils import check_website
from websites import WEBSITE_LIST

from celery import Celery
from celery.result import ResultSet

app = Celery('celery_squirrel',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')


@app.task
def check_website_task(address):
    return check_website(address)


if __name__ == "__main__":
    start_time = time.time()

    # Using `delay` runs the task async
    rs = ResultSet([check_website_task.delay(address) for address in WEBSITE_LIST])

    # Wait for the tasks to finish
    rs.get()

    end_time = time.time()

    print("CelerySquirrel:", end_time - start_time)
    # CelerySquirrel: 2.4979639053344727

Don't panic if nothing is happening. Remember, Celery is a service, and we need to run it. So far, we have only placed the tasks in Redis but have not started Celery to execute them. To do that, we need to run this command in the folder where our code resides:

celery worker -A celery_squirrel --loglevel=debug --concurrency=4

Now rerun the Python script and see what happens. One thing to pay attention to: notice how we passed the Redis address to our Celery application twice. The broker parameter specifies where the tasks are passed to Celery, and backend is where Celery puts the results so that we can use them in our app. If we don't specify a result backend, there's no way for us to know when a task was processed and what its result was.

Also, be aware that the logs now are in the standard output of the Celery process, so be sure to check them out in the appropriate terminal.
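If you want to poke at a single task instead of the whole ResultSet, the result backend is exactly what makes the snippet below possible (a small illustrative addition of ours; it assumes the worker from the previous command is running):

from celery_squirrel import check_website_task

async_result = check_website_task.delay('http://github.com')
print(async_result.status)            # e.g. PENDING while the worker is busy, SUCCESS afterwards
print(async_result.get(timeout=30))   # blocks until the result lands in the Redis backend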

Conclusion

I hope this has been an interesting journey for you and a good introduction to the world of parallel/concurrent programming in Python. This is the end of the journey, and there are some conclusions we can draw:

  • There are several paradigms that help us achieve high-performance computing in Python.
  • For the multi-threaded paradigm, we have the threading and concurrent.futures libraries.
  • multiprocessing provides a very similar interface to threading but for processes rather than threads.
  • Remember that processes achieve true parallelism, but they are more expensive to create.
  • Remember that a process can have more threads running inside it.
  • Do not mistake parallel for concurrent. Remember that only the parallel approach takes advantage of multi-core processors, whereas concurrent programming intelligently schedules tasks so that time spent waiting on long-running operations is used to do other useful work in the meantime.