Don’t Crash Your App: Load Records from the Database in Batches for Better Performance

Save your Python app’s performance by efficiently loading query results in batches

Python transporting small batches of data (image generated by ChatGPT)

This article is about optimizing the communication between your Python app and a database so that your app runs smoothly and your database server doesn’t melt. It addresses a common inefficiency: the habit of loading all results of a query at once.

When a query returns a lot of records, it’s often impractical or even impossible to load them all at once. Instead of loading the full result set into memory and then processing it row by row, in this article we’ll find out how to load many small chunks: instead of loading 1 million records in one go, we’ll load 400 batches of 2,500 records each! This way your app doesn’t have to hold all results in memory, which has clear benefits:

  • Lower memory usage
  • Better perceived response times
  • Less database stress

We will also take a look under the hood and dive into the technical details of this technique, showing you exactly how it works behind the scenes. Let’s code!


Why use fetchmany

The fetchmany method in SQLAlchemy (or the similar method in other libraries) allows you to fetch the result of a query in batches. Retrieving many small batches can be a better choice than fetching all rows at once (fetchall) or one at a time (fetchone) for several reasons:

1. Memory Efficiency
In many cases batching through the dataset is much more memory-efficient, since it prevents you from having to load all data into memory at once. This is essential when your dataset is too large to fit in memory.

2. Perceived response times
Batching leads to a better perceived response time: we can start processing the first batch after a short wait instead of waiting for the entire result set to be fetched.

This matters most in user-facing apps, where responsiveness is required to maintain a smooth user experience. As an example, think of loading the first 25 articles on a page and fetching the next batch only when the user scrolls to the bottom.


3. Reduces database load
Retrieving data in batches spreads the load on the database out over time. This staggered approach helps the database server maintain better overall performance: retrieving large amounts of data in one go puts a significant load on the server, which might impact the performance of other applications using the same database.

4. More efficient use of database connections
Maintaining an open database connection while processing a large result set can be inefficient and can limit the scalability of your application. By fetching and processing data in batches you reduce the duration that each connection needs to remain open, making more efficient use of available database connections. This allows the database to serve more concurrent connections efficiently.


Analogy

Let’s think of an analogy: a truck full of bricks arrives at the building site. A few bricklayers need those bricks to build a wall. Instead of trying to transport a whole pallet of bricks from the truck to the wall in one go, it’s a better idea to walk back and forth a few times with a wheelbarrow full of bricks. This is useful for a number of reasons:

  • Transporting the whole pallet on your own would take a very long time. Additionally, the bricklayers have no room to store all the bricks at once.
  • The bricklayers can already start working after you deliver the first wheelbarrow. Meanwhile you can walk back and forth for the second batch.
  • It’s much less busy at the truck; you only block the entrance for a short time while loading up your wheelbarrow. This way other workers can retrieve bricks for other teams of bricklayers.

Practical Example

To demonstrate how fetchmany works, let’s solve a very simple problem. Imagine we run a web shop and we want to send a personalized email to all of our clients.

For this purpose we have a table that contains 2 million clients (our shop is very successful). The table is called clients and looks like this:

| id | name  | email                  |
| -- | ----- | ---------------------- |
| 1  | oscar | oscar@sesamestreet.com |
| 2  | bert  | bert@sesamestreet.com  |
| 3  | ernie | ernie@sesamestreet.com |
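The snippets below use SQLAlchemy, but you can follow along without a database server. Here is a minimal stand-in for the clients table, built with Python’s standard-library sqlite3 module (the data is made up, and the table is kept tiny for illustration):

```python
import sqlite3

# An in-memory stand-in for the clients table, so the examples
# in this article can be tried without a real database server
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)
con.executemany(
    "INSERT INTO clients (name, email) VALUES (?, ?)",
    [
        ("oscar", "oscar@sesamestreet.com"),
        ("bert", "bert@sesamestreet.com"),
        ("ernie", "ernie@sesamestreet.com"),
    ],
)
con.commit()
print(con.execute("SELECT COUNT(*) FROM clients").fetchone()[0])  # 3
```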

The Straightforward Approach

The most straightforward approach is to just query all records and send them all to the function that sends personalized client emails:

from sqlalchemy import select

stmt_select = select(Client)

with session_maker() as con:
    all_records = con.execute(stmt_select).fetchall()
    # Process the found records
    found_clients = [r.Client for r in all_records]
    send_email_to_clients(list_of_clients=found_clients)

The benefit of this approach is its simplicity. The downsides are clear too: we have to load 2 million clients into memory and pass them all to the send_email_to_clients function.


Using fetchmany

Let’s be nice to our app and database and retrieve the results in chunks. We can do this with a SQLAlchemy method called fetchmany. This is what that looks like:

from sqlalchemy import select

stmt_select = select(Client)

with session_maker() as con:
    result = con.execute(stmt_select)
    while found_rows := result.fetchmany(size=1000):
        # Process the found records
        found_clients = [r.Client for r in found_rows]
        send_email_to_clients(list_of_clients=found_clients)

It’s a pretty easy change; we just replace result.fetchall() with result.fetchmany(). This method returns sequences of rows until there are no more, at which point it returns an empty sequence.

We use the walrus operator (:=) to store each batch of rows in a variable called found_rows until there are none left. Each batch contains 1,000 records, as specified by the size parameter. We can use a batch exactly like in the previous part and send it to the send_email_to_clients function.
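The same while-loop pattern works with any DB-API cursor, since fetchmany is part of Python’s DB-API specification. Here is a self-contained sketch using the standard library’s sqlite3 module (table and row counts are made up) that shows the loop ending on the empty batch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany(
    "INSERT INTO clients (name) VALUES (?)",
    [(f"client_{i}",) for i in range(2500)],
)

cursor = con.execute("SELECT * FROM clients")
batch_sizes = []
while found_rows := cursor.fetchmany(1000):  # an empty list ends the loop
    batch_sizes.append(len(found_rows))

print(batch_sizes)  # [1000, 1000, 500]
```

Note that the last batch is simply smaller; fetchmany never pads it.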


What happens under the hood: how fetchmany works

Let’s take another look at the query and see that it consists of two parts: executing the query and fetching the results.

result = con.execute(stmt_select)                      # <-- execute
while found_rows := result.fetchmany(size=1000):       # <-- fetch
    print(f"Found a batch with {len(found_rows)} rows")

Let’s break this down:

1. Execute the Query

The con.execute method sends the SQL (defined in stmt_select) to the database, which executes it and prepares a result set. At this point all data that matches the query is identified but has not necessarily been transferred.

2. Fetching Results

Fetching means actually transferring the data from the database to our Python app. This is where the result methods come into play:

  • result.fetchall() moves all the records from the database to our app
  • result.fetchmany() fetches results in batches

The clear upside of the latter is that we don’t have to transfer all records at once. This is essential when the result set is too large to fit in our Python app’s memory; transferring smaller batches is faster and takes less memory to process.
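With the standard library’s sqlite3 module you can observe the two phases directly: right after execute the cursor has not counted, let alone transferred, any rows, and rows only cross into Python as you fetch them. The table here is a made-up example, and the rowcount behavior shown is specific to sqlite3 (other drivers may differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO clients VALUES (?)", [(i,) for i in range(100)])

cursor = con.execute("SELECT * FROM clients")  # <-- execute: no rows transferred yet
print(cursor.rowcount)   # -1: sqlite3 hasn't materialized the result set for SELECTs
first = cursor.fetchmany(10)                   # <-- fetch: now 10 rows reach Python
print(len(first))        # 10
```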


Implementation Considerations

When using fetchmany, it's important to choose an appropriate batch size. Too small a batch size can lead to inefficiency due to the overhead of making many small fetches, while too large a batch size can negate the benefits of using fetchmany in the first place. The optimal batch size depends on the specific application and the size of the rows being fetched.
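One rough way to feel this tradeoff is to count the fetch calls for different batch sizes: small batches mean many round trips, huge batches mean many rows in memory at once. A sketch with the standard library’s sqlite3 module and made-up numbers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO clients VALUES (?)", [(i,) for i in range(10_000)])

results = {}
for size in (10, 1000, 10_000):
    cursor = con.execute("SELECT * FROM clients")
    fetch_calls = 0
    while cursor.fetchmany(size):  # count non-empty fetches
        fetch_calls += 1
    results[size] = fetch_calls

# 10,000 rows: 1000 round trips at size 10, but only 1 giant batch at size 10,000
print(results)  # {10: 1000, 1000: 10, 10000: 1}
```

In practice you would benchmark a few sizes against your real database and row width rather than guess.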


Use cases

Many use cases have to do with maintaining high performance without melting your machine: we manage resource utilization effectively by spreading out the work over time. This principle has many applications:


Use case 1: Batch Data Migration/Processing

When migrating large amounts of data between databases, fetchmany can be used to fetch and insert data in batches. This reduces the risk of overwhelming either the source or target database and allows for progress tracking and potentially resuming the migration if it is interrupted.

In the same way you can use fetchmany to stream data from a database to a file without running into memory limitations. In machine learning workflows, fetchmany can be used to load training data in batches. This approach can help in optimizing the training process, especially when dealing with large datasets.
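A sketch of such a batch migration, using the standard library’s sqlite3 module with two in-memory databases standing in for the source and target (all names and sizes are made up). Committing per batch is what makes the progress resumable:

```python
import sqlite3

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, email TEXT)")
source.executemany(
    "INSERT INTO clients VALUES (?, ?)",
    [(i, f"client{i}@example.com") for i in range(5000)],
)

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, email TEXT)")

cursor = source.execute("SELECT id, email FROM clients")
migrated = 0
while batch := cursor.fetchmany(1000):
    target.executemany("INSERT INTO clients VALUES (?, ?)", batch)
    target.commit()          # commit per batch: progress survives an interruption
    migrated += len(batch)

print(migrated)  # 5000
```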

Use case 2: Web Pagination

For web applications that display large sets of database records (like search results or log entries), fetchmany can be used to implement server-side pagination. Instead of having your webpage wait for an API that loads and sends the entire dataset, it’s more efficient to fetch only the subset of records needed for the current page. This improves the response time of the application.
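A minimal sketch of this idea with the standard library’s sqlite3 module (the articles table is made up): wrap fetchmany in a generator and hand out one page at a time:

```python
import sqlite3

def pages(cursor, page_size):
    """Yield one page of rows at a time from an already-executed cursor."""
    while page := cursor.fetchmany(page_size):
        yield page

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
con.executemany(
    "INSERT INTO articles (title) VALUES (?)",
    [(f"Article {i}",) for i in range(60)],
)

cursor = con.execute("SELECT id, title FROM articles ORDER BY id")
first_page = next(pages(cursor, 25))  # the user sees these 25 rows immediately
print(len(first_page))  # 25
```

For pagination across separate HTTP requests you would typically use LIMIT/OFFSET or keyset pagination instead, since a cursor doesn’t survive between requests; this pattern fits when one long-lived process serves the pages.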

Use case 3: Data Feeds and Integrations

Sending data in manageable chunks can be useful for systems with rate limits or where network bandwidth is a concern.


Conclusion

In this article I hope to have demonstrated that you can easily upgrade your database interactions to retrieve results in batches. Not only is this approach more manageable for your app, using less memory; users also perceive it as faster, since only the first, small batch has to arrive before processing can begin.

I hope this article is as clear as I intended it to be, but if it’s not, please let me know what I can do to clarify further. In the meantime, check out my other articles.

Happy coding!

— Mike

P.S: like what I’m doing? Follow me!