Friday, November 15, 2019

The joy of database configuration

I am wary of papers with performance results for too many products. Too many means including results from systems for which you lack expertise. Wary means I have less faith in the comparison even when the ideas in the paper are awesome. I have expertise in MySQL, MongoDB, RocksDB, WiredTiger and InnoDB, but even for them I have made and acknowledged ridiculous mistakes.

Database configuration is too hard. There are too many options, most of them aren't significant, and the approach is bottom-up. I am an expert on this -- in addition to years of tuning I have added more than a few options to RocksDB and MySQL.

This post was motivated by PostgreSQL. I want to run the insert benchmark for it and need a good configuration. I have nothing against PG, with the exception of a few too many "why not Postgres" comments. The community is strong, the docs are great and the product is still improving. But I think PostgreSQL configuration has room to improve -- just like RocksDB (here, here) and MySQL/InnoDB.

Too many options

A non-expert user lacks both the ability to choose good values for options and the ability to understand which options might be useful to set. My solution to the problem of too many options, most of which aren't significant, is to use good defaults and split the option name space into two parts -- regular and expert. Regular options are set by most users because they matter for performance and don't have good default values. The amount of memory the DBMS can use is one such option -- the default will be small.

Everything else is an expert option. These include options for which the default is great and options that rarely impact performance. There is a reason for expert options -- some workloads benefit from their existence, and being able to set such an option at runtime might avoid downtime. Options are also added early in the lifecycle of a new feature so that developers can evaluate the feature and choose good default values. But such options don't need to be exposed to all users.
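To make that concrete, here is a minimal sketch of how an option catalog could be split into regular and expert namespaces. Every name in it is hypothetical -- this describes no real DBMS -- the point is that a new user only ever sees the handful of regular options.

    // Hypothetical option catalog: all names below are made up.
    // A new user is only shown the regular options; everything with a
    // good default or a rare use case lives in the expert namespace.
    #include <map>
    #include <string>

    enum class OptionClass { kRegular, kExpert };

    struct OptionDef {
      OptionClass cls;
      std::string default_value;
      bool runtime_settable;
    };

    const std::map<std::string, OptionDef> kCatalog = {
      // Regular: matters for performance, no good default exists.
      {"memory_limit",   {OptionClass::kRegular, "1G",    true}},
      {"data_directory", {OptionClass::kRegular, "/data", false}},
      // Expert: added for a feature rollout, default is fine for most users.
      {"expert.compaction_readahead", {OptionClass::kExpert, "2M", true}},
    };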

The benefit from doing this is to avoid presenting a new user with tens or hundreds of options to consider. That is a lousy experience. And while "X is too hard" isn't always a valid complaint -- languages (human and database query) are complex because they let us express complex ideas -- I don't think we gain much from the current approach.

RocksDB has added functions that simplify configuration and even split the option namespace into two parts -- regular and advanced. This is a step in the right direction but I hope for more. I confirmed that most RocksDB options either have good defaults or aren't significant for my workloads and then published advice on tuning RocksDB.
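For example, rocksdb/options.h has helpers that derive several related options from a single budget. A minimal sketch, assuming a recent RocksDB release; the two helper calls below set a handful of parallelism and compaction options from one thread count and one memtable memory budget instead of asking the user to set each option by hand:

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;
      // One thread count instead of several background-thread options.
      options.IncreaseParallelism(16);
      // One memory budget instead of hand-set write buffer sizes,
      // trigger counts and level sizes.
      options.OptimizeLevelStyleCompaction(512 * 1024 * 1024);

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/testdb", &db);
      delete db;
      return s.ok() ? 0 : 1;
    }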

My experience with the performance configurations I use for MongoDB/WiredTiger and MySQL/InnoDB is similar to my experience with RocksDB. I don't have to set many options to get great performance. Alas, it took a long time to figure that out.

Top-down configuration

Top-down configuration is another approach that can help. The idea is simple - tell the DBMS about the hardware it can use and optionally state a few constraints.

The basic hardware configuration is empty, which implies the DBMS gets everything it can find -- all memory, all CPU cores, all IO capacity. When a host does more than run a DBMS it should be easy to enforce limits with one option for memory consumption, one for CPU, etc. The user shouldn't have to set ten options for ten different memory consumers. It is even worse when those limits are per instance -- limiting how much memory each sort buffer gets is a lousy way to manage total memory usage. IO capacity is interesting. AFAIK RethinkDB included a tool that characterized IO capacity, PostgreSQL has a tool for fsync performance, and we can't forget fio. But it is easy to be misled about SSD performance.
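A sketch of what the hardware side of top-down configuration might look like. Nothing here corresponds to a real DBMS; it only shows the shape of the idea: one limit per resource, with empty meaning "use everything".

    // Hypothetical top-down hardware description: one limit per resource
    // instead of one limit per consumer. Zero means "use all you can find".
    #include <cstdint>

    struct HardwareBudget {
      uint64_t memory_bytes = 0;  // shared by buffer pool, sorts, caches, ...
      int      cpu_cores    = 0;  // shared by all thread pools
      uint64_t read_iops    = 0;  // measured or estimated device capacity
      uint64_t write_iops   = 0;
    };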

The constraints cover things that are subjective. What is the max recovery time objective? How do you rank read, write, space and memory efficiency?

A great example of this is SQL Memory Management in Oracle 9i -- tell the DBMS how much memory it can use and let it figure out the best way to use it.
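A toy version of that idea -- not Oracle's algorithm, just an illustration of top-down allocation -- is a broker that owns one total budget and moves memory between the buffer pool and per-query work areas as concurrency changes:

    #include <algorithm>
    #include <cstdint>

    // Toy memory broker: one budget shared by all consumers. The split
    // below is arbitrary; the point is that no consumer has its own
    // statically configured maximum.
    struct MemoryBroker {
      uint64_t total_bytes;

      // Work areas get what current concurrency demands, capped at half
      // of the budget so the buffer pool never starves completely.
      uint64_t work_area_bytes(uint64_t active_queries,
                               uint64_t per_query_demand) const {
        return std::min(active_queries * per_query_demand, total_bytes / 2);
      }

      uint64_t buffer_pool_bytes(uint64_t active_queries,
                                 uint64_t per_query_demand) const {
        return total_bytes - work_area_bytes(active_queries, per_query_demand);
      }
    };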

What about ML

I hope that ML makes it easier to discover the options that aren't significant and can be moved into the expert options namespace. But I prefer a solution with fewer tuning knobs, or at least fewer visible tuning knobs. I hope to avoid too many knobs (the status quo) combined with ML. Let's make smarter database algorithms. If nothing else this should be a source of research funding, interesting PhDs and many papers worth reading.

Update

While I appreciate that someone made the MySQL memory calculator available, I wish it weren't needed. If per-connection memory limits are sized for peak concurrency then memory is under-allocated in the normal case; if they are sized for the normal case then memory is over-allocated at peak concurrency and you get OOM.
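A worked example of the bind, with made-up numbers. The knob is a per-connection buffer size, but the thing that matters is buffer size times connection count, and one value has to serve both the normal case and the peak:

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Made-up numbers for illustration.
      const uint64_t MB = 1024 * 1024;
      const uint64_t budget       = 4096 * MB;  // memory left for work buffers
      const uint64_t normal_conns = 100;
      const uint64_t peak_conns   = 2000;

      const uint64_t safe_buffer = budget / peak_conns;    // ~2 MB per conn
      const uint64_t fast_buffer = budget / normal_conns;  // ~40 MB per conn

      printf("sized for peak:   %llu MB/conn, %llu of %llu MB used at normal load\n",
             (unsigned long long)(safe_buffer / MB),
             (unsigned long long)(safe_buffer * normal_conns / MB),
             (unsigned long long)(budget / MB));
      printf("sized for normal: %llu MB/conn, %llu MB needed at peak -> OOM\n",
             (unsigned long long)(fast_buffer / MB),
             (unsigned long long)(fast_buffer * peak_conns / MB));
      return 0;
    }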

3 comments:

  1. Basically I agree that configuring databases is not an easy task. I still see MySQL both as a beginner system and a professional Swiss Army knife. For beginners there should be more intelligent templates that adapt to the system environment better than the current default tries to do.

    BUT there are some key mechanics like sort buffers. Don't try to abstract this with magic. I WANT administrators to know what this is, how they can debug it, how it affects performance and how to configure it right. If you don't spread this knowledge it will be gone in 10 years ;) Users will complain about MySQL and say it's a bad database because they don't understand it anymore.

    You should not solve that kind of problem with machine learning. There are so many factors affecting performance that it would not make sense, especially in virtual environments. It would feel like teaching a machine how to hold a spoon and bring it to your mouth without teaching it the basic difference between meat and soup. In the end you would create an anti-intelligence because the system would screw up as soon as things get complicated. Have a look at AlphaStar and how they struggle to solve the I-know-not-everything problem ;)

    The mechanics the old mysqltuner.pl used were quite good. Do you know it?

    Replies
    1. I never used mysqltuner.pl, which likely means that two large web-scale deployments of MySQL never used it either. Of course I am wary of arguments from authority.

      I claim that top-down memory allocation is a solved problem and that the common approach is broken -- see sort_buffer_size, join_buffer_size, read_buffer_size etc.

      I don't want ML to solve most of this. I want better algorithms, and resource allocation -- whether that is IO, CPU or memory -- is an area where cost-based optimization can be done.

      For memory Oracle has shown a better approach. Perhaps other DBMS have done the same but many have not. See the excellent work in http://www.vldb.org/conf/2002/S29P03.pdf.

      The current approach, setting a max size for each consumer, leads to either 1) allocating too little memory -- worse performance but no OOM -- or 2) allocating too much memory -- better performance, or OOM when there is too much concurrency. Having seen more than enough web-scale dogpiles in production I will bet on OOM.

  2. When explaining configuration options, I always feel it's important not to fall into the 1:1 trap. That is, if there are 100 options, the easy (and common) way is to put 100 different procedures or explanations in the documentation. There are some options that only make sense in combination with several others. There are other options that are useful in several different scenarios. So the number of sections under "Configuring" in the documentation should rarely if ever match up exactly with the number of config options. Same applies to command-line parameters, language statements, and so on.
