About READ UNCOMMITTED

Transaction isolation levels are the least understood feature of relational databases. Most developers don’t know them and just use the default one. Actually, a significant number of them even believe they use MySQL without transactions.

Amongst isolation levels, READ UNCOMMITTED is the least understood. So here’s a quick note about what it is and why, even if you know about it, you probably have some false beliefs about it.

Basically, READ UNCOMMITTED is totally inconsistent. It sees changes (new, deleted, modified rows) made by other transactions that haven’t COMMITted yet. And those transactions may still fail and roll back, which means that READ UNCOMMITTED can see data that will never actually exist.
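
A minimal sketch of such a dirty read, with a hypothetical accounts table and two connections:

-- Connection 1 (any isolation level):
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1; -- not committed yet

-- Connection 2:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT balance FROM accounts WHERE id = 1; -- already sees the uncommitted -100

-- Connection 1:
ROLLBACK; -- connection 2 has read a balance that never existed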

Despite this, it is extremely useful in some cases. For example:

  • To run SELECTs that read a huge number of rows for analytics (see the sketch after this list).
  • To DELETE rows not used by anyone (archived historical contents).
  • To INSERT rows when no one else can possibly insert new rows.
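
For the first case, a minimal sketch: READ UNCOMMITTED limited to a single transaction (the table and columns are invented):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; -- applies only to the next transaction
START TRANSACTION READ ONLY;
-- long-running analytics scan; dirty reads are acceptable here
SELECT customer_id, SUM(amount) FROM orders_archive GROUP BY customer_id;
COMMIT;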

When using the default isolation level, or even READ COMMITTED, if your transaction involves many rows (say, millions) your statements will probably be slow, will probably use a lot of CPU time, and may also cause replication lag.

READ UNCOMMITTED is not magic and cannot make a slow query fast. But it can save a lot of undo log processing (no old row versions need to be reconstructed), making a query a bit faster and avoiding the consumption of resources for no reason.

That said, its drawbacks should also be noted:

  • Bugs. Precisely because it is the least used (and therefore least tested) isolation level.
  • It would be a mistake to assume that READ UNCOMMITTED is lock-free. A statement that tries to modify a locked row still has to wait, and any write it performs still takes locks (see the sketch after this list). In some cases (like the ones listed above) this is irrelevant, but you need to be aware of it.
  • Selecting all rows from a table while it is being intensively INSERTed into is slower with READ UNCOMMITTED.
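
To illustrate the second point, a sketch with an invented archive table:

-- Connection 1 (any isolation level):
START TRANSACTION;
UPDATE archive SET status = 'purged' WHERE id = 42; -- this row is now locked

-- Connection 2, READ UNCOMMITTED:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DELETE FROM archive WHERE id = 42; -- still blocks until connection 1 commits or rolls back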

One last important detail. When I explain isolation levels, people tend to think that READ UNCOMMITTED doesn’t take snapshots. This is wrong: it takes a snapshot for every statement it runs, but the snapshots include uncommitted changes.

EDIT: Actually there is no snapshot, see the comments. But despite my mistake, the following part remains true.

This may sound slightly obscure. To understand better, create a test table and run something like:

SELECT a FROM my_test WHERE a > SLEEP(a);

The query will evaluate each row, and the evaluation will cause a wait of some seconds. So you have time to add a new row from another connection. The row will not appear in the results of this query: it’s not in the snapshot that was taken when the query was issued. However, if you run the query again, the new row will be there, because a new snapshot is taken.
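
A possible setup for this experiment (the table definition and the values are just an illustration):

-- Connection 1:
CREATE TABLE my_test (a INT NOT NULL) ENGINE=InnoDB;
INSERT INTO my_test (a) VALUES (1), (2), (3);
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT a FROM my_test WHERE a > SLEEP(a); -- sleeps for about 6 seconds in total

-- Connection 2, while the SELECT above is still running:
INSERT INTO my_test (a) VALUES (4);

-- Connection 1, after the first SELECT returns without the new row:
SELECT a FROM my_test WHERE a > SLEEP(a); -- now the new row is included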

Links for the curious:

Federico

8 thoughts on “About READ UNCOMMITTED”

  1. Pingback: About READ UNCOMMITTED | FALSE IS NOT NULL | MySQL Unleashed

  2. I recently encountered misuse and misunderstanding of the READ UNCOMMITTED isolation level. The developers in question were migrating an existing application from MSSQL to MySQL and had been using the nolock hint in some of their queries, where the dirty reads really don’t matter and the performance gain outweighs the drawbacks.

    They made an attempt to convert their queries with the help of Stackoverflow, probably pointed in the (wrong) direction by this question:
    https://stackoverflow.com/questions/917640/any-way-to-select-without-causing-locking-in-mysql
    Since the nolock hint doesn’t exist in MySQL, the only way to achieve this is to set the isolation level on the session after making a connection. Since it is then no longer a hint that applies to a single SELECT statement, the danger was that the connection could be reused by the application, as connection pooling was enabled.

    It took me some time to explain what could go wrong, but if I had had your blog post at hand it would have been much easier! 😀

    • Art, thanks for the real world story.
      Your devs could have used this syntax:
      SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; -- maybe also READ ONLY?
      This applies to the next transaction only; the one after that reverts to the previous isolation level, unless you use COMMIT AND CHAIN.
      Unfortunately there was never an effort to teach these things to developers, so these syntaxes are simply unused or misused 😦
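
      A more complete sketch of that pattern (big_table and its columns are invented):

      SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
      START TRANSACTION READ ONLY;
      SELECT COUNT(*) FROM big_table;
      COMMIT AND CHAIN; -- the chained transaction keeps READ UNCOMMITTED
      SELECT SUM(id) FROM big_table;
      COMMIT; -- from here on, new transactions use the session default again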

  3. Federico, it is all about timing. InnoDB uses short-duration page locks inside mini-transactions. Roughly speaking, each read or write of an index record constitutes one mini-transaction that bundles page locks and redo log. Forward index (or table) scans can move to the next page and release the read lock on the previous page.

    Furthermore, not all UPDATEs delete-mark records. If you are updating a non-indexed column, it will be updated in place (or at most as a “pessimistic update”) within a single mini-transaction. READ UNCOMMITTED might feel like a “snapshot” for those operations.

    Only when you are updating a key do you have to delete-mark the record with the old key and insert one with the new key. If you update the PRIMARY KEY, this will happen in all indexes, because secondary index records contain the primary key.
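
    A rough illustration of the difference, with a made-up table:

    CREATE TABLE t (id INT PRIMARY KEY, k INT, v VARCHAR(32), KEY k_idx (k)) ENGINE=InnoDB;

    -- v is not indexed: the clustered index record is modified directly
    -- (in place, or as a "pessimistic" page update); no delete-mark + insert
    UPDATE t SET v = 'new value' WHERE id = 1;

    -- k is indexed: in k_idx the old entry is delete-marked and a new one is inserted
    UPDATE t SET k = k + 1 WHERE id = 1;

    -- updating the PRIMARY KEY causes delete-mark + insert in every index,
    -- because secondary index records contain the primary key
    UPDATE t SET id = 100 WHERE id = 1;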

    Please look at the test innodb.update-cascade in MySQL 5.7. Before MySQL 5.7.21 accidentally fixed https://jira.mariadb.org/browse/MDEV-15199, the comments in the test were correct. Now they are a little misleading in the result:

    # The table t1 is updated.
    # In t2 delete-mark happened. Retry will happen on t2.
    # In t3 yet to be updated.
    set session transaction isolation level read uncommitted;
    start transaction;
    select f1, right(f2, 20) as p2 from t1;
    f1 p2
    select f1, right(f2, 20) as p2 from t2;
    f1 p2
    select f1, right(f2, 20) as p2 from t3;
    f1 p2
    2 ++++++++++++++++++++

    These SELECTs with READ UNCOMMITTED are executed while an UPDATE operation is waiting for locks. There is an ON UPDATE CASCADE constraint that should propagate the update from t1 to t2 to t3, but as you can see above, so far the record was only delete-marked in t1 and t2, not yet in t3, and nothing was yet inserted into t1 or t2. Later in the test, we do another read, demonstrating that the UPDATE was executed to completion.

  4. In InnoDB (the default transactional storage engine in MySQL and MariaDB), READ UNCOMMITTED does not take a snapshot. It simply returns the latest data.
    For example, if the READ UNCOMMITTED access occurs in the middle of an UPDATE which has delete-marked the old record but not yet inserted a new one (for example, because of a locking conflict), then the READ UNCOMMITTED read would see neither the delete-marked old record nor the new value of the record.

    Another interesting (buggy?) property of InnoDB transaction isolation levels is that whenever a record is locked, it will be treated as READ COMMITTED. I believe that this has the side effect that the SERIALIZABLE isolation level is broken. I may be mistaken, but I would expect a SERIALIZABLE operation to return a “snapshot is too old” error if it visits a record that was modified by another transaction that was both started and committed after the SERIALIZABLE transaction started. But, InnoDB would just happily do READ COMMITTED within the SERIALIZABLE transaction.

    • Thanks for the explanation. But if there is no snapshot, why does the “test case” at the end of the article show the behaviour I described?

    • I discussed the SERIALIZABLE behaviour with my colleagues at MariaDB today.
      I came up with a scenario like this:

      connection rr; SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
      START TRANSACTION WITH CONSISTENT SNAPSHOT; # assigns a read view immediately
      connection ser; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
      BEGIN;
      connection rr;
      send set @a=(SELECT a FROM t1 LIMIT 1);
      connection ser;
      UPDATE t1 SET a=a+1;
      connection rr;
      reap; INSERT INTO t2 SET a=@a; COMMIT;
      disconnect rr;
      connection ser;
      --error ER_SNAPSHOT_TOO_OLD # would be nice to get
      SELECT * FROM t2;

      The transaction of connection rr would see the old value of t1.a, regardless of when the concurrent UPDATE was completed. Then, it would insert and commit this old value to t2.a, and the supposedly SERIALIZABLE transaction would see the old data in t2. This does not look serializable to me.
      This could mean that if you want SERIALIZABLE execution, all write transactions in the system must be serializable. All literature and RDBMS documentation that I was able to find seemed to imply this. So, maybe it is not a bug per se. Maybe it is just simply wrong to expect consistent results if some transactions mix writes with non-locking (MVCC) reads.

      Theoretically, we could introduce a new PARANOID_SERIALIZABLE isolation level that would return the ER_SNAPSHOT_TOO_OLD error (simply by looking up and read-locking the primary key record and checking if its DB_TRX_ID is greater than the current transaction ID). Unfortunately, I think that it could return this error also in a case where the later-started-and-earlier-committed transaction was at SERIALIZABLE isolation level and the execution would actually have been serializable.

      • This is very interesting. The current behaviour makes more sense to me. The paranoid version would force applications to do more checks – and in my experience, they simply wouldn’t do them. But I think that this detail should be noted in documentation.
