Bypassing the transaction log

Referring to my previous blog post about the amount of xlog written by PostgreSQL, I wanted to clarify what I meant when talking about bypassing the PostgreSQL transaction log.

Normal WAL / xlog writes

Whenever data is changed inside PostgreSQL, the change must be written to the xlog before it is written to the underlying table. The reason for that is simple: imagine you are doing a large INSERT and the power goes out while you are writing the data to the table. The result would be an incomplete record somewhere in the middle of the table. Index entries might be missing as well. In short: there would be a serious risk of corruption.

To avoid that, PostgreSQL writes all changes to the xlog to make sure that a table / index / etc. can always be repaired based on the xlog.
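
To get a feeling for how much xlog a statement produces, you can compare the current xlog position before and after running it. This is a quick sketch (assuming some table t_test already exists; from PostgreSQL 10 onwards the function is called pg_current_wal_lsn instead of pg_current_xlog_location, and the values returned will of course differ on your system):

   test=# SELECT pg_current_xlog_location();

   test=# INSERT INTO t_test SELECT * FROM generate_series(1, 100000);

   INSERT 0 100000

   test=# SELECT pg_current_xlog_location();

The difference between the two positions is the amount of xlog the INSERT has generated.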

Optimizations

However, it is not always necessary to write to the xlog.

Imagine the following scenario:

   test=# BEGIN;

   BEGIN

   test=# CREATE TABLE t_test (id int4);

   CREATE TABLE

   test=# INSERT INTO t_test SELECT * FROM generate_series(1, 100000);

   INSERT 0 100000

   test=# COMMIT;

   COMMIT

In this case the transaction will not be seen by others until we commit it, so we don’t have to worry about concurrency. On COMMIT we can take the COMPLETE new data file – or, if the transaction fails, we simply throw the freshly created data file away. Under no circumstances does the entire content of the table have to be written to the xlog. This kind of optimization can speed things up dramatically – especially in the case of very large transactions.
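
The same reasoning applies to CREATE TABLE … AS SELECT, which also creates its data file inside the current transaction. A quick sketch (t_copy is just a hypothetical table name for illustration):

   test=# BEGIN;

   BEGIN

   test=# CREATE TABLE t_copy AS SELECT * FROM generate_series(1, 100000) AS id;

   SELECT 100000

   test=# COMMIT;

   COMMIT

Again: either the complete new file survives on COMMIT, or it is thrown away on ROLLBACK.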

However, there are more cases in which PostgreSQL can skip the transaction log. Consider this one:

   test=# BEGIN;

   BEGIN

   test=# TRUNCATE t_test;

   TRUNCATE TABLE

   test=# INSERT INTO t_test SELECT * FROM generate_series(1, 100000);

   INSERT 0 100000

   test=# COMMIT;

   COMMIT

In this case the TRUNCATE does the trick. It locks the table to make sure that nobody else can modify it, and as soon as the first new row comes in, PostgreSQL creates a new data file (= a new relfilenode). At the end of the transaction we then have two choices: if we can commit safely, we take the COMPLETE new data file. In case of a ROLLBACK we keep the complete old data file.
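
You can actually watch the relfilenode change using pg_relation_filenode. A quick sketch (the numbers returned will of course differ on your system):

   test=# SELECT pg_relation_filenode('t_test');

   test=# BEGIN;

   BEGIN

   test=# TRUNCATE t_test;

   TRUNCATE TABLE

   test=# SELECT pg_relation_filenode('t_test');

   test=# ROLLBACK;

   ROLLBACK

Inside the transaction the second call returns a different filenode; after the ROLLBACK the original one is visible again.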

Of course, this can only be done if you are not using streaming replication or WAL archiving (wal_level = minimal).
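
In postgresql.conf this corresponds to settings along these lines (a sketch; changing wal_level requires a restart, and on recent releases max_wal_senders must be 0 for wal_level = minimal):

   wal_level = minimal
   archive_mode = off
   max_wal_senders = 0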

However, if you have a single-node system, bypassing the transaction log is a pretty neat optimization and can speed things up considerably.

Hans-Juergen Schoenig
Hans-Jürgen Schönig has 15 years of experience with PostgreSQL. He is a consultant and CEO of the company „Cybertec Schönig & Schönig GmbH“ (www.postgresq-support.de), which has served countless customers around the globe.