PostgreSQL underused features – WAL compression

The WAL compression feature

With the launch of Postgres 9.5 a new setting called "wal_compression" was introduced, which decreases IO load at the cost of some CPU load. And that is a desirable compromise considering the typical constraints of modern hardware. But somehow the feature didn't get the publicity it deserved: it wasn't even mentioned in the "what's new" roundups, and by default it's turned off. So I thought I'd try to broadcast the message again and provide a simple test scenario to highlight the benefits.
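For reference, enabling the feature is a single line in postgresql.conf (what gets compressed are the full-page images written to WAL after each checkpoint):

```
# postgresql.conf - compress full-page writes in the WAL (Postgres 9.5+)
wal_compression = on
```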

Test setup

First we'll run the small script you can find below (for the "do try this at home" folks out there) twice, once with wal_compression "on" and once with it "off". The testing scenario itself is pretty simple; we rely on our good old friend pgbench again. First we initialize the pgbench schema with a "scale" value of 100, translating into 10 million rows in the main "pgbench_accounts" table and about 1.3 GB of on-disk size. Then we do a short benchmark run with 200k transactions (100k per client, two clients). Default pgbench transactions consist mostly of random updates over the whole primary key space.

We're only interested in the amount of WAL written though, so we take note of the pg_xlog folder size after every step. One thing to notice here is that for the WAL files to accumulate we'll need to set wal_keep_segments (min_wal_size would do the trick also) to a high value, so that no WAL files are recycled and we can measure the write activity directly off the filesystem.
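Concretely, something like the following in postgresql.conf does the job (1000 is just a generous value for this test's data volume, not a general recommendation):

```
# keep up to 1000 16MB segments around so that pg_xlog only ever grows
# and "du" on the folder reflects the total WAL written
wal_keep_segments = 1000
```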

Here is our test script.

#!/bin/bash

FOLDER=wal_compression2   # data dir; use e.g. wal_compression3 for the 2nd run
PORT=5433                 # any free port
SCALE=100                 # 10 million pgbench_accounts rows, ~1.3 GB on disk
TX=100000                 # per client. we use 2 clients

echo "running initdb..."
initdb -D $FOLDER &>/dev/null

OPTS=$(cat <<HERE
port = $PORT
wal_compression = on
wal_keep_segments = 1000
HERE
)
echo "$OPTS" >> $FOLDER/postgresql.conf

echo "starting postgresql on port $PORT..."
pg_ctl -D $FOLDER -l /dev/null -w start

echo "size after initdb"
du -b -s $FOLDER/pg_xlog

pgbench -p $PORT -i -s $SCALE &>/dev/null

echo "size after testdata creation"
du -b -s $FOLDER/pg_xlog

pgbench -p $PORT -c 2 -j 2 -t $TX -M prepared &>/dev/null

echo "size after $TX transactions"
du -b -s $FOLDER/pg_xlog

pg_ctl -D $FOLDER stop
#rm -rf $FOLDER

Test results

After going through the two runs I got the following numbers:

1st run, wal_compression = on

size after initdb
16785408 wal_compression2/pg_xlog/
size after testdata creation
1207967744 wal_compression2/pg_xlog
size after 100000 transactions
1509957632 wal_compression2/pg_xlog

2nd run, wal_compression = off

size after initdb
16785408 wal_compression3/pg_xlog/
size after testdata creation
1308631040 wal_compression3/pg_xlog
size after 100000 transactions
2382381056 wal_compression3/pg_xlog

Now let’s do some calculations…

krl@postgres=# select (1308631040 - 1207967744 - 2*16785408) / (1207967744 - 16785408)::float;
   ?column?
---------------
 0.05632427...
(1 row)

krl@postgres=# with q_no_comp_diff as (select 2382381056 - 1308631040 as value),
q_comp_diff as (select 1509957632 - 1207967744 as value)
select (q_no_comp_diff.value - q_comp_diff.value) / q_comp_diff.value::float from q_no_comp_diff, q_comp_diff;
   ?column?
---------------
 2.5555826...
(1 row)

What we can see is that the size difference during initialization (sequential INSERTs) of the test schema yields a moderate ~6%, but for the random updates a whopping ~255% more WAL (roughly 2.5 times as much) was written without compression!
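If you'd rather not start psql for this, the same arithmetic can be reproduced in plain shell (awk handles the floating point division; the byte counts are copied from the du output above):

```shell
# pg_xlog sizes in bytes, taken from the du output above
INITDB=16785408          # right after initdb (identical in both runs)
COMP_LOAD=1207967744     # after test data creation, wal_compression = on
COMP_END=1509957632      # after the benchmark run, wal_compression = on
NOCOMP_LOAD=1308631040   # after test data creation, wal_compression = off
NOCOMP_END=2382381056    # after the benchmark run, wal_compression = off

# sequential load: extra WAL without compression, relative to the compressed run
LOAD_PCT=$(awk -v a=$NOCOMP_LOAD -v b=$COMP_LOAD -v i=$INITDB \
    'BEGIN { printf "%.1f", (a - b - 2*i) / (b - i) * 100 }')

# random updates: extra WAL without compression, relative to the compressed run
UPD_PCT=$(awk -v a=$NOCOMP_END -v b=$NOCOMP_LOAD -v c=$COMP_END -v d=$COMP_LOAD \
    'BEGIN { printf "%.1f", ((a - b) - (c - d)) / (c - d) * 100 }')

echo "data load: +${LOAD_PCT}%"   # ~6%
echo "updates:   +${UPD_PCT}%"    # ~255%
```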


Ok, conclusion time. As we used a synthetic dataset, we can say that if you're doing mostly random updates over the whole dataset, enabling wal_compression can bring you a noticeable improvement: a decrease in replication lag when the network between replicas is slow, disk space savings for your WAL archive when doing PITR, and, most importantly, a general decrease in server disk IO contention. You should definitely give this setting a try! And the best part is that it's as easy as flipping the flag in your config and reloading the server configuration (no restart needed).

Kaarel Moppel
I've been interested in databases for the last 10 years, and have worked exclusively with PostgreSQL for the last 6. Still, I'm constantly surprised by its powerful set of features and the fast pace of development by the globally friendly community. In my spare time I enjoy playing soccer and travelling.