
Tracking changes in PostgreSQL

12.2013

UPDATED 21 March 2023: Tracking database changes and tracing users has always been a vitally important part of PostgreSQL database and application security. Especially when critical data are stored, it might be interesting to know who changed which data, when, and how.

Generic Changelog Triggers in PostgreSQL are the key to tracking changes

To track the changes made to tables in PostgreSQL, you can write yourself a generic changelog trigger. The easiest way to do that is to write a simple PL/pgSQL function and use it for all tables in the system. As PostgreSQL provides good support for stored procedures, this is definitely not hard to do. In this post you will learn how this can be done.

Create a table to store some history - tracking changes in PostgreSQL databases

First of all, we need a table to store the changes. For a quick prototype, we can use the following table structure:
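A minimal sketch of such a history table could look like this (the columns who, new_val and old_val are referenced later in the text and in the comments; the remaining names are illustrative):

CREATE TABLE t_history (
    id          serial,
    tstamp      timestamptz DEFAULT now(),   -- when the change happened
    schemaname  text,                        -- schema of the modified table
    tabname     text,                        -- name of the modified table
    operation   text,                        -- 'INSERT', 'UPDATE' or 'DELETE'
    who         text DEFAULT current_user,   -- user who made the change
    new_val     jsonb,                       -- row after the change
    old_val     jsonb                        -- row before the change
);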

For the sake of simplicity, we didn't use enumerated types, although an enum would store the operation more efficiently. What is also important: the data we store should be in the most generic format possible, which is why we used the jsonb data type.
The point of this table is to keep track of all changes made to other tables. We want to know which operation has taken place. Whenever we insert, update or delete a row somewhere, we want to keep a record of that operation.

The backbone of the change-tracking infrastructure in PostgreSQL

Taking this into account, we come up with the following code:
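A sketch of such a generic trigger function, matching the t_history table above:

CREATE FUNCTION change_trigger() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO t_history (schemaname, tabname, operation, new_val)
        VALUES (TG_TABLE_SCHEMA, TG_TABLE_NAME, TG_OP, row_to_json(NEW)::jsonb);
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO t_history (schemaname, tabname, operation, new_val, old_val)
        VALUES (TG_TABLE_SCHEMA, TG_TABLE_NAME, TG_OP,
                row_to_json(NEW)::jsonb, row_to_json(OLD)::jsonb);
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        INSERT INTO t_history (schemaname, tabname, operation, old_val)
        VALUES (TG_TABLE_SCHEMA, TG_TABLE_NAME, TG_OP, row_to_json(OLD)::jsonb);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;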

The interesting part here is that the trigger is totally generic: the row_to_json function can encode any table row as a JSON object, so we can use the very same trigger for countless tables. OLD and NEW contain the row before and after the data modification, respectively.
Once we have the backbone in place, we can test things using a table.

Here is an example:
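A simple table is enough for the demonstration (the names are made up):

CREATE TABLE t_trig (
    id    int,
    name  text
);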

Once we have created our tables, we can deploy triggers to do the real work:
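Assuming the t_trig table and the change_trigger() function from above, deploying the trigger could look like this (on PostgreSQL before version 11, write EXECUTE PROCEDURE instead of EXECUTE FUNCTION):

CREATE TRIGGER t_change
    AFTER INSERT OR UPDATE OR DELETE ON t_trig
    FOR EACH ROW EXECUTE FUNCTION change_trigger();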

What is important here is that we are using an AFTER trigger. We have two choices: BEFORE and AFTER. A BEFORE trigger fires before the row actually makes it into the table; an AFTER trigger fires after the row has made it into the table.
The main risk here is that if multiple triggers exist on the same table, we have to make sure that the order of firing is correct: In PostgreSQL (since version 7.3), all BEFORE triggers will fire in alphabetical order (of the trigger name), and then all AFTER triggers will fire in alphabetical order. This has to be taken into account when you have more than one trigger.
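To illustrate the naming rule: if a second AFTER trigger existed on the same table (audit_mail() is a hypothetical second trigger function), the trigger names alone would decide which one runs first:

-- both are AFTER ROW triggers, so 'a_log' fires before 'b_mail'
CREATE TRIGGER a_log  AFTER UPDATE ON t_trig
    FOR EACH ROW EXECUTE FUNCTION change_trigger();
CREATE TRIGGER b_mail AFTER UPDATE ON t_trig
    FOR EACH ROW EXECUTE FUNCTION audit_mail();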

Let's test the trigger and see what happens:
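For instance (the values are made up for the demonstration):

INSERT INTO t_trig (id, name) VALUES (1, 'hans');
UPDATE t_trig SET id = 10 * id, name = 'paul' WHERE id = 1;
DELETE FROM t_trig WHERE id = 10;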

Our history table will contain all changes we have made to the underlying tables:
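Given the sketch above, each modification shows up as one row holding the timestamp, the user, the operation, and the old and/or new version of the row as jsonb:

SELECT * FROM t_history;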

Security considerations when tracking changes in a PostgreSQL database

If we use this mechanism to track changes in order to protect our data, we have to make sure that somebody changing the data cannot change the log as well. Therefore, we have to take precautions for this special case. One way to do this is to mark the trigger function as SECURITY DEFINER. This means that the function is executed not as the user making the change to the table, but as the user who created the function. If the superuser deploys the trigger, we can protect the log against tampering.
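A sketch of that lock-down, assuming the objects from above: ordinary roles get no privileges on the history table, so the only write path is the SECURITY DEFINER trigger function deployed by the superuser:

REVOKE ALL ON t_history FROM PUBLIC;
-- only the table owner (the superuser who deployed the trigger)
-- can now read or modify the log directly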

Finally …

Tracking changes in the database is an important issue for many people. We hope the code listed above is a blueprint which will help you get started quickly.

If you want to find out more about PostgreSQL, learn how to write SECURITY DEFINER functions securely in this blog post: https://www.cybertec-postgresql.com/en/abusing-security-definer-functions/


In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Twitter, Facebook, or LinkedIn.

Comments
Stephan Hochdörfer (10 years ago):

Was there a specific reason why you chose json in favor of hstore? Dimitri Fontaine made a similar post[1] a while back using hstore. What I really like about the hstore approach is the easy way of figuring out the differences between the new_val and old_val.

[1] http://tapoueh.org/blog/2013/08/27-auditing-changes-with-hstore.html

Cybertec Schönig & Schönig GmbH:

I just used JSON because it is more widely known by people. If I had the choice in production I would most likely go for hstore. I just thought JSON would be interesting to more people. Thank you for the feedback 🙂

Stephan Hochdörfer (10 years ago):

Ok, that makes sense 😉 I thought you might have gone down the JSON path because hstore is not part of the core Postgres distro and needs to be enabled manually.

Guest (10 years ago):

One of the reasons why people put off record-level logging is that it used to require a trigger that knows about the columns in the subject table, and the history table had to have all the columns that the subject table has. Hstore removed that need quite some time ago, and with JSON being as popular as it is today, it's good that tricks like these are being posted once in a while. This particular one can really save your butt when the users of the database start making naughty claims about values that mysteriously changed in the database...

However, I would probably not log the NEW value, because that is the same value as the current record in the actual table, and not storing it in the history does reduce the size of the table by half, which is not a bad thing if you have lots of updates.

David Fetter (10 years ago):

It's not that people haven't come up with this idea before. It's that it's pretty useless when you're actually trying to query the cleverly-plowed-together data. I suspect event triggers will make it possible to auto-create audit logging for each table, which while a little less clever would actually help you once you need to query them.

Radovan (10 years ago):

This approach solves the problem of tables changing their structure. The tablelog package has that issue.

Maximilian Tyrtania (10 years ago):

Is there a specific reason why you used a "before" trigger as opposed to an "after" one? The problem I see with a before trigger is that another before trigger executed later on might decide to cancel the action.

Hans-Jürgen Schönig (10 years ago):

yes, because it is a plain mistake ;).

Cristian Pascottini (8 years ago):

Great tip, thanks!
I've put all in a SQL script:
https://gist.github.com/cristianp6/29ce1c942448e95c2f95
Any advice or improvement is welcome 😉

Міша Василюк (7 years ago):

Thank you very much!! Your article helped me greatly. :))))

Rein Petersen (7 years ago):

How would you go about cycling out and/or aggregating log data, so that your database doesn't grow beyond a manageable size while keeping such logs?

ASHu2 (4 years ago):

mmmm cool!

Randall Schmidt (1 year ago):

I think you can simplify this a lot by getting rid of the if statement and always doing the same INSERT. If OLD or NEW is null, then ROW_TO_JSON returns null. That's what I'm doing and it works fine unless I'm missing something.

Tim (3 years ago):

Is it possible to use the log to roll back to the old state of the table? How can that be done easily?

Hans-Jürgen Schönig (3 years ago, in reply to Tim):

Those things are tricky. Just consider things such as TRUNCATE, ALTER TABLE and so on ...

Tim (3 years ago):

What if we only consider three operations on the table: INSERT, DELETE and UPDATE?

Tim (3 years ago, in reply to Tim):

For example, if I insert one row in a table:

INSERT INTO table (id, column1, column2) VALUES (1, 'value1', 'value2')

Then if I want to roll back later, I need to explicitly provide the SQL statement:

DELETE FROM table WHERE id=1 AND column1='value1' AND column2='value2'

Can this undo statement be generated programmatically?

Linjie Zhang (3 years ago):

Fantastic article, thank you! If we don't delete any logs, then we needn't save the old_val, because the old value has already been saved. I have an idea that a blockchain might help, if we encrypt these logs before uploading them to the blockchain.

Wong Kai Wen (4 years ago):

Thank you very much! Your sharing helps me a lot!

raguru ramsai (5 years ago):

Is there any way to also monitor IP addresses in the same way?
--as multiple users use the same default user_id

Georg Klimm (1 year ago):

If I do not want to create a role in the database for each user of the application, it may be useful to start with
set application_name = 'applicationuser@application';
and the definition
who text DEFAULT current_setting('application_name'::text)
to get the user from the application into the database.
