Preamble
Although using non-core Postgres extensions or forks is generally discouraged, there are rare instances where doing so is justified. And because the BDR project is here to stay (as far as I can tell, its main functionality will gradually be integrated into Postgres core), the following information may well remain useful in the future. When we received a client request to support them with custom replication conflict handlers, we were surprised to find that the official documentation (as well as our big brother, the Internet) is sadly rather empty in that area, with no working code examples. Perhaps that is even on purpose, as a warning that you are already walking on thin ice and should exercise caution. In short, after doing some research I became familiar with the subject, so let me give you a brief overview of what’s involved.
About BDR in two sentences
First off, for those unfamiliar with the relatively new BDR (Bi-Directional Replication) project: it enables asynchronously replicated Master-Master Postgres clusters of up to 48 nodes. In these clusters the nodes are largely independent of one another and simply try to fetch and apply (synchronize) each other’s data changes on a “best effort” basis. The biggest restriction is probably that all of your tables must have a UNIQUE or PRIMARY KEY constraint, and that exclusion constraints and DDL operations causing full table rewrites are disabled, so that there is a real chance of keeping data synchronized. In practice this shouldn’t be a big issue if you adhere to good database design practices. There are some additional corner-case issues though (see the project’s GitHub issue tracker), so I advise conducting extensive testing for your specific use case before going “all in” on BDR. In general, however, it is still a very potent tool and unquestionably the future.
BDR conflict handling basics
But back to conflict handling. When you model your data in accordance with the BDR guidelines, the framework will typically resolve conflicts where two nodes changed or inserted the same data row at the same time, all by itself. Since there is no global locking, all nodes simply try to automatically sync with one another, working with local node data and applying a “last timestamp wins” policy. That should cover the vast majority of use cases. Just keep in mind that some data inserts or updates will simply be discarded as a result, so applications should be designed accordingly.
But what if we still want to tailor how conflicts are handled? The framework provides a way to do that: you can declare ordinary PL/pgSQL stored procedures with a specific signature that receive the conflicting rows as input and let you choose whether to merge some column values or discard one of the rows. We’ll build a sample around the merging use case, since that is likely the most popular reason for writing custom handlers. Also keep in mind that custom handling is only possible when exactly one UNIQUE constraint is violated, so try not to have too many!
Setting up a sample handler function
Because user-written conflict handler stored procedures take table-specific (row type) inputs, each handled table requires at least one separate function. However, since the handler function also receives the conflict type (INSERT vs INSERT, INSERT vs UPDATE, etc.) as input, you can use IF statements to cover all the conflict types required for one table within a single function.
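Such a multi-type handler could be sketched as follows. This is my own illustration for a hypothetical table public.mytable (not from the BDR docs), and the exact semantics of the handler actions should be verified against the documentation:

```sql
-- Sketch only: one handler covering several conflict types for a
-- hypothetical table public.mytable.
CREATE OR REPLACE FUNCTION public.mytable_conflict_handler(
    row1 public.mytable,
    row2 public.mytable,
    table_name text,
    table_regclass regclass,
    conflict_type bdr.bdr_conflict_type,
    OUT row_out public.mytable,
    OUT handler_action bdr.bdr_conflict_handler_action)
RETURNS record AS
$BODY$
BEGIN
    IF conflict_type = 'insert_insert' THEN
        -- e.g. keep the locally existing row
        row_out := row1;
        handler_action := 'ROW';
    ELSIF conflict_type = 'insert_update' THEN
        -- e.g. fall back to the default "last timestamp wins" resolution
        handler_action := 'IGNORE';
    ELSE
        -- e.g. skip further custom handling for the remaining types
        handler_action := 'SKIP';
    END IF;
END;
$BODY$ LANGUAGE plpgsql;
```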
A simple INSERT vs. INSERT conflict handler that merges, say, our website hit counters will look like this. (Imagine two data centers, each with a web server and one BDR database node, plus some kind of inflexible log monitoring tool that can only do an INSERT at midnight, with a fixed primary key ID translating to yesterday’s date.)
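The handler assumes a minimal table along these lines. The schema is not spelled out in the post; it is reconstructed from the log output further down, where the rows appear as (20170317,100):

```sql
-- Assumed minimal schema: one hit-counter row per day,
-- with yesterday's date encoded in the primary key as YYYYMMDD.
CREATE TABLE public.hitcount (
    id int PRIMARY KEY,   -- e.g. 20170317
    counter int NOT NULL  -- hits counted for that day
);
```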
CREATE OR REPLACE FUNCTION public.hitcount_conflict_handler_ins_ins(
    row1 public.hitcount,
    row2 public.hitcount,
    table_name text,
    table_regclass regclass,
    conflict_type bdr.bdr_conflict_type, /* [insert_insert | insert_update | update_update | update_delete | delete_delete | unhandled_tx_abort] */
    OUT row_out public.hitcount,
    OUT handler_action bdr.bdr_conflict_handler_action) /* [IGNORE | ROW | SKIP] */
RETURNS record AS
$BODY$
BEGIN
    RAISE WARNING 'conflict detected for public.hitcount, old_row: %, incoming_row: %', row1, row2;
    -- code to choose the output row or to merge values
    row1.counter = row1.counter + row2.counter;
    row_out := row1;
    handler_action := 'ROW';
END;
$BODY$ LANGUAGE plpgsql;

-- after writing the handler procedure we also need to register it as a handler
SELECT *
FROM bdr.bdr_create_conflict_handler(
    ch_rel := 'hitcount',
    ch_name := 'hitcount_ins_ins_handler',
    ch_proc := 'public.hitcount_conflict_handler_ins_ins(public.hitcount, public.hitcount, text, regclass, bdr.bdr_conflict_type)',
    ch_type := 'insert_insert');
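Should you later need to change the handler (for example, its signature), it has to be de-registered first. Assuming the bdr.bdr_drop_conflict_handler function described in the BDR manual, that would look roughly like:

```sql
-- De-register the handler before replacing it;
-- bdr.bdr_drop_conflict_handler is documented in the BDR manual.
SELECT * FROM bdr.bdr_drop_conflict_handler('hitcount', 'hitcount_ins_ins_handler');
```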
After successful definition our conflict resolution handler kicks into action, and the server log shows the following messages:
2017-03-17 09:56:20.430 UTC,,,23833,,58cba8e3.5d19,10,,2017-03-17 09:14:11 UTC,5/32,719,WARNING,01000,"conflict detected for public.hitcount, old_row: (20170317,100), incoming_row: (20170317,50)",,,,,"apply INSERT from remote relation public.t1 in commit 0/18DC780, xid 739 commited at 2017-03-17 09:56:20.425756+00 (action #2) from node (6398391759225146474,1,16385)",,,,"bdr (6398391788660796561,1,16385,): apply"

2017-03-17 09:56:20.430 UTC,,,23833,,58cba8e3.5d19,12,,2017-03-17 09:14:11 UTC,5/32,719,LOG,23000,"CONFLICT: remote INSERT: row was previously INSERTed at node 6398391788660796561:1. Resolution: conflict_trigger_returned_tuple; PKEY: id:20170317 counter[int4]:100",,,,,"apply INSERT from remote relation public.hitcount in commit 0/18DC780, xid 739 commited at 2017-03-17 09:56:20.425756+00 (action #2) from node (6398391759225146474,1,16385)",,,,"bdr (6398391788660796561,1,16385,): apply"
In addition to the server log output, conflict handler “audit logging” into the dedicated bdr.bdr_conflict_history table can be enabled by setting bdr.log_conflicts_to_table=on in the server configuration. There the same information is simply better structured, but you may also need some sort of cron job to periodically clean up the table.
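Inspecting and pruning that history table could then look roughly like this. This is a sketch; the column name local_conflict_time is an assumption based on the BDR docs, so check the table definition on your cluster first:

```sql
-- Review the most recent conflicts (column name assumed from the BDR docs)
SELECT * FROM bdr.bdr_conflict_history
ORDER BY local_conflict_time DESC
LIMIT 10;

-- Possible cron-job cleanup: purge entries older than 30 days
DELETE FROM bdr.bdr_conflict_history
WHERE local_conflict_time < now() - interval '30 days';
```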
Avoid conflicts if possible
Although we’ve seen that handling conflicts is quite manageable, the best conflict handling strategy is to avoid conflicts altogether. Therefore, before launching your application, I’d advise trying to design a data model where all applications use UUIDs, or write only to a few disjoint data partitions. An excerpt from the BDR documentation:
The simplest way to prevent conflicts is to only ever write to one node, or to only ever write to independent subsets of the database on each node, since conflicts can only occur when things are happening at the same time on multiple nodes.
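Globally unique keys are one easy way to get there. The example below is my own addition, not from the BDR docs; gen_random_uuid() comes from the pgcrypto extension that ships with Postgres:

```sql
-- Conflict avoidance sketch: with UUID primary keys, two nodes can
-- never INSERT the same key value concurrently.
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE public.hits (
    id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    hit_time timestamptz NOT NULL DEFAULT now()
);
```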
NB! For production purposes, use the most recent 1.0.2 version of BDR; older versions had a bug where the remotely changed tuple was not populated correctly for insert/insert handlers.
About Enteros
Enteros offers a patented database performance management SaaS platform. It finds the root causes of complex database scalability and performance problems that affect business across a growing number of cloud, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.
Are you interested in writing for Enteros’ Blog? Please send us a pitch!