What are LISTEN and NOTIFY?
PostgreSQL's LISTEN/NOTIFY capability lets users listen in on database activity. Despite being one of the older features, it is still one of the most widely used. What does this asynchronous notification interface (LISTEN/NOTIFY) do, and why is it useful? The essential idea is to avoid polling.
Polling code typically looks something like this:
while true
    SELECT * FROM todo_list;
    sleep;
end
If thousands of clients poll the database continuously, a lot of load is put on it for no reason, because the majority of polling requests come back empty. There has to be a better way. LISTEN/NOTIFY provides that alternative: instead of repeatedly polling the database, we connect and wait for a relevant event to wake us up.
Set up PostgreSQL notifications
Two commands are relevant for using PostgreSQL notifications: LISTEN and NOTIFY.
test=# \h LISTEN
Command:     LISTEN
Description: listen for a notification
Syntax:
LISTEN channel

URL: https://www.postgresql.org/docs/15/sql-listen.html
LISTEN makes your database connection listen on a "channel". A channel is essentially just a name: there is no need to create it or to make sure it already exists. The connection simply watches this name and waits for a notification to arrive.
The NOTIFY command sends a notification to all active database connections that are waiting on a channel:
test=# \h NOTIFY
Command:     NOTIFY
Description: generate a notification
Syntax:
NOTIFY channel [ , payload ]

URL: https://www.postgresql.org/docs/15/sql-notify.html
To send a notification, all that is required is a channel name and an optional payload, a plain string that is transmitted to the receiver.
Let's put it to the test and see how it works:
test=# LISTEN x;
LISTEN
LISTEN tells the backend that we wish to be informed about messages arriving on channel "x". An application typically listens to only one channel, but nothing prevents us from issuing several LISTEN commands to listen to more than one channel at once. This is entirely possible and, occasionally, even quite desirable.
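For illustration, a single connection could subscribe to several channels at once (the channel names here are made up for the example):
test=# LISTEN orders;
LISTEN
test=# LISTEN payments;
LISTEN
This connection now receives notifications sent to either channel.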
What happens when a notification is issued?
test=# NOTIFY x, 'some message';
NOTIFY
Asynchronous notification "x" with payload "some message" received from server process with PID 62451.
The notification is delivered to every connection that has used LISTEN to subscribe to the same channel.
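To see this, a quick sketch with two psql sessions that have both already executed LISTEN x: any session that then sends a notification reaches both listeners.
NOTIFY x, 'hello listeners';
Both listening sessions receive the message as an "Asynchronous notification" line like the one shown above. Note that plain psql only displays pending notifications the next time it talks to the server, for example after the next command is run.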
Use triggers with LISTEN / NOTIFY
When a row is added to a table, we frequently want to let the client know. The easiest way to do that is to install a trigger on the table. Here is an example that shows how this works:
CREATE TABLE t_message (
    id      serial,
    t       timestamptz DEFAULT now(),
    message text
);

CREATE FUNCTION capture_func() RETURNS trigger AS
$$
DECLARE
    v_txt text;
BEGIN
    v_txt := format('sending message for %s, %s', TG_OP, NEW);
    RAISE NOTICE '%', v_txt;
    EXECUTE FORMAT('NOTIFY mychannel, ''%s''', v_txt);
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';

CREATE TRIGGER mytrigger
    BEFORE INSERT OR UPDATE ON t_message
    FOR EACH ROW EXECUTE PROCEDURE capture_func();
Another way: pg_notify()
The code above shows how to create a table and the trigger that delivers the notifications. In my example, EXECUTE was used to run dynamic SQL. However, there is an alternative approach:
SELECT pg_notify('mychannel', v_txt);
The pg_notify() function is a more elegant way to send the notification, and it avoids the need to quote the payload manually.
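As a sketch, the trigger function above could be rewritten to use pg_notify() instead of EXECUTE (same table, channel, and variable names as before):
CREATE OR REPLACE FUNCTION capture_func() RETURNS trigger AS
$$
DECLARE
    v_txt text;
BEGIN
    v_txt := format('sending message for %s, %s', TG_OP, NEW);
    RAISE NOTICE '%', v_txt;
    -- pg_notify() takes the payload as an ordinary function argument,
    -- so no dynamic SQL and no manual quoting are needed
    PERFORM pg_notify('mychannel', v_txt);
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';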
To test the function, we run an INSERT statement:
INSERT INTO t_message (message) VALUES ('sample text');
The message will be sent:
NOTICE: sending message for INSERT, (1,"2022-07-13 16:18:24.709008+02","sample text")
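A session that has run LISTEN mychannel would then receive something along these lines (a sketch; the PID and timestamp will of course differ):
Asynchronous notification "mychannel" with payload "sending message for INSERT, (1,"2022-07-13 16:18:24.709008+02","sample text")" received from server process with PID 62451.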
Note that the notification is only delivered once the transaction is actually committed. Why does that matter? If the notification were delivered immediately, the listener could react to a change that is not yet visible to other transactions; remember that, until you commit, your modifications are visible only inside your own transaction. Only after the first transaction has committed successfully does the message reach the listening connection. In other words, notifications are transactional: they are sent on COMMIT and discarded on ROLLBACK.
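A quick way to observe this behavior, assuming a second session is still listening on channel x as above:
test=# BEGIN;
BEGIN
test=# NOTIFY x, 'you will never see this';
NOTIFY
test=# ROLLBACK;
ROLLBACK
The listening session never receives this notification, because the transaction rolled back; replacing ROLLBACK with COMMIT delivers it at commit time.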
About Enteros
Enteros offers a patented database performance management SaaS platform. It proactively identifies root causes of complex business-impacting database scalability and performance issues across a growing number of clouds, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.