In a queue implementation in PostgreSQL, a producer inserts a row into a table and consumers remove rows from that table.
For example, pgqrs stores items for a queue my_queue in a table pgqrs.q_my_queue.
Consumers can either poll the table for new items or use the LISTEN/NOTIFY feature, in which consumers LISTEN on a channel and a producer issues a NOTIFY on that channel.
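For illustration, here is a minimal sketch of the LISTEN/NOTIFY flow; the channel name and the payload column are assumptions made for the example, only the pgqrs.q_my_queue table name comes from pgqrs:

```sql
-- Consumer session: subscribe once, then wait for notifications.
LISTEN my_queue;

-- Producer session: enqueue an item and notify in the same transaction,
-- so listeners are only woken if the insert actually commits.
BEGIN;
INSERT INTO pgqrs.q_my_queue (payload) VALUES ('{"job": "send_email"}');
NOTIFY my_queue, 'new item';
COMMIT;
```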
The two major disadvantages of LISTEN/NOTIFY are:
- Limited number of consumers: each consumer needs a dedicated connection, or session pooling mode in a connection pooler such as PgBouncer.
- Limited number of concurrent NOTIFY messages: at high message rates, the mechanism may not scale.
Limits of LISTEN/NOTIFY
PostgreSQL’s LISTEN/NOTIFY subsystem is backed by a global circular queue stored in the pg_notify/ directory.
Each NOTIFY adds an entry to this shared queue, and each listening backend receives a signal and then reads the queue to pick up its notifications.
Even though each page ("bucket") in the PostgreSQL pg_notify queue is relatively small (BLCKSZ, 8 KB by default), every NOTIFY requires the database to do more than just append an entry to the current page.
It must:
- Acquire and release locks to safely append the notification entry.
- Write (or mark) the page in the SLRU (Simple LRU) segment that backs pg_notify.
- Send a signal (SIGUSR1) to every listener session, so they wake up and check for new messages.
When notifications happen very frequently, all of these extra steps add up and cause noticeable commit delays, even if the actual disk write (8 KB) isn’t the slow part.
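A rough way to see this commit-time cost from psql (the channel name is illustrative): notifications issued inside a transaction are only appended to the shared queue, and listeners signaled, when the transaction commits, so the commit itself absorbs the overhead.

```sql
\timing on
BEGIN;
-- Queue 10,000 notifications locally; nothing is visible to listeners yet.
SELECT pg_notify('bench_channel', i::text) FROM generate_series(1, 10000) AS i;
-- All entries are appended to the pg_notify queue and listeners are signaled here.
COMMIT;
```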
NOTIFY Queue Size is sufficient
The NOTIFY docs mention that the total capacity of the circular buffer is 8 GB, which is quite large for most use cases.
Relevant constants and defaults for this capacity are:
| Parameter | Default | Code Reference | Meaning |
|---|---|---|---|
| max_notify_queue_pages | 1048576 | src/backend/commands/async.c:428 (GUC) | Maximum number of pages allowed in the queue |
| Page size (BLCKSZ) | 8192 B | pg_config_manual.h | Size of each SLRU page |
If fully utilized:
1,048,576 pages × 8,192 bytes = 8,589,934,592 bytes ≈ 8 GiB
That is the theoretical cap on the queue’s on-disk footprint, not a pre-allocated size.
Although the configuration allows up to 8 GiB of storage, real-world limits are much smaller.
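On versions that expose the cap as the max_notify_queue_pages GUC (referenced in the table above), it can be inspected directly; this is a sketch, so check your server version before relying on it:

```sql
SHOW max_notify_queue_pages;   -- default 1048576 pages, i.e. ~8 GiB at 8 KB per page
```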
Signal Delivery Overhead
Each NOTIFY causes PostgreSQL to send a SIGUSR1 signal to every backend in the database that is currently listening, regardless of channel; filtering by channel happens only when each listener reads the queue.
This happens in SignalBackends() inside
src/backend/commands/async.c.
While signal delivery is asynchronous and relatively fast, it scales poorly: as the number of listeners increases, each notification must still wake every listening backend process. Even idle listeners consume kernel resources and increase inter-process signaling overhead.
Community benchmarks show this effect clearly:
“With 1,000 idle listeners, a NOTIFY round-trip goes from ~0.4 ms to ~14 ms.”
For lightweight coordination with tens of listeners, the cost is negligible. However, with thousands of concurrent listeners, CPU usage and latency rise steeply because each NOTIFY triggers thousands of kernel signals.
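A crude way to reproduce this effect, assuming you open many idle listener sessions (for example, via a shell loop that launches psql processes) and using an arbitrary channel name:

```sql
-- In each of the N listener sessions:
LISTEN bench_channel;

-- In one producer session, with psql's \timing on, the commit that delivers
-- the NOTIFY gets slower as the number of idle listeners grows:
\timing on
BEGIN;
NOTIFY bench_channel, 'ping';
COMMIT;
```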
Shared Queue Lock Contention
At transaction commit, PostgreSQL appends notifications to the global queue
through PreCommit_Notify(). This code path acquires a global lock
(NotifyQueueLock) to serialize writes into the pg_notify SLRU.
From async.c:
/* PreCommit_Notify(), simplified: append pending notifications page by page */
LWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);
nextNotify = asyncQueueAddEntries(nextNotify);
LWLockRelease(NotifyQueueLock);
This design guarantees consistency but limits scalability. Multiple concurrent producers must take the same lock during commit, which effectively serializes NOTIFY traffic across all sessions.
Tom Lane describes this serialization in a mailing-list reply:

FWIW, the lock seems to be the one taken to serialize insertions into the shared NOTIFY queue, from this bit in commands/async.c:
/*
* Serialize writers by acquiring a special lock that we hold till
* after commit. This ensures that queue entries appear in commit
* order, and in particular that there are never uncommitted queue
* entries ahead of committed ones, so an uncommitted transaction
* can't block delivery of deliverable notifications.
*
* We use a heavyweight lock so that it'll automatically be released
* after either commit or abort. This also allows deadlocks to be
* detected, though really a deadlock shouldn't be possible here.
*
* The lock is on "database 0", which is pretty ugly but it doesn't
* seem worth inventing a special locktag category just for this.
* (Historical note: before PG 9.0, a similar lock on "database 0" was
* used by the flatfiles mechanism.)
*/
LockSharedObject(DatabaseRelationId, InvalidOid, 0,
AccessExclusiveLock);
This lock is held while inserting the transaction’s notify message(s), after which the transaction commits and releases the lock. There’s not very much code in that window. So what we can conclude is that some other transaction also doing NOTIFY hung up within that sequence for something in excess of 3 seconds. We have been shown no data whatsoever that would allow us to speculate about what’s causing that other transaction to take so long to get through its commit sequence.
regards, tom lane
In practice, throughput drops rapidly once you have many simultaneous writers or high NOTIFY rates (> a few thousand per second).
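One way to observe this serialization with concurrent producers is a pgbench script that does nothing but NOTIFY; the file name and channel below are hypothetical:

```sql
-- notify_bench.sql
-- Run with something like: pgbench -n -f notify_bench.sql -c 32 -j 8 -T 30
-- Throughput should flatten well before 32 clients, since every commit
-- funnels through the same shared queue lock.
NOTIFY bench_channel, 'ping';
```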
A laggard consumer can fill up the queue
NOTIFY docs:
There is a queue that holds notifications that have been sent but not yet processed by all listening sessions. If this queue becomes full, transactions calling NOTIFY will fail at commit.
The queue is circular: once all listening backends have advanced beyond a page, it can be reused.
Two positions are tracked in shared memory:
- Head: next free slot for new notifications
- Tail: earliest position any listener still needs
A laggard consumer prevents the tail from advancing, so space is never freed.
The check for overflow happens via asyncQueueIsFull(), which PreCommit_Notify() consults (while holding NotifyQueueLock) before appending entries:
/* src/backend/commands/async.c, PreCommit_Notify() (simplified) */
asyncQueueFillWarning();
if (asyncQueueIsFull())
    ereport(ERROR,
            (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
             errmsg("too many notifications in the NOTIFY queue")));
Once the queue fills up (the head catches up to the tail), transactions that call NOTIFY fail at commit with this error; PostgreSQL also logs warnings as the queue approaches capacity.
Real-world reports show this exact pattern:
“Postgres logs keep appearing ‘too many notifications in the NOTIFY queue’ … indicating the queue size keeps increasing up.”
Stack Overflow: Debug Postgres too many notifications in the queue
In effect, one lagging consumer (for example, a listener stuck in a long-running transaction that never reads its notifications) can block queue recycling and eventually cause NOTIFY to fail at commit for every session.
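PostgreSQL exposes the queue's fill level, which makes it possible to alert on a lagging listener before producers start failing:

```sql
-- Fraction of the NOTIFY queue currently in use (0.0 = empty, 1.0 = full).
SELECT pg_notification_queue_usage();

-- Channels the current session is listening on (run from a consumer session).
SELECT pg_listening_channels();
```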
When Does LISTEN/NOTIFY Perform Better?
LISTEN/NOTIFY works best when there are:
- a small number of consumers
- moderate message rates.
Each listener maintains a persistent session, and PostgreSQL efficiently delivers notifications for up to a few hundred clients. For low-traffic or latency-sensitive workloads, it can outperform polling because consumers are notified immediately without repeatedly querying the table.
As the number of consumers or the message rate grows, the notification queue becomes a bottleneck. Each producer's NOTIFY appends to the shared circular buffer under a global lock, so throughput degrades at tens of thousands of messages per second, and the queue can overflow if consumers lag. Beyond this point, polling scales more predictably because it can batch messages and relies only on ordinary SQL queries. At high message rates, nearly every poll finds items to consume, so polling cycles are rarely wasted.
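As a point of comparison, here is a hedged sketch of batched polling against a pgqrs-style table; the id and payload column names are assumptions for illustration, not pgqrs's actual schema:

```sql
-- Claim and remove up to 100 items in one round trip. SKIP LOCKED lets
-- multiple pollers run concurrently without blocking on each other's rows.
BEGIN;
DELETE FROM pgqrs.q_my_queue
WHERE id IN (
    SELECT id
    FROM pgqrs.q_my_queue
    ORDER BY id
    LIMIT 100
    FOR UPDATE SKIP LOCKED
)
RETURNING id, payload;
COMMIT;
```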