The vacuum pressure is real. Using a system with the SKIP LOCKED technique + polling caused massive DB perf issues as the queue depth grew. The query to find the current jobs in the queue ended up being the main performance bottleneck, which caused slower throughput, which caused a larger queue depth, which etc.
Scaling the workers sometimes exacerbates the problem because you run into connection limits, or the polling hammers the DB.
I love the idea of pg as a queue, but I'm more skeptical of it after dealing with it in production
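For reference, the polling pattern being described usually looks something like this sketch (table and column names are illustrative, not from any particular system):

```sql
-- Illustrative SKIP LOCKED dequeue: each polling worker runs this in a loop.
BEGIN;

WITH next_job AS (
    SELECT id
    FROM jobs
    WHERE status = 'pending'
    ORDER BY created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED   -- skip rows already locked by other workers
)
UPDATE jobs
SET status = 'running', started_at = now()
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.id, jobs.payload;

COMMIT;
```

Under a deep queue, the ORDER BY scan has to step over every locked and dead row before finding a claimable one, which is the feedback loop described above.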
ffsm8 1 hour ago [-]
Strange, you shouldn't have issues with vacuums on queue tables unless you're doing it wrong?
Were you not using partitions like this?
CREATE TABLE events_2026_04 PARTITION OF events
FOR VALUES FROM ('2026-04-01') TO ('2026-05-01');
CREATE TABLE events_2026_05 PARTITION OF events
FOR VALUES FROM ('2026-05-01') TO ('2026-06-01');
https://www.postgresql.org/docs/current/ddl-partitioning.htm...
> Bulk loads and deletes can be accomplished by adding or removing partitions, if the usage pattern is accounted for in the partitioning design. Dropping an individual partition using DROP TABLE, or doing ALTER TABLE DETACH PARTITION, is far faster than a bulk operation. These commands also entirely avoid the VACUUM overhead caused by a bulk DELETE.
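Concretely, retention then becomes a metadata operation rather than a bulk DELETE, e.g.:

```sql
-- Detach first so the drop doesn't need a long lock on the parent table
-- (DETACH ... CONCURRENTLY requires PG 14+ and runs outside a transaction).
ALTER TABLE events DETACH PARTITION events_2026_04 CONCURRENTLY;
DROP TABLE events_2026_04;
```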
It was a lot more annoying before PG 13 though, maybe you're just remembering issues from the 2010s?
BodyCulture 2 hours ago [-]
Is your comment referring to this project specifically?
Because the docs say:
> PgQue avoids that whole class of problems. It uses snapshot-based batching and TRUNCATE-based table rotation instead of per-row deletion.
Would be great if you could specify if you had problems with the exact implementation linked by op or if you did write about a different thing, thanks!
klysm 2 hours ago [-]
What kind of throughput are we talking about?
adhocmobility 1 hour ago [-]
Why insist on calling this a queue when it doesn't really have queue semantics? Queues do the job of load balancing between different workers: when workers acknowledge tasks, the tasks get deleted, and there are visibility timeouts.
This is a log.
It's not really solving the problems you claim it solves. It's not, for instance, a replacement for SKIP LOCKED based queues.
samokhvalov 48 minutes ago [-]
Fair. I attempted to clarify this in the README: PgQue is "closer to Kafka topics than to a job queue" -- per-subscription cursor on a shared event log, no ACK-delete, no visibility timeout.
That makes PgQue an event-streaming tool, not an MQ. For SKIP LOCKED systems like PGMQ, PgQue can still be a replacement in certain cases – similarly to how Kafka can be a replacement for RabbitMQ or ActiveMQ in certain cases.
Agreed the "queue" naming is historical and a bit loose -- https://github.com/NikolayS/pgque/issues/70
thanks for pushing back, by the way – I'm thinking this through, and will likely rename
fun fact: I now think, "River" (Go project) is also a misleading name for a task queue system :)
wewewedxfgdf 23 minutes ago [-]
How many messages per second does this do, I wonder?
odie5533 7 hours ago [-]
Postgres durability without having to run Kafka or RabbitMQ clusters seems pretty enticing. May reach for it when I next need an outbox pattern or small fan out.
saberd 7 hours ago [-]
I don't understand the latency graph. It says it has 0.25ms consumer latency.
Then in the latency tradeoff section it says end-to-end latency is between 1-2 seconds.
Is this under heavy load or always? How does this compare to pgmq's end-to-end latency?
samokhvalov 6 hours ago [-]
(PgQue author here)
I didn't understand the nuances myself at first.
We have 3 kinds of latencies when dealing with event messages:
1. producer latency – how long does it take to insert an event message?
2. subscriber latency – how long does it take to get a message? (or a batch of all new messages, like in this case)
3. end-to-end event delivery time – how long does it take for a message to go from producer to consumer?
In the case of PgQ/PgQue, the 3rd one is limited by "tick" frequency – by default, once per second (I'm thinking about how to simplify more frequent configs; pg_cron is limited to 1/s).
Meanwhile, 1 and 2 are both sub-ms for PgQue. Consumers just don't see fresh messages until a tick happens, but the consuming queries themselves are fast.
Hope this helps. Thanks for the question. Will add this to the README.
rgbrgb 7 hours ago [-]
[dead]
ozgrakkurt 57 minutes ago [-]
What do you think about trusting something LLM coded with your production data?
ofrzeta 48 minutes ago [-]
Why trust humans? What do you think about this (honest question, because I think this is quite a bit different to what is understood as vibe coding) https://news.ycombinator.com/item?id=47589856 (TJ Green implementing a new PG extension with the help of Claude Code)
ozgrakkurt 13 minutes ago [-]
I don't trust humans so much as I trust their reputation as a group.
I don't know who TJ Green is, and even if they previously worked on a database, it would take a lot of time for any new product to be trusted.
For example I would trust LLVM but I don't trust Mojo which is headed by the same person.
Putting LLMs into the equation, you would also need to trust that they don't create hidden garbage that rots the core of a project over time and makes it a pain to use. This kind of risk view is very reasonable to take, in my opinion.
For example, look at the leaked code of the Claude CLI and consider whether you'd want to use a database coded like that for a long-running project.
This will have to be proven in the future imo and I wouldn't use anything like this unless it really brings a unique benefit and is extremely useful.
carefree-bob 6 minutes ago [-]
curious, what about Mojo makes you not trust it?
cout 8 hours ago [-]
I think it's great that projects like this exist where people are building middleware in different ways than others. Still, as someone who routinely uses shared-memory queues, the idea of considering a queue built inside a database to be "zero bloat" leaves me scratching my head a bit. I can see why someone would want that, but one person's feature is another person's bloat.
pierrekin 7 hours ago [-]
In Postgres land bloat refers to dead tuples that are left in place during certain operations and need to be vacuumed later.
It’s challenging to write a queue that doesn’t create bloat, which is why this project cites it as a feature.
what 3 hours ago [-]
Can’t you just partition the table by time (or whatever) and drop old partitions and not worry about vacuuming? Why do you need to keep around completed jobs forever?
pierrekin 1 hours ago [-]
Yes you can, and at the risk of sounding a little snarky; if you do something like that and then release it as open source, people may even discuss it on HN!
halfcat 6 hours ago [-]
So if I understand this correctly, there are three main approaches:
1. SKIP LOCKED family
2. Partition-based + DROP old partitions (no VACUUM required)
3. TRUNCATE family (PgQue’s approach)
And the benefit of PgQue is the failure mode, when a worker gets stuck:
- Table grows indefinitely, instead of
- VACUUM-starved death spiral
And a table growing is easier to reason about operationally?
samokhvalov 6 hours ago [-]
Taxonomy is correct. But the benefit isn't "table grows indefinitely vs. VACUUM-starved death spiral" –
in all three approaches, if the consumer falls behind, events accumulate.
The real distinction is cost per event under MVCC pressure. Under held xmin (idle-in-transaction, long-running writer, lagging logical slot, physical standby with hot_standby_feedback=on):
1. SKIP LOCKED systems: every DELETE or UPDATE creates a dead tuple that autovacuum can't reclaim (xmin is frozen). Indexes bloat. Each subsequent FOR UPDATE SKIP LOCKED scan gets slower as it wades through those dead tuples.
2. Partition + DROP (some SKIP LOCKED systems already support it, e.g. PGMQ): old partitions drop cleanly, but the active partition is still DELETE-based and accumulates dead tuples — same pathology within the active window, just bounded by retention. Another thing is that DROPping and attaching/detaching partitions is more painful than working with a few existing ones and using TRUNCATE.
3. PgQue / PgQ: active event table is INSERT-only. Each consumer remembers its own pointer (ID of last event processed) independently. CPU stays flat under xmin pressure.
I posted a few more benchmark charts on my LinkedIn and Twitter, and plan to post an article explaining all this with examples. Among them was a 30-min-held-xmin bench at 2000 ev/s: PgQue sustains the full producer rate at ~14% CPU; SKIP LOCKED queues were pinned at 55-87% CPU with throughput dropping 20-80%, and, what's even worse, after the xmin horizon got unblocked, not all of them recovered / caught up consuming within the next 30 min.
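The cursor-per-consumer read in approach 3 can be sketched roughly like this (table, column, and consumer names are hypothetical, not PgQue's actual schema):

```sql
-- Each consumer tracks the ID of the last event it processed.
SELECT cursor_pos FROM subscriptions WHERE consumer = 'billing';

-- Fetch the next batch: a plain index range scan, no row locks, no UPDATEs
-- on the event table itself.
SELECT id, payload
FROM events
WHERE id > 41250          -- the cursor value read above
ORDER BY id
LIMIT 500;

-- After processing, advance the cursor; this single-row UPDATE is the
-- only write per batch, so the event table stays append-only.
UPDATE subscriptions
SET cursor_pos = 41750    -- max id of the processed batch
WHERE consumer = 'billing';
```

Because the event table is never UPDATEd or DELETEd, a held xmin horizon leaves no dead tuples in the hot path, which is the claim being benchmarked above.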
pierrekin 1 hours ago [-]
I think there are two kinds of partition based approach which may cause some confusion if lumped together in this kind of comparison.
Insert and delete with old partition drop vs insert only with old partition drop.
The semantics of the two approaches differ by default but you can achieve the same semantics from either with some higher order changes (partitioning the event space, tracking a cursor per consumer etc).
How does PgQue compare to the insert only partition based approach?
samokhvalov 38 minutes ago [-]
1. Partitions are never dropped – they get TRUNCATEd (gracefully) during rotation.
2. INSERT-only. Each consumer remembers its position – ID of the last event consumed. This pointer shifts independently for each consumer. It's much closer to Kafka than to task queue systems like ActiveMQ or RabbitMQ.
When you run a long-running tx with a real XID, or a read-only one in REPEATABLE READ (e.g., a long pg_dump), or a logical slot is unused/lagging, performance suffers badly if dead tuples from DELETEs/UPDATEs have accumulated but not been promptly vacuumed.
PgQue event tables are append-only, and consumers know how to find the next batch of events to consume – so an xmin horizon block doesn't affect them, by design.
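The rotation idea from point 1 might be sketched like this (simplified and hypothetical; the real PgQ/PgQue implementation details differ):

```sql
-- Events rotate through a small fixed set of tables, e.g. events_0..events_2.
-- New inserts go to the current table; older tables stay readable until
-- every subscription's cursor has moved past their highest event ID.
SELECT min(cursor_pos) FROM subscriptions;  -- slowest consumer's position

-- Once all cursors are past events_0's max event ID, it can be recycled:
TRUNCATE events_0;  -- near-instant, leaves no dead tuples behind
```

Unlike DROP/ATTACH of partitions, the table set is fixed, so no DDL churn on the parent table is needed during steady-state operation.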
andrewstuart 4 hours ago [-]
Postgres is not the only database that does queues.
Any database that supports SKIP LOCKED is fine, including MySQL, MSSQL, Oracle, etc.
Even SQLite makes a fine queue – not via SKIP LOCKED, but because writes are atomic.
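In SQLite the claim step can be a single atomic write, since only one writer runs at a time anyway (sketch with a hypothetical schema; DELETE ... RETURNING needs SQLite 3.35+):

```sql
-- SQLite: the whole claim is one atomic transaction; no SKIP LOCKED needed
-- because the single-writer model serializes competing workers.
BEGIN IMMEDIATE;

DELETE FROM jobs
WHERE id = (SELECT id FROM jobs ORDER BY id LIMIT 1)
RETURNING id, payload;

COMMIT;
```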
bfivyvysj 6 hours ago [-]
Cool
killingtime74 4 hours ago [-]
I got Claude to analyze the code, and it's not really comparable to SKIP LOCKED queues. It's more like Kafka: there are no job-queue semantics with acks or workers taking from the same job pool.
It's Kafka-like: one event stream and multiple independent worker cursors.
It's more SNS than SQS, or Kafka than RabbitMQ/NATS.
pierrekin 57 minutes ago [-]
This fan-out approach plus something like Kafka consumer groups is often a better way to get workers to take from the same pool anyway, because you can do key-based partitioning and therefore have semi-stateful consumers (cache, partitioned inserts, etc.) that are fed similar work.
samokhvalov 4 hours ago [-]
correct
it's explained in README:
> Category: River, Que, and pg-boss (and Oban, graphile-worker, solid_queue, good_job) are job queue frameworks. PgQue is an event/message queue optimized for high-throughput streaming with fan-out.