I Replaced Redis with PostgreSQL (And It's Faster)
TL;DR
The author replaced Redis with PostgreSQL for caching, pub/sub, and job queues, finding it faster and more cost-effective. PostgreSQL's features like UNLOGGED tables, LISTEN/NOTIFY, and SKIP LOCKED enabled this switch, reducing operational complexity and improving data consistency.
Key Takeaways
- PostgreSQL can replace Redis for caching, pub/sub, and job queues using features like UNLOGGED tables, LISTEN/NOTIFY, and SKIP LOCKED, offering similar performance with lower cost.
- Switching to PostgreSQL reduces operational complexity by eliminating the need for separate Redis backups, monitoring, and failover, simplifying infrastructure management.
- Using PostgreSQL ensures better data consistency through atomic transactions, avoiding issues like cache-database mismatches that can occur with Redis.
- Cost savings are significant, as PostgreSQL leverages existing RDS storage, potentially reducing monthly expenses by around $100 compared to Redis on AWS ElastiCache.
- Real-world applications, such as real-time log streaming, benefit from PostgreSQL's integrated pub/sub with triggers, enabling atomic operations without extra infrastructure.
I had a typical web app stack:
- PostgreSQL for persistent data
- Redis for caching, pub/sub, and background jobs
Two databases. Two things to manage. Two points of failure.
Then I realized: PostgreSQL can do everything Redis does.
I ripped out Redis entirely. Here's what happened.
The Setup: What I Was Using Redis For
Before the change, Redis handled three things:
1. Caching (70% of usage)
// Cache API responses
await redis.set(`user:${id}`, JSON.stringify(user), 'EX', 3600);
2. Pub/Sub (20% of usage)
// Real-time notifications
redis.publish('notifications', JSON.stringify({ userId, message }));
3. Background Job Queue (10% of usage)
// Using Bull/BullMQ
queue.add('send-email', { to, subject, body });
The pain points:
- Two databases to backup
- Redis uses RAM (expensive at scale)
- Redis persistence is... complicated
- Network hop between Postgres and Redis
Why I Considered Replacing Redis
Reason #1: Cost
My Redis setup:
- AWS ElastiCache: $45/month (2GB)
- Growing to 5GB would cost $110/month
PostgreSQL:
- Already paying for RDS: $50/month (20GB storage)
- Adding 5GB of data: $0.50/month
Potential savings: ~$100/month
Reason #2: Operational Complexity
With Redis:
Postgres backup ✅
Redis backup ❓ (RDB? AOF? Both?)
Postgres monitoring ✅
Redis monitoring ❓
Postgres failover ✅
Redis Sentinel/Cluster ❓
Without Redis:
Postgres backup ✅
Postgres monitoring ✅
Postgres failover ✅
One less moving part.
Reason #3: Data Consistency
The classic problem:
// Update database
await db.query('UPDATE users SET name = $1 WHERE id = $2', [name, id]);
// Invalidate cache
await redis.del(`user:${id}`);
// ⚠️ What if Redis is down?
// ⚠️ What if this fails?
// Now cache and DB are out of sync
With everything in Postgres: transactions solve this.
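Here's a minimal sketch of what that looks like (my code, not from the original setup, assuming a node-postgres pool and the cache table introduced in the next section). The row update and the cache invalidation commit together or not at all:
// Sketch: invalidate the cache entry in the same transaction as the
// update, so the two can never get out of sync
const client = await pool.connect();
try {
  await client.query('BEGIN');
  await client.query('UPDATE users SET name = $1 WHERE id = $2', [name, id]);
  await client.query('DELETE FROM cache WHERE key = $1', [`user:${id}`]);
  await client.query('COMMIT');
} catch (err) {
  await client.query('ROLLBACK');
  throw err;
} finally {
  client.release();
}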
PostgreSQL Feature #1: Caching with UNLOGGED Tables
Redis:
await redis.set('session:abc123', JSON.stringify(sessionData), 'EX', 3600);
PostgreSQL:
CREATE UNLOGGED TABLE cache (
  key TEXT PRIMARY KEY,
  value JSONB NOT NULL,
  expires_at TIMESTAMPTZ NOT NULL
);
CREATE INDEX idx_cache_expires ON cache(expires_at);
Insert:
INSERT INTO cache (key, value, expires_at)
VALUES ($1, $2, NOW() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value,
    expires_at = EXCLUDED.expires_at;
Read:
SELECT value FROM cache
WHERE key = $1 AND expires_at > NOW();
Cleanup (run periodically):
DELETE FROM cache WHERE expires_at < NOW();
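One way to run that cleanup (a sketch, not necessarily how you'd do it in production) is a timer in the app process; the pg_cron extension is another option if your Postgres instance has it:
// Sweep expired cache rows every 5 minutes (the interval is arbitrary)
setInterval(() => {
  pool.query('DELETE FROM cache WHERE expires_at < NOW()')
    .catch((err) => console.error('cache cleanup failed:', err));
}, 5 * 60 * 1000);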
What is UNLOGGED?
UNLOGGED tables:
- Skip the Write-Ahead Log (WAL)
- Much faster writes
- Don't survive a crash (the table is truncated on recovery) and aren't replicated to standbys (perfect for cache!)
Performance:
Redis SET: 0.05ms
Postgres UNLOGGED INSERT: 0.08ms
Close enough for caching.
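For reference, here's a hypothetical get/set wrapper over that table (my sketch, assuming node-postgres; cacheGet and cacheSet are made-up names):
// Hypothetical helpers wrapping the queries above (not from the article)
async function cacheSet(key, value, ttlSeconds = 3600) {
  await pool.query(
    `INSERT INTO cache (key, value, expires_at)
     VALUES ($1, $2, NOW() + make_interval(secs => $3))
     ON CONFLICT (key) DO UPDATE
       SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at`,
    [key, JSON.stringify(value), ttlSeconds]
  );
}

async function cacheGet(key) {
  const { rows } = await pool.query(
    'SELECT value FROM cache WHERE key = $1 AND expires_at > NOW()',
    [key]
  );
  return rows[0]?.value ?? null; // pg returns JSONB already parsed
}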
PostgreSQL Feature #2: Pub/Sub with LISTEN/NOTIFY
This is where it gets interesting.
PostgreSQL has native pub/sub that most developers don't know about.
Redis Pub/Sub
// Publisher
redis.publish('notifications', JSON.stringify({ userId: 123, msg: 'Hello' }));
// Subscriber
redis.subscribe('notifications');
redis.on('message', (channel, message) => {
  console.log(message);
});
PostgreSQL Pub/Sub
-- Publisher
NOTIFY notifications, '{"userId": 123, "msg": "Hello"}';
// Subscriber (Node.js with pg)
const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();
await client.query('LISTEN notifications');
client.on('notification', (msg) => {
  const payload = JSON.parse(msg.payload);
  console.log(payload);
});
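One note if you publish from application code: the NOTIFY statement can't take bind parameters, so use the pg_notify() function instead (payloads are also capped at roughly 8000 bytes by default). A sketch, assuming a pg pool:
// Publisher (Node.js with pg): pg_notify() accepts a parameterized
// payload, which a raw NOTIFY statement cannot
await pool.query(
  "SELECT pg_notify('notifications', $1)",
  [JSON.stringify({ userId: 123, msg: 'Hello' })]
);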
Performance comparison:
Redis pub/sub latency: 1-2ms
Postgres NOTIFY latency: 2-5ms
Slightly slower, but:
- No extra infrastructure
- Can use in transactions
- Can combine with queries
Real-World Example: Live Tail
In my log management app, I needed real-time log streaming.
With Redis:
// When new log arrives
await db.query('INSERT INTO logs ...');
await redis.publish('logs:new', JSON.stringify(log));
// Frontend listens
redis.subscribe('logs:new');
Problem: Two operations. What if publish fails?
With PostgreSQL:
CREATE FUNCTION notify_new_log() RETURNS TRIGGER AS $$
BEGIN
  PERFORM pg_notify('logs_new', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER log_inserted
AFTER INSERT ON logs
FOR EACH ROW EXECUTE FUNCTION notify_new_log();
Now it's atomic. Insert and notify happen together or not at all.
// Frontend (via SSE)
app.get('/logs/stream', async (req, res) => {
  const client = await pool.connect();
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
  });
  await client.query('LISTEN logs_new');
  client.on('notification', (msg) => {
    res.write(`data: ${msg.payload}\n\n`);
  });
  // Stop listening and return the connection to the pool when the
  // viewer disconnects; otherwise every stream leaks a connection
  req.on('close', () => {
    client.query('UNLISTEN logs_new').finally(() => client.release());
  });
});
Result: Real-time log streaming with zero Redis.
PostgreSQL Feature #3: Job Queues with SKIP LOCKED
Redis (using Bull/BullMQ):
queue.add('send-email', { to, subject, body });
queue.process('send-email', async (job) => {
await sendEmail(job.data);
});
PostgreSQL:
CREATE TABLE jobs (
  id BIGSERIAL PRIMARY KEY,
  queue TEXT NOT NULL,
  payload JSONB NOT NULL,
  attempts INT DEFAULT 0,
  max_attempts INT DEFAULT 3,
  scheduled_at TIMESTAMPTZ DEFAULT NOW(),
  created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_jobs_queue ON jobs(queue, scheduled_at)
WHERE attempts < max_attempts;
Enqueue:
INSERT INTO jobs (queue, payload)
VALUES ('send-email', '{"to": "[email protected]", "subject": "Hi"}');
Worker (dequeue):
WITH next_job AS (
  SELECT id FROM jobs
  WHERE queue = $1
    AND attempts < max_attempts
    AND scheduled_at <= NOW()
  ORDER BY scheduled_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET attempts = attempts + 1
FROM next_job
WHERE jobs.id = next_job.id
RETURNING *;
The magic: FOR UPDATE SKIP LOCKED
This makes PostgreSQL a safe concurrent queue:
- Multiple workers can pull jobs at once; each skips rows another worker has locked instead of blocking on them
- No job is handed to two workers at the same time
- If a worker dies mid-transaction, its lock is released and the job becomes available again
Performance:
Redis BRPOP: 0.1ms
Postgres SKIP LOCKED: 0.3ms
Negligible difference for most workloads.
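To make that concrete, here's a sketch of a polling worker (my code, not from the original post). One gap in the dequeue query above: nothing marks a job as done, so this version deletes the row on success and relies on the attempts counter for retries:
// Hypothetical worker loop around the dequeue query above
async function workQueue(queue, handler) {
  const dequeueSql = `
    WITH next_job AS (
      SELECT id FROM jobs
      WHERE queue = $1 AND attempts < max_attempts AND scheduled_at <= NOW()
      ORDER BY scheduled_at
      LIMIT 1
      FOR UPDATE SKIP LOCKED
    )
    UPDATE jobs SET attempts = attempts + 1
    FROM next_job WHERE jobs.id = next_job.id
    RETURNING jobs.*`;

  for (;;) {
    const { rows } = await pool.query(dequeueSql, [queue]);
    if (rows.length === 0) {
      await new Promise((resolve) => setTimeout(resolve, 1000)); // idle poll
      continue;
    }
    const job = rows[0];
    try {
      await handler(job.payload);
      // Success: remove the job so it isn't retried
      await pool.query('DELETE FROM jobs WHERE id = $1', [job.id]);
    } catch (err) {
      // Failure: the incremented attempts counter leaves it eligible
      // for retry until max_attempts is reached
      console.error(`job ${job.id} failed (attempt ${job.attempts}):`, err);
    }
  }
}

// Usage
workQueue('send-email', (payload) => sendEmail(payload));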
PostgreSQL Feature #4: Rate Limiting
Redis (classic rate limiter):
const key = `ratelimit:${userId}`;
const count = await redis.incr(key);
if (count === 1) {
  await redis.expire(key, 60); // 60-second window
}
if (count > 100) {
  throw new Error('Rate limit exceeded');
}
PostgreSQL:
CREATE TABLE rate_limits (
  user_id INT PRIMARY KEY,
  request_count INT DEFAULT 0,
  window_start TIMESTAMPTZ DEFAULT NOW()
);
-- Check and increment
WITH current AS (
  SELECT
    request_count,
    CASE
      WHEN window_start < NOW() - INTERVAL '1 minute'
        THEN 1 -- Reset counter
      ELSE request_count + 1
    END AS new_count
  FROM rate_limits
  WHERE user_id = $1
  FOR UPDATE
)
UPDATE rate_limits
SET
  request_count = (SELECT new_count FROM current),
  window_start = CASE
    WHEN window_start < NOW() - INTERVAL '1 minute' THEN NOW() -- New window
    ELSE window_start
  END
WHERE user_id = $1
RETURNING request_count;
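Calling it from the app could look like this (a sketch under two assumptions of mine: the query above is stored in rateLimitSql, and each user already has a rate_limits row, seeded with something like INSERT ... ON CONFLICT DO NOTHING):
// Hypothetical check, mirroring the Redis version above
const { rows } = await pool.query(rateLimitSql, [userId]);
if (rows[0] && rows[0].request_count > 100) {
  throw new Error('Rate limit exceeded');
}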