Rails 8 in Production: Is the Solid Stack Enough to Replace Redis + Sidekiq?
Rails 8 shipped with three database-backed replacements for infrastructure that previously required Redis: Solid Queue for background jobs, Solid Cache for fragment and key-value caching, and Solid Cable for WebSocket pub/sub. Together they form the "Solid Stack," and the promise is straightforward — drop Redis from your architecture, run everything on your existing database, and reduce operational complexity. But promises and production are different things. This article examines the real tradeoffs, walks through a phased migration strategy, and provides a decision framework so you can make the right call for your workload.
Why This Matters
Redis is a battle-tested piece of infrastructure, but it comes with operational cost. You need to provision it, monitor it, handle failover, manage memory limits, and keep it patched. For a team running a single Rails app on a managed database, adding Redis means adding an entire service to the stack — with its own failure modes, its own scaling characteristics, and its own on-call runbook.
The Solid Stack's value proposition is consolidation. If your database can handle the additional load, you eliminate an entire category of infrastructure. That means fewer moving parts in production, simpler deploys, and one fewer service to wake up for at 3 AM.
But consolidation is only valuable if performance holds. The question is not whether the Solid Stack works — it does — but whether it works well enough for your specific throughput, latency, and reliability requirements.
Where Solid Queue Works Well
Solid Queue stores jobs in your relational database using FOR UPDATE SKIP LOCKED for concurrency-safe polling. It supports priorities, multiple queues, recurring jobs (via solid_queue.yml scheduling), and concurrency controls. In practice, it handles the workloads that most Rails applications actually have.
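The SKIP LOCKED idea is worth making concrete: each worker atomically claims a row no other worker holds, skipping rows that are already claimed instead of blocking on them. Here is a toy in-memory sketch of those semantics in plain Ruby — an illustration of the concept, not Solid Queue's actual implementation, which does this in SQL with `SELECT ... FOR UPDATE SKIP LOCKED`:

```ruby
# Toy illustration of FOR UPDATE SKIP LOCKED semantics: a worker claims
# the first unclaimed row and skips rows other workers already hold.
# (Plain-Ruby sketch; Solid Queue does this at the database level.)
class ToyJobTable
  def initialize(payloads)
    @rows = payloads.map { |p| { payload: p, claimed: false } }
    @mutex = Mutex.new
  end

  # Analogous to: SELECT ... FOR UPDATE SKIP LOCKED LIMIT 1
  def claim
    @mutex.synchronize do
      row = @rows.find { |r| !r[:claimed] }
      row[:claimed] = true if row
      row
    end
  end
end

table = ToyJobTable.new(%w[email_1 email_2 email_3])
claimed = 4.times.map { table.claim&.dig(:payload) }
# Three claims return distinct jobs; the fourth finds nothing left to claim.
```

Because every claim is atomic, two workers can never pick up the same job — the property the real SQL query provides without any application-level locking.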
Sweet Spot: Low-to-Medium Throughput
If your application enqueues fewer than roughly 5,000–10,000 jobs per minute, Solid Queue on a modern PostgreSQL or MySQL instance handles this comfortably. That covers the majority of Rails applications: transactional emails, webhook deliveries, PDF generation, image processing, nightly reports, and scheduled maintenance tasks.
```yaml
# config/queue.yml — typical Solid Queue configuration
production:
  dispatchers:
    - polling_interval: 1
      batch_size: 500
  workers:
    - queues: [critical, default, low]
      threads: 5
      processes: 2
      polling_interval: 0.1
```

The built-in concurrency controls are a genuine advantage over Sidekiq's open-source tier. You can limit concurrent executions per job class without reaching for Sidekiq Enterprise:
```ruby
class ExternalApiSyncJob < ApplicationJob
  # At most 3 concurrent executions per account across all workers
  limits_concurrency to: 3, key: ->(account_id) { account_id }

  def perform(account_id)
    ExternalApi::Client.sync(account_id)
  end
end
```

Where Solid Queue Hits Limits
At high throughput — tens of thousands of jobs per minute — Solid Queue puts meaningful write pressure on your database. Each enqueue is an INSERT, each dequeue is a SELECT ... FOR UPDATE plus a DELETE or UPDATE. If your database is already at 60–70% CPU serving web requests, adding thousands of job operations per minute can push it past comfort.
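A back-of-envelope calculation makes that write pressure concrete. The figures below are illustrative assumptions, not a benchmark: roughly three statements per job lifecycle (the INSERT on enqueue, the locked SELECT on claim, the DELETE or UPDATE on completion):

```ruby
# Rough estimate of extra database statements Solid Queue adds.
# Assumes ~3 statements per job lifecycle — an approximation for sizing,
# not a measured number.
jobs_per_minute    = 10_000
statements_per_job = 3

extra_ops_per_second = jobs_per_minute * statements_per_job / 60.0
# => 500.0 additional statements/sec your database must absorb
```

At 10K jobs/min that is about 500 extra statements per second — easily absorbed by a healthy database, but meaningful on one already running hot.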
Redis-backed Sidekiq handles this workload in memory with sub-millisecond operations. If your job throughput is a defining characteristic of your application — think high-volume event processing, real-time data pipelines, or fan-out messaging — Sidekiq's raw throughput advantage is significant and measurable.
Solid Queue vs Sidekiq: Quick Comparison
| Factor | Solid Queue | Sidekiq + Redis |
|---|---|---|
| Throughput ceiling | ~5K–10K jobs/min (DB dependent) | 100K+ jobs/min |
| Latency (enqueue → start) | 1–100ms (polling interval) | <1ms (push-based) |
| Infra dependencies | Database only | Redis + database |
| Concurrency controls | Built-in (free) | Enterprise license ($$$) |
| Job durability | ACID-guaranteed | Depends on Redis persistence config |
| Recurring jobs | Built-in | sidekiq-cron or sidekiq-scheduler |
Solid Cache: Evaluation Points
Solid Cache replaces RedisCacheStore with a database-backed cache. It uses a dedicated database (recommended) and performs automatic eviction based on a configurable maximum size.
When It Works
Solid Cache shines when your cache is disk-bound anyway. If your cached values are large (full HTML fragments, serialized API responses, computed reports), they would exceed Redis memory limits regardless. Solid Cache lets you cache terabytes of data on cheap SSD storage without worrying about eviction pressure.
```yaml
# config/cache.yml
production:
  databases: [cache_primary]
  store_options:
    max_size: <%= 256.gigabytes %>
    max_age: <%= 60.days.to_i %>
```

```yaml
# config/database.yml (separate database for cache)
cache_primary:
  <<: *default
  database: myapp_cache_production
  migrations_paths: db/cache_migrate
```

Basecamp reported that Solid Cache reduced their infrastructure costs because they could store far more cached data on disk than they could justify in Redis memory. For applications with similar caching profiles — large values, long TTLs, tolerant of slightly higher latency — this is a genuine win.
When to Keep Redis
If your cache hit path is latency-sensitive — serving cached API responses at the edge, rate limiting, or session storage — Redis is still faster. A Redis GET returns in microseconds; a database query, even on a local SSD, takes low-single-digit milliseconds. For high-traffic cache reads (thousands per second), the aggregate difference adds up.
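The aggregate cost is easy to estimate. The per-read latencies below are assumed round figures (tens of microseconds for a Redis GET, low-single-digit milliseconds for an indexed database read), so treat the result as an order-of-magnitude sketch:

```ruby
# Aggregate latency difference on a hot cache path, using assumed
# per-read latencies — adjust to your own measurements.
reads_per_second = 5_000
redis_read_ms    = 0.05   # ~50µs per Redis GET (assumed)
db_read_ms       = 1.5    # ~1.5ms per indexed DB read (assumed)

extra_ms_per_second = reads_per_second * (db_read_ms - redis_read_ms)
# ≈ 7,250 ms of additional cumulative read latency per second of traffic
```

Spread across many requests that may be acceptable; concentrated on a latency-sensitive endpoint, it is not.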
Solid Cable: Evaluation Points
Solid Cable provides an Action Cable adapter backed by the database instead of Redis pub/sub. It uses polling to check for new messages, which means it trades latency for simplicity.
When It Works
Solid Cable is well-suited for applications with modest WebSocket usage: live notifications, chat in internal tools, dashboard updates, or collaborative editing with a small number of concurrent users. If you have hundreds (not thousands) of concurrent WebSocket connections and messages are infrequent (a few per second), Solid Cable handles this without issues.
When to Keep Redis
Solid Cable's polling model introduces latency (configurable, but typically 100ms+). For real-time features where sub-10ms delivery matters — live gaming, financial tickers, collaborative cursors — Redis pub/sub is fundamentally faster because it pushes messages rather than polling for them. Additionally, at high connection counts (thousands+), the polling queries can become a significant database load.
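The polling latency floor follows directly from the interval: on average a message waits half the polling interval before the next poll even runs, and only then does the delivery query execute. A trivial calculation with the default 100ms interval:

```ruby
# Average added delivery delay from polling: a message published at a
# random moment waits, on average, half the polling interval.
polling_interval_ms = 100

avg_wait_ms = polling_interval_ms / 2.0
# => 50.0 ms average delay before the delivery query even runs
```

That 50ms floor is invisible in a notifications feed and unacceptable in a collaborative cursor.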
```yaml
# config/cable.yml
production:
  adapter: solid_cable
  polling_interval: 0.1.seconds
  message_retention: 1.day
  trim_batch_size: 100
```

Migration Strategy: Phased and Hybrid
The best migration is incremental. You do not need to adopt all three Solid components at once, and you do not need to go all-in on any of them. Here is a phased approach that minimizes risk.
Phase 1: Solid Cache (Lowest Risk)
Start with Solid Cache because cache failures are the least damaging — a cache miss just means a slower response, not a lost job or dropped message. Set up a dedicated cache database, configure Solid Cache alongside your existing Redis cache, and gradually shift cache writes.
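One way to shift writes gradually is a dual-write wrapper: write to both stores, read from the new store first, and fall back to the old one on a miss. The sketch below is a hypothetical wrapper of my own (not part of Solid Cache), illustrated with plain Hashes standing in for cache stores; a real version would wrap two `ActiveSupport::Cache::Store` instances the same way:

```ruby
# Migration-period sketch: dual-write, read-new-first-with-fallback.
# Hashes stand in for the two cache stores to keep the example runnable.
class DualWriteCache
  def initialize(primary, fallback)
    @primary  = primary   # new store (Solid Cache)
    @fallback = fallback  # old store (Redis)
  end

  # Writes go to both stores so the new cache warms up under real traffic.
  def write(key, value)
    @primary[key] = value
    @fallback[key] = value
  end

  # Reads prefer the new store; misses fall back to the old one.
  def read(key)
    @primary.fetch(key) { @fallback[key] }
  end
end

solid = {}
redis = { "greeting" => "cached before migration" }
cache = DualWriteCache.new(solid, redis)

cache.read("greeting")         # served from the fallback (old store)
cache.write("greeting", "hi")  # lands in both stores
cache.read("greeting")         # now served from the new store
```

Once hit rates on the new store look healthy, drop the fallback and the wrapper.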
```ruby
# Step 1: Add the gem (Gemfile)
gem "solid_cache"
```

```shell
# Step 2: Run the installer, then load the cache schema
bin/rails solid_cache:install
bin/rails db:prepare
```

```ruby
# Step 3: Configure in production (keep Redis available as a fallback)
# config/environments/production.rb
config.cache_store = :solid_cache_store
```

Step 4: Monitor cache hit rates and database load for 1–2 weeks before proceeding to Phase 2.

Phase 2: Solid Queue (Medium Risk)
Migrate background jobs next. Start by routing low-priority queues (mailers, cleanup tasks) to Solid Queue while keeping critical jobs on Sidekiq. This dual-adapter approach lets you measure database impact before committing fully.
```ruby
# Run both adapters simultaneously during migration
# config/application.rb
config.active_job.queue_adapter = :solid_queue

# For jobs that must stay on Sidekiq during migration:
class CriticalPaymentJob < ApplicationJob
  self.queue_adapter = :sidekiq

  def perform(payment_id)
    # stays on Sidekiq until you are confident in Solid Queue
  end
end

# Low-priority jobs use the default (Solid Queue)
class WeeklyDigestJob < ApplicationJob
  queue_as :low

  def perform
    # runs on Solid Queue immediately
  end
end
```

Monitor your database CPU, I/O wait, and connection pool utilization for at least two weeks. If the numbers hold, migrate the remaining queues one at a time.
Phase 3: Solid Cable (If Applicable)
If you use Action Cable, switch the adapter last. WebSocket issues are immediately visible to users, so this phase deserves the most caution. Deploy to staging first, load-test with realistic connection counts, and measure message delivery latency before rolling to production.
The Hybrid Option
There is no rule that says you must adopt all three or none. Many production Rails applications will settle on a hybrid: Solid Queue for jobs (where ACID durability and built-in concurrency controls are valuable), Solid Cache for large-value caching (where disk storage is cheaper than Redis memory), and Redis for Action Cable (where push-based delivery matters) and latency-sensitive caching.
Decision Framework: Simplicity vs Throughput
Use this framework to evaluate each Solid component independently. The answer does not have to be the same for all three.
Choose Solid Stack When:
- Your database has headroom (CPU < 50%, connections < 70% of pool)
- Job throughput is under ~10K jobs/min
- Cache values are large and can tolerate single-digit-ms latency
- WebSocket usage is moderate (hundreds of connections, not thousands)
- Your team is small and operational simplicity is a priority
- You want ACID guarantees on job execution (no lost jobs on Redis restart)
- You value built-in concurrency controls without paying for Sidekiq Enterprise
Keep Redis + Sidekiq When:
- Your database is already under heavy load
- Job throughput exceeds 10K–20K jobs/min or is growing rapidly
- Sub-millisecond cache reads are required for your SLA
- You need push-based WebSocket delivery with minimal latency
- You rely on Sidekiq Pro/Enterprise features (batches, rate limiting, encryption)
- Your ops team already has Redis monitoring, alerting, and failover dialed in
- You run a multi-region setup where Redis replication topology matters
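The headroom and throughput criteria above can be encoded as a toy helper — the thresholds are the article's rules of thumb, not hard limits, so tune them to your own measurements:

```ruby
# Toy decision helper encoding the rules of thumb above.
# Thresholds are illustrative heuristics, not hard limits.
def solid_queue_candidate?(db_cpu_pct:, jobs_per_min:, needs_sidekiq_pro: false)
  return false if needs_sidekiq_pro  # batches, rate limiting, encryption, etc.

  db_cpu_pct < 50 && jobs_per_min < 10_000
end

solid_queue_candidate?(db_cpu_pct: 35, jobs_per_min: 4_000)   # => true
solid_queue_candidate?(db_cpu_pct: 80, jobs_per_min: 4_000)   # => false
solid_queue_candidate?(db_cpu_pct: 35, jobs_per_min: 4_000,
                       needs_sidekiq_pro: true)               # => false
```

The point is not to automate the decision but to force the inputs — CPU headroom, measured throughput, feature dependencies — to be written down as numbers.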
The decision is not ideological. It is about measuring your actual workload against the capabilities of each option. Run the numbers, not the narratives.
Final Takeaway
The Solid Stack is not a toy. It is production-grade infrastructure backed by Basecamp and HEY's real-world usage. For the majority of Rails applications — those running on a single database, processing a moderate volume of background jobs, and serving standard web traffic — it genuinely eliminates the need for Redis.
But "most applications" is not "all applications." If your workload demands high-throughput job processing, microsecond cache reads, or real-time push delivery, Redis remains the right tool. The Solid Stack's greatest contribution may not be replacing Redis everywhere — it is giving teams a credible default that works out of the box, so Redis becomes an intentional choice rather than an automatic dependency.
Start with Solid Cache (lowest risk), measure the database impact, then migrate Solid Queue one queue at a time. Keep Redis where the numbers justify it. The best architecture is the one that matches your actual workload, not the one that looks cleanest on a diagram.
Tags: Rails 8 Solid Queue Solid Cache Solid Cable Redis Sidekiq Production