When you're processing 100 requests a day, any tool works. When you're processing 100 per second, only architecture matters.
A client came to n8ify with a problem: their n8n instance was "exploding" every Tuesday during their marketing blast. Workflows were timing out, the database was locking, and they were losing thousands of leads.
They weren't using too much data; they were using too much **overhead**. Here is how we rebuilt their stack to handle 1M+ requests per month with 99.99% uptime.
Concurrency Cap
Scaling isn't about CPU speed; it's about managing concurrency. If your infrastructure tries to do everything at once, it will crash. If it queues efficiently, it will succeed.
1. Redis as the Backbone
By default, n8n runs everything in a single process. For high-volume stacks, we move to **Queue Mode**. We deploy a Redis instance to act as a traffic controller and multiple "worker" instances to handle the actual execution.
This allows the main instance to focus purely on receiving webhooks, while the workers chew through the data in the background. If one worker fails, the others pick up the slack.
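In practice, switching to Queue Mode comes down to a shared Redis connection and a handful of environment variables. A minimal sketch (variable names follow n8n's queue-mode documentation, but verify them against your n8n version):

```shell
# Main instance and workers share the same settings; only the start
# command differs.

# Switch from the default single-process mode to queue mode
export EXECUTIONS_MODE=queue

# Point every instance at the shared Redis "traffic controller"
export QUEUE_BULL_REDIS_HOST=redis.internal
export QUEUE_BULL_REDIS_PORT=6379

# Main instance: receives webhooks and pushes jobs to Redis
#   n8n start
# Worker instances: run as many as you need; each pulls jobs from Redis
#   n8n worker --concurrency=10
```

Because workers are interchangeable, you can add or remove them under load without touching the main instance.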
2. Execution Data Pruning
Most people don't realize that **Logging** is what kills n8n. If you store the full JSON of every execution for a million requests, your database will swell to 500GB in a month.
We implement **Aggressive Pruning**:
- Store only failed executions permanently.
- Delete successful execution logs automatically after one hour.
- Offload detailed analytics to a lightweight time-series DB (like InfluxDB).
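n8n's built-in pruning applies uniformly to everything it stores, so the closest native approximation of this policy is to skip persisting successful runs entirely. A minimal sketch using n8n's execution-data environment variables (confirm the names against your version):

```shell
# Don't persist execution data for successful runs at all -- even
# stricter than deleting them after an hour
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none

# Keep full data for failed runs so they can be inspected and retried
export EXECUTIONS_DATA_SAVE_ON_ERROR=all

# If you do want successes browsable briefly, enable age-based pruning
# instead -- but note it prunes failed executions too:
#   export EXECUTIONS_DATA_PRUNE=true
#   export EXECUTIONS_DATA_MAX_AGE=1   # hours
```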
3. The Payload Buffer
When a million requests hit your API, you don't want them hitting your automation tool directly. We place a **Buffer Service** (like Cloudflare Workers or a simple Node.js gateway) in front.
This buffer does basic validation and "batches" the requests. Instead of 1,000,000 individual triggers, n8n receives 10,000 batches of 100. This is **100x more efficient** for the automation engine.
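The batching logic itself is small. A sketch with hypothetical names (not the actual gateway code): requests accumulate until the batch is full or a short wait elapses, then go out as a single call to the automation engine.

```typescript
type Payload = Record<string, unknown>;

// Minimal batching buffer: collects incoming requests and forwards
// them in groups of `maxSize`, flushing partial batches after
// `maxWaitMs` so nothing sits in memory too long.
class BatchBuffer {
  private batch: Payload[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly maxSize: number,   // e.g. 100 requests per batch
    private readonly maxWaitMs: number, // flush partial batches after this
    private readonly flushFn: (batch: Payload[]) => void, // e.g. POST to the n8n webhook
  ) {}

  add(payload: Payload): void {
    this.batch.push(payload);
    if (this.batch.length >= this.maxSize) {
      this.flush(); // full batch: send immediately
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.maxWaitMs);
    }
  }

  flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    if (this.batch.length === 0) return;
    const out = this.batch;
    this.batch = [];
    this.flushFn(out); // ONE trigger instead of out.length triggers
  }
}
```

With `maxSize` set to 100, a million incoming requests turn into ten thousand webhook calls, which is what the automation engine actually sees.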
Statelessness
Design your workflows to be stateless. They should be able to run on any worker at any time without needing to know what happened in the previous execution. That is the secret to horizontal scaling.
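A minimal illustration of the idea (hypothetical names): the worker claims an idempotency key in shared storage and carries all other context in the job payload, so a retry can land on any worker without relying on local memory.

```typescript
interface LeadJob {
  idempotencyKey: string;  // lets retries land on any worker safely
  attempt: number;         // retry count travels WITH the job
  lead: { email: string };
}

// Shared store abstracted behind an interface, so Redis can back it in
// production and a plain Set can stand in during tests.
interface KeyStore {
  setIfAbsent(key: string): boolean; // true if we claimed the key first
}

function handleLead(job: LeadJob, store: KeyStore): "processed" | "duplicate" {
  // Claim the idempotency key in SHARED storage, not a local variable:
  // a retry routed to a different worker still sees the claim.
  if (!store.setIfAbsent(job.idempotencyKey)) return "duplicate";
  // ...process job.lead here (CRM upsert, notification, etc.)...
  return "processed";
}
```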
Conclusion
Scaling is a journey, not a destination. As your business grows, your "bottleneck" will move from CPU to RAM to Database to Network. By building with **Queue Mode** and **Redis** from day one, you ensure your foundation is ready for the first million—and the next ten.
Is Your Stack Ready?
We specialize in building "High-Tide" infrastructure for companies that can't afford a single missing lead.
Schedule a Scale Audit →