Question
We are building a SaaS panel on top of Evolution API where multiple WhatsApp instances send bulk messages concurrently. Before going to production, I want to understand how the system handles this at scale.
Setup
- 10 staff users, each connected via their own WhatsApp instance
- Each staff user needs to send ~100,000 messages to end users between 9 AM and 8 PM
- All 10 instances may be sending at the same time
What I found in the code
Each send call routes directly to the instance:
// sendMessage.controller.ts
return await this.waMonitor.waInstances[instanceName].textMessage(data);
Each instance is independent in memory, so sends across different instances appear to run in parallel via Node.js's async event loop - no blocking between instances.
I also noticed eventProcessingQueue (a promise chain) inside whatsapp.baileys.service.ts, but that only serializes incoming WebSocket events per instance - it doesn't affect outgoing sends.
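For reference, the per-instance serialization described above can be sketched as a plain promise chain. This is an illustrative pattern only, not the actual Evolution API source; the class name and method signature are my own:

```typescript
// Illustrative promise-chain serializer (hypothetical names, not the
// real eventProcessingQueue implementation). Each task appended via
// enqueue() runs strictly after the previous one finishes, so events
// for one instance stay ordered; a second serializer has its own
// independent chain, so instances never block each other.
class EventSerializer {
  private chain: Promise<void> = Promise.resolve();

  enqueue(task: () => Promise<void>): Promise<void> {
    // Append the task and swallow its errors so one failing event
    // cannot poison the rest of the chain.
    this.chain = this.chain.then(task).catch(() => {});
    return this.chain;
  }
}
```

Crucially, a pattern like this only orders work appended to one chain; outgoing sends that never touch the chain still run concurrently, which matches the observation that eventProcessingQueue does not affect outgoing sends.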
There is a delay option in sendMessageWithTyping that simulates typing before sending, but no built-in rate limiter, job queue (Bull, BeeQueue, p-queue, etc.), or retry mechanism for bulk sends.
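Since there is no built-in queue, the interim approach we are considering is an in-process throttler with jittered delays and simple retries. A minimal sketch follows; every name here (`sendBulk`, `sendFn`, the delay values) is a hypothetical placeholder, not Evolution API code:

```typescript
// In-process throttle-and-retry sketch (hypothetical helper, NOT part
// of Evolution API, and NOT restart-safe: pending jobs live only in
// memory). `sendFn` stands in for a real per-instance send call.
interface BulkOpts {
  minDelayMs: number; // lower bound of the random gap between messages
  maxDelayMs: number; // upper bound of that gap
  retries: number;    // extra attempts per message after the first
  backoffMs: number;  // base wait between retry attempts
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function sendBulk(
  numbers: string[],
  sendFn: (to: string) => Promise<void>,
  opts: BulkOpts = { minDelayMs: 5000, maxDelayMs: 15000, retries: 3, backoffMs: 2000 },
): Promise<{ sent: string[]; failed: string[] }> {
  const sent: string[] = [];
  const failed: string[] = [];
  for (const to of numbers) {
    let ok = false;
    for (let attempt = 0; attempt <= opts.retries && !ok; attempt++) {
      try {
        await sendFn(to);
        ok = true;
      } catch {
        // Linear backoff before the next attempt.
        await sleep(opts.backoffMs * (attempt + 1));
      }
    }
    (ok ? sent : failed).push(to);
    // Randomized jitter between messages so the send pattern is not
    // perfectly periodic.
    await sleep(opts.minDelayMs + Math.random() * (opts.maxDelayMs - opts.minDelayMs));
  }
  return { sent, failed };
}
```

Even this sketch only addresses pacing and retries within one process; surviving server restarts would still require persisting jobs somewhere durable, e.g. a Redis-backed queue.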
Questions
- Concurrency - Are sends across different instances truly parallel, or does Node.js's single thread create a bottleneck when all 10 instances send simultaneously?
- Rate limiting - Is there any built-in protection against WhatsApp detecting bulk/spam activity? What is a safe messages-per-hour rate per instance?
- Queue / retry - Is there a recommended way to queue 100,000 messages per instance (so they process in the background, survive server restarts, and retry on failure)? Is a Bull/BeeQueue/Redis-backed queue the intended approach?
- Ban risk - What delay between messages do you recommend to avoid WhatsApp banning numbers during bulk campaigns?
- Roadmap - Is a built-in bulk messaging queue or campaign feature planned?
Environment
Evolution API version: 2.3.7
Node.js: 20
Database: PostgreSQL
Deploy: Railway