Make a scalable NodeJS app in Wappler without special knowledge

If you are running NodeJS with Docker on your live server, and have also installed Traefik for SSL and domain management, you can simply run multiple instances by entering more replicas:
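For reference, the replica count roughly corresponds to this Docker Compose fragment (a sketch only; the service name, image, and Traefik labels are placeholders, not copied from an actual Wappler-generated file):

```yaml
# Illustrative docker-compose.yml fragment (names are placeholders).
services:
  web:
    image: my-node-app        # hypothetical image name
    deploy:
      replicas: 4             # the "replicas" field maps to this
    labels:
      - "traefik.enable=true"
      # Traefik discovers every replica and round-robins between them
      - "traefik.http.services.web.loadbalancer.server.port=3000"
```

The CLI equivalent for an already-defined service is `docker compose up -d --scale web=4`. Note that with more than one replica you cannot bind a fixed host port to the service; traffic has to go through Traefik.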


This will create as many NodeJS instances of your site as the number of replicas you enter. They will run simultaneously and be load balanced by Traefik, so users are spread across them (round robin).
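To illustrate the round-robin behaviour described above, here is a tiny plain-Node sketch of the scheduling idea (not Traefik's actual code): each incoming request is assigned to the next replica in turn, so load spreads evenly.

```javascript
// Simulate round-robin dispatch over N replicas (illustrative only).
function makeRoundRobin(replicas) {
  let next = 0;
  return () => {
    const target = replicas[next];
    next = (next + 1) % replicas.length; // wrap around to the first replica
    return target;
  };
}

const pick = makeRoundRobin(['node-1', 'node-2', 'node-3']);
const assigned = [];
for (let i = 0; i < 6; i++) assigned.push(pick());
console.log(assigned.join(', '));
// Six requests are spread evenly: each replica gets exactly two
```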


How does it work with socket connections when users are on different instances? Can they send socket events to users in other instances? If not, how can the project be made scalable with socket events?

This is interesting; I had no idea that Traefik could also provide load balancing functionality. Can the number of replicas be added at any time? Currently this has been left blank in my NodeJS projects.

Yes, it can be added at any time - Traefik reconfigures itself internally. Replicas are a Docker feature.

Thank you! Turns out, it was already there all the time. :slight_smile:

Just tried it for the test target. Indeed, it has created multiple containers with the app; I can see them in Portainer.
I turned on Redis, and it seems like all websockets work fine too.
So at first sight everything is OK.

But of course I need to check how these settings influence the performance of the website. I will come back with results.

I believe, Patrick has answered here:


Thanks @Apple; I will try it out in my latest project.

This is actually quite a common scenario that any container orchestration software handles, e.g. Kubernetes or Docker (Swarm).

It’s called “horizontal scaling” if you want to look into it, as opposed to vertical scaling, which means adding more resources to your infra (CPU, RAM, etc.).

Yes, that is exactly the perfect combination: multiple NodeJS instances, Traefik for SSL and load balancing, and Redis as a session store shared between the NodeJS instances!
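A minimal sketch of why the shared Redis session store matters (a plain-JS simulation of the concept, not Wappler's actual implementation): with per-replica in-memory stores, a round-robined user "loses" their session on every other request, while a single shared store behaves correctly from any replica.

```javascript
// Two replicas, each with its own in-memory session store (the broken setup).
const replicaA = new Map();
const replicaB = new Map();

// A login request lands on replica A via round robin.
replicaA.set('sid-123', { user: 'alice' });

// The next request is round-robined to replica B,
// which has never seen this session.
console.log(replicaB.has('sid-123')); // false: the user appears logged out

// With a single shared store (the role Redis plays), every replica
// reads and writes the same session data.
const sharedStore = new Map(); // stand-in for Redis
sharedStore.set('sid-123', { user: 'alice' });
console.log(sharedStore.get('sid-123').user); // 'alice' from any replica
```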

Curious about the performance now :slight_smile: - it should definitely fly now! Of course, the more CPU cores you have on the server, the better.

First, it was a basic cheap VPS with 1 CPU and 1 GB RAM.
It failed when 200 concurrent visitors were reached.

Then I beefed up the VPS to 6 CPUs and 6 GB RAM.
It got a little better, but with 500 users the trouble came back.

Then I tried to configure Passenger in Plesk to make several copies of the app.
It worked perfectly with 400 visitors. I wasn’t able to test with more users yet, but synthetic tests show some hope.
But obviously the sockets didn’t work correctly.

So now I am using the Docker setup to solve all these problems out of the box.

The next big event with hundreds of users will be in two weeks.
But I’m going to run some synthetic tests with the Docker setup soon.

You can try:


@George Any resources on how to get Redis working on a Caprover-type Docker setup?
I’m asking you rather than searching directly because you could point me to a resource based on the Wappler/Caprover setup and what would work best.

Caprover already has a cluster setup, so the horizontal scaling part is already covered. But the ability to add Redis would be a great addition.

The Caprover UI includes a one-click Redis install.

Yes, but how would it integrate with a Wappler app?

What do you mean by integrate? Connect to it? IP and port. Straightforward.


Yeah, he’s asking how to set the IP and stuff in the Wappler config.


I was surprised by the question as @sid knows the UI inside out :sweat_smile:
There is a Redis tab in the SC options.
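For the Caprover case specifically: as far as I know, apps reach a one-click Redis instance over Caprover's internal network at a hostname of the form `srv-captain--<app-name>` on the default Redis port, so the values entered in Wappler's Redis settings would look roughly like this (the app name and password below are placeholders, not real values):

```
# Hypothetical values for Wappler's Redis settings on a Caprover setup
Host:     srv-captain--my-redis   # Caprover internal service name (placeholder)
Port:     6379                    # Redis default port
Password: <the password chosen during the one-click install>
```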

Damn! I knew about the Redis tab, but I just thought it was an on/off toggle.
I never switched it on to realise it asks for a path. :laughing:
Going to try this out ASAP! Thanks @JonL

So, here is some benchmarking.
I used the free plan of a nice online tool.
I tested one page; it is static and almost empty. Obviously, results may vary on different pages and projects.
All tests last 1 minute.

Also, I didn’t take notes along the way (silly, I know), so I may be mistaken in some details.

From 0 to 1000 clients.

(1) 6 CPUs, 6 GB RAM. Plesk. Without any scaling.
As you can see, it is OK up to 600 clients, but then fails completely.
The number of CPUs doesn’t affect it much.

From here on - Docker setup only.

(2) 1 CPU, 2 GB RAM. No replicas.
No errors, but loading speed is not good: 1800 ms at the finish.

(3) 1 CPU, 2 GB RAM. 4 replicas.
It seems that without additional CPUs, replicas only make things worse.

(4) 2 CPUs, 2 GB RAM. 4 replicas.
Loading speed doubled: now it’s 1000 ms at the finish.

(5) 4 CPUs, 2 GB RAM. 4 replicas.
Twice as fast again: 550 ms at 1000 clients.

From 0 to 2000 clients.

(6) The same setup, just with 2000 clients.
1000 ms, no errors.

(7) 8 CPUs, 2 GB RAM. 6 replicas.
600 ms at the finish.

(8) But what if we set 0 replicas?
No errors, but it obviously works slower: 2000 ms.

(9) And what if we set 16 replicas?
It turns out there is no difference from 6 replicas, as we still have only 8 CPUs.

(10) 8 CPUs, 4 GB RAM. 6 replicas.
Almost the same as with 2 GB RAM.

(11) 8 CPUs, 6 GB RAM. 6 replicas.
Looks the same.

(12) But how about 0 to 10000 clients?!
It became very slow, but it held up.

So, here is my personal summary of this synthetic testing.

  1. If you don’t intend to tweak your Plesk, better not to use it for production. On a cheap VPS the Docker setup will be slower, but at least it lasts longer before breaking completely.
  2. It doesn’t make sense to increase replicas if you don’t have enough CPUs.
  3. If you increase the number of CPUs and replicas together, loading speed improves in almost the same proportion (at least from 1 to 8 CPUs).
  4. Increasing CPUs without additional replicas gives just a little boost.
  5. In this particular synthetic case, increasing RAM doesn’t make an impact.

I understand these are obvious conclusions for most developers, but they turned out to be useful for me, and they may help someone else too.


Tested it. Unfortunately, this option does not work if the application actively uses websockets, for example a messenger. The problem is that the two different containers are unaware of each other’s actions.

I will describe it in more detail. Imagine that we have created 2 replicas (container1 and container2). Two users visit the site. Traefik directs one user to container1 and the other to container2. When user1 writes messages, the sockets will be updated only in container1. Container2 will not know about the new messages. Therefore, user2 will not receive them until he manually refreshes the page or writes messages himself.
I will describe it in more detail. Imagine that we have created 2 replicas (container1 and container2). Two users visit the site. Traefik directs one user to container1, directs another user to container2. When user1 writes messages on the server side, sockets will be updated only in container1. Container2 will not know about new messages. Therefore, user2 will not receive them until he manually refreshes the page, or writes messages himself.