Implement Server Connect API rate limiting

In addition to @Apple's reply: the current implementation is just a native security provider. As you aptly pointed out, it still causes some server strain when actively flooded, so it's already known that the approach will need to be adjusted.

Perhaps a workaround is to group APIs that need different rate limits by route structure. The config could then look more like:

```javascript
app.use('/api/slow/', slowApiLimiter);
app.use('/api/med/', medApiLimiter);
app.use('/api/high/', highApiLimiter);
```

Although this structure looks a bit easy for bad actors to reverse engineer.
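To make the route-group idea above concrete, here is a minimal sketch of a limiter factory producing Express-style middleware with a different budget per group. This is a hypothetical in-memory fixed-window illustration, not Wappler's implementation; in a real app you would more likely reach for a library such as express-rate-limit.

```javascript
// Hypothetical sketch: one limiter factory, different budgets per route group.
// In-memory fixed window keyed by client IP; not production-grade.
function makeLimiter(maxRequests, windowMs) {
  const hits = new Map(); // key: client IP -> { count, windowStart }
  return function limiter(req, res, next) {
    const now = Date.now();
    const entry = hits.get(req.ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      // First request in a fresh window: reset the counter.
      hits.set(req.ip, { count: 1, windowStart: now });
      return next();
    }
    if (entry.count >= maxRequests) {
      // Budget exhausted for this window: reject with 429.
      res.statusCode = 429;
      return res.end('Too Many Requests');
    }
    entry.count += 1;
    next();
  };
}

// Different budgets per route group, mirroring the config above:
const slowApiLimiter = makeLimiter(10, 60000);  // 10 requests/minute
const medApiLimiter  = makeLimiter(60, 60000);  // 60 requests/minute
const highApiLimiter = makeLimiter(600, 60000); // 600 requests/minute
```

Since each limiter keeps its own counters, traffic on one route group never eats into another group's budget.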

While protecting against server strain at the highest possible level is the ideal solution, in our implementation I'm personally willing to accept strain at the web-server level during an attack. In exchange, we keep finer control over limits for authenticated users and still shield the database level.

Very much appreciated @George @patrick for implementing this in 6.7.0. Your chosen implementation looks great.

My assumption is that the best way to set it up is to use a high number of points in the global settings. Then manually set the consume rate high for public-facing APIs accessed by low-trust users, and set consume rates lower (or leave them at the default of 1) for APIs that are already shielded via Security Restrict for higher-trust users.
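The points/consume idea described above can be sketched in plain JavaScript. This is a hypothetical illustration of the concept (one shared budget, with expensive endpoints consuming more points per request), not Wappler's actual code; the class and method names are my own.

```javascript
// Hypothetical sketch of a points-based limiter: a single budget per key,
// where different endpoints consume different amounts per request.
class PointsLimiter {
  constructor(points, durationMs) {
    this.points = points;       // total budget per window
    this.durationMs = durationMs;
    this.buckets = new Map();   // key (e.g. client IP) -> { used, start }
  }
  // Returns true if the request fits in the budget, false if it should be
  // rejected (e.g. with HTTP 429).
  consume(key, amount = 1) {
    const now = Date.now();
    let b = this.buckets.get(key);
    if (!b || now - b.start >= this.durationMs) {
      b = { used: 0, start: now };
      this.buckets.set(key, b);
    }
    if (b.used + amount > this.points) return false;
    b.used += amount;
    return true;
  }
}

// Global budget of 100 points per minute:
const limiter = new PointsLimiter(100, 60000);
// Trusted, already-shielded endpoint: default cost of 1
//   limiter.consume(ip)     -> up to 100 requests/minute
// Public, low-trust endpoint: cost of 10
//   limiter.consume(ip, 10) -> up to 10 requests/minute from the same budget
```

The trade-off, as noted later in this thread, is that all endpoints share one budget: exhausting it via one expensive endpoint blocks everything else for that key.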

Again, appreciate it!


Highly disagree with the implementation, but it is what it is. Glad some people find it useful at least.

As for me, I'll have to settle with a custom extension for rate limiting, unfortunately.


I haven't tried it yet, but out of curiosity, what issues do you see with the implementation?

I'm very curious how it gets the user IP. If that can't be customized, it seems like a fairly big issue, especially when using a load balancer or even Cloudflare.

@Apple, can you share your feedback? What's not right with the implementation?

The current implementation is a global rate limiter for all API endpoints, and you can have different limits for logged-in users.

We are also investigating applying it to single endpoints, probably via action steps you can add to set more specific limits for endpoints like login or password-reset requests.

In production, the Node server always runs behind a reverse proxy, which should set the X-Forwarded-For header that is used to determine the user IP. A load balancer will also set this header to indicate the request was forwarded. When the header contains multiple addresses, the left-most one is used.
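The header parsing described above can be sketched as follows. The function name is hypothetical; the logic simply reflects the X-Forwarded-For convention that each proxy appends the address it received from, so the left-most entry is the original client.

```javascript
// Sketch of deriving the client IP from X-Forwarded-For.
// Header format: "client, proxy1, proxy2" (each hop appends an address).
function clientIpFromXff(xffHeader, remoteAddr) {
  if (!xffHeader) return remoteAddr;      // no proxy in front of us
  return xffHeader.split(',')[0].trim();  // left-most entry = original client
}
```

One caveat worth noting: the left-most entry is client-controlled unless the proxy in front sanitizes the header, which is why Express, for example, has a `trust proxy` setting to control how many hops are trusted.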


Look, when I give suggestions, I aim for the holy grail of ultimate flexibility. I understand the average Wappler user is quite different from me, so maybe the implementation is right for them.

I'm just annoyed my suggestions resulted in a feature that's ultimately not useful to me.

Here's something I did some time ago (no longer working since Wappler updated the Redis module):

The Namespace part is especially interesting: the namespace is injected into the key.

This allows me to have individual rate limiters (e.g. for login, for e-mail sending) without draining the global rate limiter. With the current implementation you can achieve something similar by increasing the points consumed by, say, sending an e-mail, but then the whole website gets blocked once that limit is reached. So it's not possible to express something like "at most 10 e-mails sent per hour" without blocking the entire site when the limit is hit.
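The namespace idea can be sketched like this. It's a hypothetical plain-JS illustration with made-up names: the namespace is injected into the limiter key, so each feature gets its own independent budget. With a shared store such as Redis, the same prefix trick keeps the keys of different limiters separate.

```javascript
// Hypothetical sketch: per-feature limiters via a namespace in the key.
function makeNamespacedLimiter(namespace, maxHits, windowMs) {
  const buckets = new Map();
  return function tryConsume(ip) {
    const key = `${namespace}:${ip}`; // namespace injected into the key
    const now = Date.now();
    let b = buckets.get(key);
    if (!b || now - b.start >= windowMs) {
      b = { count: 0, start: now };
      buckets.set(key, b);
    }
    if (b.count >= maxHits) return false; // this feature's budget exhausted
    b.count += 1;
    return true;
  };
}

// Independent budgets: hitting the e-mail limit does not block login,
// and neither touches a global site-wide limiter.
const emailLimiter = makeNamespacedLimiter('email', 10, 3600000); // 10 e-mails/hour
const loginLimiter = makeNamespacedLimiter('login', 5, 60000);    // 5 attempts/minute
```

This is exactly the "10 e-mails per hour without blocking the whole site" behavior described above: exhausting the `email` namespace leaves every other namespace untouched.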


There are plans to also add action steps for more fine-grained limiters on specific actions; the current implementation is just a first step. Every user has different requirements, and I think a global rate limiter already satisfies many of them. We first want to gather feedback on what can be improved and what else users wish for.


In that case I am 100% with Apple on this and hope you guys follow up on his wishlist, as this is also ours :smiley: