Understanding Server Connect and accumulating DB connections

Hi all,

I recently started running into some errors with too many DB connections.

So I was wondering how this works - i.e. if I have, say, 6 server connects on one page load, will this count as 6 DB connections?

Every database connection step in your server actions counts as a database connection.
The server action loaded on page load does not count as a database connection if it doesn't contain any.

Can you share the exact error message you see, and explain what you have in your server actions that causes it and when you see it?

Hi Teodor, I'll try and recreate this this morning and share screenshots.

It's just on my dev environment (both local and remote), so there are no other users using it.

This perplexes me a little: one part of this app is a public, shareable page that could have hundreds or thousands of visitors. Each time they view the page it will call the DB for the page's information - this seems unscalable? I wonder what the normal way to handle this would be.

@Teodor this is the error I received, 1 user, simply clicking around the pages, loading lists etc.

{
  "status": "500",
  "code": "53300",
  "message": "too many connections for role \"ddqdnqmojabwqz\"",
  "stack": "error: too many connections for role \"ddqdnqmojabwqz\"\n at Parser.parseErrorMessage (/var/www/vhosts/pitchflare.com/httpdocs/node_modules/pg-protocol/dist/parser.js:278:15)\n at Parser.handlePacket (/var/www/vhosts/pitchflare.com/httpdocs/node_modules/pg-protocol/dist/parser.js:126:29)\n at Parser.parse (/var/www/vhosts/pitchflare.com/httpdocs/node_modules/pg-protocol/dist/parser.js:39:38)\n at TLSSocket. (/var/www/vhosts/pitchflare.com/httpdocs/node_modules/pg-protocol/dist/index.js:8:42)\n at TLSSocket.emit (events.js:200:13)\n at addChunk (_stream_readable.js:294:12)\n at readableAddChunk (_stream_readable.js:275:11)\n at TLSSocket.Readable.push (_stream_readable.js:210:10)\n at TLSWrap.onStreamRead (internal/stream_base_commons.js:166:17)"
}

Maybe @patrick can help here more.

This is a connection limit issue: Heroku / Postgres by default limits you to 20/22 connections depending on the platform, so I used KillAll, which has fixed it.
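For reference, a quick way to see how close you are to that limit is to count the rows in pg_stat_activity. Here is a rough sketch using the pg client that the stack trace above shows (the connection string is a placeholder, adjust it to your own setup):

const { Client } = require('pg');

async function showConnectionCount() {
  // Placeholder connection string - point this at your Heroku / Postgres database.
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  // pg_stat_activity has one row per open connection to the server.
  const res = await client.query('select count(*) as open from pg_stat_activity');
  console.log(`Open connections: ${res.rows[0].open}`);
  await client.end();
}

showConnectionCount();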

If you or @Teodor could help, however, and advise on this question:

"This perplexes me a little: one part of this app is a public, shareable page that could have hundreds or thousands of visitors. Each time they view the page it will call the DB for the page's information - this seems unscalable? I wonder what the normal way to handle this would be."

@Teodor so, if I have 5 server connects on one page, each with a DB Connection step, and one user loading that page, will that be '5' new DB connections in terms of allocation, or just 1 new DB connection with the user connecting 5 times?

I see you use NodeJS, and that uses the knex library for the database connections. Knex seems to use a connection pool for the connections (http://knexjs.org/#Installation-pooling), which defaults to min: 2 and max: 10. So I think that when you load 5 files at the same time, each file uses at least 2 connections, which makes 10 connections.
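Just to illustrate the idea, here is a rough sketch of plain knex usage (this is not Wappler's actual internal code; the connection string is a placeholder):

const knex = require('knex');

// Each call to knex() creates its own connection pool. With the default
// pool settings of min: 2 / max: 10, five independent instances can hold
// 5 * 2 = 10 idle connections between them.
const db = knex({
  client: 'pg',
  connection: process.env.DATABASE_URL, // placeholder connection string
  pool: { min: 2, max: 10 }             // the knex defaults mentioned above
});

// A query checks a connection out of the pool and returns it when finished.
db.raw('select 1')
  .then(() => db.destroy()); // destroy() closes all of the pool's connections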

To optimize, we could set the connection pool to use only 1 connection by setting the min and max to 1. You can do this in the connection JSON file. The other thing we could do is reuse the same connection pool each time, but I'm not sure if that will cause any other problems.

So try limiting the pool by editing the connection file; it is located in app/modules/connections. The file will look like:

{
  "name": "db",
  "module": "dbconnector",
  "action": "connect",
  "options": {
    "client": "mysql",
    "connection": {
      "host": "db",
      "port": 3306,
      "user": "db_user",
      "password": "CvP10qzj",
      "database": "node_project_mysql"
    }
  }
}

After adding the pool properties it would look like:

{
  "name": "db",
  "module": "dbconnector",
  "action": "connect",
  "options": {
    "client": "mysql",
    "connection": {
      "host": "db",
      "port": 3306,
      "user": "db_user",
      "password": "CvP10qzj",
      "database": "node_project_mysql"
    },
    "pool": {
      "min": 1,
      "max": 1
    }
  }
}

Thanks Patrick. Tried this, and the schema refresh just hangs then fails, so it didn't work for me.

I made a note in another post in coffee lounge, but you may be the right person.

I've connected to Digital Ocean fine with the DB - both a dev and a production DB - and updated the DB schema by applying dev changes.

I have now set up a connection pool in DigitalOcean and connected to this - then I try to apply changes and nothing happens. It says it's applied them, but I'm left with a blank schema.

With the schema, do you mean the Database Manager in Wappler? Do you use a direct connection there or the Server Connect Connection?

I use the Server Connect connection.

And by schema I mean: I set up the entire DB structure, all tables and all fields, in the dev environment, connecting to a 'dev' version of the DB on DigitalOcean. Then on the production environment, I connected to a production version of the DB in DigitalOcean, applied the changes, and all the DB structure and fields were correctly updated in the production version - perfect.

Then I created a connection pool to try to solve the DB connection issues I'm experiencing with Wappler. When I do that and then try the same process of updating the schema, the update says it's successful, but the structure doesn't change - it just has one folder, 'tables'.

Note that DigitalOcean uses PgBouncer for the connection pooling.

Any new learnings around this topic?

Experienced the same on my Digital Ocean droplet. Running the query below showed me that I had 50+ connections. Some of the connections were 3 days old and idle.

select *
from pg_stat_activity
where datname = 'database_name';
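If you want to clean those stale sessions up yourself, a rough sketch along these lines can terminate them (this assumes Postgres, a role that is allowed to call pg_terminate_backend on those sessions, a placeholder connection string, and an arbitrary one-hour idle threshold - use it with care):

const { Client } = require('pg');

async function killIdleConnections() {
  const client = new Client({ connectionString: process.env.DATABASE_URL }); // placeholder
  await client.connect();
  // Terminate sessions on the current database that have been idle for over
  // an hour, skipping the session that runs this query itself.
  const res = await client.query(`
    select pg_terminate_backend(pid)
    from pg_stat_activity
    where datname = current_database()
      and state = 'idle'
      and state_change < now() - interval '1 hour'
      and pid <> pg_backend_pid()
  `);
  console.log(`Terminated ${res.rowCount} idle connections`);
  await client.end();
}

killIdleConnections();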

Has anyone concluded with some kind of best practice :wink:?

anyone?

@otavionestares the Wappler update that came right after my post was written fixed the issue for me

The Wappler update where we introduced the global database connections fixed most of the issues; Server Connect now reuses the existing connection pool more effectively. If you are using NodeJS, you can have a look at http://knexjs.org/#Installation-pooling to configure the pooling. Before the update it created a new pool each time; now it reuses the pool, and you can set the min and max connections it should use in the connection file.

Thanks @namakemono and @patrick… But I'm still having problems with Postgres. I changed the connection configs with pooling options (min: 1, max: 1, following Patrick's advice), but it is still veeery slow in the production environment when more than one user accesses the web app…

That advice was from before the update; at that time a new pool was created for each server action and that caused too many connections. That has been fixed and the connection pool is now being reused, so it is better to set higher values now.
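For example, the pool section of the connection file shown earlier could go back to something closer to the knex defaults - the exact numbers here are just an illustration and should be tuned to your database's connection limit:

"pool": {
  "min": 2,
  "max": 10
}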

Most of the connection problems posted here were fixed long ago, so I will close this topic; new problems should be posted in a new bug report.