Hi all,
I recently started running into some errors with too many DB connections.
So I was wondering how this works, i.e. if I have, say, 6 server connects on one page load, will this count as 6 DB connections?
Every Database Connection step in your server actions counts as a database connection.
The server action loaded on page load does not count as a database connection if it doesn't contain any.
Can you share the exact error message you see, explain what you have in your server actions that causes it, and tell us when you see it?
Hi Teodor, I'll try to recreate this this morning and share screenshots.
It's just on my dev environment (both local and remote), so there are no other users using it.
This perplexes me a little: one part of this app is a public page that is shareable and could have 100s, or thousands, of visitors. Each time they view the page it will call the DB for the page's information; this seems unscalable. I wonder what the normal way to handle this would be?
@Teodor this is the error I received, 1 user, simply clicking around the pages, loading lists etc.
{"status":"500","code":"53300","message":"too many connections for role \"ddqdnqmojabwqz\"","stack":"error: too many connections for role \"ddqdnqmojabwqz\"\n at Parser.parseErrorMessage (/var/www/vhosts/pitchflare.com/httpdocs/node_modules/pg-protocol/dist/parser.js:278:15)\n at Parser.handlePacket (/var/www/vhosts/pitchflare.com/httpdocs/node_modules/pg-protocol/dist/parser.js:126:29)\n at Parser.parse (/var/www/vhosts/pitchflare.com/httpdocs/node_modules/pg-protocol/dist/parser.js:39:38)\n at TLSSocket.<anonymous> (/var/www/vhosts/pitchflare.com/httpdocs/node_modules/pg-protocol/dist/index.js:8:42)\n at TLSSocket.emit (events.js:200:13)\n at addChunk (_stream_readable.js:294:12)\n at readableAddChunk (_stream_readable.js:275:11)\n at TLSSocket.Readable.push (_stream_readable.js:210:10)\n at TLSWrap.onStreamRead (internal/stream_base_commons.js:166:17)"}
This is a connection limit issue. Heroku Postgres by default limits you to 20/22 connections depending on the platform, so I used the KillAll, which has fixed it.
If you or @Teodor could help, however, and advise on this question:
"This perplexes me a little: one part of this app is a public page that is shareable and could have 100s, or thousands, of visitors. Each time they view the page it will call the DB for the page's information; this seems unscalable. I wonder what the normal way to handle this would be?"
@Teodor so, if I have 5 server connects on one page, each with a DB Connection step, and one user loading that page, will that be "5" new DB connections in terms of allocation, or just 1 new DB connection with the user connecting 5 times?
I see you use NodeJS, which uses the Knex library for the database connections. Knex uses a connection pool (http://knexjs.org/#Installation-pooling), which defaults to min: 2 and max: 10. So I think that when you load 5 files at the same time, each file uses at least 2 connections, so that is 10 connections.
To optimize, we could make the connection pool use only 1 connection by setting both min and max to 1. You can do this in the connection JSON file. The other thing we could do is reuse the same connection pool each time, but I'm not sure if that will cause any other problems.
So try limiting the pool by editing the connection file; it is located in app/modules/connections. The file will look like:
{
  "name": "db",
  "module": "dbconnector",
  "action": "connect",
  "options": {
    "client": "mysql",
    "connection": {
      "host": "db",
      "port": 3306,
      "user": "db_user",
      "password": "CvP10qzj",
      "database": "node_project_mysql"
    }
  }
}
After adding the pool properties it would look like:
{
  "name": "db",
  "module": "dbconnector",
  "action": "connect",
  "options": {
    "client": "mysql",
    "connection": {
      "host": "db",
      "port": 3306,
      "user": "db_user",
      "password": "CvP10qzj",
      "database": "node_project_mysql"
    },
    "pool": {
      "min": 1,
      "max": 1
    }
  }
}
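To make concrete why the pool's max caps what one process can open, here is a minimal, self-contained sketch of a connection pool in plain Node.js. It illustrates the mechanism only; TinyPool and openConnection are invented names for this sketch, not Knex internals.

```javascript
// Minimal connection-pool sketch. `openConnection` stands in for a real
// driver call; the pool never opens more than `max` connections, and
// idle connections are reused instead of reopened.
class TinyPool {
  constructor(openConnection, max) {
    this.open = openConnection;
    this.max = max;
    this.idle = [];    // connections parked and waiting to be reused
    this.total = 0;    // connections opened so far (never exceeds max)
    this.waiters = []; // callers queued while the pool is exhausted
  }

  async acquire() {
    if (this.idle.length > 0) return this.idle.pop(); // reuse an idle one
    if (this.total < this.max) {                      // room to open a new one
      this.total += 1;
      return this.open();
    }
    // Pool exhausted: wait until another caller releases a connection.
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release(conn) {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn);  // hand it straight to a queued caller
    else this.idle.push(conn); // otherwise park it for later reuse
  }
}
```

With min: 1 and max: 1, every query in the process is serialized through a single connection: that keeps you under the server's connection limit, but long-running operations can make everything behind them appear to hang.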
Thanks Patrick. Tried this, and the schema refresh just hangs then fails, so it didn't work for me.
I made a note in another post in coffee lounge, but you may be the right person.
I've connected to DigitalOcean fine with the DB, both a dev and a production DB, and updated the DB schema by applying dev changes.
I have now set up a connection pool in DigitalOcean and connected to this; then when I try to apply changes, nothing happens. It says it's applied them, but I'm left with a blank schema.
With the schema, you mean the Database Manager in Wappler? Do you use a direct connection there or the Server Connect connection?
I use the server connection.
And when I say schema: I set up the entire DB structure, all tables and all fields, in the dev environment, connecting to a "dev" version of the DB on DigitalOcean. Then on the production environment, I connected to a production version of the DB in DigitalOcean, applied the changes, and all the DB structure and fields were correctly updated in the production version. Perfect.
Then I created a connection pool to try to solve the DB connection issues I'm experiencing with Wappler. When I do, and then try the same process of updating the schema, the update says it's successful, but the structure doesn't change; it just has one folder, "tables".
Noting that DigitalOcean uses PgBouncer for the connection pooling.
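One general PgBouncer behaviour worth knowing here (not confirmed for this specific setup, but a plausible explanation for the blank schema): in transaction pooling mode, session-level features such as prepared statements and temporary tables are not supported, which can break schema introspection tools that work fine over a direct connection. A typical pgbouncer.ini fragment looks roughly like this; the values are illustrative, not DigitalOcean's actual defaults:

```ini
[pgbouncer]
; transaction = a server connection is assigned only for the duration
; of a transaction, then returned to the pool (no session state)
pool_mode = transaction
; server connections kept per database/user pair
default_pool_size = 20
; total client connections PgBouncer will accept
max_client_conn = 100
```

If introspection fails through the pool, trying the Database Manager against the database's direct (non-pooled) port is a reasonable first diagnostic.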
Any new learnings around this topic?
Experienced the same on my DigitalOcean droplet. Running the query below showed me that I had 50+ connections. Some of the connections were 3 days old and idle.
select *
from pg_stat_activity
where datname = 'database_name';
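If stale sessions like those are holding connection slots, Postgres can terminate them server-side with pg_terminate_backend. A common pattern is below; the one-hour threshold is just an example, and the role running it needs permission to terminate those backends, so be careful not to kill sessions that are actually in use:

```sql
select pg_terminate_backend(pid)
from pg_stat_activity
where datname = 'database_name'
  and state = 'idle'
  and state_change < now() - interval '1 hour';
```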
Has anyone concluded with some kind of best practice?
anyone?
The Wappler update that introduced the global database connections fixed most of the issues; Server Connect now reuses the existing connection pool more effectively. If you are using NodeJS, you can have a look at http://knexjs.org/#Installation-pooling to configure the pooling. Before the update it created a new pool each time; now it reuses the pool, and you can set the min and max connections it should use in the connection file.
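For anyone curious what "reuses the pool" means in practice in Node: require() caches a module after its first load, so exporting one pool (or one knex instance) from a module means every server action that requires it shares the same pool. A hand-rolled sketch of the pattern follows; getPool and createPool are illustrative names, not Wappler's actual internals.

```javascript
// Module-level singleton: the pool is created once per process and
// shared by every caller, instead of a new pool per request.
let pool = null;
let created = 0; // counts pool creations, for illustration only

function createPool(options) {
  created += 1;
  // In a real app this would be knex(options); a plain object stands
  // in here so the sketch is self-contained and runnable.
  return { options, query: (sql) => `ran: ${sql}` };
}

function getPool(options) {
  if (!pool) pool = createPool(options); // first caller creates the pool
  return pool; // later callers get the same pool (their options are ignored)
}

module.exports = { getPool };
```

Because Node caches the module, every require of this file in the same process sees the same pool object, so the total connection count stays bounded by that one pool's max instead of growing with each request.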
Thanks @namakemono and @patrick... but I'm still having problems with Postgres. I changed the connection configs with pooling options (min: 1, max: 1, following Patrick's advice), but it is still very slow in the production environment when more than one user accesses the web app.
That advice was from before the update; at that time a new pool was created for each server action, and that caused too many connections. That has been fixed and the connection pool is now being reused, so it is better to set higher values now.
Most of the connection problems posted here were fixed long ago, so I will close this topic; new problems should be posted in a new bug report.