I want to find a general solution that works for everyone. Is your situation the same as how Ben is using it, or do you approach it differently?
Today, I separate all data using a tenant_id on every table and apply a $_SESSION.tenant_id filter on every query, insert, etc. All in one database, all on the same connection.
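In Knex terms (just a sketch for the NodeJS folks; my actual code is PHP, and the table/column names here are made up), the pattern looks like this:

```js
// Sketch only: assumes a configured Knex instance and a session
// object that carries the current tenant id.
const tenantId = req.session.tenant_id;

// Read: every query is filtered by the current tenant.
const invoices = await knex('invoices').where({ tenant_id: tenantId });

// Write: the tenant id is stamped onto every insert.
await knex('invoices').insert({ tenant_id: tenantId, total: 99.5 });
```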
If that were to move to separate databases, there would still be "app level" processes (especially in the scheduler/cron) that would require activities crossing tenants.
My interest here is to further reduce the risk of inadvertently mixing data across tenants by having them completely separated in different databases on the same server. It's not a requirement for me today, but I can see the need as things scale.
Updated the code to cache dynamic db connections differently.
app.zip (6.0 KB) unzip to lib/core
Thanks Patrick,
I'm just out of the house but will try it in a bit, when I get back.
Love it! Seems to work perfectly. Will keep testing, but seems good - I haven't used transactions yet so maybe @mebeingken has a project he can test those with?
One minor thought: if any SC action throws an error, does the response include the connection name (if debugging is accidentally left on)? Would it take a lot to hash the string, e.g. with SHA-256, to obfuscate connection details when using them as the connection name in the cache?
I first wanted to use something like MD5 for the hash; it doesn't have to be secure, just unique. But since the name is only used within that single function, it is not a security risk: the hashed name is not used anywhere outside.
I don't have any debugging within that function; any debugging outside of it would return the default connection name.
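For reference, deriving the name is a one-liner in Node; a simplified sketch (the actual variable names in app.js may differ):

```js
const crypto = require('crypto');

// Derive the cache name from the resolved connection options.
// SHA-256 here, though as said it only needs to be unique, not secure.
// Note: this assumes a stable key order in the options object.
function connectionCacheKey(options) {
  return crypto.createHash('sha256')
    .update(JSON.stringify(options))
    .digest('hex');
}
```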
I think we should also implement online, on-demand database schema migration @patrick
So that databases can be dynamically created on demand and their schema kept directly up to date.
This will also be handy for databases that can't be accessed directly by Wappler, or databases without a server, like SQLite.
It will also make the development process much easier, as the database will be automatically updated on web service start.
Only we will have to build our own cross-platform and cross-database schema migrator, as we can't use the current Knex migrations for PHP.
Also, we can't use just a bunch of SQL statements, as those are not the same across different databases.
So we will have to make our own schema update descriptions, probably in a similar JSON format as Server Connect, and run those automatically on server start.
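Purely as a sketch of what such a description could look like (this format doesn't exist yet; every field name here is hypothetical):

```json
{
  "name": "20240101_add_invoices",
  "up": [
    {
      "createTable": "invoices",
      "columns": [
        { "name": "id", "type": "increments" },
        { "name": "tenant_id", "type": "integer", "index": true },
        { "name": "total", "type": "decimal" }
      ]
    }
  ],
  "down": [
    { "dropTable": "invoices" }
  ]
}
```

A small runtime per target engine would then translate these abstract steps into the correct engine-specific SQL.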
Why don't you focus on one language instead of trying to make everything universal?
Because implementing everything on a higher level allows us to have more control over the implementation internals, without the need to constantly rewrite code. Also, supporting multiple platforms is easier because the workflows are the same; only the implementation runtime is different.
The downside is indeed that we have to stick to features that are possible on all platforms, and that can be limiting. But we can have exceptions per platform and implement platform-specific functionality.
In light of a generic database schema migrator, it will be very beneficial to have it generic on a meta level as well, because of the different databases it can run on. Those can have different database engines, supporting different SQL standards.
So by having the database migration on a higher abstraction level, we can fully control how it gets executed and react in case of errors.
This is difficult now, for example, with the Knex migrations we use: each migration is just JavaScript code that we generate once, and we can't enhance it later without fully rebuilding it. Also, if errors occur, the code might get half executed, leaving the schema in a half-updated state, which can result in more subsequent errors.
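For context, a generated Knex migration is just a one-shot JavaScript file, something like this simplified example:

```js
// A typical generated Knex migration: plain JavaScript, written once.
// If it throws halfway through, the schema is left partially updated.
exports.up = function (knex) {
  return knex.schema.createTable('invoices', function (table) {
    table.increments('id');
    table.integer('tenant_id').index();
    table.decimal('total');
  });
};

exports.down = function (knex) {
  return knex.schema.dropTable('invoices');
};
```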
@patrick / @Teodor Can you guys share a quick doc on how to implement this?
I got a bit lost in the discussion above, and the changes I see in the app.js file from the update are not very clear either.
This seems like a very good solution for building a multi-tenant app with a separate DB per tenant. I would personally prefer a single DB, but there have been use cases/client queries for this in the past.
You don't need to do anything, really. If you use dynamic bindings for the connection parameters and they change once deployed (e.g. if based on the user's company), the DB connection will use the correct details. Previously on NodeJS, once the first connection was made, it wouldn't change until restarted/redeployed.
I see a condition for isDynamic in the code changes. Is it safe to assume that it's set automatically by checking whether the connection details have bindings or plain text?
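In other words, something like this? (Purely my guess at the logic; the helper names below are made up, not taken from app.js.)

```js
// Hypothetical sketch of the caching, not the actual app.js code.
const connections = new Map();

function getConnection(config) {
  // isDynamic presumably gets set when the stored connection
  // details contain bindings instead of plain-text values.
  if (!config.isDynamic) return defaultConnection;

  // Resolve the bindings for this request, then cache one
  // connection per unique set of resolved values.
  const resolved = resolveBindings(config); // hypothetical helper
  const key = JSON.stringify(resolved);
  if (!connections.has(key)) {
    connections.set(key, createKnexConnection(resolved)); // hypothetical
  }
  return connections.get(key);
}
```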
We already have dynamic DB connection values, using ENV variables.
But what I assume needs to be done here is:
- get the DB connection values in every SA (or globals) into some variable or query
- make sure that this variable/query is configured in the DB connection bindings
- Wappler will automatically maintain a list of all unique connection values, and avoid too many connections to the DB server
Is that correct?
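For example, the connection settings would then hold bindings instead of plain values, something like this (syntax from memory; the variable and step names are just examples):

```
Host:     {{$_ENV.DB_HOST}}
User:     {{$_ENV.DB_USER}}
Password: {{$_ENV.DB_PASSWORD}}
Database: {{query1.db_name}}
```

with query1 being the step in the server action that looks up the tenant's database name.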
@patrick is this live? And will this check dynamic db connections once when a server action starts, or will it check for every db operation?
I have a primary db that's used for read and write in the EU and a replica in the US for read-only operations. So I need to dynamically adjust the connection depending on whether it is a read operation and whether it is in the US. Would this work with the current lib/core/app.js?
nvm, I can solve this more easily for my use case.
Add 2 db connections to Wappler.
For US: use the replica ENV creds for the read db and the write (EU) ENV creds for the write db.
For EU: use the (EU) ENV creds for both.
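So the two deployment targets just differ in their env files, something like this (hostnames and variable names are examples only):

```
# EU deployment .env: the primary handles both reads and writes
DB_READ_HOST=primary.eu.example.com
DB_WRITE_HOST=primary.eu.example.com

# US deployment .env: the replica for reads, the EU primary for writes
DB_READ_HOST=replica.us.example.com
DB_WRITE_HOST=primary.eu.example.com
```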