How expensive is a WHERE clause using Set Value?

Hi,

I have a table that is used to populate a form, which the user can then edit (or not). In my case it stores business config using two columns, “config_name” and “config_value”; some rows are simple values and others contain JSON data.

The API has a simple “Database Query” step against that table, and then I use a “Set Value” with a WHERE clause to extract each config_value by its “config_name”.

The API looks like this: some “Set Value” steps retrieve simple data and others parse JSON using “parseJSON” at the end, but all read from the same “Database Query”. Every “Set Value” has output enabled because I need the values to populate the form.

(Image 1)

The table looks like this:

(Image 2)

And on the client side, the front end looks something like this:

(Image 3)

Now my question is whether each “Set Value” with a WHERE clause only filters the result of the “Database Query” step, or whether it also triggers a real query against the database. In other words, does each “Set Value” mean a direct database query?

The API screenshot (Image 1) is cropped; in reality there are around 30 “Set Value” steps, so I wonder whether each “Set Value” costs database performance, or whether, since each one reads from the same “Database Query”, it takes the values from some cache and has no performance cost.

At the same time, I know that each “Set Value” with a WHERE clause can be translated to the client side, for example:
sc_config_business.data.get_config_empresa.where('config_name', 'home_total_muebles', '==')[0].config_value
But I wonder whether performance is the same on the client side as on the server side using “Set Value”.
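
For reference, the client-side expression above boils down to an array filter over the query result. Here is a minimal plain-JavaScript sketch (not Wappler’s actual implementation; the row shape and config names are assumptions based on the table described):

```javascript
// Rows as returned by the single "Database Query" step
// (shape assumed from the config_name / config_value columns described above).
const rows = [
  { config_name: 'home_total_muebles', config_value: '12' },
  { config_name: 'home_banner', config_value: '{"title":"Sale","active":true}' }
];

// Rough equivalent of:
//   rows.where('config_name', 'home_total_muebles', '==')[0].config_value
function getConfig(rows, name) {
  const match = rows.filter(r => r.config_name === name)[0];
  return match ? match.config_value : undefined;
}

console.log(getConfig(rows, 'home_total_muebles')); // '12'
```

Whether this runs in the browser or in a server-side “Set Value”, it is the same in-memory filter; the difference is only whose CPU does the work and how big the array is.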

This old blog post (2008) on the Percona site, https://www.percona.com/blog/how-expensive-is-a-where-clause-in-mysql/, says that a simple WHERE clause adds between 60 and 80% overhead, so my concern is trying to build something “future proof”, keeping in mind that the table keeps growing.

I would really appreciate your help with this; I don’t know if this is the best approach, or if I’m doing something wrong.

Thanks.

There’s only one database query: the Database Query step, whose result is stored in a variable. The other Set Values run in PHP/Node.js; they use some CPU too (not important though). Avoid offloading processing to clients, not everyone has a fast device. Scaling a server is cheaper than a developer’s time. You’re welcome
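
To illustrate the point with a hypothetical sketch in plain JavaScript (not Wappler’s internals): the query result sits in a server-side variable, and each Set Value is just an in-memory scan over that array, with no extra database round trip. If 30 scans ever felt heavy, the rows could be indexed once in a Map so every lookup becomes O(1) instead of O(n):

```javascript
// Hypothetical in-memory result of the one Database Query step.
const rows = [
  { config_name: 'home_total_muebles', config_value: '12' },
  { config_name: 'home_title', config_value: 'My Shop' }
];

// Build the index once per request...
const configIndex = new Map(rows.map(r => [r.config_name, r.config_value]));

// ...then each "Set Value" equivalent is a constant-time lookup.
const totalMuebles = configIndex.get('home_total_muebles'); // '12'
const homeTitle = configIndex.get('home_title');            // 'My Shop'
```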

OK, so it’s better to do it server side the way I’m doing it, and not make the client side process all that.
And add more cores, replicas, load balancing, etc. to handle the processing if the load grows exponentially, right?

Thanks @Apple, that was exactly my concern; now I can finally sleep.

Just one more thing: I wonder whether there is another, cleaner solution for this, or whether this is the common scenario for most Wappler users:

The modelling for this query is less common and appears to use an overloaded table (multiple users, multiple reference points with fact data), but your implementation of extracting the reference values is just fine. Transposing the query is also a possibility, where the rows turn into output columns for easier reference; it would just need to be a custom query.
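
If it helps, the transposing idea can also be sketched outside SQL: after the single query, the ~30 name/value rows are reduced into one object whose keys are the config names, so the consumer references config.home_total_muebles instead of repeating a where filter per value. A plain-JavaScript sketch (column names assumed from the thread):

```javascript
// Rows from the single Database Query (shape assumed).
const rows = [
  { config_name: 'home_total_muebles', config_value: '12' },
  { config_name: 'home_banner', config_value: '{"active":true}' }
];

// Pivot: each config_name becomes a key on a single output object.
const config = rows.reduce((out, r) => {
  out[r.config_name] = r.config_value;
  return out;
}, {});

console.log(config.home_total_muebles); // '12'
```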

You always want to do as much pre-filtering in the DB as possible (in this case I think it’s probably just filtering by the user id), as DBs can be efficiently indexed over time. Server-side filtration is harder to optimize at scale, but plucking reference values like this is very lightweight, even in the hundreds.

Thanks @xsfizzix for your time.

I’m filtering by an ID, in my case business_id; each business has around 30 rows in the table. It’s a SaaS, so all businesses reside in the same table, all references are by incremental ID, and there are no hard deletes of rows, only soft deletes (an isDeleted column is present). I opted for this model thinking the database could do a better job indexing as the table grows exponentially.

In Wappler, with all the “Set Value” steps, the API file itself has 1,400 lines and weighs about 50 KB. Each time this API is called from the browser it takes about 1.5-2 seconds to load, and that’s locally, using Docker with the out-of-the-box Wappler config.

For testing purposes the table currently has only one business_id, i.e. a table with 30 rows. I’m trying to create a scenario with more business_ids to test loading time, but right now I feel like 2 seconds is too much for a simple query.

As for transposing, my knowledge doesn’t go that far, and I barely use custom queries, only in specific cases like updating a JSON field using keys and values, like this:
(Image 4)

I would really appreciate an example, if possible, of how to optimize the API.

Thanks

If I were doing this, I’d suggest it is better done on the client with dataviews or simple sc.where filters. It’s incredibly fast on the client and offloads the processing from the server. Equally, you probably don’t get the benefits of dataset caching if you’re running all that on the server.

But if you’ve got less than a few thousand users, it makes no real difference


For the timing: it should not be taking >1.5 seconds for this. Duplicate your API and yank out all logic besides the query, then call that stripped-down API on a test page and observe the timing. For a table of 1,400 rows, with even zilch for indexing, this shouldn’t take more than 200-250 ms round trip. If it takes more than that, the problem lies outside of your logic for delivering the reference values.

I use a rule of thumb:

  • Fewer than 3,000 records → client side
  • More than 3,000 records → server side

Thanks to all for your suggestions

Something strange is happening, and I don’t really know why.

Context:
My dev environment is local, using Docker with Wappler’s out-of-the-box settings, with Redis in the same Docker setup and a local MariaDB database.

My staging environment is different: a Hetzner cloud server in the US, without Docker, using CloudPanel directly with a Node.js project, Redis installed on the same server, everything running under pm2, and MariaDB on another server in the same location, so a private network can restrict access to TCP connections on port 3306 only.

In the local dev environment with Docker it actually takes 1.5-2 s to load the API, but in staging it only takes 260-290 ms. Maybe the default Docker settings don’t use multiprocessing, or something like that is making processing of the API file slow? With these numbers I’m OK though, and I think I shouldn’t worry about optimization for now; the processing logic looks fine and the API works as it should.

Any idea why Docker makes processing the files so slow?

This may be largely dependent on your local hardware. If you’re pushing, say, the (Windows equivalent of) 8 GB of RAM on an i5 processor, don’t expect great performance running both a web server and a DB node while Wappler is also running. If it’s working as expected in staging/QA, then you can be confident your code setup will be fine.

That said, I also had a bug a while back on a new project where the DB wouldn’t respond for a few seconds on any update in local dev (and the DB was custom + remote, so hardware wasn’t the issue), yet QA was fine. I trashed the project, recreated it, and it worked just fine thereafter. No idea.