Hi @mebeingken, I'm trying to use a Bull queue with a very long-running process. It's failing after the 120000ms timeout. Is it possible to raise the timeout limit?
Edit: Also, it seems that Bull queues don't work from the scheduler, is that correct?
Any way to solve that? I want to create a job every minute and process them one by one within the queue.
TypeError: Cannot read properties of undefined (reading 'includes')
at App.exports.add_job_api (/opt/node_app/extensions/server_connect/modules/bull_queues.js:421:55)
at App._exec (/opt/node_app/lib/core/app.js:491:57)
at App._exec (/opt/node_app/lib/core/app.js:458:28)
at async App.exec (/opt/node_app/lib/core/app.js:427:9)
at async App.exec (/opt/node_app/lib/modules/core.js:228:13)
at async App._exec (/opt/node_app/lib/core/app.js:491:30)
at async App.exec (/opt/node_app/lib/core/app.js:427:9)
at async App.define (/opt/node_app/lib/core/app.js:417:9)
I have not attempted to use it from the scheduler, but I can understand how it might not work, as I am likely expecting to get variables, etc. from the actual request. Not sure if/when I would be able to refactor this to accommodate schedules.
I see a couple of options for you: 1. Use an external cron job that calls an API to kick off the process, or 2. Add a check under Globals that runs once if it has not already run (which would set up everything on the first hit to an API). Either way, once the process has been established, the job can create another job for 1 minute in the future to repeat itself, as in the sketch below.
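A minimal sketch of that self-repeating pattern at the Bull library level (the extension wraps this API in server actions, so the queue name, Redis URL, and job data here are placeholders, not the extension's own code):

```js
const Queue = require('bull');

const pollQueue = new Queue('poll-every-minute', 'redis://127.0.0.1:6379');

pollQueue.process(async (job) => {
  // ... do the actual work here ...

  // once the work is done, schedule the next run one minute from now
  await pollQueue.add(job.data, { delay: 60 * 1000 });
});

// the first hit (from cron or the Globals check) seeds the chain:
// await pollQueue.add({ seeded: true });
```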
I'm having issues with API jobs ending up in the waiting queue although there are no other jobs running, the queue limits allow for concurrent jobs, and the rate limit is set to 30 jobs per 60 seconds. But after adding 2-4 jobs, all the following jobs get added to the waiting queue. Any idea how to debug what's going on?
I think I figured it out. If I add a job and then create a queue, the workers will be automatically removed when all jobs in the queue are done. So I need to run the create queue action again to attach workers when I add a new job to the same queue. Is that right?
Nope, shouldn't have to do that. Create queue only needs to be done once, which establishes the workers. You can add jobs before creating the queue and those jobs will run once the queue is created.
Someday I'll add an option to log to files, which is what I've done to debug.
Thanks. That's weird, because if I add API jobs to a queue they get processed. But if I add more jobs to the same queue afterwards, they end up stuck in the waiting state unless I run the create queue action for the same queue again to assign new workers.
I think I've pinned down the issue. It looks like the worker was busy doing nothing because the API action didn't send a response back. Wrapping the API action in a Try/Catch and always sending a response that way solved the issue.
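The same idea outside Wappler's action UI, as a hedged sketch in plain Express (route name and handler body are placeholders): the point is that every code path, including the error path, sends a response, so the worker processing the job is never left waiting on the HTTP call.

```js
const express = require('express');
const app = express();

app.post('/api/long-job', async (req, res) => {
  try {
    // ... long-running work ...
    res.json({ ok: true });
  } catch (err) {
    // still respond on failure so the queue worker can finish the job instead of hanging
    res.status(500).json({ ok: false, error: err.message });
  }
});

app.listen(3000);
```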
I've modified the extension a little to allow for TLS connections to managed databases like on DigitalOcean. Should I create a pull request, or is that something you don't want in the extension?
So after a lot of debugging, what's happening is that no worker gets reattached to a queue after Node restarts. So when nodemon restarts Node, jobs get stuck in the delayed or waiting state unless I rerun the create queue action to reattach workers. Do you have a solution for that other than checking the status of each queue after Node restarts, @mebeingken?
This is what I would expect. I don't believe we have anything that fires upon server startup, so I use cron to create queues periodically. If the queue already exists when the create action runs, it skips that setup.
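A minimal sketch of that "create once, skip if it already exists" idea using a simple in-memory registry (the names and the registry itself are placeholders, not the extension's actual implementation):

```js
const Queue = require('bull');

const queues = {};

function ensureQueue(name, redisUrl) {
  if (queues[name]) return queues[name];    // already created this process: skip setup
  const queue = new Queue(name, redisUrl);
  queue.process(5, async (job) => {
    // ... worker logic, e.g. call the configured API action for this job ...
  });
  queues[name] = queue;
  return queue;
}

// a periodic cron hit (or the first request after a restart) just calls:
// ensureQueue('my-queue', 'redis://127.0.0.1:6379');
```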
Does this extension not support authentication? I always receive ReplyError: NOAUTH Authentication required despite the connection URL in Wappler containing the password, and it's working on other clients.
I've got to say, aside from that though, it's a nice-looking extension.
I've not yet added anything authentication-specific to the extension.
If your job API file is expecting a query string, there is no mechanism in the config to add that; it only supports POST vars.
As you are building this, remember that when the job runs, it will be outside the context of the user session that triggered it (i.e. a logged-in user session), so other arrangements must be made to identify the user or anything else normally stored in the server session.
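One way to handle that, sketched here with plain Bull and placeholder queue/field names: capture the user's identity while the authenticated request is still live and pass it along in the job data, since that data is all the worker will see.

```js
const Queue = require('bull');

const notifyQueue = new Queue('notify', 'redis://127.0.0.1:6379');

// called from the request handler while the user's session is still available
async function enqueueForUser(userId, payload) {
  await notifyQueue.add({ user_id: userId, payload });
}

notifyQueue.process(async (job) => {
  // no session exists here; job.data.user_id is the only identity we have
  const { user_id, payload } = job.data;
  // ... load whatever is needed for this user and do the work ...
});
```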
I'll have to see if I can find a way to add in auth, since Wappler itself is working with Redis (along with Auth). If I can get it figured out I'll send a PR.
I just added 'password: yourpassword' within the defaultQueueOptions. Granted, I only spent two minutes checking, and I don't think it's possible to grab that from global.redisClient, but it's a quick fix for anyone else using auth.
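For context, a hedged sketch of where a Redis password fits in Bull's queue options (the defaultQueueOptions name comes from the post above; the rest is plain Bull/ioredis, with the password pulled from an env var instead of hard-coded):

```js
const Queue = require('bull');

const defaultQueueOptions = {
  redis: {
    host: process.env.REDIS_HOST || '127.0.0.1',
    port: Number(process.env.REDIS_PORT) || 6379,
    password: process.env.REDIS_PASSWORD   // instead of a literal 'yourpassword'
  }
};

const authedQueue = new Queue('my-queue', defaultQueueOptions);
```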
I'm authenticating queue API calls with a JWT. The JWT includes the user_id and is signed with a key stored base64-encoded in the ENV. So when a user sends a request from the frontend, I check whether they are authenticated via Wappler's security provider, and if so, I generate the JWT and add the job to the queue.
The API job then verifies the JWT with a public key stored base64-encoded in the ENV.
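A minimal sketch of that flow using the jsonwebtoken package (env var names, the payload shape, and the expiry are placeholders):

```js
const jwt = require('jsonwebtoken');

const privateKey = Buffer.from(process.env.JWT_PRIVATE_KEY_B64, 'base64').toString('utf8');
const publicKey  = Buffer.from(process.env.JWT_PUBLIC_KEY_B64, 'base64').toString('utf8');

// when enqueuing, after Wappler's security provider has authenticated the user
function makeJobToken(userId) {
  return jwt.sign({ user_id: userId }, privateKey, { algorithm: 'RS256', expiresIn: '10m' });
}

// inside the API job, before doing any work
function verifyJobToken(token) {
  return jwt.verify(token, publicKey, { algorithms: ['RS256'] });  // throws if invalid or expired
}
```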
That's a good idea, though in my case it's for a totally automated system with no user input, so that's not really something I have to worry about. For me, the issue was actually connecting to the Redis server with a password, since I had enabled it there for testing (as the server was exposed to the outside world).
That makes sense, I misunderstood. As you said, you can just change the default queue options. I changed them as well to allow TLS handshakes, passwords, longer connection timeouts, and cluster support for dedicated DigitalOcean Redis databases. It's actually better to format parts of the connection as an object in the extension if you need to add things like passwords. Maybe I can pull up the code tomorrow if you are interested.
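Roughly what that connection object looks like, as a hedged sketch (values are placeholders for a DigitalOcean managed Redis instance, and cluster mode is left out for brevity):

```js
const Queue = require('bull');

const queue = new Queue('my-queue', {
  redis: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT) || 25061,
    password: process.env.REDIS_PASSWORD,
    tls: {},                 // enable the TLS handshake required by managed Redis
    connectTimeout: 30000    // allow a longer connection timeout
  }
});
```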
Hi, Ken. I finally got to your extension. Let me torment you a little with questions about how it works.
I don't quite understand this point; can you give a little explanation? Is some kind of separate check being done, or do you just run a queue creation step that simply won't execute if a queue with that name already exists? If the implementation has a condition step that checks for the existence of a queue, how is that check done?
Hello Tobias. Tell me, how do you monitor queues and the jobs in them during debugging? I haven't worked with queues before, so this is quite new to me and raises a lot of questions.