Nope, you shouldn't have to do that. Create queue only needs to be done once, which establishes the workers. You can add jobs before creating the queue and those jobs will run once the queue is created.
Someday I'll add an option to log to files, which is what I've done to debug.
Thanks. That's weird, because if I add API jobs to a queue then they get processed. But if I add more jobs afterwards to the same queue, then they end up getting stuck in the waiting state unless I run the create queue action for the same queue again to assign new workers.
I think I've pinned down the issue. It looks like the worker was busy doing nothing because the API action didn't send a response back. Wrapping the API action in a Try/Catch and always sending a response that way solved the issue.
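For anyone hitting the same stall, the pattern can be sketched in plain Node as a wrapper that guarantees a response either way (`runJobSafely` is a hypothetical name; in Wappler this is done with Try/Catch steps inside the API action):

```javascript
// Sketch of the fix described above: the job handler is wrapped so that a
// response is ALWAYS produced, even when the handler throws. Otherwise the
// worker waits on a reply that never comes and appears "busy doing nothing".
// runJobSafely is an illustrative name, not part of the extension.
async function runJobSafely(handler, payload) {
  try {
    const body = await handler(payload);
    return { status: 200, body };                         // success: normal response
  } catch (err) {
    return { status: 500, body: { error: String(err) } }; // failure: still respond
  }
}
```

With this shape, a throwing handler still releases the worker, so subsequent jobs in the queue keep being picked up.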
I've modified the extension a little to allow for TLS connections to managed databases like on DigitalOcean. Should I create a pull request, or is that something you don't want in the extension?
So after a lot of debugging, what's happening is that no worker gets reattached to a queue after Node restarts. So when nodemon restarts Node, jobs get stuck in the delayed or waiting state unless I rerun the create queue action to reattach workers. Do you have a solution for that other than checking the status of each queue after Node restarts, @mebeingken?
This is what I would expect. I don't believe we have anything that fires upon server startup, so I use cron to create queues periodically. If the queue is already created upon attempt to create, it skips that process.
Does this extension not support authentication? I always receive ReplyError: NOAUTH Authentication required despite the connection URL in Wappler containing the password, and it's working in other clients.
I've got to say, aside from that though, it's a nice-looking extension.
I've not yet added anything specific to authentication to the extension.
If your job API file is expecting a query string, there is no mechanism in the config to add that; it only supports POST vars.
As you are building this, remember that when the job runs, it will be outside of the context of the user session that triggered it (ie. a logged in user session), so other arrangements must be made to identify the user, or other things specified in a server session.
I'll have to see if I can find a way to add in auth, since Wappler itself is working with Redis (along with Auth). If I can get it figured out I'll send a PR.
I just added in `password: yourpassword` within the defaultQueueOptions. Granted, I only spent two minutes checking; I don't think it's possible to grab that from global.redisClient, but it's a quick fix for anyone else using auth.
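For anyone looking for the same quick fix, the options object might look roughly like this (a sketch only: the nesting under a `redis` key follows Bull's usual options shape, and the values are placeholders, not the extension's actual code):

```javascript
// Illustrative sketch: Bull-style queue options with an inline Redis
// password, as described above. Values are placeholders.
const defaultQueueOptions = {
  redis: {
    host: '127.0.0.1',
    port: 6379,
    password: 'yourpassword', // quick fix: hard-code the password here
  },
};
```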
I'm authenticating queue API calls with a JWT. The JWT includes the user_id and is signed with a key stored base64-encoded in the ENV. So when a user sends a request from the frontend, I check whether they are authenticated via Wappler's security provider; if so, I generate the JWT and add the job to the queue.
The API job then verifies the JWT with a public key stored base64-encoded in the ENV.
That's a good idea, though in my case it's for a totally automated system with no user input, so that's not really something I have to worry about. For me, the issue was actually connecting to the Redis server with a password, since I had enabled it there for testing (as the server was exposed to the outside world).
That makes sense, I misunderstood. As you said, you can just change the default queue options. I changed them as well to allow TLS handshakes, passwords, longer connection timeouts, and cluster support for dedicated DigitalOcean Redis databases. It's actually better to format parts of the connection as an object in the extension if you need to add things like passwords. Maybe I can pull up the code tomorrow if you are interested.
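As a sketch, a connection formatted as an object rather than a URL, with the extras mentioned above, could look like this. The field names follow ioredis options (which Bull accepts); the values are placeholders, not my actual config:

```javascript
// Sketch only: ioredis-style connection object for a managed Redis database,
// covering TLS, password, and a longer connection timeout as described above.
const redisConnection = {
  host: 'your-db.db.ondigitalocean.com', // placeholder host
  port: 25061,
  username: 'default',
  password: 'yourpassword',
  tls: {},               // empty object enables TLS without pinning a cert
  connectTimeout: 30000, // longer timeout in ms for remote managed databases
};
```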
Hi, Ken. I finally got to your extension. Let me torment you a little with questions about how it works.
I don't quite understand this point; can you give a little explanation? Is some kind of separate check being done, or are you just running a queue creation step which simply won't execute if a queue with that name already exists? If the implementation has a condition step that checks for the existence of a queue, how is that check done?
Hello Tobias. Tell me, how do you monitor queues and the jobs in them during debugging? I haven't worked with queues before, so this is quite new to me and raises a lot of questions.
Thank you, Tobias, for the information. I am grateful in advance for your work on the new version of the extension. If the new version can log queues, it will help a lot with debugging.
I want to clarify one question. Did I understand correctly from the Redis logs that if, at the end of each API action, I run a step that creates a queue with the same name, the queue is in fact created only once, and further attempts at this step will not create anything? Is that right?
The creation of a queue is what attaches workers to process anything previously added to the queue. For example, you could execute a series of Add Job actions, which would put jobs in a queue, where they would wait forever. Once a queue is created with that name, the processing of those jobs begins.
If there is an attempt to create a queue, and both the name of that queue and the type of queue (api or library) match an existing one, it will not be recreated, but it will make sure workers are attached.
Assuming the queue name is the same, it would not be necessary (or desired) to have the create queue action inside a repeat. Create it once.
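The "create once, skip if it already exists" behavior described above can be sketched like this (the `queues` map and `ensureQueue` name are illustrative, not the extension's actual code):

```javascript
// Sketch of idempotent queue creation: a queue is keyed by name + type.
// The first call creates it (attaching workers); later calls with the same
// key just return the existing queue instead of recreating it.
const queues = new Map();

function ensureQueue(name, type, createFn) {
  const key = `${type}:${name}`;
  if (!queues.has(key)) {
    queues.set(key, createFn(name)); // first attempt: create and attach workers
  }
  return queues.get(key);            // repeat attempts: no recreation
}
```

This is why running the create action from cron (or on server start) is safe: repeated calls are no-ops apart from ensuring workers are attached.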
@tbvgl shows how to call an api on server start, which is a great place to setup your queues, and to restart them should jobs be in the queue before the restart.
The extension has been updated to include logging, which is very helpful in debugging the queue behavior since much of it takes place in backend processes. I can't find any breaking changes, but PLEASE TEST this version thoroughly as the logging actions were inserted throughout the code.
Special thanks to @tbvgl for contributions of ENV variables for Redis settings, the Bull log option, and optimization of my rudimentary use of JavaScript!
Optional ENV Variables
REDIS_PORT: The Redis port
REDIS_HOST: The Redis host
REDIS_BULL_QUEUE_DB: The Redis database for bull queues
REDIS_PASSWORD: The Redis password
REDIS_USER: The Redis user
REDIS_TLS: The TLS certificate. Define it as {} if you need a TLS connection without defining a cert.
REDIS_PREFIX: The prefix for the database. This is useful if you need to connect to a cluster.
REDIS_BULL_METRICS: Boolean. Enables Bull metrics collection which can be visualised with a GUI like https://taskforce.sh/
REDIS_BULL_METRICS_TIME: The timeframe for metric collection. Defaults to TWO_WEEKS if metrics are enabled
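As a sketch of how these variables could be assembled into a single connection-options object (field names follow ioredis, which Bull accepts; `buildRedisOptions`, the defaults, and the parsing are my assumptions, not the extension's code):

```javascript
// Illustrative sketch: build an ioredis-style options object from the ENV
// variables listed above. Defaults and parsing are assumptions.
function buildRedisOptions(env) {
  const opts = {
    host: env.REDIS_HOST || '127.0.0.1',
    port: Number(env.REDIS_PORT) || 6379,
  };
  if (env.REDIS_PASSWORD) opts.password = env.REDIS_PASSWORD;
  if (env.REDIS_USER) opts.username = env.REDIS_USER;
  if (env.REDIS_BULL_QUEUE_DB) opts.db = Number(env.REDIS_BULL_QUEUE_DB);
  // "{}" enables TLS without pinning a certificate.
  if (env.REDIS_TLS) opts.tls = JSON.parse(env.REDIS_TLS);
  if (env.REDIS_PREFIX) opts.keyPrefix = env.REDIS_PREFIX;
  return opts;
}
```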
New actions
Configure logging (v 1.4.0)
Three types of logging are supported with log levels of Error, Warn, Info, Debug. The logging action configures logging globally for all other bull queue actions that execute after it.
Console
Always on, defaults to log_level: Error
File
Enabled by providing the desired log level
Disabled with 'none' or empty
Defaults to disabled
Creates a daily rotated text file in /logs of the app
Keeps 14 days of logs
Bull (integrated bull queue job logging suitable for UI like Bull Board)
@mebeingken I'm on a Windows server; the only way I can run Redis is via Remote Docker. Will this excellent extension work with Remote Docker…? I'm guessing it will, but need to check before committing any time to it.