🦬 Bull Queues for Node

Nope, you shouldn't have to do that. Create Queue only needs to be done once, which establishes the workers. You can add jobs before creating the queue, and those jobs will run once the queue is created.

Someday I'll add an option to log to files, which is what I've done to debug.

Thanks. That's weird, because if I add API jobs to a queue they get processed. But if I add more jobs afterwards to the same queue, they end up stuck in the waiting state unless I run the Create Queue action for the same queue again to assign new workers.

I think I've pinned down the issue. It looks like the worker was busy doing nothing because the API action didn't send a response back. Wrapping the API action in a try/catch and always sending a response that way solved the issue.
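
The fix described above can be sketched as a wrapper around the job's API handler (the helper name `safeHandler` and the response shape are assumptions, not the extension's actual code): whatever happens, a response is always sent, so the worker's HTTP call to the job API can complete instead of hanging.

```javascript
// Hypothetical wrapper (not part of the extension): guarantees the job API
// always replies, so the worker calling it is never left waiting forever.
function safeHandler(fn) {
  return async (req, res) => {
    try {
      const result = await fn(req, res);
      // If the handler did not reply itself, send a success response.
      if (!res.headersSent) res.status(200).json({ ok: true, result });
    } catch (err) {
      // On any thrown error, still reply so the worker can finish the job.
      if (!res.headersSent) res.status(500).json({ ok: false, error: err.message });
    }
  };
}

// Demo with a fake res object (in practice this would be an Express route):
const res = {
  headersSent: false,
  status(c) { this.code = c; return this; },
  json(b) { this.body = b; this.headersSent = true; },
};
safeHandler(async () => { throw new Error('boom'); })({}, res)
  .then(() => console.log(res.code, res.body.error)); // 500 boom
```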

I've modified the extension a little to allow for TLS connections to managed databases like on DigitalOcean. Should I create a pull request or is that something you don't want in the extension?

So after a lot of debugging, what's happening is that no worker gets reattached to a queue after Node restarts. So when nodemon restarts Node, jobs get stuck in the delayed or waiting state unless I rerun the Create Queue action to reattach workers. Do you have a solution for that other than checking the status of each queue after Node restarts, @mebeingken?

This is what I would expect. I don't believe we have anything that fires upon server startup, so I use cron to create queues periodically. If the queue is already created upon an attempt to create it, it skips that process.


Does this extension not support authentication? I always receive ReplyError: NOAUTH Authentication required despite the connection URL in Wappler containing the password, and it's working in other clients.

I've got to say, aside from that though, it's a nice-looking extension.

I've not yet added anything specific to authentication to the extension.

If your job API file is expecting a query string, there is no mechanism in the config to add that; it only supports POST vars.

As you are building this, remember that when the job runs, it will be outside the context of the user session that triggered it (i.e. a logged-in user session), so other arrangements must be made to identify the user, or anything else specified in a server session.

I'll have to see if I can find a way to add in auth, since Wappler itself is working with Redis (along with Auth). If I can get it figured out, I'll send a PR.

Well, that was easy enough.

I just added "password: yourpassword" within the defaultQueueOptions. Granted, I only spent two minutes checking; I don't think it's possible to grab that from global.redisClient, but it's a quick fix for anyone else using auth.
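
For anyone searching later, the quick fix above might look roughly like this. This is a sketch only: the exact shape of the extension's defaultQueueOptions isn't shown in this thread, but Bull hands its `redis` object straight to ioredis, where `password` is a standard field.

```javascript
// Sketch only: option names inside the extension's defaultQueueOptions may
// differ, but Bull passes the `redis` object to ioredis unchanged, so a
// password can be supplied there.
const defaultQueueOptions = {
  redis: {
    host: '127.0.0.1',
    port: 6379,
    password: 'yourpassword', // the quick fix described above
  },
};

// Usage would then be something like (requires the bull package):
//   const queue = new Queue('emails', defaultQueueOptions);
console.log(defaultQueueOptions.redis.password);
```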

I'm authenticating queue API calls with a JWT. The JWT includes the user_id and is signed with a key stored base64-encoded in the ENV. So when a user sends a request from the frontend, I check if they are authenticated via Wappler's security provider, and if so I generate the JWT and add the job to the queue.
The API job then verifies the JWT with a public key stored base64-encoded in the ENV.

That's a good idea, though in my case, it's for a totally automated system with no user input, so that's not really something I have to worry about. For me, the issue was actually connecting to the Redis server with a password, since I had enabled it there for testing (as the server was exposed to the outside world).

That makes sense; I misunderstood. As you said, you can just change the default queue options. I changed them as well to allow TLS handshakes, passwords, longer connection timeouts, and cluster support for dedicated DigitalOcean Redis databases. It's actually better to format parts of the connection as an object in the extension if you need to add things like passwords. Maybe I can pull up the code tomorrow if you are interested.


Hi, Ken. I finally got to your extension. Let me torment you a little with questions about how it works.

I don't quite understand this point; can you give a little explanation? Is some kind of separate check being done, or do you just run the queue-creation step, and if a queue with that name already exists the step simply isn't executed? If the implementation uses a condition step that checks whether a queue exists, how is that check done?

Thank you in advance for your help.

Hello Tobias. Tell me, how do you monitor queues and the jobs in them during debugging? I haven't worked with queues before, so this is quite new to me and raises a lot of questions.

Thanks.

You can either check Redis directly with any Redis GUI or use something like Bull Board: https://github.com/felixmosh/bull-board

The next extension release will have more logging functionality. So maybe wait for a week. 🙂


Thank you, Tobias, for the information. I am grateful in advance for your work on the new version of the extension. If the new version can log queues, it will help a lot with debugging.

I want to clarify one thing I see in the Redis logs when, at the end of each API action, I run the step of creating a queue with the same name.

In fact, the queue is created only once; further attempts at this step will not create anything. Is that right?

The creation of a queue is what attaches workers to process anything previously created in the queue. For example, you could execute a series of Add Job actions, which would put jobs in a queue, where they would wait forever. If a queue is created with that name, then the processing of those jobs would begin.

If there is an attempt to create a queue, and the name of that queue and the type of queue (api or library) are the same, then it will not be recreated, but it will make sure workers are attached.

Assuming the queue name is the same, it would not be necessary (or desired) to have the create queue action inside a repeat. Create it once.
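
The create-once behavior described above can be sketched as a small registry keyed by queue name and type (all names here are hypothetical; the extension's internals may differ). A second creation attempt with the same name and type returns the existing instance, which is why workers stay attached and previously added jobs start processing:

```javascript
// Hypothetical sketch of "create once, reuse" semantics, keyed by name + type.
const queues = new Map();

function ensureQueue(name, type, factory) {
  const key = `${type}:${name}`;
  if (queues.has(key)) return queues.get(key); // already created: skip creation
  const queue = factory(name); // e.g. (name) => new Queue(name, { redis: opts })
  queues.set(key, queue);
  return queue;
}

// Repeated creation attempts reuse the same instance:
let created = 0;
const factory = (name) => ({ name, id: ++created });
const a = ensureQueue('emails', 'api', factory);
const b = ensureQueue('emails', 'api', factory);
console.log(a === b, created); // true 1
```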

@tbvgl shows how to call an api on server start, which is a great place to setup your queues, and to restart them should jobs be in the queue before the restart.


The extension has been updated to include logging, which is very helpful in debugging queue behavior since much of it takes place in backend processes. I can't find any breaking changes, but PLEASE TEST this version thoroughly, as the logging actions were inserted throughout the code.

Special thanks to @tbvgl for contributions of ENV variables for Redis settings, the bull log option, and optimization of my rudimentary use of JavaScript!

Optional ENV Variables

  • REDIS_PORT: The Redis port
  • REDIS_HOST: The Redis host
  • REDIS_BULL_QUEUE_DB: The Redis database for bull queues
  • REDIS_PASSWORD: The Redis password
  • REDIS_USER: The Redis user
  • REDIS_TLS: The TLS certificate. Define it as {} if you need a TLS connection without defining a cert.
  • REDIS_PREFIX: The prefix for the database. This is useful if you need to connect to a cluster.
  • REDIS_BULL_METRICS: Boolean. Enables Bull metrics collection which can be visualised with a GUI like https://taskforce.sh/
  • REDIS_BULL_METRICS_TIME: The timeframe for metric collection. Defaults to TWO_WEEKS if metrics are enabled
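
A hedged sketch of how these variables might be assembled into an ioredis-style connection object (the function name and defaults are assumptions; the extension's actual code may differ). Note how REDIS_TLS set to {} enables TLS without a specific cert, per the list above:

```javascript
// Sketch: build a Redis connection object from the ENV variables listed above.
// Field names follow ioredis conventions; the extension's code may differ.
function redisOptionsFromEnv(env = process.env) {
  const opts = {
    host: env.REDIS_HOST || '127.0.0.1',
    port: Number(env.REDIS_PORT || 6379),
    db: Number(env.REDIS_BULL_QUEUE_DB || 0),
  };
  if (env.REDIS_PASSWORD) opts.password = env.REDIS_PASSWORD;
  if (env.REDIS_USER) opts.username = env.REDIS_USER;
  if (env.REDIS_PREFIX) opts.keyPrefix = env.REDIS_PREFIX;
  // REDIS_TLS holds JSON; '{}' yields an empty object, enabling TLS with no cert.
  if (env.REDIS_TLS) opts.tls = JSON.parse(env.REDIS_TLS);
  return opts;
}

console.log(redisOptionsFromEnv({ REDIS_HOST: 'redis.example.com', REDIS_TLS: '{}' }));
```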

New actions

Configure logging (v 1.4.0)

Three types of logging are supported, with log levels of Error, Warn, Info, and Debug. The logging action configures logging globally for all other Bull queue actions that execute after it.

  1. Console
     • Always on, defaults to log_level: Error
  2. File
     • Enabled by providing the desired log level
     • Disabled with 'none' or empty
     • Defaults to disabled
     • Creates a daily rotated text file in /logs of the app
     • Keeps 14 days of logs
  3. Bull (integrated Bull queue job logging, suitable for a UI like Bull Board)
     • Enabled with a value of true
     • Disabled with a value of false
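
The level ordering above (Error, Warn, Info, Debug) can be sketched as a simple threshold check (a hypothetical helper for illustration, not the extension's actual implementation):

```javascript
// Hypothetical level filter: a message is emitted when its level is at or
// above the configured threshold; 'none' or empty disables the transport.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function shouldLog(configuredLevel, messageLevel) {
  if (!configuredLevel || configuredLevel === 'none') return false;
  return LEVELS[messageLevel] <= LEVELS[configuredLevel];
}

console.log(shouldLog('error', 'debug')); // false: console default hides debug
console.log(shouldLog('debug', 'info'));  // true: file log at debug shows info
```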

@mebeingken I'm on a Windows server; the only way I can run Redis is via Remote Docker. Will this excellent extension work with Remote Docker…? I'm guessing it will but need to check before committing any time to it.

I think with the addition by @tbvgl of environment variables to specify the Redis connection, it can be anywhere.
