Project Missing Multiple Docker Targets

Wappler Version : 6.5.2
Operating System : Windows 11
Server Model: NodeJS
Database Type: MariaDB, PostgreSQL, Redis (Multiple Projects)
Hosting Type: Docker Custom and Local Docker

Expected behavior

What do you think should happen?

An existing project should not lose its database targets; targets that are actively working should continue to appear in Project Options.

Actual behavior

What actually happens?

After a Wappler update or a Docker update (I'm not sure which), my existing working projects are missing the database targets in Project Options. Most are missing; only a couple show up. I use a custom Docker server as the target, which has been working fine for months, but targets are now missing on both the custom Docker target and the local system Docker. The Development, Staging, and Production targets are all missing their existing Database and Redis targets.

How to reproduce

  • Detail a step by step guide to reproduce the issue
  • A screenshot or short video indicating the problem
  • A copy of your code would help. Include: JS, HTML.
  • Test your steps on a clean page to see if you still have an issue

Open an existing working project, go to Project Options, and see database targets missing in all three targets. The Resource Manager still shows all existing targets, and the site is still functional.

When trying to work on a new project and publish a new MariaDB database to the Docker target, I am now unable to do so and receive an error referencing database targets from other projects. The error changes to reference other projects' targets and volumes on each retry.

Local development: Docker version 4.29.0
Custom Docker target: Docker version 24.0.5, build 24.0.5-0ubuntu1~22.04.1

Production and Staging

image

Local development

image

Thanks for any assistance

Shot in the dark (I've had a lot of these recently), but could it have to do with the new naming conventions for Docker not being updated? I say this because your deployments will use your custom names (probably the old container names from previous builds), which may not reflect the new builds' naming policy (see below for an explanation).

I refer to the section below that starts 'Containers are now created with hyphens in their names instead of underscores'.

i.e., if you specified the container names manually in your customisation, you'll need to update them to match the new names Wappler creates under the new naming convention.
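For reference, the separator change can be sketched as a simple mapping. This is illustrative only; the names below are placeholders, and your actual container names will differ:

```shell
# Illustration only: newer Docker Compose names containers with hyphens
# as the separator where older versions used underscores.
old="wappler_compose_appname_mariadb_1"   # old (underscore) style
new=$(printf '%s' "$old" | tr '_' '-')    # new (hyphen) style
echo "$new"
```

Any manually specified names in your customisation would need to match whichever style your current Compose version actually generates.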

Just throwing it out there, @JonR, as I've been thinking about your above issue for the last hour or so…

Hi Jon,

Check whether an SSH key is available under the SSH Agent option in the Resource Manager. Even though the SSH Agent shows a green tick, the SSH key file may not have been correctly linked when opening the project.

To add the key, right-click on the SSH Agent -> Add New Key and select the existing SSH key to link it again.

I checked this out. While I do have a custom Docker target, Wappler itself handled the Docker install and container creation through the Resource Manager. I typically name them appname_type_target (appname_mariaDB_dev-1, as an example), so the container itself might currently end up named wappler-compose-appname_mariaDB_prod-1, for example.

The only docker target that is currently showing up in my list is an app with the naming convention wappler-compose-appname_mariaDB_prod-1.

My new Docker target that I mentioned in the other thread deployed my one project without issue (aside from that slow SSL issue), using the same naming convention; in this case it is named wappler-compose-appname_mariaDB_dev-1.

So the container names contain both hyphens and underscores, as you can see, and it's a mixed bag whether they work or not. That may not be the issue, then, as I would expect them to all work or all fail.

I appreciate the insight though.

I checked the SSH key and it is there. I also verified it matches what is on that server.

Regardless, I went ahead and created a new SSH key using Wappler to see if that might make a difference.

I created the key, added it to Wappler, added it to the server as an authorized key, removed all previous keys in Wappler, restarted Wappler, and restarted the SSH agent (via Wappler).
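For anyone checking outside Wappler, the manual equivalent of those steps looks roughly like this. The key path and user@host are placeholders, not Wappler's defaults; Wappler normally handles all of this itself:

```shell
# Sketch only: generate a fresh key and authorize it on the Docker target.
# The file path and user@host below are placeholders.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/wappler_key" -C "wappler-deploy" -q
# ssh-copy-id -i "$keydir/wappler_key.pub" user@docker-target   # run against the real server
ls -l "$keydir/wappler_key" "$keydir/wappler_key.pub"
```

Verifying that the public key in Wappler matches the entry in the server's authorized_keys is the quickest way to rule the key out, as was done here.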

As before, I can see all the services from the target, but unfortunately there is no change in the project settings; it still shows an empty list.

Well, now it looks like I am getting the issue on my local Docker setup as well, so it may not even be related to Docker targets or SSH keys at this point…

So, a discovery and partial fix for the invalid compose project errors. I discovered that Wappler was removing volumes from the docker-compose.yml file when removing a service. If I added them back manually, I was able to publish again. When I remove a service, it keeps only the volume entries without a random suffix after the volume name, so for example:

Say I have three projects with three volumes:

project1: mariadb-volume
project2: mariadb-volume_duS
project3: mariadb-volume_jkD

Well, I no longer need project3's database, so I right-click, then click Destroy and Remove. Now, under the "volumes:" section in that project's docker-compose.yml file, only the mariadb-volume entry is kept, thus rendering project2 invalid. I can add "mariadb-volume_duS: ~" back manually and it works again. It actually removes ANY volume with a _randomletters suffix at the end. So if I also had:

redis-volume
redis-volume_dyf
postgres-volume
postgres-volume_duD

in the end it leaves me with:

mariadb-volume
redis-volume
postgres-volume

rendering redis-volume_dyf and postgres-volume_duD invalid.
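To make the breakage concrete, here is a sketch of what the "volumes:" section might look like, using the example names above (the actual random suffixes will differ per project):

```yaml
# Before removing project3's service:
volumes:
  mariadb-volume: ~
  mariadb-volume_duS: ~   # project2 - should survive the removal
  redis-volume: ~
  redis-volume_dyf: ~     # should survive the removal

# After Wappler removes the service, only the unsuffixed entries remain:
# volumes:
#   mariadb-volume: ~
#   redis-volume: ~
#
# Manual fix: re-add the suffixed entries, e.g. "mariadb-volume_duS: ~"
```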

Well, after fixing the docker-compose file per my discovery, I see the missing items in the project settings after a Wappler restart, except for the Redis database, which is still missing. Still looking into it.

I actually did test hyphens vs. underscores and modified the volume name before creation; unfortunately it did not matter, and the volume was still deleted regardless.

I would be interested to know if other people are experiencing the same thing. Before trying, back up the project's docker-compose.yml file.

Wanted to bump this up, since it is now repeatable every time I remove a Docker service, as detailed in my last post. Right now I work around it by not deleting any service, or by backing up the volumes section inside all of the docker-compose files before deleting a service, so I can restore the missing volumes that were removed.
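The backup half of that workaround can be scripted. This is only a sketch; the demo below uses a temporary stand-in file rather than a real project, and in practice you would run the copy from the project root before touching the service:

```shell
# Sketch of the workaround: snapshot docker-compose.yml before removing a
# service, so stripped volume entries can be restored afterwards.
# Demo uses a temporary stand-in file instead of a real project.
workdir=$(mktemp -d)
printf 'volumes:\n  mariadb-volume: ~\n  mariadb-volume_duS: ~\n' > "$workdir/docker-compose.yml"
cp -p "$workdir/docker-compose.yml" "$workdir/docker-compose.yml.bak"
# ...remove the service in Wappler, then compare and restore missing lines:
diff "$workdir/docker-compose.yml.bak" "$workdir/docker-compose.yml" || echo "volumes changed - restore from .bak"
```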

It affects both local and cloud providers.