Service 'db' failed to build due to dashes in target name

Hi all,

I followed the steps for setting up and deploying Docker to Digital Ocean: https://docs.wappler.io/t/docker-part-4-deploy-in-seconds-to-the-cloud-with-docker-machines/14373

But when attempting to deploy to the remote live server (Digital Ocean), I get:
ERROR: Service ‘db’ failed to build : Build failed

In the Database Connection section, it says ‘connect ECONNREFUSED’.
[Screenshot of the error attached]

I’m not sure if it’s related to a database issue I experienced at the start of developing my app (Issues deleting placeholder Docker tables), as @Apple said that could cause problems when deploying to another target.

Any suggestions on what I should do to fix this issue?

This is a different issue from the last one; it might be a bug concerning Docker/Wappler.

Thanks for clarifying @Apple. I thought it may have been related. I might try deploying to AWS to see if I get the same issue.

I’m not sure if I want to continue development in case there is an issue with my project not deploying properly on the remote targets.

@Teodor - Does this look like a bug? Or is it a setup issue?

Well I don’t see any usage of a -t argument for Docker; maybe it is a version mixture somewhere.

What docker version do you run locally - and what docker version do you have on the remote server? Or how did you create the remote?

My local Docker version is 20.10.14, build a224086 (via Wappler System Check). And the remote version is 19.03.12, build 48a66213fe (via Digital Ocean console).

I created the remote target by following the instructions in https://docs.wappler.io/t/docker-part-4-deploy-in-seconds-to-the-cloud-with-docker-machines/14373.

I tried creating a new Digital Ocean remote target and am still getting the same DB build error.

I actually managed to deploy to AWS, and the DB was built without error this time. However, none of the tables I previously created in the development target are showing in the Database Manager Tables folder, so I’m not sure if this is normal or if I need to ‘seed’ this remote target after deploying.
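
For context, ‘seeding’ in Knex (the tool behind the migration files shown later in this thread) means a seed file that inserts rows into tables that already exist; it doesn’t create the tables themselves. A minimal sketch, assuming the sample users table’s columns; the names and values are placeholders:

exports.seed = async function (knex) {
  // Clear the table, then insert a couple of placeholder users
  await knex('users').del();
  await knex('users').insert([
    { name: 'Alice', email: 'alice@example.com', password: 'changeme' },
    { name: 'Bob', email: 'bob@example.com', password: 'changeme' }
  ]);
};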

You have to right-click Changes while on the remote connection and apply the latest changes to import the schema from the development target.

Thanks for the suggestion @Sorry_Duh. I actually tried to Apply Latest Changes before and got the following error message:
[Screenshot of the error message attached]

Looks like the tables weren’t all made with Wappler, or some of your changes were corrupted/deleted, etc. You could also apply the changes one by one, but that may not work if the changes aren’t there from the start, e.g. you have no change for creating the users table.

You may need to back up the dev database and then transfer the schema using that backup, unless @George can recommend a better option for this.

I won’t go into too much detail as this will send this topic off track.

I have only used Wappler tools for developing my app. Unfortunately, I spent over a month in development before testing out the remote deployment today (naively expecting it to ‘just work’), so I’d be more than disappointed if I have to start from scratch again.

@George - I just deployed a test project to a remote Digital Ocean target (same Docker version number 19.03.12) and I didn’t get a DB error this time, but the remote tables are empty like the AWS deploy.

Your database and its data are not transferred on deploy. On deploy you only upload your files.
To send the data to the remote database you need to change the target to remote and apply the database changes through the Database Manager.
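
In other words, the same migration files get applied to whichever connection the active target points at. Conceptually it looks like a Knex setup with one connection per target; this is only an illustration, not Wappler’s actual generated config, and the client and credentials are placeholders:

// Illustration only: Wappler manages its own connection settings per target.
module.exports = {
  development: {
    client: 'pg', // placeholder; could be mysql2/mariadb depending on the project’s database choice
    connection: { host: 'localhost', user: 'app', password: 'secret', database: 'app_db' },
    migrations: { directory: './.wappler/migrations/db' }
  },
  remote: {
    client: 'pg',
    connection: { host: 'your-droplet-ip', user: 'app', password: 'secret', database: 'app_db' },
    migrations: { directory: './.wappler/migrations/db' }
  }
};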

Ok, now you’re hitting an issue similar to the last one 🙂

Your migration files are missing a migration to actually create the table, so you can’t “alter” a table that doesn’t exist in the first place.
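
For illustration, this is the kind of alter-only migration that breaks on a fresh database where no earlier migration created the users table (the avatar_url column is just a hypothetical example):

exports.up = function (knex) {
  // Assumes 'users' already exists; on a database where it was only ever
  // created by the sample data (not by a migration), this fails.
  return knex.schema.alterTable('users', function (table) {
    table.string('avatar_url');
  });
};

exports.down = function (knex) {
  return knex.schema.alterTable('users', function (table) {
    table.dropColumn('avatar_url');
  });
};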

This error would also happen on your development environment if you stop all services (erase local db) and start again.

You have to open one of the migration files and manually add an instruction to create the table (if it’s missing)

Thanks, I’ll keep this in mind moving forward.

I just remembered that when I first started the project, I loaded the sample Docker data and am still using the users table now (which is why the users table creation step isn’t in the change history). In the target’s Project Settings, I then toggled ‘No Sample Data’ on so a service restart doesn’t add all those tables back in again.

So I thought it could be an issue with not populating the sample data in my remote deploys, so I set the switch to ‘Add sample data’ for the new targets, but this didn’t work either.

I tried deploying to both AWS and DO, toggling the ‘No Sample Data’ switch on and off, but I’m still getting the same error.

Can you please point me to some information on how to do this?

Or would it be better for future deployments to recreate the database and tables again?

I currently only have 4 tables with a total of 26 fields. So if creating a new ‘clean’ database makes future deployments more streamlined, I’d be happy to do that.

You seem to be hitting a Wappler/Docker bug again

You may be right about the initial table data. This is the migration file you’re going to need:

exports.up = function(knex) {
  // Create the users table with the columns provided by the sample data
  return knex.schema
    .createTable('users', function (table) {
      table.increments('id');
      table.string('name');
      table.string('email');
      table.string('password');
    });
};

exports.down = function(knex) {
  // Roll back by dropping the users table
  return knex.schema
    .dropTable('users');
};

So, you actually have to create a new migration file manually. I usually use Visual Studio Code, open the project folder and then go to:

.wappler/migrations/db

And then create a new file there, following a specific naming convention. The name of the migration file starts with a creation-date timestamp, and you should pick a date earlier than all the others (as you want this migration to run before all the other migrations).

Let’s say, you could use the name 20200101000000_create_users_table.js
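
If some target might already have the table (for example from the sample data), a slightly more defensive version of the same migration could check for it first; a sketch, assuming the same columns as above:

exports.up = async function (knex) {
  // Only create the table if it doesn't exist yet (e.g. it was already created by sample data)
  const exists = await knex.schema.hasTable('users');
  if (!exists) {
    await knex.schema.createTable('users', function (table) {
      table.increments('id');
      table.string('name');
      table.string('email');
      table.string('password');
    });
  }
};

exports.down = async function (knex) {
  await knex.schema.dropTableIfExists('users');
};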

Thank you very much @Apple for all those detailed instructions!

I actually managed to successfully deploy to the remote Digital Ocean target, but got a database migration error message saying the password field was a duplicate. So it seems the manual migration file I created was slightly mismatched with the fields in the original Docker sample data.

So I updated the manually created migration file to include all the fields from the original users table as well as the other placeholder tables (cars, images, countries), based on the details I noticed in the database change history files. I created new Docker machines to deploy these changes to, but now I’m getting the same ‘no such image’ error message (see screenshot below) each time I try to deploy to these remote targets (both DO and AWS), so I’m not even getting to the DB creation and migration step at all. I reverted the migration file to the one that originally worked, but I’m still getting this ‘no such image’ error.

I’m not sure if there is some limit on the number of machines I can create and delete in a day, but I’ve been creating and deleting Docker machines each time a change doesn’t work properly. I don’t know if all the new Docker machines I’m creating are causing this error.

This is now in the hands of @George

P.S.: Happy to help!

Could you not use dashes in the target name? Or other weird characters.

Thanks George. That did the trick!

@Apple - I also managed to migrate the databases successfully without using the manual migration file. In the target’s settings, I toggled ‘No Sample Data’ to be off in order to replicate how I started working on the development target that had sample data.

Thank you all for your time, energy and patience helping me troubleshoot this!

Good to hear - I will see if we can clean those up automatically
