Is there a way to apply the db.json to a remote instance?

I ran into issues deploying to a remote instance and followed some advice on the forums to delete some migration files. Now I’m not sure that was a good idea.

Is there a way to move the db.json from local to remote instead?

No, you should never do that!

You should only ever delete change files that errored and were never applied.

Where does this advice come from?

@Apple mentioned it in a few posts.

How can I skip the migrations and just get both systems to start from the db state that exists in db.json?

You can delete changes (migrations) from local development that were never applied and are no longer needed.

So just make sure you have all changes applied in the development target. Then they can also be reapplied to the remote target on publish.

Changes = migration files, correct? And those only have one copy on the local machine? Changing targets does not load a separate copy of the files?

If so: while on the cloud target, I deleted some files that I thought were the unsuccessful changes, not realizing they were the same files as on the local instance. So now I have no way to apply all changes because something is missing.

I have a db that works in the local instance so I want to move that to remote and then have a fresh starting point for all future changes.

Yes, changes are migrations and there is a single set of them for all targets.

The idea is that you work fully on your local development target and apply all the changes.

Then when done, you just switch to the live target and apply the changes that weren’t applied there yet.

The changes were corrupt and I removed all of them, but I still have a schema in the local instance. How do I proceed with using this on remote instances?


Well, the schema file describes your local database structure, but without the changes it can’t be recreated on your remote server.

So I do hope you have git or some backup to get your change files back. Otherwise you can’t update the remote schema.

You’ll have to recreate the changes from scratch by looking at your existing schema file, unfortunately. Back up that file if you haven’t yet.

That’s unfortunate. Maybe a local git should automatically be used for those files so others do not come across bad recommendations in the forums or accidentally remove the files and be in the same position as I am. It really is a big issue in my eyes and makes me not want to use Wappler.

Maybe something should be built into Wappler that can perform an SQL dump in these situations, to use as the initial schema:

pg_dump --schema-only databasename

Well, the easier way to bring the remote schema up to date would be if you still had the changes (migration files); that is why you shouldn’t throw them away. I’m not sure if @Apple suggested that, but as explained above, the applied changes are really needed; otherwise you don’t know what has changed.

The difficult way now is to dump the local dev schema and reapply it manually on the remote database. Then reset the changes history and start tracking changes again.

We will see if we can generate new changes from an existing database, but that could also be error-prone.

I think you’re talking about me. I’m really sorry this happened to you

My specific advice to you was:

My advice to someone else that mentioned deleting migrations (that you’ve probably read) included:

In the case of destructive operations, I made sure to remind users to back up their data. The first piece of advice I gave you was a non-destructive operation, hence there was no warning.

The only time I missed giving a destructive-operation heads-up was in one of my replies to daves88, where I suggested deleting all migrations and re-creating them. I assumed it was clear this meant re-creating the schema from scratch, but perhaps I should be more explicit next time to prevent incidents like these.

On your own initiative, you attempted to find a workaround that involved deleting some migrations:

That particular procedure you described was never hinted at by me. I’m sorry I wasn’t clear enough about the severity of manually tampering with migrations/changes.

@george maybe you could write a script that introspects the schema and writes an initial migration file for existing databases.

That would help with these types of cases, but also for those who start building a project on top of an existing database.

I believe the latter has been requested in the forum already.

I know you are already aware of Rijk’s work with knex-schema-inspector. Its API already provides the functions needed to build such an initial file.
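For illustration, here is a rough sketch of how such an initial change file could be generated. The column descriptors below only mimic the shape of metadata an introspection tool like knex-schema-inspector returns; `buildInitialMigration` and its type mapping are made-up names for this sketch, not Wappler or library APIs.

```javascript
// Sketch: turn introspected column metadata into a knex "createTable" stub.
// buildInitialMigration and the type map are hypothetical, for illustration only.
function buildInitialMigration(table, columns) {
  const lines = columns.map((col) => {
    // Map a few common SQL types to knex schema-builder calls; default to text.
    const method =
      { integer: 'integer', 'character varying': 'string', boolean: 'boolean' }[
        col.data_type
      ] || 'text';
    return col.is_primary_key
      ? `    table.increments('${col.name}');`
      : `    table.${method}('${col.name}')${col.is_nullable ? '' : '.notNullable()'};`;
  });
  return [
    `exports.up = (knex) =>`,
    `  knex.schema.createTable('${table}', (table) => {`,
    ...lines,
    `  });`,
    ``,
    `exports.down = (knex) => knex.schema.dropTable('${table}');`,
  ].join('\n');
}

// Example: two columns as if introspected from an existing "users" table.
const migration = buildInitialMigration('users', [
  { name: 'id', data_type: 'integer', is_nullable: false, is_primary_key: true },
  { name: 'email', data_type: 'character varying', is_nullable: false, is_primary_key: false },
]);
console.log(migration);
```

A real implementation would of course pull the table and column lists from the live connection (e.g. via the inspector’s table/column functions) instead of hard-coding them.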


No problem, @Apple. It’s a mistake I made, but I think Wappler could improve this part of the tool so similar “catastrophic” issues don’t happen to others.

Users are going to do stupid things with powerful tools and I should have been using version control. That said, I think this is such a bad error that Wappler could put some safeguards in place to ensure dumb user mistakes won’t require a total manual rebuild of the tables in order for this part of Wappler to function again. Looking through the forum I’m not the only user who’s had to rebuild their table from scratch.

For instance, when I delete a file on Windows it’s sent to the Recycle Bin, so if a mistake is made I can revert it. When I deleted the migration files through Wappler they were permanently deleted, with no way to restore them.

Could Wappler send files deleted through its UI to the Windows Recycle Bin? If not, how about implementing a local git just for Wappler and saving all changes automatically?

Heck, Wappler could create a rudimentary soft-delete that moves the files to another folder for safe restore.

Yes, good idea; we will see what we can do. This can be done only if you don’t have any migration files yet.

And indeed we should add many more checks, and even prohibit deleting already-applied changes.


Another check would be for this case: say you have a change applied, e.g. you created a table. If you then back up the database, the table will be stored in the backup as expected. If you now roll back the change for whatever reason and delete it (as it is now inactive), then when the database is stopped and started and the backup is restored, it will cause “migration corrupted” errors. I’m not sure how this one would be handled, but maybe you have some ideas for it.

This one can be prevented by backing up again after the rollback, but users might not do this and will face issues.

Yes, backups and migration changes can conflict with each other; I’m not sure of the best way of handling that yet. It is not really handy to do a database backup after each change, as it takes some time to complete.