Graceful Deploys

Just curious here, what are some of the methodologies everyone’s used to deploy as gracefully as possible to high-traffic sites? I’m running Node/Docker on a 16GB server and deploys are quite fast (an outage of only about 5 seconds). But still, on a high-traffic site, displaying that ugly “404 not found” page Traefik throws up, even for 5 seconds, can be a bit unsettling to users in the middle of doing all sorts of things.
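One thing that can shrink that 404 window is making sure Traefik only routes to a backend it believes is healthy. A sketch of the idea, assuming a docker-compose setup, a service named `app` listening on port 3000 with a `/health` endpoint, and `curl` available in the image (all of these names are assumptions, not your actual config):

```yaml
# docker-compose.yml (sketch only)
services:
  app:
    image: myapp:latest
    healthcheck:                # Docker-level check: container is "healthy"
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 5s
      timeout: 3s
      retries: 3
    labels:
      # Traefik-level check: an unhealthy backend is removed from rotation
      # instead of being handed requests that end in the error page.
      - "traefik.http.services.app.loadbalancer.healthcheck.path=/health"
      - "traefik.http.services.app.loadbalancer.healthcheck.interval=5s"
```

On its own this doesn’t give zero downtime with a single container, but combined with running the old and new container side by side it lets Traefik switch over only once the new one is actually up.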

I was thinking about a high-availability failover pair to minimize this impact to the userbase (and staying up until 4AM GMT to do deploys ha), but was wondering if anyone had any tricks up their sleeve to share?

I use AWS Elastic Beanstalk… as my user base grows I’m only doing deploys after 11pm, and they have shown me a trick for scheduling them at 4am, which I’ll put in place some time soon, although I like to do the deploy and see with my own eyes that everything still works…

Having a regular late-evening social life and returning sober to deploy is the way forward for us app entrepreneurs! :tada:


One option is to set up a load-balanced environment that uses connection draining to remove individual nodes before upgrading them. It will cost more to run, but it can help avoid disrupting the user experience.
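A rough sketch of that drain-then-deploy cycle for one node, assuming an AWS ALB target group. The target group ARN, instance ID, and deploy step are all hypothetical placeholders, and the script defaults to a dry run that only echoes the commands (set `RUN=1` to use the real `aws`/`ssh` binaries):

```shell
#!/bin/sh
# Sketch only: drain one node behind an AWS ALB, deploy to it, put it back.
set -eu
if [ "${RUN:-0}" = "1" ]; then AWS=aws; SSH=ssh; else AWS="echo aws"; SSH="echo ssh"; fi

TG_ARN="${TG_ARN:-arn:aws:elasticloadbalancing:region:acct:targetgroup/app/abc}"
INSTANCE_ID="${INSTANCE_ID:-i-0123456789abcdef0}"

# 1. Take the node out of rotation. The ALB stops sending it new requests
#    but lets in-flight ones finish (the deregistration delay, i.e. draining).
$AWS elbv2 deregister-targets --target-group-arn "$TG_ARN" --targets "Id=$INSTANCE_ID"
$AWS elbv2 wait target-deregistered --target-group-arn "$TG_ARN" --targets "Id=$INSTANCE_ID"

# 2. Deploy to the drained node (placeholder for your actual deploy step).
$SSH deploy@node1 'cd /app && docker compose up -d --build'

# 3. Re-register the node and wait until the ALB health checks pass
#    before repeating the cycle on the next node.
$AWS elbv2 register-targets --target-group-arn "$TG_ARN" --targets "Id=$INSTANCE_ID"
$AWS elbv2 wait target-in-service --target-group-arn "$TG_ARN" --targets "Id=$INSTANCE_ID"
```

Repeating this node by node is what keeps the site up throughout the deploy: at least one drained-and-healthy node is always serving traffic.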


Ha @Antony, only one teensy-weensy prob with the 4AM idea continuously…

- 2AM: social life
- 4AM: deployment
- 6AM: kids

:skull_and_crossbones:


That’s what I’m thinking as well @kfawcett. An HA pair setup on a load balancer. Only double the cost haha.


There are some new technologies, combined with Docker, that allow you to achieve zero-downtime deploys.

We are considering implementing those; you might want to vote:


I just implemented zero-downtime deployments for my own project and for someone from the community. It runs via CI/CD: when code gets pushed, it spins up one new container for each existing container in every region, deploys the new image to them, runs health checks, then starts routing traffic to the new containers and destroys the old ones.
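A single-host sketch of those steps, assuming Traefik’s Docker provider is doing the routing. The container names (`app_old`/`app_new`), image tag, router rule, and health URL are all assumptions, and the script defaults to a dry run that only echoes the commands (set `RUN=1` to use the real `docker`/`curl` binaries):

```shell
#!/bin/sh
# Sketch only: rotate in a new container, health-check it, retire the old one.
set -eu
if [ "${RUN:-0}" = "1" ]; then DOCKER=docker; CURL=curl; else DOCKER="echo docker"; CURL="echo curl"; fi

# 1. Start the new container with the same Traefik router labels as the
#    old one, so Traefik load-balances across both containers.
$DOCKER run -d --name app_new \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.app.rule=Host(`example.com`)' \
  myapp:new

# 2. Health-check the new container before letting it take over
#    (with RUN=1 you would poll this in a loop until it succeeds).
$CURL -fsS http://localhost:3001/health

# 3. Remove the old container; Traefik drops it from the pool and all
#    traffic flows to app_new. Use a stop timeout first if you want
#    in-flight requests to finish gracefully.
$DOCKER stop -t 30 app_old
$DOCKER rm app_old
```

The multi-region CI/CD version is the same idea repeated per region per container, with the health-check gate deciding whether the rollout proceeds or aborts.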


fook me… some really clever people doing some crazy stuff… now I feel like a bottom feeder again… hahahaha… well done… inspirational… When I see what some guys do with Wappler on this community I’m dumbstruck (literally). Kudos to all of you for doing this, and well done to the Wappler team for giving you the tools to do so…

It would be great if you could write a tutorial on how exactly this is done, including how to handle database migrations. It would be very useful for the community, and we will also see if we can automate some steps in Wappler.

Thanks much @George and @tbvgl. It seems we have two schools of thought so far, at a very high, layman’s level: a load balancer across two separate VPSes/droplets, or spinning up duplicate Docker containers on the same box (I’d assume generous memory headroom is a must for this latter route). At first glance, the former seems less demanding on configuration, but at a monetary cost.

Hey @George, @xsfizzix,

Building on what I mentioned earlier about zero downtime deployments, the challenges, as many of you might have deduced, aren’t just in the initial setup. It’s the customization, the tweaks, and adjustments that vary from one project to another.

So, about crafting a tutorial: while the idea is solid, CI/CD is more art than science in some respects. What works for one may not work for all. Given the range of project quirks and specifics, maintaining a tutorial could end up more like chasing a moving target. That’s why I’m leaning towards offering a bespoke service: dive deep into individual needs, tailor the CI/CD, ensure auto-scaling is smooth, and get failovers right. And just a heads up: with the correct setup, we can keep those server costs in check by auto-scaling down to zero servers when there is no traffic.

If this strikes a chord or you’re just curious about how it might fit your project, don’t hesitate. Ping me here or drop an email to tobi@vgl.marketing.

@George, given our shared enthusiasm, maybe there’s an avenue for us to collaborate on this? Always up for a chat.

Cheers