I have been trying to deploy replicas on a project, and noticed that some of the containers would continually restart.
They were generating this error:
[nodemon] Internal watch failed: ENOSPC: System limit for number of file watchers reached
I think I’ve fixed the problem by increasing the file watchers:
sudo sysctl -w fs.inotify.max_user_watches=16384
It was originally set to 8192 I believe.
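To verify the current limit before (or after) changing it — assuming a Linux host — you can read it straight from procfs:

```shell
# Print the kernel's current per-user limit on inotify watches
cat /proc/sys/fs/inotify/max_user_watches
```

Note that `sysctl -w` only changes the running kernel; the value resets on reboot.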
@patrick in case there is something better to do here.
George
June 13, 2023, 7:17pm
What is the config of your remote server target?
Normally it shouldn't reach the limit so easily, but maybe you are short on disk space as well.
So check that, and maybe clean up the old Docker images.
This might also be helpful:
(Linked GitHub issue: nodemon "Internal watch failed" after restarting with the same file — CentOS 7, nodemon 1.10.2; opened 14 Sep 2016, closed 18 Oct 2016.)
Plenty of disk space (50+ GB free), and docker system prune has been run as well.
I could go up to 3 replicas, but 4 tipped the scale.
The machine is 4 CPUs, 8 GB RAM.
This project has versioned APIs, so between the api and modules folders there are 2,000+ files. Considering it fails at 4 replicas, with a limit of 8192 watchers, this all roughly makes sense to me.
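As a rough sanity check (using the 2,000-file count above, and assuming each nodemon instance registers one watch per file):

```shell
# 4 replicas, each running nodemon over ~2,000 watched files
echo $((4 * 2000))   # → 8000, right at the default limit of 8192
```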
Here is the docker compose:
version: '3'
services:
  caddy:
    build:
      context: ./caddy-build
      dockerfile: Dockerfile
    ports:
      - 80:80
      - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
    networks:
      - caddy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data
    restart: 'unless-stopped'
  web:
    ports:
      - '3000'
    restart: 'always'
    stdin_open: true
    tty: true
    logging:
      options:
        max-file: '20'
        max-size: '100m'
    build:
      context: '../../../'
      dockerfile: '.wappler/targets/Staging/web/Dockerfile'
    networks:
      - caddy
      - internal
    dns:
      - 8.8.8.8
      - 8.8.4.4
    labels:
      caddy: caddytest.mealproapp.io
      caddy.reverse_proxy: "{{upstreams 3000}}"
    deploy:
      replicas: 4
    volumes:
      - 'tmp:/opt/node_app/tmp:rw'
      - 'tmp_logos_icons:/opt/node_app/tmp_logos_icons:rw'
      - 'tmp_post_media:/opt/node_app/tmp_post_media:rw'
      - 'tmp_recipe_media:/opt/node_app/tmp_recipe_media:rw'
  redis:
    image: 'redis:alpine'
    restart: 'always'
    hostname: 'redis'
    volumes:
      - 'redis-volume:/data'
    networks:
      - caddy
      - internal
volumes:
  redis-volume: ~
  tmp: ~
  tmp_logos_icons: ~
  tmp_post_media: ~
  tmp_recipe_media: ~
  caddy_data: {}
networks:
  caddy:
    external: true
  internal: ~
And this is the permanent fix:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
524288 is the max I believe
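For scale: each inotify watch is commonly cited as costing up to ~1 kB of kernel memory (about 1080 bytes on 64-bit Linux; the exact figure is an assumption here), so 524288 watches bound the worst case at roughly:

```shell
# Worst-case kernel memory if every watch slot were used, in MB
echo $(( 524288 * 1080 / 1024 / 1024 ))   # → 540
```

After appending the line to /etc/sysctl.conf, `sudo sysctl -p` applies it without a reboot.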
George
June 13, 2023, 7:32pm
Actually you don't really need nodemon on production servers with Docker, as Docker is self-contained and also auto-restarts on crash. nodemon is really only handy for local development.
So we are thinking of removing it for live environments.
You might try changing your Dockerfile for the web container to go straight to index.js, i.e. replace nodemon with just node. I think we already do that for some hosts like Heroku.
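The suggested change amounts to a one-line swap in the web target's Dockerfile; a sketch (the entry-point file name and nodemon invocation are illustrative, not the generated Wappler Dockerfile):

```dockerfile
# Development: nodemon watches the source tree and restarts on change,
# consuming inotify watches in every replica.
# CMD ["npx", "nodemon", "index.js"]

# Production: run node directly; it registers no file watchers,
# and Docker's restart policy handles crashes instead.
CMD ["node", "index.js"]
```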
Let me know what works best.
That’s what I thought…changing to node works fine, without changing the limit.
George
June 13, 2023, 7:47pm
Will remove nodemon for live targets in the next update
Teodor
Closed
June 16, 2023, 3:00pm
This topic was automatically closed after 25 hours. New replies are no longer allowed.