I'm having the dickens of a time trying to piece together how all of the file upload logic works across App Connect, Apache2, and Node.js. I'd like to better understand where the different configurations influence whether a file can be uploaded.
To start out, in Wappler, I can set a client-side validation rule for max file size or total max file size in bytes. But what if I set it to something very large like 2 Petabytes? If this is beyond the allowable max file size set in Node or Apache configurations, how would I know? Wappler doesn't give me an error message. Where do I go after Wappler in my environment to adjust these settings?
Where next in the chain of Wappler, Node, or Apache does the allowable max file upload size get checked? Many posts in the community only talk about php.ini settings, which don't help here.
The site I'm building must allow HD and 4K video files, usually 3 minutes or longer, to be uploaded. I have no client-side validation restricting upload size, yet uploads of these large video files are failing on my DO droplet server.
I apologize if any frustration is carrying over in my tone here. Part of the challenge is that the installation paths when using Wappler/Docker are different from when I was a sysadmin decades ago, and the system environment has only a limited set of commands installed to help me. All help is appreciated.
That is a file size of 2,000,000 gigabytes! Even at a full gigabit per second that would take roughly six months to upload, never mind on any network outside of full-scale enterprise!
You will really be constrained by memory and execution time. Apache itself has no maximum upload size; only if it is running PHP does that come into the equation. Likewise with Node (I'm pretty sure; @JonL may be able to answer that?) there is no maximum size. Within Wappler itself you can specify the maximum file size using client- and server-side validation. It is memory usage and execution time that will murder you.
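On the execution-time side in Node, here is a minimal sketch, assuming a plain Express app (the port is a placeholder), of relaxing the built-in request timeout that would otherwise cut a multi-minute upload short on newer Node versions:

```js
// Sketch: relax Node's HTTP timeouts so a long upload isn't aborted mid-stream.
// Assumes Express; port 3000 is a placeholder.
const express = require('express');
const app = express();
const server = app.listen(3000);

// Node 18+ aborts any request after 300 seconds by default; 0 removes the cap.
server.requestTimeout = 0;
// Still keep a sensible limit (in ms) on how long the request headers may take.
server.headersTimeout = 60 * 1000;
```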
You should investigate S3 storage and upload there, relieving any strain on your production server. Things could come crashing down if several people are uploading very large files at once! Also, Digital Ocean may well have restrictions in place on execution times and memory allocation, but you'll have to check with their support about that. A basic entry-point droplet will not suffice for this type of activity; it is way beyond the scope of its configuration.
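If you do go the S3 route (DO Spaces speaks the same API), the usual pattern is to have Node hand the browser a presigned URL so the large file never passes through your droplet at all. A sketch using the AWS SDK v3; the bucket name and region are placeholders for your own values:

```js
// Sketch: presign a PUT URL so the browser uploads straight to S3-compatible
// storage instead of streaming the file through the Node server.
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

async function presignUpload(key, contentType) {
  const command = new PutObjectCommand({
    Bucket: 'my-video-bucket', // placeholder bucket
    Key: key,
    ContentType: contentType,
  });
  // The URL stays valid for one hour; the client PUTs the file to it directly.
  return getSignedUrl(s3, command, { expiresIn: 3600 });
}
```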
Consider yourself at home!
Not often you get to throw a Dickens reference in!
Wappler doesn’t set your server upload limits. The validation you’re referring to simply checks the file sizes, so you can apply it when you want to validate what users upload.
The upload size limits are set on your server, so you need to check your server’s documentation to see how to adjust them, depending on the server type you’re using.
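For a Node/Express server specifically, the upload middleware usually carries its own cap as well. A sketch assuming multer, which is a common choice (Wappler's own server actions may handle uploads differently, so treat this as illustrative only):

```js
// Sketch: a per-file size cap in an Express route using multer.
const express = require('express');
const multer = require('multer');

const upload = multer({
  dest: '/tmp/uploads',                         // files are streamed to disk, not RAM
  limits: { fileSize: 8 * 1024 * 1024 * 1024 }, // 8 GB per file (example value)
});

const app = express();
app.post('/upload', upload.single('video'), (req, res) => {
  res.json({ stored: req.file.path, bytes: req.file.size });
});
app.listen(3000); // placeholder port
```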
Have I told you today that I love Cheese? I have no idea where the Dickens came from, it was just the mood I was in this morn. Thank you for putting a smile on my face and for your insight and understanding about today's solutions. "When I was your age", VMware ESX was the shiny new thing. And I was just kidding about 2 Petabytes. That was an attention grabber.
I am considering an S3 solution if this idea gets traction, but I'm more of a GCP fan than AWS. I haven't really dug deep into DO's block storage or storage architecture. It was because of Wappler's integration that I chose to put a staging server in DO's cloud; Wappler made it very simple. I did choose an instance with an NVMe disk, but I probably have some noisy neighbors to contend with too. Or maybe my file uploads make me the noisy neighbor?
Since both you and Teodor feel I should be focused on execution times and memory at this point, I'll upgrade the instance and retest.
Thanks Apple. With my environment (Apache & Node inside Docker), where can I find these configurations? I did not find Apache's LimitRequestBody inside the httpd.conf file, and Node and Docker are very new to me, so I have no clue where to look.
Consider the possibility that it might be missing from the config file, in which case the default value comes from Apache's compiled-in binary. You can add the missing directive to the config file yourself.
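For reference, the directive takes a byte count and can live in httpd.conf or inside a <VirtualHost> or <Directory> block; the value here is an example, not a recommendation:

```apache
# Values are bytes; 0 removes the request-body cap entirely.
# If the directive is absent, the compiled-in default applies
# (unlimited before Apache 2.4.53, 1 GB from 2.4.53 onwards).
LimitRequestBody 0
```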
The memory limit can be changed through a command-line argument when starting the Node.js binary; the flag you are after is V8's --max-old-space-size.
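Something like this, where index.js stands in for whatever script your container actually starts:

```bash
# Raise the V8 heap ceiling to roughly 4 GB (the value is in megabytes).
node --max-old-space-size=4096 index.js
```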
Never knew it changed. I haven't used Apache for a good few years now. We used to upload some quite hefty loads back then, but that said, it was a heavily customised instance.
Cheers @Apple, always useful to know these things. Although I can't see us going back to Apache, one never knows what the future may hold.