How would I download an image for use in a SC from an image URL?

Hey all,

Let’s say I have a number of images, e.g. www.example.com/avatarimage_1.jpg. I want to save these images so that I can store them in my own S3 bucket, and then save my own path in the DB.

Is there anything native to Wappler that would allow me to download the image from the above url path so that I can then upload to my S3 storage?

Matt

If you have access to the server you can use the AWS CLI to grab the files and send them directly to your Buckets on S3…

You’ll have to install the CLI on the host server first, so you’ll need access via SSH.

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html

Then it’s as simple as:

aws s3 cp /path/to/images/ s3://BucketName --recursive

That can drop all the images into the Bucket in a few seconds. We did 15GB this way in around a minute a few days ago. No drops, very fast.

Otherwise you could create a Server Action that copies all the images on the host to a directory, then add an S3 upload inside a repeat to push them to your Buckets… It shouldn’t be too difficult to do as long as everything is on the same server and within the Project’s directory. You can’t traverse outside the Project’s parent directory using the SC file actions, so that’s the only thing that might rule this approach out. If you’re using PHP you may also have to raise your timeout and memory allocation by adjusting your PHP variables…
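For reference, the PHP variables mentioned above are usually raised in php.ini (the values below are purely illustrative — tune them to your workload):

```ini
; Illustrative values only
max_execution_time = 300   ; seconds before a script is killed
memory_limit = 512M        ; per-script memory ceiling
upload_max_filesize = 64M  ; if files pass through PHP uploads
post_max_size = 64M        ; must be >= upload_max_filesize
```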

Hey Dave,

Unfortunately I don’t have access to the server where these images are located - all I can get access to at scale is the URL path of each image.

E.g. with UIDs and HTTP call I can create these: https://api.multiavatar.com/e673744790fc927035.png

What I’m trying to do is somehow use the URL path in a SC to download the image, so that I can then send it into my S3 bucket.

Any thoughts/suggestions around this?

A quick search turns up a tool called ParseHub which would allow you to scrape those images… Not sure if it’s ideal, maybe there is a SC way of doing it. Worth asking @George

Here is ParseHub though:

https://www.parsehub.com/quickstart

And a tutorial:

I’ve not used it as we wrote our own scraper for similar purposes for property images (a long-gone Project). We pretty much scraped all of Booking.com Spain and Portugal a few years back. 🙂


I think this would currently be a job for a custom function. All file paths in SC are system ones (as far as I know), so they can’t handle URLs.

I don’t think it would be too complex a function; there are some ideas here: https://stackoverflow.com/questions/11944932/how-to-download-a-file-with-node-js-without-using-third-party-libraries
