About this project
I do a lot of photography in my spare time, but most of the pictures I take end up just sitting on my hard drive. Since I'd had some experience throwing a few sites together with React and Node, I decided I could build a proper home for my photography. The site contains 'collections', which are simply mirrors of folders in an Amazon Web Services (AWS) S3 storage bucket.
Inside a collection, the backend fetches the object keys and generates public URLs for all the images (both the full-resolution and compressed versions of each) and passes them off to the frontend. Each compressed URL is then mapped into an image component, and these components make up larger responsive grids of images that are Instagram-esque in their look and feel.
- the backend route for a collection's image URLs
You can view an example response here
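The snippet itself isn't reproduced on this page, but the key-to-URL step it performs is simple enough to sketch. The backend is Node, though the logic is the same in any language; below is a minimal Python sketch, where the bucket name, region, and 'full'/'compressed' folder layout are all assumptions rather than the site's actual values:

```python
def public_url(bucket, region, key):
    """Build the public virtual-hosted-style URL for an S3 object."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def collection_urls(keys, bucket="example-photo-bucket", region="eu-west-2"):
    """Pair each full-resolution key with its compressed counterpart,
    assuming a layout like 'collection/full/x.jpg' and
    'collection/compressed/x.jpg' (hypothetical naming)."""
    return [
        {
            "full": public_url(bucket, region, key),
            "compressed": public_url(bucket, region, key.replace("/full/", "/compressed/")),
        }
        for key in keys
        if "/full/" in key
    ]
```

The frontend then only ever needs the list of these URL pairs, never any AWS credentials, since the objects are publicly readable.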
- the React component for a single image
- the React component for a collection of images - mapping compressed public URLs into a grid
And the result: you can click on an individual image to get a larger view, or follow another link through to get the full-resolution image.
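Independent of React, the shape of what each grid cell renders can be sketched as a plain HTML string: the compressed file is what appears in the grid, and the link points at the full-resolution file. The attributes below are illustrative choices, not the component's actual markup:

```python
def image_cell(compressed_url, full_url):
    """One grid cell: show the lightweight compressed image,
    link through to the full-resolution original."""
    return (
        f'<a href="{full_url}">'
        f'<img class="grid-img" src="{compressed_url}" loading="lazy">'
        f'</a>'
    )
```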
As DSLR images are very large (sometimes around 6 MB each), I had to compress them to make them suitable for web use. I originally considered keeping this entirely in the cloud, using AWS Lambda or another serverless compute service to compress images on the fly as needed. However, it wasn't worth the hassle when I could throw together a perfectly functional Python script that does the same thing in half the time, and just run it over a set of images before uploading them.
I wrote a small wrapper around the ImageMagick libraries and added things like file renaming with UUID4 (universally unique identifier) to avoid filename clashes on the server, despite the camera's repetitive file naming scheme. This makes for quite a streamlined system: I take pictures, compress and rename them (all done with one command thanks to my script), and then just upload them to my S3 bucket. And bang, they're instantly live on the site. Visitors load the lightweight compressed files on the site, then follow the public S3 link externally if they want the full-resolution file. I wrote a blog post explaining more about why I chose S3, along with some other details about the backend that I left out here. If you're interested, you can read the post here.
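As a rough sketch of what such a wrapper looks like (the function names, quality setting, and resize width below are my own illustrative choices, not the script's actual settings, and ImageMagick's `convert` must be on the PATH):

```python
import subprocess
import uuid
from pathlib import Path

def unique_name(path):
    """Replace the camera's repetitive filename (e.g. IMG_1234.JPG)
    with a UUID4-based one, keeping the extension."""
    return f"{uuid.uuid4().hex}{Path(path).suffix.lower()}"

def compress(src, dest_dir, quality=80, max_width=1600):
    """Write a renamed, web-sized copy of `src` via the ImageMagick CLI.
    The '1600>' geometry only shrinks images wider than 1600px."""
    dest = Path(dest_dir) / unique_name(src)
    subprocess.run(
        ["convert", str(src), "-resize", f"{max_width}>",
         "-quality", str(quality), str(dest)],
        check=True,
    )
    return dest
```

Run over a shoot before upload, something like `for p in Path("shoot").glob("*.jpg"): compress(p, "web/")`, and the output folder is ready to sync to the bucket.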