caspertikka/s3-resizer

AWS Lambda for resizing images in S3 on the fly.

What is it?

It's an AWS Lambda function. AWS Lambda is a compute service that lets you run code without provisioning or managing servers.

Read more about AWS Lambda.

Demo

https://sagidm.github.io/smartuploader/examples/4.s3-resizer.html

What this lambda provides

Let's say we have some shared image in S3, for example:
https://example.com/images/pretty_photo.jpg

To resize this image on the fly to, say, 150x150, we can make a request like this:
https://example.com/images/150x150/pretty_photo.jpg

If there is no image at that path, the request is redirected to the lambda; after a moment, the lambda creates the resized image and redirects back to it. The next time the path is requested, the image already exists and is served directly.
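
To make the flow concrete, here is a minimal sketch of such a handler in Node.js. It is illustrative only, not the exact code in this repository: bucket access, content types, and error handling are simplified, it assumes the size prefix is the first path segment, and it uses the BUCKET and URL environment variables described in the setup section below.

// Simplified sketch of the resize-and-redirect flow (not the repository's exact handler).
const AWS = require('aws-sdk');
const Sharp = require('sharp');

const S3 = new AWS.S3();
const BUCKET = process.env.BUCKET; // bucket name
const URL = process.env.URL;       // static website hosting endpoint

exports.handler = async (event) => {
  // e.g. path = "150x150/pretty_photo.jpg" (size prefix assumed to be the first segment)
  const path = event.queryStringParameters.path;
  const [sizePrefix, ...rest] = path.split('/');
  const originalKey = rest.join('/');
  const [width, height] = sizePrefix.split('x').map(Number);

  // Fetch the original, resize it, and store the result under the prefixed key.
  const original = await S3.getObject({ Bucket: BUCKET, Key: originalKey }).promise();
  const resized = await Sharp(original.Body)
    .resize(width, height, { withoutEnlargement: true })
    .toBuffer();
  await S3.putObject({
    Bucket: BUCKET,
    Key: path,
    Body: resized,
    ContentType: original.ContentType,
  }).promise();

  // Send the browser back to the resized image, which now exists.
  return { statusCode: 301, headers: { Location: `${URL}/${path}` }, body: '' };
};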


Instead of WxH, there are a few extra magic paths available:
.../AUTOx150/...
.../150xAUTO/...
or
.../150x150_max/...
.../150x150_min/...

Note that s3-resizer does not enlarge an image if the original width or height is less than the requested dimensions. You can read about the withoutEnlargement method in sharp's documentation.
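
The suffixes correspond to different resize "fits". Below is a rough sketch of how such prefixes could be translated into options for the sharp library; the mapping of _max and _min to sharp's "inside" and "outside" fits is an assumption for illustration, not a quote of the repository's code.

// Rough sketch: turning a size prefix such as "150x150_max" or "AUTOx150"
// into sharp resize options. Illustrative only.
const sharp = require('sharp');

function parsePrefix(prefix) {
  const match = prefix.match(/^(\d+|AUTO)x(\d+|AUTO)(?:_(max|min))?$/);
  if (!match) return null; // not a resize path
  const width = match[1] === 'AUTO' ? null : Number(match[1]);
  const height = match[2] === 'AUTO' ? null : Number(match[2]);
  // "_max": fit the whole image inside WxH; "_min": cover at least WxH.
  const fit = match[3] === 'max' ? 'inside'
            : match[3] === 'min' ? 'outside'
            : 'cover';
  return { width, height, fit };
}

// Example: resize a buffer according to "AUTOx150".
async function resizeTo(buffer, prefix) {
  const { width, height, fit } = parsePrefix(prefix);
  return sharp(buffer)
    .resize(width, height, { fit, withoutEnlargement: true })
    .toBuffer();
}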

Setting up

To resize images we need storage, which is S3 (optionally fronted by CloudFront), and a Lambda function. Then we set up the necessary permissions and redirection rules.

  • Create a Bucket

    • Click on the blue button Create bucket
    • Enter the name and click on Create
  • Create a Lambda

    • Create a function -> Author from scratch
    • Enter a name (e.g. s3-resizer)
    • Select the latest version of Node.js according to Releases (you can change it later)
    • You need a role with a policy that allows putting objects into your bucket. If you click on Create function, a default role will be created; you can edit it later, or you can create and set up a role right now. To do that,
      • click on Create role -> Lambda -> Next: Permissions -> Create policy, a new tab should open
      • on that tab, you can use the Visual Editor or add this JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::__BUCKET_NAME__/*"
    }
  ]
}

Pay attention to __BUCKET_NAME__

      • Name your policy, for example: "access_to_putObject_policy" and click on Create policy; you can close the tab
      • On the previous tab, update the policy list by clicking the reload button or refreshing the page.
      • Select your policy by clicking its checkbox
      • Click on Next: tags -> Next: Review, name your role, for example, "access_to_putObject_role"
      • Click on Create role; you can close the tab.
    • Now you are back on the lambda creation page.
    • Select Use an existing role and choose your role from the list (update the list if necessary).
    • After clicking on Create function, the lambda should be created.
  • Add a trigger, which will listen for HTTP requests

    • YOUR_LAMBDA -> Add trigger -> API Gateway
    • You can select the API that has the -API suffix, or Create a new API
    • In Security, select Open, then click Add
    • Now if you click on API Gateway, you should see the API endpoint, something like
      https://some-id.execute-api.eu-central-1.amazonaws.com/your-stage/your-lambdas-name
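
Once the lambda code is uploaded (a later step), requesting this endpoint with a ?path= query parameter is what triggers a resize; the S3 redirection rule configured below does exactly that on a 404. A hypothetical request from Node.js, with the hostname, stage, and key as placeholders:

// Hypothetical request to the API endpoint (all names below are placeholders).
const https = require('https');

const endpoint = 'https://some-id.execute-api.eu-central-1.amazonaws.com'
               + '/your-stage/your-lambdas-name'
               + '?path=150x150/pretty_photo.jpg';

https.get(endpoint, (res) => {
  // After resizing, the lambda answers with a redirect to the new image.
  console.log(res.statusCode, res.headers.location);
});
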
  • Set up Static website hosting

    • Now that you have an API endpoint, go to the bucket you created at the beginning and add permissions
      • YOUR_BUCKET -> Permissions -> Block public access -> Edit, uncheck Block all public access, Save -> Confirm
      • YOUR_BUCKET -> Permissions -> Bucket policy and paste
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::__BUCKET_NAME__/*"
        }
    ]
}

Pay attention to __BUCKET_NAME__. By the way, you can grant access not only to the whole bucket but also to a specific directory by providing its path instead of *.

    • Go to Properties (next to Permissions) -> Static website hosting -> Select "Use this bucket to host a website"
    • In Index document, enter any file name; "index.html" is the logical choice
    • Paste these Redirection rules
[
  {
    "Condition": {
      "HttpErrorCodeReturnedEquals": "404"
    },
    "Redirect": {
      "Protocol": "https",
      "HostName": "__DOMAIN__",
      "ReplaceKeyPrefixWith": "__PATH_TO_LAMBDA__?path=",
      "HttpRedirectCode": "307"
    }
  }
]

Pay attention to __DOMAIN__ and __PATH_TO_LAMBDA__ (the protocol is always https).
These come from your API endpoint. For example, if the url is https://some-id.execute-api.us-east-1.amazonaws.com/your-stage/your-lambdas-name, the corresponding fields should look like

"HostName": "some-id.execute-api.us-east-1.amazonaws.com",
"ReplaceKeyPrefixWith": "your-stage/your-lambdas-name?path="

(On older consoles that use XML-based routing rules, the equivalent nodes are <HostName> and <ReplaceKeyPrefixWith>.)
With this rule in place, a 404 for 150x150/pretty_photo.jpg is redirected to https://__DOMAIN__/__PATH_TO_LAMBDA__?path=150x150/pretty_photo.jpg, which invokes the lambda.
    • At this stage, before clicking on Save, copy your Endpoint. Do not mix it up with the API endpoint: this is the endpoint of your Static website hosting, and it is http, not https.
  • Add s3-resizer.zip and make the lambda work

    • Go to your lambda and click the lambda's box in the designer (presumably, the API Gateway box is selected instead)
    • Function code -> Code entry type -> Upload a .zip file, then upload s3-resizer.zip
    • In Runtime, select the latest version of Node.js that you found on Releases
    • [You can click on Save now to save time, because uploading a zip file takes a while]
    • Set up the following Environment variables (format: key=value)
      BUCKET=your bucket's name
      URL=Endpoint you copied before (from Static website hosting)
      WHITELIST=your list (space-separated) of allowed size options (e.g. AUTOx150 300x200 100x100_max). This parameter is optional; if it is not provided, the lambda will process any requested size (see the sketch after this step)
    • In Basic settings
      • Allocate 768 MB of memory
      • Set the timeout to 5 seconds

That is more than enough. You also should not worry too much about these limits, because the resized images are cached in S3, so the lambda is called only the first time a given size is requested. For example, a large 29 MB PNG converts to 150x150 in 1.8 s with 1024 MB of memory allocated, 2.3 s with 768 MB, 3.5 s with 512 MB, and ~7 s with 256 MB on Node.js 12.13 (the spread is probably due to GC). For normal images, the results are nearly the same (400-700 ms).

    • Save it. You are done!
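
For reference, here is a minimal sketch of how a lambda could read these environment variables and apply the optional WHITELIST. It is illustrative, not the repository's exact code.

// Illustrative only: reading the environment variables described above.
const BUCKET = process.env.BUCKET;        // e.g. "my-images-bucket"
const URL = process.env.URL;              // the Static website hosting endpoint
const WHITELIST = process.env.WHITELIST   // e.g. "AUTOx150 300x200 100x100_max"
  ? process.env.WHITELIST.split(' ').filter(Boolean)
  : null;                                 // null means "allow any size"

function isAllowed(sizePrefix) {
  return WHITELIST === null || WHITELIST.includes(sizePrefix);
}

// isAllowed('300x200') -> true if 300x200 is whitelisted, or no whitelist is set
// isAllowed('999x999') -> false when a whitelist is configured without it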

  • Test your lambda (optional)
    • Upload an image to your bucket and copy the full path to it. Check whether the image shows in your browser by entering "ENDPOINT/FULL_PATH"

Attention: ENDPOINT is your Static website hosting endpoint (http). If you added the image to the root of your bucket, then FULL_PATH is just the name of the image.

    • Go to the lambda, click on Test, and paste this JSON:
{
  "queryStringParameters": {"path": __YOUR_IMAGE_PATH_WITH_SIZE_PREFIX__}
}

__YOUR_IMAGE_PATH_WITH_SIZE_PREFIX__ - for example: 150x150/pretty_image.jpg

    • Go back to the bucket; a new directory 150x150 should have been created
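
If you prefer to test outside the console, the same event can be sent with the AWS SDK. A hypothetical example, assuming aws-sdk v2, locally configured credentials, and a function named s3-resizer in eu-central-1:

// Hypothetical test invocation; function name, region, and path are placeholders.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'eu-central-1' });

lambda.invoke({
  FunctionName: 's3-resizer',
  Payload: JSON.stringify({
    queryStringParameters: { path: '150x150/pretty_image.jpg' },
  }),
}).promise()
  .then((res) => console.log(res.StatusCode, res.Payload.toString()))
  .catch(console.error);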

How to use HTTPS

The Amazon S3 website endpoints do not support HTTPS
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
As a workaround, you have to use your own domain.
Please check out sagidM#7
