
Test Lambda deployment locally


It is good practice to check whether your network will work on AWS Lambda before deploying it to Amazon's service.

The following tutorial outlines how to perform such initial tests.

1. Setup

A few steps are required to follow along with this tutorial. These include:

  • neural network creation
  • creation of deployment source code
  • payload creation

You can follow steps 1 - 5 of ResNet18-deployment-on-AWS-Lambda and come back here.

Otherwise, if you wish to test base64 image encoding, check out steps 1 - 4 of the base64 image encoding tutorial and also return here. Step 5 of ResNet18-deployment-on-AWS-Lambda is also required so you can build your code.

Please note that you only build the C++ code into a .zip package; your model should be left as is.
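
After completing those steps, your working directory should contain roughly the following files (the names below are assumed from the linked tutorials and match those used later in this tutorial):

ls
# expected: model.ptc  payload.json  torchlambda.zip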

2. Testing locally

Docker is required for this step (this dependency should already be satisfied if you installed torchlambda). To test torchlambda deployments, one can use lambci/docker-lambda, which replicates the AWS Lambda environment locally within a Docker container.
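
If you have not used this image before, you can fetch it ahead of time (docker run would otherwise pull it automatically on first use):

docker pull lambci/lambda:provided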

First, unzip the deployment code created at step 5 of ResNet18-deployment-on-AWS-Lambda into the build_source directory:

unzip torchlambda.zip -d build_source
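
To verify what was packaged (and what will be mounted into the container), you can also list the archive's contents:

unzip -l torchlambda.zip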

Now, the following command will test your deployment and return the response in output.json:

docker run --rm \
  -v "$(pwd)"/build_source:/var/task:ro,delegated \
  -v "$(pwd)"/model.ptc:/opt/model.ptc:ro,delegated \
  -i -e DOCKER_LAMBDA_USE_STDIN=1 \
  lambci/lambda:provided \
  torchlambda <payload.json >output.json

Here is what is going on in this command:

  • Mount the entrypoint code (build_source) and model.ptc where lambci/docker-lambda expects them using the -v flags ("$(pwd)" is required because Docker does not allow relative paths)
  • Instruct it to use the standard input stream as the request (-e DOCKER_LAMBDA_USE_STDIN=1)
  • Run the image lambci/lambda:provided, where the tag is the type of runtime (in torchlambda's case it is always provided)
  • torchlambda is the name of the handler (it is always torchlambda after torchlambda build and cannot be modified)
  • Pass the previously created payload.json as the request's payload
  • Write the results to output.json
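
As an alternative to streaming the request through standard input, lambci/docker-lambda also accepts the event as an argument placed after the handler name. A minimal sketch, assuming the same mounts as above (lambci writes the function result to stdout and its logs to stderr, so redirecting stdout still captures just the response):

docker run --rm \
  -v "$(pwd)"/build_source:/var/task:ro,delegated \
  -v "$(pwd)"/model.ptc:/opt/model.ptc:ro,delegated \
  lambci/lambda:provided \
  torchlambda "$(cat payload.json)" >output.json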

If done correctly, you should have output.json containing the response you defined. Some statistics are also displayed, such as the time taken to initialize the Lambda function, its duration, and the billed duration (in milliseconds).
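
To inspect the response, you can pretty-print it with Python's built-in json.tool (jq . output.json works just as well if you have jq installed):

python -m json.tool output.json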
