Test Lambda deployment locally
It is good practice to check whether your network will work on AWS Lambda without immediately resorting to Amazon's service. The following tutorial outlines how to perform such initial tests locally. A few steps are required to follow along:
- neural network creation
- creation of deployment source code
- payload creation
You can follow steps 1 - 5 of ResNet18-deployment-on-AWS-Lambda and come back here. Alternatively, if you wish to test base64 image encoding, check out steps 1 - 4 of the base64 image encoding tutorial and also return here. Step 5 of ResNet18-deployment-on-AWS-Lambda is still required so you can build your code. Please note that you only build the C++ code into a .zip package, while your model should be left as is.
Docker is required for this step (this dependency should already be satisfied if you installed torchlambda). To test torchlambda deployments locally, one can use lambci/docker-lambda, which replicates the AWS Lambda environment inside a Docker container.
First, unzip the deployment code created at step 5 of ResNet18-deployment-on-AWS-Lambda into a build_source directory:
unzip torchlambda.zip -d build_source
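The test run below reads a JSON payload from standard input. A minimal sketch of creating such a payload.json, assuming your handler expects a base64-encoded image in a field named data (both the field name and image.jpg are assumptions here; match them to your own torchlambda settings and input, as covered in the base64 image encoding tutorial):

```shell
# Sketch only: "data" field name and image.jpg are assumptions, not
# guaranteed by this tutorial; adjust them to your own configuration.
printf 'placeholder' > image.jpg   # stand-in file; use a real image here
printf '{"data":"%s"}\n' "$(base64 -w0 image.jpg)" > payload.json
cat payload.json
```

Note that `base64 -w0` (no line wrapping) is the GNU coreutils form; on macOS plain `base64` already produces a single line.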
Now, the following command will test your deployment and return the response in output.json:
docker run --rm \
  -v "$(pwd)"/build_source:/var/task:ro,delegated \
  -v "$(pwd)"/model.ptc:/opt/model.ptc:ro,delegated \
  -i -e DOCKER_LAMBDA_USE_STDIN=1 \
  lambci/lambda:provided \
  torchlambda < payload.json > output.json
What is basically going on here:
- Mount the entrypoint code (build_source) and model.ptc as appropriate for lambci/docker-lambda using -v flags ("$(pwd)" is required as Docker does not allow relative paths)
- Instruct it to use the standard input stream as the request via DOCKER_LAMBDA_USE_STDIN=1
- Run the image lambci/lambda:provided, where the tag is the type of runtime (in torchlambda's case it is always provided)
- torchlambda is the name of the handler (it is always torchlambda after torchlambda build and cannot be modified)
- Pass the previously created payload.json as the request's payload
- Output results to output.json
If done correctly, you should have output.json containing the response you defined. Some statistics are also displayed, such as the time taken to initialize the Lambda function, its duration, and how much you would be billed (in milliseconds) for running it.
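A quick way to confirm the response is well-formed JSON is to pass it through `python3 -m json.tool`. The example below runs it on an illustrative stand-in file (sample.json), since the actual response fields depend entirely on your torchlambda settings; in practice, point it at the output.json produced above.

```shell
# Illustrative only: sample.json stands in for your real output.json,
# and the "output" field is an assumption about your handler's settings.
printf '{"output": [0.1, 0.9]}\n' > sample.json
python3 -m json.tool sample.json   # exits non-zero if the file is not valid JSON
```

After the docker run above, the equivalent check on the real response is simply `python3 -m json.tool output.json`.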