Build a Serverless Todo App With AWS, Pulumi, and Python
Try this step-by-step guide to build and deploy a scalable serverless app that’s accessible through a RESTful API.
Developers charged with building modern, scalable applications often face the burden of having to learn new skills, but there are alternatives that can speed and simplify their work. This tutorial provides a practical, hands-on guide to deploying a serverless app that’s accessible through a RESTful API. Following along will give you valuable skills in serverless architecture, Infrastructure as Code (IaC) and API development, empowering you to create efficient and cost-effective solutions.
In this tutorial, I’ll walk through a step-by-step process for creating a serverless application using Amazon Web Services (AWS) Lambda, Docker and AWS API Gateway, all orchestrated with Pulumi using Python. By the end of this guide, you’ll have a deployed serverless application that can be accessed via a RESTful API.
Why Use Pulumi and Serverless for This Project?
Pulumi is an open-source Infrastructure as Code (IaC) tool that allows developers to define and manage infrastructure using their favorite programming languages, such as TypeScript, JavaScript, Python, Go or C#.
By using Pulumi to create AWS Lambda, Docker, and API Gateway services, developers can leverage their existing knowledge to build and deploy a highly scalable serverless solution that can handle traffic without needing additional infrastructure-creating tools.
Serverless computing allows developers to manage and run application code without the need to provision or manage servers. By using this model, developers can focus mainly on their application code without worrying about the underlying infrastructure.
AWS API Gateway is a fully managed service for creating, publishing and securing APIs; it also handles rate limiting, routing and scaling of API requests. AWS Lambda is the serverless compute service that will run the application code behind those APIs.
Project Overview
The todo app will have the following features:
- Create a todo: This action will add a new item to the todo list.
- Read todos: This action will retrieve the todo list.
- Update a todo: This action will update an existing todo item.
- Delete a todo: This action will delete a todo item.
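The features above operate on items in a DynamoDB table. As a rough sketch, each todo can be modeled as a small record whose id serves as the partition key (the title and completed fields are assumptions of this sketch; the actual item shape is up to your Lambda code):

```python
import uuid
from dataclasses import asdict, dataclass, field

@dataclass
class Todo:
    """A single todo item; `id` is the DynamoDB partition key."""
    title: str
    completed: bool = False
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

# The dict form is what gets written to (and read from) DynamoDB
item = asdict(Todo(title="write tutorial"))
```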
Now that you know what the project will do, follow this step-by-step guide to create a serverless todo application with Docker, API Gateway, AWS Lambda and Pulumi, all in Python.
Get Started
To begin, ensure you have done the following on your development machine.
- Install the Pulumi command-line interface (CLI)
- Install Python 3.7 or later.
- Install the AWS CLI.
- If you don’t already have an AWS account, set one up.
- Configure the AWS CLI with your credentials to manage your AWS services.
Step 1: Install Pulumi
First, ensure you have Pulumi installed in your development environment. Pulumi can be installed on Linux, macOS or Windows:
- On Linux:
curl -fsSL https://get.pulumi.com | sh
- On macOS (using Brew):
brew install pulumi/tap/pulumi
- On Windows: Download and run the Pulumi installer (or try one of the other methods on that page).
Step 2: Set Up Your Environment
Next, set up your environment and install the Python dependencies:
- Create a Pulumi account to store your stack state, if you want to use Pulumi for state management.
- Install dependencies: Install Python and pip on your workstation, since you will use Python to provision infrastructure.
Step 3: Create a New Pulumi Project
Create a folder called todo_pulumi_docker_aws_lambda_api_gateway, then create a subfolder named todo-app for the Lambda project.
Initialize a new Pulumi project by running:
cd todo_pulumi_docker_aws_lambda_api_gateway/todo-app
pulumi new aws-python
Follow the prompts to set up your project.
Step 4: Install Dependencies
Create a requirements.txt file in the todo-app project root folder with the following content:
pulumi>=3.0.0,<4.0.0
pulumi-aws>=6.0.2,<7.0.0
pulumi_docker==3.4.0
setuptools
Then install the dependencies using this command:
pip3 install -r requirements.txt
Step 5: Create Your Lambda Function
Create a folder called lambda_function; it will contain the Lambda code in a file named lambda.py.
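The full handler lives in the companion repo; as a rough sketch (the routes follow the endpoints described later in this tutorial, and the TABLE_NAME environment variable is an assumption of this sketch), lambda.py might look like:

```python
import json
import os
import uuid

# TABLE_NAME is an assumption of this sketch; the Pulumi code in this
# tutorial names the table todo-<environment>
TABLE_NAME = os.environ.get("TABLE_NAME", "todo-dev")

def _table():
    # boto3 ships with the Lambda Python runtime; imported lazily so routes
    # that don't touch the database (like /health) need no AWS credentials
    import boto3
    return boto3.resource("dynamodb").Table(TABLE_NAME)

def _response(status, body):
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

def handler(event, context):
    method = event.get("httpMethod", "GET")
    path = event.get("path", "/")
    if path == "/health":
        return _response(200, {"status": "ok"})
    if path == "/todos" and method == "GET":
        return _response(200, _table().scan().get("Items", []))
    if path == "/todos" and method == "POST":
        item = json.loads(event.get("body") or "{}")
        item.setdefault("id", str(uuid.uuid4()))
        _table().put_item(Item=item)
        return _response(201, item)
    if path.startswith("/todos/"):
        todo_id = path.rsplit("/", 1)[-1]
        if method == "DELETE":
            _table().delete_item(Key={"id": todo_id})
            return _response(200, {"deleted": todo_id})
        if method == "PATCH":
            updates = json.loads(event.get("body") or "{}")
            _table().update_item(
                Key={"id": todo_id},
                UpdateExpression="SET " + ", ".join(
                    f"#k{i} = :v{i}" for i in range(len(updates))),
                ExpressionAttributeNames={
                    f"#k{i}": k for i, k in enumerate(updates)},
                ExpressionAttributeValues={
                    f":v{i}": v for i, v in enumerate(updates.values())},
            )
            return _response(200, {"updated": todo_id})
    return _response(404, {"error": "not found"})
```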
Step 6: Create a Dockerfile
Create a file named Dockerfile inside the lambda_function directory:
# Stage 1: Build dependencies on Ubuntu
FROM ubuntu:22.04 AS builder

WORKDIR /app

# Install Python and pip
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Copy and install dependencies into a local directory
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt -t /app/python

# Stage 2: Lambda-compatible final image
FROM public.ecr.aws/lambda/python:3.10

# Copy dependencies from the builder stage
COPY --from=builder /app/python ${LAMBDA_TASK_ROOT}

# Copy Lambda function code
COPY lambda.py ${LAMBDA_TASK_ROOT}/lambda.py

# Set the Lambda handler
CMD ["lambda.handler"]
Step 7: Create a GitHub Action to Push the Docker Image to ECR
Create a file named docker-publish.yml in the .github/workflows folder. This file will contain the GitHub Actions workflow that builds the Docker image and pushes it to the AWS Elastic Container Registry (ECR).
Add the following secrets to the repository, as they are referenced in the workflow below: AWS_REGION, REGISTRIES, ECR_REGISTRY and AWS_ROLE_ARN.
Here is the workflow to build and push the Docker image to AWS ECR:
name: Docker Push

on:
  push:
    paths:
      - 'todo-app/lambda_function/**'
    branches:
      - main

jobs:
  push-app-ecr:
    name: Deploy to ECR
    runs-on: ubuntu-latest
    env:
      AWS_REGION: ${{ secrets.AWS_REGION }}
      TARGET_ENVIRONMENT: dev
      REGISTRIES: ${{ secrets.REGISTRIES }}
      ECR_REGISTRY: ${{ secrets.ECR_REGISTRY }}
    permissions:
      id-token: write
      contents: read
      pull-requests: write
      repository-projects: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set ENV variable with the repo name
        run: echo "REPO_NAME=${GITHUB_REPOSITORY#$GITHUB_REPOSITORY_OWNER/}" >> $GITHUB_ENV

      - name: Show the custom ENV variable
        run: echo $REPO_NAME

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}
          role-session-name: GithubActionsSession

      - name: Install AWS CLI
        run: |
          sudo apt-get update
          sudo apt-get install -y awscli

      - name: Check if ECR repository exists
        id: check_ecr_repo
        run: |
          aws ecr describe-repositories --repository-names ${{ env.REPO_NAME }} --region ${{ env.AWS_REGION }} > /dev/null 2>&1 || echo "exists=false" >> $GITHUB_OUTPUT

      - name: Create ECR repository if it doesn't exist
        if: steps.check_ecr_repo.outputs.exists == 'false'
        run: |
          aws ecr create-repository --repository-name ${{ env.REPO_NAME }} --region ${{ env.AWS_REGION }}

      - name: Show ECR repository details
        run: |
          aws ecr describe-repositories --repository-names ${{ env.REPO_NAME }} --region ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
        with:
          registries: "${{ env.REGISTRIES }}"

      - name: Set short sha
        id: sha_short
        run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

      - name: Build and push
        uses: docker/build-push-action@v5
        id: build-push-to-ecr
        with:
          context: todo-app/lambda_function
          file: todo-app/lambda_function/Dockerfile
          push: true
          tags: ${{ env.ECR_REGISTRY }}/${{ env.REPO_NAME }}:${{ steps.sha_short.outputs.sha_short }}
          platforms: linux/amd64
          provenance: false
        continue-on-error: false
Step 8: Create a GitHub Action to Run and Deploy the Pulumi Code
Create an access token on pulumi.com; GitHub Actions will use it so that Pulumi can manage your stack state. Store the token as a repository secret named PULUMI_ACCESS_TOKEN.
Create a file named pulumi-deploy.yml in the .github/workflows folder. It will contain the GitHub Actions code that deploys the infrastructure to AWS.
name: Pulumi Deploy

on:
  push:
    paths:
      - 'todo-app/**'
    branches:
      - main # Trigger on push to the main branch

jobs:
  pulumi-deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      pull-requests: write
      repository-projects: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}
          role-session-name: GithubActionsSession

      - name: Install Dependencies
        working-directory: todo-app
        run: |
          pip install -r requirements.txt

      - name: Configure Pulumi
        working-directory: todo-app
        run: |
          pulumi stack select ExitoLab/todo-app/dev --non-interactive || pulumi stack init ExitoLab/todo-app/dev
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

      - name: Pulumi Preview
        working-directory: todo-app
        run: |
          pulumi preview --stack ExitoLab/todo-app/dev
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

      - name: Pulumi Up
        working-directory: todo-app
        run: |
          pulumi up --stack ExitoLab/todo-app/dev --yes
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

      # Comment out this block if you don't want to destroy the infra
      - name: Pulumi Destroy
        working-directory: todo-app
        run: |
          pulumi destroy --stack ExitoLab/todo-app/dev --yes
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
You can find the Lambda function code in todo-app/lambda_function. It contains the Lambda function code in Python, uses DynamoDB to keep track of the todo list and exposes the following resource endpoints:
- GET endpoint: uses the resource /todos with the method GET.
- POST endpoint: uses the resource /todos with the method POST.
- DELETE endpoint: uses the resource /todos/<id> with the method DELETE.
- PATCH endpoint: uses the resource /todos/<id> with the method PATCH.
Step 9: Create the Pulumi Code to Spin Up the Infrastructure
Create a file called __main__.py in the todo-app folder; it will contain the code for spinning up the infrastructure. The Pulumi code will create the following resources on AWS:
- API Gateway: Defines the API Gateway and its catch-all proxy resource, linking it to the Lambda function.
- Lambda function: A Dockerized Lambda function created from a Docker image (image_uri).
- IAM roles: The identity and access management (IAM) role attached to the Lambda function. It allows the Lambda function to assume the role and grants it access to DynamoDB.
- Deployment: Deploys the API Gateway to the dev stage.
The Pulumi code is designed to deploy into different environments, such as production and development. In this tutorial, you will be deploying to dev, and the config file for dev is Pulumi.dev.yaml.
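As a sketch, Pulumi.dev.yaml supplies the values the program reads with config.get (the image URI below is a placeholder; Pulumi namespaces keys with the project name, assumed here to be todo-app):

```yaml
config:
  todo-app:docker_image: <account-id>.dkr.ecr.us-east-1.amazonaws.com/<repo-name>:<tag>
  todo-app:environment: dev
  todo-app:region: us-east-1
```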
The code for the Pulumi infrastructure resource is:
import pulumi, json
import pulumi_aws as aws
from pulumi_docker import Image, DockerBuild
import pulumi_docker as docker
from pulumi import Config

# Create a config object to access configuration values
config = pulumi.Config()
docker_image = config.get("docker_image")
environment = config.get("environment")
region = config.get("region")

aws.config.region = region

# First, create the DynamoDB table with just `id` as the primary key
dynamodb_table = aws.dynamodb.Table(
    f"todo-{environment}",
    name=f"todo-{environment}",
    hash_key="id",  # Only `id` as the partition key
    attributes=[
        aws.dynamodb.TableAttributeArgs(
            name="id",
            type="S"  # `S` for string type (use the appropriate type for `id`)
        ),
    ],
    billing_mode="PAY_PER_REQUEST",  # On-demand billing mode
    tags={
        "Environment": environment,
        "Created_By": "Pulumi"
    }
)

# Create the IAM execution role for the Lambda function
lambda_role = aws.iam.Role(
    "lambdaExecutionRole",
    assume_role_policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
        }]
    })
)

# Create an inline policy for the role
dynamodb_policy = aws.iam.RolePolicy(
    f"lambdaRolePolicy-{environment}",
    role=lambda_role.id,
    policy=pulumi.Output.json_dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "dynamodb:Scan",
                    "dynamodb:PutItem",
                    "dynamodb:GetItem",
                    "dynamodb:UpdateItem",
                    "dynamodb:DeleteItem",
                    "dynamodb:Query"
                ],
                "Resource": [
                    dynamodb_table.arn,
                    pulumi.Output.concat(dynamodb_table.arn, "/*")
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "arn:aws:logs:*:*:*"
            }
        ]
    })
)

# Create a Lambda function using the Docker image
lambda_function = aws.lambda_.Function(
    f"my-serverless-function-{environment}",
    role=lambda_role.arn,
    package_type="Image",
    image_uri=docker_image,
    memory_size=512,
    timeout=30,
    opts=pulumi.ResourceOptions(depends_on=[lambda_role])
)

# Create an API Gateway REST API
api = aws.apigateway.RestApi(
    f"my-api-{environment}",
    description="My serverless API"
)

# Create a catch-all resource for the API
proxy_resource = aws.apigateway.Resource(
    f"proxy-resource-{environment}",
    rest_api=api.id,
    parent_id=api.root_resource_id,
    path_part="{proxy+}"
)

# Create a method for the proxy resource that allows any method
method = aws.apigateway.Method(
    f"proxy-method-{environment}",
    rest_api=api.id,
    resource_id=proxy_resource.id,
    http_method="ANY",
    authorization="NONE"
)

# Integrate Lambda with API Gateway using AWS_PROXY
integration = aws.apigateway.Integration(
    f"proxy-integration-{environment}",
    rest_api=api.id,
    resource_id=proxy_resource.id,
    http_method=method.http_method,
    integration_http_method="POST",
    type="AWS_PROXY",
    uri=lambda_function.invoke_arn
)

# Allow API Gateway to invoke the Lambda function
lambda_permission = aws.lambda_.Permission(
    f"api-gateway-lambda-permission-{environment}",
    action="lambda:InvokeFunction",
    function=lambda_function.name,
    principal="apigateway.amazonaws.com",
    source_arn=pulumi.Output.concat(api.execution_arn, "/*/*")
)

# Deploy the API; explicitly depends on the method, integration and
# permission to avoid timing issues
deployment = aws.apigateway.Deployment(
    f"api-deployment-{environment}",
    rest_api=api.id,
    stage_name="dev",
    opts=pulumi.ResourceOptions(
        depends_on=[method, integration, lambda_permission]
    )
)

# Output the API Gateway stage URL
api_invoke_url = pulumi.Output.concat(
    "https://", api.id, ".execute-api.", region, ".amazonaws.com/", deployment.stage_name
)
pulumi.export("api_invoke_url", api_invoke_url)
Step 10: Test the Serverless Application
The API Gateway routes requests to the Lambda function that contains the Python code. To test the endpoints, you need the stage URL. Log in to AWS, navigate to API Gateway, click on my-api-dev and copy the value listed under "Invoke URL" for the dev stage. (Alternatively, use the api_invoke_url value that Pulumi exports.) You can then call the endpoints from Postman.
- Health endpoint: The health endpoint checks if the app is up and running.
- GET endpoint: The GET endpoint retrieves the list of todos from DynamoDB.
- POST endpoint: The POST endpoint creates a todo in DynamoDB.
- PATCH endpoint: The PATCH endpoint updates a todo in DynamoDB by its ID.
- DELETE endpoint: The DELETE endpoint deletes a todo from DynamoDB by its ID.
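If you'd rather script these checks than use Postman, a small Python helper can exercise the endpoints (the BASE_URL below is a placeholder for your stage's invoke URL):

```python
import json
import urllib.request

# Placeholder: replace with the "Invoke URL" of your dev stage
BASE_URL = "https://<api-id>.execute-api.us-east-1.amazonaws.com/dev"

def build_request(method, path, body=None):
    """Build a JSON request against the deployed API."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        method=method,
        headers={"Content-Type": "application/json"},
    )

# Example usage (requires the deployed API):
# with urllib.request.urlopen(build_request("POST", "/todos", {"title": "buy milk"})) as resp:
#     print(resp.status, json.loads(resp.read()))
```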
Conclusion
You have successfully built and deployed a scalable, serverless todo app on AWS using API Gateway, Lambda, Docker, GitHub Actions and Pulumi. Pulumi makes it easier to manage Infrastructure as Code (IaC) so that deployments are efficient, maintainable and fast. GitHub Actions automates the CI/CD pipeline for seamless and reliable updates, while Docker in Lambda provides the flexibility to package your application and its dependencies into a container image. You can find the complete code for this project in my GitHub repo.
This article was first published on https://thenewstack.io/build-a-serverless-todo-app-with-aws-pulumi-and-python/