Today I’m going to talk about automating the deployment of AWS Lambda functions using a new feature in Bitbucket called Pipes.

Before we do that, let’s start with a basic overview.

What is AWS Lambda?

AWS Lambda is a managed service from Amazon that lets you run functions-as-a-service. The basic idea is that you define a Python/Node/Java/etc. function, give it an API endpoint, and upload it to AWS. Your function then handles the basic request-response cycle, while AWS handles the underlying infrastructure plumbing (compute, networking, storage, and so on) and scales it all on demand. This frees you from managing infrastructure and lets you focus on building applications.
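For a sense of what this looks like in practice, here is a minimal sketch of a handler sitting behind an API endpoint (via API Gateway). The event and context objects are passed in by Lambda; everything else here is just illustrative:

import json

def lambda_handler(event, context):
    # 'event' carries the incoming request data, 'context' carries runtime info
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }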

What is Bitbucket Pipelines?

Bitbucket Pipelines is the continuous-integration/continuous-delivery pipeline integrated directly into Bitbucket. It works as follows: once a pull request is reviewed and merged into a branch, Bitbucket can run a sequence of steps to perform various activities, such as running test cases, static code analysis, and deploying to staging or production environments. I’ve covered deployments through Bitbucket Pipelines before, so you may want to go through that once before proceeding.
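As a quick refresher, a minimal bitbucket-pipelines.yml might look like the sketch below; the Python image and pytest command are placeholders for whatever your project actually uses:

image: python:3.8

pipelines:
  default:
    - step:
        name: Run tests
        script:
          - pip install -r requirements.txt
          - pytest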

What is Bitbucket Pipes?

Bitbucket Pipes is the new feature we’ll test-drive today. It is a marketplace for third-party integrations. A Pipe is nothing but a parameterized Docker container that you can use within your pipeline to avoid rewriting a lot of standard code. It will look something like this:

- pipe: <vendor>/<some-pipe>
  variables:
    variable_1: value_1
    variable_2: value_2
    variable_3: value_3

There are many Pipes available today from AWS, Google Cloud, SonarQube, Slack, and others. They’re essentially a way to abstract away repeated steps. This makes code reviews easier, makes deployments more reliable, and lets you focus on what is being done rather than how it is being done. If a third-party Pipe doesn’t work for you, you can even write your own!
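For completeness, a custom Pipe is referenced the same way as a vendor one, except that it points at your own Docker image; the account and image names below are placeholders:

- pipe: docker://my-dockerhub-account/my-custom-pipe:1.0.0
  variables:
    MY_VARIABLE: 'some-value'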

These are some of the vendors offering Pipes today:

Bitbucket Pipes marketplace providers.

Goal: Deploy a Lambda using Pipes

So our goal today is as follows: We want to deploy a test Lambda function using the new Pipes feature.

To do this, we’ll need to:

  1. Create a test function.
  2. Configure AWS credentials for Lambda deployments.
  3. Configure credentials in Bitbucket.
  4. Write our pipelines file which will use our credentials and a Pipe to deploy to AWS.

Step 1: Create a test function

Let’s start with a basic test function. Create a new repo, and add a new file called lambda_function.py with the following contents:

def lambda_handler(event, context):
    return "It works :)"

Step 2: Configure AWS credentials

For this deployment, we’ll need an IAM user with the AWSLambdaFullAccess managed policy.

Once this user is created, generate an access key and secret key pair, and add them to the Repository variables of your repo. Make sure to mark these values as secured so that they are encrypted and masked in any log output.

Bitbucket Pipelines repository variables masked and encrypted.

The access key and secret can be added at any of three levels: the Account level, the Deployment level, or the Repository level. You can find more information about these here.
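If you prefer doing the AWS side from the command line, the user, policy attachment, and key pair can be created with something like the following (the user name is just an example):

aws iam create-user --user-name bitbucket-lambda-deployer
aws iam attach-user-policy \
    --user-name bitbucket-lambda-deployer \
    --policy-arn arn:aws:iam::aws:policy/AWSLambdaFullAccess
aws iam create-access-key --user-name bitbucket-lambda-deployer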

Step 3: Create our Pipelines file

Now create a bitbucket-pipelines.yml file and add the following:

pipelines:
  default:
    - step:
        name: Build and package
        script:
          - apt-get update && apt-get install -y zip
          - zip code.zip lambda_function.py
        artifacts:
          - code.zip
    - step:
        name: Update Lambda code
        script:
          - pipe: atlassian/aws-lambda-deploy:0.2.1
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: 'us-east-1'
              FUNCTION_NAME: 'my-lambda-function'
              COMMAND: 'update'
              ZIP_FILE: 'code.zip'

The first step in the pipeline is pretty basic: it packages our Python function into a zip file and passes it as an artifact to the next step.

The second step is where the magic happens. We’re calling the atlassian/aws-lambda-deploy:0.2.1 Pipe, a Docker container provided by Atlassian for deploying Lambdas. Its source code can be found here. We call this Pipe with six parameters: our AWS access key and secret key, the region we want to deploy to, the name of our Lambda function, the command we want to execute, and the name of our packaged artifact.
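Conceptually, the update command boils down to something like the following AWS CLI call; this is shown purely to illustrate what the Pipe is doing for us:

aws lambda update-function-code \
    --function-name my-lambda-function \
    --zip-file fileb://code.zip \
    --region us-east-1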

Step 4: Executing our deployment

Committing the above changes into our repo will trigger a pipeline for this deployment. If all goes well, we should see the following:

Bitbucket Pipes deployment successful.
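Once the pipeline goes green, we can also sanity-check the deployment from the command line by invoking the function directly (assuming the function name and region used above):

aws lambda invoke \
    --function-name my-lambda-function \
    --region us-east-1 \
    response.json
cat response.json   # should contain "It works :)"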

Wrapping it up

With the above pipeline in place, we can now leverage other Bitbucket features to tighten the deployment, such as merge checks, branch permissions, and deployment targets. We can also tighten the permissions of the IAM user we created, following the principle of least privilege, to ensure it has access to only the resources it needs.

Using Pipes in this way has the following advantages:

  1. They simplify pipeline creation and abstract away repetitive details: just drop in a vendor-supplied Pipe, pass in your parameters, and that’s it!
  2. They make code reviews easier, since complex workflows can be hidden behind standard Pipes that do not need to be reviewed very often.
  3. Pipes use semantic versioning, so we can pin a Pipe to a major or minor version as we choose. Since Bitbucket already has a code-review process, we can ensure that changing these versions requires review and approval.
  4. Pipes can simplify post-deployment administrative tasks, such as sending alerts via email, Slack, PagerDuty, etc.

And that’s all. I hope you’ve enjoyed this demo. You can find additional resources below.

Thanks, and happy coding :)

Resources