In the previous post, I talked about how to automate AWS Lambda deployments using Pipes in Bitbucket.

Today I’ll go over how to use Bitbucket Pipelines to deploy Serverless projects.

What is the Serverless framework?

The Serverless framework is a vendor-agnostic, declarative, and configurable framework for creating Lambdas and their dependencies: API gateways, DynamoDB tables, IAM policies, and so on. You specify your Lambda infrastructure in a YAML file, and the framework takes care of creating or updating those resources.

If you haven’t heard of Serverless before, you can find out more here.

Goal: Deploy a Serverless project using Bitbucket Pipelines

So our goal today is as follows: We want to deploy a test Serverless project using Bitbucket Pipelines.

To do this, we’ll need to:

  1. Create a test project.
  2. Configure AWS credentials for deployments.
  3. Configure credentials in Bitbucket.
  4. Write our pipelines file, which will use our credentials and deploy our project to AWS.

Step 1: Create a test project

If you don’t already have a Serverless project you want to deploy, you can create a new one to test-drive from a template. Just run the command below:

serverless create --template aws-python3

The above command will create a basic hello-world Lambda in handler.py, along with a serverless.yml file that tells the framework how to deploy it.

My handler.py file looks like this:

import json

def hello(event, context):

    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response

    # Use this code if you don't use the http event with the LAMBDA-PROXY
    # integration
    # """
    # return {
    #     "message": "Go Serverless v1.0! Your function executed successfully!",
    #     "event": event
    # }
    # """

And my serverless.yml file looks like this (I’ve made a few changes):

service: ayush-test

provider:
  name: aws
  runtime: python3.7
  stage: "dev"
  region: "us-east-1"
  timeout: 30
  stackTags:
    Project: "MyProject"
    deployed_by: "Ayush Sharma"
    deployed_tag: "master"
    deployed_on: "<date>"
  deploymentBucket: 'my-deployment-bucket'

package:
  exclude:
    - .gitignore
    - bitbucket-pipelines.yml
    - README.md
    - serverless.yml
  excludeDevDependencies: true
  individually: true

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: /
          method: get
          cors: true
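
Before wiring this into a pipeline, it's worth sanity-checking the function locally. The command below is a minimal sketch that assumes you have the Serverless CLI installed and that the function is named hello, as in the serverless.yml above:

serverless invoke local --function hello

If everything is in order, you should see the JSON response from handler.py, including the statusCode of 200.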

Step 2: Configure AWS credentials

The AWS credentials can be configured just like we did for the Bitbucket Pipes deployment. The only difference is that we'll need more permissions than the AWSLambdaFullAccess policy we used last time: since we're also creating API gateways, we'll need to add a policy that grants API Gateway access as well.

Note that it's very important to follow the least-privilege principle when creating these credentials. Since you'll likely use the same credentials across multiple Serverless deployments, it's a good idea to keep the policies as tight as possible.
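
If you'd rather script the credential setup than click through the console, here's a hedged sketch using the AWS CLI. The user name serverless-deployer is just an example, and AmazonAPIGatewayAdministrator is one managed policy that grants the API Gateway access mentioned above. Depending on your project, serverless deploy may also need CloudFormation, S3, and IAM permissions, so treat this as a starting point and tighten it to your needs:

# Create a dedicated deployment user (example name).
aws iam create-user --user-name serverless-deployer

# Attach the Lambda policy we used last time, plus API Gateway access.
aws iam attach-user-policy --user-name serverless-deployer \
  --policy-arn arn:aws:iam::aws:policy/AWSLambdaFullAccess
aws iam attach-user-policy --user-name serverless-deployer \
  --policy-arn arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator

# Generate the access key and secret to store in Bitbucket.
aws iam create-access-key --user-name serverless-deployer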

Step 3: Create our Pipelines file

Now create a bitbucket-pipelines.yml file and add the following:

image: node:11.13.0-alpine

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - apk add python3
            - npm install -g serverless
            - serverless config credentials --stage dev --provider aws --key ${AWS_DEV_LAMBDA_KEY} --secret ${AWS_DEV_LAMBDA_SECRET}
            - serverless deploy --stage dev

There are a few things going on in the pipelines file above:

  1. We’re using the node:11.13.0-alpine Docker image in our pipeline. This image is small and comes with the npm package manager pre-installed.
  2. The caches: section caches the node dependencies we install so they can be re-used across multiple pipeline runs.
  3. We’re using apk, Alpine's package manager, to install the basic dependencies that our Serverless project will need (in this case, python3).
  4. The serverless config credentials command configures our AWS credentials, reading the key and secret from the repository variables we set up in Bitbucket.
  5. The serverless deploy command reads our serverless.yml file and deploys the resources we’ve specified.
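
Once this is working, the same pattern extends naturally to multiple stages. The sketch below shows one possible layout, assuming you've created separate production credentials and stored them as the (hypothetical) repository variables AWS_PROD_LAMBDA_KEY and AWS_PROD_LAMBDA_SECRET:

pipelines:
  branches:
    # master step as above, deploying --stage dev
    production:
      - step:
          caches:
            - node
          script:
            - apk add python3
            - npm install -g serverless
            - serverless config credentials --stage prod --provider aws --key ${AWS_PROD_LAMBDA_KEY} --secret ${AWS_PROD_LAMBDA_SECRET}
            - serverless deploy --stage prod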

Step 4: Execute our deployment

Committing the above changes into our repo will trigger a pipeline for this deployment. If all goes well, we should see the following:

Bitbucket deployment for Serverless project successful.
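
If you want to verify the deployment beyond the pipeline log, you can exercise the function directly. A quick sketch, assuming the function name hello from our serverless.yml (the endpoint URL below is a placeholder; the real one is printed in the serverless deploy output):

# Invoke the deployed function through the Serverless CLI.
serverless invoke --function hello --stage dev

# Or hit the API gateway endpoint printed by 'serverless deploy'.
curl https://<api-id>.execute-api.us-east-1.amazonaws.com/dev/

Both should return the "Go Serverless v1.0!" message from handler.py.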

Wrapping it up

With the above pipeline in place, we can now leverage other Bitbucket features to tighten the deployment, such as merge checks, branch permissions, and deployment targets. We can also tighten the permissions of the IAM role we created using the least-privilege rule to ensure it has access to only the resources it needs.

If you’re using Serverless for a lot of deployments, you can even create a custom Bitbucket Pipe for it so you can abstract away a lot of boilerplate code, and take advantage of standards and best practices across all your Serverless deployments.

And that’s all. I hope you’ve enjoyed this demo. You can find additional resources below.

Thanks, and happy coding :)

Resources