DevOps made easy with Bamboo, Docker and AWS

  • Article
  • Jan.4.2017

Quick recap

If you’ve been following the blog series Dockerising our apps, you’ll know we promised to show how this instance can be deployed on AWS (illustrated here). As discussed, the final piece of the puzzle is to show how building and deploying our image can be automated with Bamboo, Atlassian’s own continuous integration and deployment software.

What is Bamboo?

Bamboo is a continuous integration (CI) and continuous delivery (CD) tool that ties automated builds, tests and releases into a well-crafted workflow. There are lots of other tools on the market, like Jenkins and TeamCity, but what differentiates Bamboo from these is the continuous delivery aspect: Bamboo separates the actual build workflow from the deployment.

This separation allows users to map specific branches to deployment environments like QA, staging or production for greater clarity. Other tools combine everything in a single workflow, which can be very confusing and difficult to manage. There are many aspects to Bamboo that are impossible to cover in one blog, but what I will attempt to do is briefly explain some Bamboo concepts as we encounter them in our workflow.

Methods

In this section we will begin designing our CI workflow and the CD. In Bamboo, builds and deployments are done via builders, and Bamboo comes bundled with a handful of them, like Maven, Ant, NPM, Visual Studio and Tomcat. Because the Atlassian community is very active and very rich, some smart people at Utoolity have developed custom add-ons that allow users to perform deployments to AWS. Because of this, I will show how we can achieve our goal of deploying to AWS using both the add-ons and the built-in tools.

Add-ons path

The first thing to do after installing Bamboo is to define the source code repositories that will be used by your projects. CI is all about running a pre-defined set of tasks (builders and tests) against your code, so Bamboo needs to know which repository to listen to for changes.

 

[Image: linked_repo]

We can then go ahead and create the project. This is a two-step action: we first define the project, then configure the plan’s details and tasks. By default, the source code checkout task is added – meaning that when the plan runs, the repository is checked out into Bamboo’s working directory. This is where Bamboo keeps your source code and performs all the magic (builds and tests):

 

[Images: bamboo_project, create_project2]

Hurray! Now that we have our green build, we are ready to start adding our custom tasks.

 

[Image: green_first]

Speaking of tasks, Bamboo categorises all tasks as builder, tests, deployment, source control or variables. Usually, the first task should be a source control task, followed by a builder to compile your code, before running it against some tests. Variables tasks inject variables into the build at whichever phase they are placed. Lastly, the deployment tasks handle delivery to your target environments.

While the tasks are categorised, there is no hard rule that prevents you from using a task at any phase of the pipeline. In fact, we will use them as the need arises.

[Image: bamboo_tasks]

So what are we going to do at the build level?

  • We will run the checkout task, which clones the repository to the working directory. This directory forms the workspace for the build. Because our repository contains the Docker files and all the resources we need, Bamboo can execute our Docker commands from this location.
  • We will run a script task that will start the Docker machine, install the aws-cli tools, build the Docker images and, if all is good, push the images to our ECR repository. Bamboo comes bundled with Docker tasks, but because our images are hosted in AWS ECR, we prefer to use scripts. Luckily, Bamboo’s bundled script task lets you execute a script as part of your build, either by referencing a file or writing it inline. This is how our second task will look:

 

echo "Starting docker"
docker-machine restart default
eval $(docker-machine env)
if aws --version | grep -q "aws-cli/"
then
    echo "aws is already installed"
else
    echo "Installing aws"
    curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
    unzip -o awscli-bundle.zip
    sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
fi
if grep -w $bamboo_aws_access_key_password ~/.aws/config
then
    echo "credentials in place"
else
    echo "Installing credential"
    echo "[profile valiantys]" >> ~/.aws/config
    echo "aws_access_key_id           = $bamboo_aws_access_key_password" >> ~/.aws/config
    echo "aws_secret_access_key       = $bamboo_aws_secret_key_password" >> ~/.aws/config
fi
echo "obtaining the key and logging in"
logger=$(aws --profile valiantys ecr get-login --region eu-west-1)
$logger
cd jira
docker build -t 1mindemo_jira .
docker tag 1mindemo_jira:latest 149476795744.dkr.ecr.eu-west-1.amazonaws.com/1mindemo_jira:latest
docker push 149476795744.dkr.ecr.eu-west-1.amazonaws.com/1mindemo_jira:latest
cd ../scripts
docker build -t 1mindemo_scripts .
docker tag 1mindemo_scripts:latest 149476795744.dkr.ecr.eu-west-1.amazonaws.com/1mindemo_scripts:latest
docker push 149476795744.dkr.ecr.eu-west-1.amazonaws.com/1mindemo_scripts:latest


Notice the use of the Bamboo variables $bamboo_aws_access_key_password and $bamboo_aws_secret_key_password. This works because we have defined the plan variables aws_access_key_password and aws_secret_key_password respectively. In Bamboo, naming a variable with the keyword password in it will mask its value both in the UI and in the build logs.
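
As a quick illustration: every plan variable is exposed to script tasks with a bamboo_ prefix, and password-named values come out masked wherever they would appear in the logs. The variable name below is the one we defined above:

# A plan variable named aws_access_key_password surfaces in a script task as:
echo "using access key $bamboo_aws_access_key_password"
# The build log renders the value masked because the variable name contains "password"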

 

[Image: bamboo_variables]

Our task list now looks like this:

 

[Image: bamboo_tasks2]

  • Generate artifacts for the JIRA container and for the scripts container. We won’t strictly need artifacts in our workflow, since we can just pull the latest image from ECR before deploying, but keeping artifacts is good practice, as they give you a snapshot of your deliverables at every build (one possible approach is sketched below).
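
If you do want a file artifact per build, one possible approach – just a sketch, with a file name of our own choosing – is to export the freshly built image into the working directory so that an artifact definition (say, with copy pattern *.tar) can capture it:

# Export the built image to a tarball the artifact definition can pick up
docker save -o "jira-image-${bamboo_buildNumber}.tar" 1mindemo_jira:latest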

 

[Image: artifacts]

At this stage, our build is complete and we need to create a deployment project that will run on every successful build of our plan.

[Image: deployment_project]

A window will then be provided for you to add an environment to run your deployments in. You will notice that the artifacts from the build plan have been picked up. Like our build plan, the deployment plan comes with two tasks by default: the clean working directory task and the artifact download task. This means Bamboo will clear the deployment working directory of remnants of the previous deployment, then download artifacts from the build plan in preparation for delivery to your environment. We will add some custom tasks that act on the artifacts. But as I said earlier, we don’t need the artifacts in our workflow.

 

[Image: default_deployment]

At this stage, we will add tasks to create our AWS cluster, create a CloudFormation stack and create the ECS service. This is the point where we will use the help of the add-ons from Utoolity. They have two add-ons we can use:

  1.  Identity Federation for AWS, which allows you to store AWS access credentials for use in your tasks
  2.  Tasks for AWS, which comes with dozens of tasks to interact with AWS. Here are some examples:

 

[Image: aws_tasks]

Now let’s add more tasks to our environment!

Amazon ECS Cluster

Notice how we parameterised the cluster name with default build variables – this means our clusters will always be tagged something like 1mindemo-15. We will keep this naming pattern throughout. Also notice that we prefer to provide the access credentials via build variables.

[Image: create_cluster]
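
For reference, the same step in plain AWS CLI terms is a single call; a minimal sketch, assuming the valiantys profile from our build script:

# CLI equivalent of the add-on's "Amazon ECS Cluster" task
aws --profile valiantys ecs create-cluster \
    --cluster-name "1mindemo-${bamboo_buildNumber}" \
    --region eu-west-1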

AWS CloudFormation Stack

CloudFormation is a service that will help us start an EC2 instance pre-configured with Docker, along with security groups that open the required ports. So basically we need to provide a template for this and add parameters.

 

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "AWS CloudFormation template to create resources required to run tasks on an ECS cluster.",
  "Mappings": {
    "VpcCidrs": {
      "vpc": {"cidr" : "10.0.0.0/16"},
      "pubsubnet1": {"cidr" : "10.0.0.0/24"},
      "pubsubnet2": {"cidr" :"10.0.1.0/24"}
    }
  },
  "Parameters": {
    "EcsAmiId": {
      "Type": "String",
      "Description": "ECS EC2 AMI id",
      "Default": ""
    },
    "EcsInstanceType": {
      "Type": "String",
      "Description": "ECS EC2 instance type",
      "Default": "t2.medium",
      "AllowedValues": [
        "t2.nano",
        "t2.micro",
        "t2.small",
        "t2.medium",
        "t2.large",
        "m3.medium",
        "m3.large",
        "m3.xlarge",
        "m3.2xlarge",
        "m4.large",
        "m4.xlarge",
        "m4.2xlarge",
        "m4.4xlarge",
        "m4.10xlarge",
        "c4.large",
        "c4.xlarge",
        "c4.2xlarge",
        "c4.4xlarge",
        "c4.8xlarge",
        "c3.large",
        "c3.xlarge",
        "c3.2xlarge",
        "c3.4xlarge",
        "c3.8xlarge",
        "r3.large",
        "r3.xlarge",
        "r3.2xlarge",
        "r3.4xlarge",
        "r3.8xlarge",
        "i2.xlarge",
        "i2.2xlarge",
        "i2.4xlarge",
        "i2.8xlarge",
        "g2.2xlarge",
        "g2.8xlarge",
        "d2.xlarge",
        "d2.2xlarge",
        "d2.4xlarge",
        "d2.8xlarge"
      ],
      "ConstraintDescription": "must be a valid EC2 instance type."
    },
    "KeyName": {
      "Type": "AWS::EC2::KeyPair::KeyName",
      "Description": "Optional - Name of an existing EC2 KeyPair to enable SSH access to the ECS instances",
      "Default": ""
    },
    "VpcId": {
      "Type": "String",
      "Description": "Optional - VPC Id of existing VPC. Leave blank to have a new VPC created",
      "Default": "",
      "AllowedPattern": "^(?:vpc-[0-9a-f]{8}|)$",
      "ConstraintDescription": "VPC Id must begin with 'vpc-' or leave blank to have a new VPC created"
    },
    "SubnetIds": {
      "Type": "CommaDelimitedList",
      "Description": "Optional - Comma separated list of two (2) existing VPC Subnet Ids where ECS instances will run.  Required if setting VpcId.",
      "Default": ""
    },
    "AsgMaxSize": {
      "Type": "Number",
      "Description": "Maximum size and initial Desired Capacity of ECS Auto Scaling Group",
      "Default": "1"
    },
    "SecurityGroup": {
      "Type": "String",
      "Description": "Optional - Existing security group to associate the container instances. Creates one by default.",
      "Default": ""
    },
    "SourceCidr": {
      "Type": "String",
      "Description": "Optional - CIDR/IP range for EcsPort - defaults to 0.0.0.0/0",
      "Default": "0.0.0.0/0"
    },
    "EcsPort" : {
      "Type" : "String",
      "Description" : "Optional - Security Group port to open on ECS instances - defaults to port 80",
      "Default" : "80"
    },
    "VpcAvailabilityZones": {
      "Type": "CommaDelimitedList",
      "Description": "Optional - Comma-delimited list of VPC availability zones in which to create subnets.  Required if setting VpcId.",
      "Default": ""
    },
    "EcsCluster" : {
      "Type" : "String",
      "Description" : "ECS Cluster Name",
      "Default" : "default"
    }
  },
  "Conditions": {
    "CreateVpcResources": {
      "Fn::Equals": [
        {
          "Ref": "VpcId"
        },
        ""
      ]
    },
    "CreateSecurityGroup": {
      "Fn::Equals": [
        {
          "Ref": "SecurityGroup"
        },
        ""
      ]
    },
    "CreateEC2LCWithKeyPair": {
      "Fn::Not": [
        {
          "Fn::Equals": [
            {
              "Ref": "KeyName"
            },
            ""
          ]
        }
      ]
    },
    "CreateEC2LCWithoutKeyPair": {
      "Fn::Equals": [
        {
          "Ref": "KeyName"
        },
        ""
      ]
    },
    "UseSpecifiedVpcAvailabilityZones": {
      "Fn::Not": [
        {
          "Fn::Equals": [
            {
              "Fn::Join": [
                "",
                {
                  "Ref": "VpcAvailabilityZones"
                }
              ]
            },
            ""
          ]
        }
      ]
    }
  },
  "Resources": {
    "Vpc": {
      "Condition": "CreateVpcResources",
      "Type": "AWS::EC2::VPC",
      "Properties": {
        "CidrBlock": {
          "Fn::FindInMap": ["VpcCidrs", "vpc", "cidr"]
        }
      }
    },
    "PubSubnetAz1": {
      "Condition": "CreateVpcResources",
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "Vpc"
        },
        "CidrBlock": {
          "Fn::FindInMap": ["VpcCidrs", "pubsubnet1", "cidr"]
        },
        "AvailabilityZone": {
          "Fn::If": [
            "UseSpecifiedVpcAvailabilityZones",
            {
              "Fn::Select": [
                "0",
                {
                  "Ref": "VpcAvailabilityZones"
                }
              ]
            },
            {
              "Fn::Select": [
                "0",
                {
                  "Fn::GetAZs": {
                    "Ref": "AWS::Region"
                  }
                }
              ]
            }
          ]
        }
      }
    },
    "PubSubnetAz2": {
      "Condition": "CreateVpcResources",
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "Vpc"
        },
        "CidrBlock": {
          "Fn::FindInMap": ["VpcCidrs", "pubsubnet2", "cidr"]
        },
        "AvailabilityZone": {
          "Fn::If": [
            "UseSpecifiedVpcAvailabilityZones",
            {
              "Fn::Select": [
                "1",
                {
                  "Ref": "VpcAvailabilityZones"
                }
              ]
            },
            {
              "Fn::Select": [
                "1",
                {
                  "Fn::GetAZs": {
                    "Ref": "AWS::Region"
                  }
                }
              ]
            }
          ]
        }
      }
    },
    "InternetGateway": {
      "Condition": "CreateVpcResources",
      "Type": "AWS::EC2::InternetGateway"
    },
    "AttachGateway": {
      "Condition": "CreateVpcResources",
      "Type": "AWS::EC2::VPCGatewayAttachment",
      "Properties": {
        "VpcId": {
          "Ref": "Vpc"
        },
        "InternetGatewayId": {
          "Ref": "InternetGateway"
        }
      }
    },
    "RouteViaIgw": {
      "Condition": "CreateVpcResources",
      "Type": "AWS::EC2::RouteTable",
      "Properties": {
        "VpcId": {
          "Ref": "Vpc"
        }
      }
    },
    "PublicRouteViaIgw": {
      "Condition": "CreateVpcResources",
      "DependsOn": "AttachGateway",
      "Type": "AWS::EC2::Route",
      "Properties": {
        "RouteTableId": {
          "Ref": "RouteViaIgw"
        },
        "DestinationCidrBlock": "0.0.0.0/0",
        "GatewayId": {
          "Ref": "InternetGateway"
        }
      }
    },
    "PubSubnet1RouteTableAssociation": {
      "Condition": "CreateVpcResources",
      "Type": "AWS::EC2::SubnetRouteTableAssociation",
      "Properties": {
        "SubnetId": {
          "Ref": "PubSubnetAz1"
        },
        "RouteTableId": {
          "Ref": "RouteViaIgw"
        }
      }
    },
    "PubSubnet2RouteTableAssociation": {
      "Condition": "CreateVpcResources",
      "Type": "AWS::EC2::SubnetRouteTableAssociation",
      "Properties": {
        "SubnetId": {
          "Ref": "PubSubnetAz2"
        },
        "RouteTableId": {
          "Ref": "RouteViaIgw"
        }
      }
    },
    "EcsSecurityGroup": {
      "Condition": "CreateSecurityGroup",
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "ECS Allowed Ports",
        "VpcId": {
          "Fn::If": [
            "CreateVpcResources",
            {
              "Ref": "Vpc"
            },
            {
              "Ref": "VpcId"
            }
          ]
        },
        "SecurityGroupIngress" : [ {
            "IpProtocol" : "tcp",
            "FromPort" : { "Ref" : "EcsPort" },
            "ToPort" : { "Ref" : "EcsPort" },
            "CidrIp" : { "Ref" : "SourceCidr" } 
        } ]
      }
    },
    "EcsInstancePolicy": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "ec2.amazonaws.com"
                ]
              },
              "Action": [
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path": "/",
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
        ]
      }
    },
    "EcsInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": {
        "Path": "/",
        "Roles": [
          {
            "Ref": "EcsInstancePolicy"
          }
        ]
      }
    },
    "EcsInstanceLc": {
      "Condition": "CreateEC2LCWithKeyPair",
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
	"ImageId": { "Ref" : "EcsAmiId" },
        "InstanceType": {
          "Ref": "EcsInstanceType"
        },
        "AssociatePublicIpAddress": true,
        "IamInstanceProfile": {
          "Ref": "EcsInstanceProfile"
        },
        "KeyName": {
          "Ref": "KeyName"
        },
        "SecurityGroups": {
          "Fn::If": [
            "CreateSecurityGroup",
            [ {
              "Ref": "EcsSecurityGroup"
            } ],
            [ {
              "Ref": "SecurityGroup"
            } ]
          ]
        },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [
              "",
              [
                "#!/bin/bash\n",
                "echo ECS_CLUSTER=",
                {
                  "Ref": "EcsCluster"
                },
                " >> /etc/ecs/ecs.config\n"
              ]
            ]
          }
        }
      }
    },
    "EcsInstanceLcWithoutKeyPair": {
      "Condition": "CreateEC2LCWithoutKeyPair",
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
	"ImageId": { "Ref" : "EcsAmiId" },
        "InstanceType": {
          "Ref": "EcsInstanceType"
        },
        "AssociatePublicIpAddress": true,
        "IamInstanceProfile": {
          "Ref": "EcsInstanceProfile"
        },
        "SecurityGroups": {
          "Fn::If": [
            "CreateSecurityGroup",
            [ {
              "Ref": "EcsSecurityGroup"
            } ],
            [ {
              "Ref": "SecurityGroup"
            } ]
          ]
        },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [
              "",
              [
                "#!/bin/bash\n",
                "echo ECS_CLUSTER=",
                {
                  "Ref": "EcsCluster"
                },
                " >> /etc/ecs/ecs.config\n"
              ]
            ]
          }
        }
      }
    },
    "EcsInstanceAsg": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "VPCZoneIdentifier": {
          "Fn::If": [
            "CreateVpcResources",
            [
              {
                "Fn::Join": [
                  ",",
                  [
                    {
                      "Ref": "PubSubnetAz1"
                    },
                    {
                      "Ref": "PubSubnetAz2"
                    }
                  ]
                ]
              }
            ],
            {
              "Ref": "SubnetIds"
            }
          ]
        },
        "LaunchConfigurationName": {
          "Fn::If": [
            "CreateEC2LCWithKeyPair",
            {
              "Ref": "EcsInstanceLc"
            },
            {
              "Ref": "EcsInstanceLcWithoutKeyPair"
            }
          ]
        },
        "MinSize": "1",
        "MaxSize": {
          "Ref": "AsgMaxSize"
        },
        "DesiredCapacity": {
          "Ref": "AsgMaxSize"
        },
        "Tags": [
          {
            "Key": "Name",
            "Value": {
              "Fn::Join": [
                "",
                [
                  "ECS Instance - ",
                  {
                    "Ref": "AWS::StackName"
                  }
                ]
              ]
            },
            "PropagateAtLaunch": "true"
          }
        ]
      }
    }
  }
}

 

[Image: create_cloudformation]

Ensure you choose the Enable IAM option in the advanced settings. This is required to overcome the issue reported in https://utoolity.atlassian.net/browse/UAA-200, where the CloudFormation stack can only be created with the CAPABILITY_IAM capability.
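
To make that setting concrete, here is the equivalent stack creation from the CLI; the stack and template file names are placeholders of ours:

# The template creates IAM resources, so the capability must be acknowledged explicitly
aws --profile valiantys cloudformation create-stack \
    --stack-name 1mindemo-ecs \
    --template-body file://ecs-cluster.json \
    --parameters ParameterKey=EcsCluster,ParameterValue="1mindemo-${bamboo_buildNumber}" \
    --capabilities CAPABILITY_IAM \
    --region eu-west-1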

Amazon ECS Service

This is the service that pulls the containers and starts them on our EC2 instance.
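
In CLI terms, this task boils down to registering a task definition and creating a service from it; a hedged sketch with illustrative names (the task definition JSON is not shown in this series):

# Register the task definition, then run one copy of it as a service on our cluster
aws --profile valiantys ecs register-task-definition \
    --cli-input-json file://1mindemo-task.json --region eu-west-1
aws --profile valiantys ecs create-service \
    --cluster "1mindemo-${bamboo_buildNumber}" \
    --service-name 1mindemo-service \
    --task-definition 1mindemo-jira \
    --desired-count 1 \
    --region eu-west-1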

 

[Image: ecs_service]

Finally, we need a way to trigger a build of the deployment project. Recommended practice is to run the deployment on every successful build of the linked build plan. The build plan itself can be triggered by a commit to the repository, on a schedule, manually, or even from other applications like JIRA post functions. You can trigger a build of the plan and watch the deployment project get picked up immediately with a green build. If all is green, check your AWS console for the ECS service and the fully started EC2 instance.

 

[Image: bamboo_all_green]

Using built-ins

We have seen how we can use the add-ons to start AWS services from within Bamboo. While this is easy to configure, the add-ons aren’t free, so you’ll have to spend some extra money. The good news is that Bamboo comes bundled with the script task.

We’ve seen this earlier in the build plan when pushing images to ECR – script tasks are powerful enough to script anything you want, as long as it runs smoothly on the shell, plus you can substitute build variables. So what we are going to do is remove or disable all AWS-related tasks in our deployment project in favour of script tasks that do everything from the shell. This is how my task list looks after disabling the AWS tasks:

[Image: aws_disabled]

And the contents of the scripts:

 

# Install the aws-cli bundle if it is not already present
if aws --version | grep -q "aws-cli/"
then
    echo "aws is already installed"
else
    echo "Installing aws"
    curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
    unzip -o awscli-bundle.zip
    sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
fi
# Check for the access key via the masked Bamboo variable rather than a hard-coded key
if grep -w "$bamboo_aws_access_key_password" ~/.aws/config
then
    echo "credentials in place"
else
    echo "Installing credential"
    echo "[profile valiantys]" >> ~/.aws/config
    echo "aws_access_key_id           = $bamboo_aws_access_key_password" >> ~/.aws/config
    echo "aws_secret_access_key       = $bamboo_aws_secret_key_password" >> ~/.aws/config
fi

 

# Install ecs-cli if missing (this grabs the macOS build; use ecs-cli-linux-amd64-latest on a Linux agent)
if ecs-cli --version | grep -q "ecs-cli version"
then
    echo "ecs is already installed"
else
    echo "Installing ecs"
    sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-darwin-amd64-latest
    sudo chmod +x /usr/local/bin/ecs-cli
fi
# Rebuild the ecs-cli configuration from scratch for this build's cluster
mkdir -p ~/.ecs
rm -f ~/.ecs/config
echo "[ecs]" >> ~/.ecs/config
echo "cluster                     = 1mindemo-$bamboo_buildNumber" >> ~/.ecs/config
echo "aws_profile                 = " >> ~/.ecs/config
echo "region                      = eu-west-1" >> ~/.ecs/config
echo "aws_access_key_id           = $bamboo_aws_access_key_password" >> ~/.ecs/config
echo "aws_secret_access_key       = $bamboo_aws_secret_key_password" >> ~/.ecs/config
echo "compose-project-name-prefix = ecscompose-" >> ~/.ecs/config
echo "compose-service-name-prefix = ecscompose-service-" >> ~/.ecs/config
echo "cfn-stack-name-prefix       = amazon-ecs-cli-setup-" >> ~/.ecs/config
#ecs-cli configure --cluster $bamboo_ManualBuildTriggerReason_userName cluster --region eu-west-1 --access-key $bamboo_aws_access_key_password --secret-key $bamboo_aws_secret_key_password
# Bring the cluster infrastructure up, then start the services from the compose file
ecs-cli up --keypair onemindemo --capability-iam --size 1 --instance-type t2.medium --security-group sg-91e9a1f6 --vpc vpc-24c02841 --subnets subnet-27be7c42,subnet-ba4fa1cd,subnet-42494604
ecs-cli compose --file docker-compose-ecs.yml service up
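
Once the plan has run green, a quick sanity check from any shell with the same ~/.ecs/config might be:

# Lists the containers running on the cluster configured in ~/.ecs/config
ecs-cli ps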

 

DevOps made easy with Docker, Bamboo and AWS

We have seen that we can do both CI and CD within Bamboo, and have shown just how powerful it is to separate CI from CD. This is an important feature that Bamboo boasts over other continuous integration software. In the design of our workflow, we used two methods to achieve the same results – paid add-ons versus scripting the whole workflow. The choice between them is a tradeoff between financial investment and the reliability of your scripts.

Now that you have a full workflow, you can take this a bit further and decide how your Bamboo builds should be triggered. You might want your builds to run on JIRA issue transitions, in which case you’ll have to write a post function that triggers a Bamboo build via a REST API POST, as sketched below.
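
A minimal sketch of such a trigger – the Bamboo URL, credentials and plan key DEMO-PLAN are all placeholders:

# Queue a new build of a plan through Bamboo's REST API
curl -X POST -u admin:secret \
    "https://bamboo.example.com/rest/api/latest/queue/DEMO-PLAN?executeAllStages=true"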

This has been a long and, I hope, exciting series of articles. We have touched on three big technologies – Docker, AWS and CI/CD with Bamboo. Each of them deserves a great deal of detail in its own right, so this is not an exhaustive guide but an introduction that goes a long way to show some best practices, along with what can easily be achieved with these tools.
