I’ve always been somewhat of a GitHub believer but recently I’ve begun utilizing GitLab for a few different projects. Honestly, once you get over the changes in vernacular it’s a pretty nifty spot to host some of your code. Sure, it comes with all the bells and whistles you would expect with a git repository, but the built-in CI and Pipelining are really where I see the value. It’s just so, well, easy! That said, recently they have placed a limit on the number of “pipeline minutes” per month per project on their shared runners. Now, the limit is 2000 minutes, and honestly, that’s a lot of minutes. However, it only applies to pipelines executed on shared runners – local runners and Kubernetes clusters need not apply! Thus, my quest to get GitLab and AWS EKS walking and talking together began – and the following worked for me!

Before jumping right in though let’s take a quick look at scope. Within GitLab we can have a few different types of Kubernetes clusters to handle our CI:

  • Instance Clusters – These provide a home for CI based services for your complete instance of GitLab – All Groups, All Projects will have access to them
  • Group Clusters – These will, as you guessed, operate at the Group level – Group Clusters will only be available to projects within the specified group
  • Project Clusters – And finally, we can have a K8s cluster dedicated to a single project and only that project can utilize it.

I opted for a Group Cluster, but honestly, the process is the same for each – it all just depends on where you initiate the deployment. The process itself is fairly straightforward, and for the most part, is completely automated utilizing an AWS CloudFormation stack once we have a couple of required resources configured within our AWS environment.

Pre-Reqs – Creating IAM Roles and Policies

In order for GitLab to deploy and manage a K8s cluster within EKS we need to create a couple of different IAM Roles and policies for support: a provisioning role to handle the actual deployment of resources through the CloudFormation stack and a service role which allows EKS and K8s to manage all this madness. Before jumping into roles, however, let’s go ahead and create our policy.

Creating the provisioning policy

Head into IAM and select to create a new policy ( AWS Services -> IAM -> Policies -> Create policy) – Go ahead and skip straight to the JSON tab (ain’t nobody got time for Visual Editor). The following is the JSON recommended for the provisioning policy:

 {
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                 "autoscaling:*",
                 "cloudformation:*",
                 "ec2:*",
                 "eks:*",
                 "iam:*",
                 "ssm:*"
             ],
             "Resource": "*"
         }
     ]
 }
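If you’d rather script this than click through the console, the same policy document can be generated and fed to the AWS CLI. A minimal sketch (Python, stdlib only – the policy name `gitlab-eks-provisioning` in the comment is my own placeholder, not anything GitLab prescribes):

```python
import json

# The broad provisioning policy from above: full access to the six
# services GitLab's CloudFormation stack touches.
services = ("autoscaling", "cloudformation", "ec2", "eks", "iam", "ssm")
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [f"{svc}:*" for svc in services],
            "Resource": "*",
        }
    ],
}

# Write the document out so it can be passed to the AWS CLI, e.g.:
#   aws iam create-policy --policy-name gitlab-eks-provisioning \
#       --policy-document file://provisioning-policy.json
with open("provisioning-policy.json", "w") as f:
    json.dump(policy, f, indent=4)
```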

Now just by looking at the JSON we can see that this will pretty much give GitLab full access to all of the AutoScaling, CloudFormation, EC2, EKS, IAM and SSM resources within our environment – while this is the route I went, they do provide a more granular permission policy with the following JSON (Note: I couldn’t get anything to work with this 🙂)

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                 "autoscaling:CreateAutoScalingGroup",
                 "autoscaling:DescribeAutoScalingGroups",
                 "autoscaling:DescribeScalingActivities",
                 "autoscaling:UpdateAutoScalingGroup",
                 "autoscaling:CreateLaunchConfiguration",
                 "autoscaling:DescribeLaunchConfigurations",
                 "cloudformation:CreateStack",
                 "cloudformation:DescribeStacks",
                 "ec2:AuthorizeSecurityGroupEgress",
                 "ec2:AuthorizeSecurityGroupIngress",
                 "ec2:RevokeSecurityGroupEgress",
                 "ec2:RevokeSecurityGroupIngress",
                 "ec2:CreateSecurityGroup",
                 "ec2:CreateTags",
                 "ec2:DescribeImages",
                 "ec2:DescribeKeyPairs",
                 "ec2:DescribeRegions",
                 "ec2:DescribeSecurityGroups",
                 "ec2:DescribeSubnets",
                 "ec2:DescribeVpcs",
                 "eks:CreateCluster",
                 "eks:DescribeCluster",
                 "iam:AddRoleToInstanceProfile",
                 "iam:AttachRolePolicy",
                 "iam:CreateRole",
                 "iam:CreateInstanceProfile",
                 "iam:CreateServiceLinkedRole",
                 "iam:GetRole",
                 "iam:ListRoles",
                 "iam:PassRole",
                 "ssm:GetParameters"
             ],
             "Resource": "*"
         }
     ]
 }

Once you’ve decided on which route to go, give the policy a name and description and create it!

GitLab Policy

With our policy all created we can now jump into creating a role to attach it to…

Creating the Provisioning Role

GitLab is going to need access to your AWS environment and it will gain this through what is called the provisioning role. Now before jumping directly into IAM to create this we will need a couple of bits of information from GitLab, namely our Account ID along with an External ID that we can use to further restrict access to the role we are about to create. To grab this information, head into your GitLab group and select Kubernetes from the left-hand navigation menu. Then, click Add Kubernetes cluster and select Amazon EKS. Note: If deploying a project level cluster the process to get here is via Operations -> Kubernetes.

Retrieving your Account and External ID from GitLab

Now that we have the Account ID and External ID tucked away we can jump into the IAM Console and begin creating our provisioning (Cross Account) role (AWS Services -> IAM -> Roles -> Create role).

Within the role creation dialogue, select Another AWS account as the trusted entity and be sure to select Require external ID within the options. Go ahead and fill in the two fields with the information obtained earlier from GitLab.
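Behind the scenes, the “Another AWS account” plus “Require external ID” selections generate a trust policy along these lines. A sketch to show the shape of it – the account ID and external ID below are placeholders for the values GitLab showed you, not real values:

```python
import json

# Placeholders: substitute the Account ID and External ID GitLab displayed.
GITLAB_ACCOUNT_ID = "111111111111"
EXTERNAL_ID = "gitlab-external-id"

# Cross-account trust policy: lets the GitLab account assume this role,
# but only when it presents the matching external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{GITLAB_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=4))
```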

Creating IAM Provisioning Role

From the policies section, search for and attach the policy we created earlier.

Attaching Policy to Role

Walk through the rest of the wizard, tagging your resources as per your own tagging policies and naming your new role! Now, let’s do it all over again but differently for the service role.

Creating the Service Role

While the provisioning role we just created will be used to create the required resources, the service role will be used to manage them. Go through the process of creating a new role again, however this time selecting AWS service as the trusted entity, EKS as the service, and EKS – Cluster as the use case (see below).

Walk through the rest of the creation dialogue, naming and tagging everything accordingly, accepting the default policy of AmazonEKSClusterPolicy, and creating the role. Once the role has been created we need to go back in and edit it in order to attach another required policy. So go ahead and find your newly created service role within the list of roles, drill in, and select Attach policies – we will need to search the list and attach the AmazonEKSServicePolicy to our service role.
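For reference, the “AWS service / EKS” wizard selection boils down to a trust policy for `eks.amazonaws.com`, and the finished role ends up with the two AWS-managed policies attached. A sketch of both pieces:

```python
import json

# Trust policy the "AWS service -> EKS" wizard selection produces:
# it lets the EKS service itself assume the role.
service_trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "eks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# The two AWS-managed policies the finished service role needs attached.
managed_policies = [
    "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
    "arn:aws:iam::aws:policy/AmazonEKSServicePolicy",
]

print(json.dumps(service_trust, indent=4))
```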

In the end, our new service role should look as follows:

So with that we have completed the prerequisites of setting up our IAM roles and can head back into GitLab to perform the rest of the K8s deployment.

Deploying an EKS Cluster from GitLab

Remember that provisioning role we created a few steps back – well, we are going to need its associated ARN in order to tell GitLab what role it has access to. So before leaving IAM, go ahead and make note of the provisioning role’s ARN. Again, back in GitLab select Kubernetes from the left-hand navigation menu, then Add Kubernetes cluster, then Amazon EKS – copy/paste the provisioning role ARN into the appropriate spot and click Authenticate with AWS.

We now move on to the details page. Here we have to enter a variety of information but let me help break it down for you:

  • Kubernetes cluster name – pretty self-explanatory right 🙂
  • Environment scope – Here we can specify environments for our K8s clusters. This is really only useful if we plan to have more than one K8s cluster attached to a group/project. Basically, we could specify an environment variable in our project that would cause our CI to run in say a production cluster, or a test cluster – this is where that mapping from an environment variable to cluster takes place. My advice, leave it at * if you are only going to have one K8s cluster
  • Kubernetes version – well, yeah, the version
  • Service role – remember all that work we did creating a service role in IAM – well, select it here!
  • Region – Hey there, where in the world do you want to instantiate this EKS cluster!
  • Key pair name – This keypair will be used to instantiate and authenticate the EC2 K8s nodes that are sparked up
  • VPC – Here’s hoping you have setup a VPC already 🙂
  • Subnets – Subnets to use within the VPC
  • Security Group – Select which security group to apply to the EKS nodes
  • Instance type – How much money do you want to spend? J/K – how many resources do you think you will need here…
  • Number of nodes – I’d probably just leave this at 3, unless of course you are a CI maniac!!
  • GitLab-managed cluster – Do you want GitLab to manage all of this for you – or are you going to go into the K8s cluster and create namespaces and all that madness. Honestly, just leave this checked unless you know what you are doing!
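To make the environment scope mapping a little more concrete: suppose a group had two clusters, one scoped to `production` and one scoped to `*`. A job’s `environment:` keyword then decides which cluster it lands on. A hypothetical `.gitlab-ci.yml` sketch (the job names and environment names here are my own, just for illustration):

```yaml
deploy_test:
  stage: deploy
  script:
    - echo "Runs against the cluster scoped to *"
  environment:
    name: test

deploy_prod:
  stage: deploy
  script:
    - echo "Runs against the cluster scoped to production"
  environment:
    name: production
```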

Once you are happy with all your choices select Create Kubernetes cluster. At this point give yourself a pat on the back and take about a 30-minute break – deploying K8s on EKS along with all of the associated configuration takes a little bit – go enjoy a beverage of choice!

Now depending on the version of GitLab you are running you may have to do some more work installing Helm before continuing on. If using an older version of GitLab, Helm will be listed as an installable application from the Applications tab of your K8s cluster – if on a newer version of GitLab, Helm Tiller automatically gets provisioned into the GitLab namespace of the cluster – so no need to do more. Either way, once Helm is installed head to the Applications tab – here you can go ahead and click Install on the GitLab Runner application, as well as any others you wish to have running in your K8s cluster.

Once GitLab Runner has been instantiated you are good to go to start using your AWS EKS K8s cluster to run your CI!

Please note that this is an EKS K8s cluster managed by GitLab – meaning you won’t see any managed nodes within the EKS settings of the cluster on AWS. If diving into the K8s cluster is something you are interested in then there is a little work you need to do in order to authorize an IAM account to assume the K8s role – But that is certainly a topic for another blog 🙂

Although the process of adding an EKS K8s cluster to run GitLab CI is not a difficult one, I’ve noticed some variances in the documentation on how to set this up – and had to run through the process a couple of times before I finally landed on the procedure above – so I thought I’d share in hopes of maybe pushing others in the right direction! Happy Coding!