Integrate Amazon Elastic Container Registry with Jenkins
This is now possible using amazon-ecr-credential-helper as described in https://aws.amazon.com/blogs/compute/authenticating-amazon-ecr-repositories-for-docker-cli-with-credential-helper/.
The short of it is:
- Ensure that your Jenkins instance has the proper AWS credentials to pull/push with your ECR repository. These can be in the form of environment variables, a shared credential file, or an instance profile.
- Place the docker-credential-ecr-login binary in one of the directories on $PATH.
- Write the Docker configuration file under the home directory of the Jenkins user, for example /var/lib/jenkins/.docker/config.json, with the content:
{"credsStore": "ecr-login"}
- Install the Docker Build and Publish plugin and make sure that the jenkins user can contact the Docker daemon.
- Finally, create a project with a build step that publishes the Docker image.
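The setup above can be sketched as a one-time provisioning script. On a real master the home directory would be /var/lib/jenkins as in the example; a local demo directory is used here so the sketch runs anywhere:

```shell
#!/bin/bash
# Sketch of the one-time setup for the ECR credential helper.
# On a real master JENKINS_HOME would be /var/lib/jenkins; a local
# demo directory is used here so the sketch runs anywhere.
JENKINS_HOME="${JENKINS_HOME:-./jenkins-home-demo}"

# The helper binary must be somewhere on $PATH for Docker to find it
command -v docker-credential-ecr-login >/dev/null \
  || echo "note: docker-credential-ecr-login not on PATH yet"

# Tell Docker to delegate registry authentication to the helper
mkdir -p "${JENKINS_HOME}/.docker"
printf '{"credsStore": "ecr-login"}\n' > "${JENKINS_HOME}/.docker/config.json"
cat "${JENKINS_HOME}/.docker/config.json"
```

With this in place, `docker pull` and `docker push` against your ECR registry authenticate transparently through the helper, with no stored passwords.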
As @Connor McCarthy said, while we wait for Amazon to come up with a better solution for more permanent keys, in the meantime we'd need to generate the keys on the Jenkins server ourselves somehow.
My solution is to have a periodic job that updates the Jenkins credentials for ECR every 12 hours automatically, using the Groovy API. This is based on this very detailed answer, though I did a few things differently and I had to modify the script.
Steps:
- Make sure your Jenkins master can access the required AWS API. In my setup the Jenkins master is running on EC2 with an IAM role, so I just had to add the permission ecr:GetAuthorizationToken to the server role. [update] To get pushes to complete successfully, you'd also need to grant these permissions: ecr:InitiateLayerUpload, ecr:UploadLayerPart, ecr:CompleteLayerUpload, ecr:BatchCheckLayerAvailability and ecr:PutImage. Amazon has a built-in policy that offers these capabilities, called AmazonEC2ContainerRegistryPowerUser.
- Make sure that the AWS CLI is installed on the master. In my setup, with the master running in a Debian Docker container, I just added this shell build step to the key generation job:
dpkg -l python-pip >/dev/null 2>&1 || sudo apt-get install python-pip -y; pip list 2>/dev/null | grep -q awscli || pip install awscli
- Install the Groovy plugin, which allows you to run a Groovy script as part of the Jenkins system.
- In the credentials screen, look for your AWS ECR key, click "Advanced" and record its "ID". For this example I'm going to assume it is "12345".
- Create a new job with a periodic launch of 12 hours, and add a "System Groovy script" build step with the following script:
import jenkins.model.*
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl

def changePassword = { username, new_password ->
    def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
        com.cloudbees.plugins.credentials.common.StandardUsernameCredentials.class,
        Jenkins.instance)
    def c = creds.findResult { it.username == username ? it : null }
    if (c) {
        println "found credential ${c.id} for username ${c.username}"
        def credentials_store = Jenkins.instance.getExtensionList(
            'com.cloudbees.plugins.credentials.SystemCredentialsProvider'
        )[0].getStore()
        def result = credentials_store.updateCredentials(
            com.cloudbees.plugins.credentials.domains.Domain.global(),
            c,
            new UsernamePasswordCredentialsImpl(c.scope, "12345", c.description, c.username, new_password))
        if (result) {
            println "password changed for ${username}"
        } else {
            println "failed to change password for ${username}"
        }
    } else {
        println "could not find credential for ${username}"
    }
}

println "calling AWS for docker login"
def prs = "/usr/local/bin/aws --region us-east-1 ecr get-login".execute()
prs.waitFor()
def logintext = prs.text
if (prs.exitValue()) {
    println "Got error from aws cli"
    throw new Exception()
} else {
    def password = logintext.split(" ")[5]
    println "Updating password"
    changePassword('AWS', password)
}
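To see why the script takes token number 5, note that `aws ecr get-login` prints a complete `docker login` command line, so splitting it on spaces puts the password at index 5. A quick check against a sample line (the password and registry below are made-up placeholders, not real values):

```shell
#!/bin/bash
# Sample of what `aws ecr get-login` prints; the password and registry
# are hypothetical placeholders.
LOGIN_LINE='docker login -u AWS -p SAMPLEPASSWORD -e none https://123456789012.dkr.ecr.us-east-1.amazonaws.com'

# Index 5 in the Groovy split is the sixth space-separated token,
# i.e. the argument that follows -p
PASSWORD=$(echo "${LOGIN_LINE}" | cut -d' ' -f6)
echo "${PASSWORD}"
# prints SAMPLEPASSWORD
```

If a future CLI version changes the shape of that command line, the index would need to be adjusted accordingly.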
Please note:
- The hard-coded string "AWS" is used as the username for the ECR credentials - this is how ECR works. If you have multiple credentials with the username "AWS", you'd need to update the script to locate the credentials based on the description field or something similar.
- You must use the real ID of your real ECR key in the script, because the credentials API replaces the credentials object with a new object instead of just updating it, and the binding between the Docker build step and the key is by the ID. If you use the value null for the ID (as in the answer I linked above), a new ID will be created and the credentials setting in the Docker build step will be lost.
And that's it - the script should be able to run every 12 hours and refresh the ECR credentials, and we can continue to use the Docker plugins.
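For reference, a minimal inline IAM policy granting the permissions listed above might look like the following sketch (resources are left as `*` for brevity; `ecr:GetAuthorizationToken` must be granted on `*` in any case, while the push permissions could be scoped to specific repository ARNs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage"
      ],
      "Resource": "*"
    }
  ]
}
```

Attaching the managed AmazonEC2ContainerRegistryPowerUser policy achieves the same result with less maintenance.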
I was looking into this exact same issue too. I didn't come up with the answer either of us was looking for, but I was able to create a workaround with shell scripting. Until AWS comes out with a better solution to ECR credentials, I plan on doing something along these lines.
I replaced the Docker Build and Publish step of the Jenkins job with an Execute Shell step. I used the following script (could probably be written better) to build and publish my container to ECR. Replace the variables in < > brackets as needed:
#!/bin/bash
#Variables
REG_ADDRESS="<your ECR Registry Address>"
REPO="<your ECR Repository>"
IMAGE_VERSION="v_"${BUILD_NUMBER}
WORKSPACE_PATH="<path to the workspace directory of the Jenkins job>"
#Login to ECR Repository
LOGIN_STRING=`aws ecr get-login --region us-east-1`
${LOGIN_STRING}
#Build the container
cd ${WORKSPACE_PATH}
docker build -t ${REPO}:${IMAGE_VERSION} .
#Tag the build with the registry address and BUILD_NUMBER version
docker tag ${REPO}:${IMAGE_VERSION} ${REG_ADDRESS}/${REPO}:${IMAGE_VERSION}
#Push the build
docker push ${REG_ADDRESS}/${REPO}:${IMAGE_VERSION}
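The image reference the script tags and pushes is simply the registry address joined to the repository name and version tag. A quick sanity check of that composition, with made-up sample values standing in for the < > placeholders:

```shell
#!/bin/bash
# Hypothetical sample values standing in for the < > placeholders above
REG_ADDRESS="123456789012.dkr.ecr.us-east-1.amazonaws.com"
REPO="myapp"
BUILD_NUMBER="42"   # supplied by Jenkins in a real build
IMAGE_VERSION="v_"${BUILD_NUMBER}

# This is the full reference that `docker tag` and `docker push` operate on
echo "${REG_ADDRESS}/${REPO}:${IMAGE_VERSION}"
# prints 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v_42
```

Using ${BUILD_NUMBER} in the tag means every Jenkins build produces a distinct, traceable image in the repository.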