AWS Elastic Beanstalk and Private Docker Hub Repos
Elastic Beanstalk makes it simple to deploy an application on AWS
infrastructure, including automatic scaling. When this works, it’s
good. When it doesn’t, it can be frustrating to debug, but we typically
have all the tools necessary to find the cause.
I’ve deployed an application on Elastic Beanstalk where the Docker
images were hosted in an EC2 Container Registry (ECR) repository. This
went well, but it was only for testing purposes, as I’d made
Beanstalk-specific changes to the Docker config file. Once this was
deployed and tests were successful, I needed to consolidate the changes
to the Docker config file and have Beanstalk pull from our Docker Hub
repository. This proved more difficult than expected.
Originally, to use an ECR repo, I just specified the fully qualified
image name, along the lines of
123456789.dkr.ecr.us-east-1.amazonaws.com/service:latest. This was
enough for the cluster to pull the image.
Later, when switching to a private Docker Hub repo, things changed.
For this, AWS says you must push a Docker credentials config file to
S3 in an older format, then reference it in your Dockerrun file.
Getting the credentials into the correct format is simple: strip out
an intermediate object (the “auths” key on the root object) and leave
its keys as keys of the root. See the docs at AWS for details:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-images-private
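That transformation can be sketched as follows. The file names here are examples (not names AWS mandates), and the auth value is a placeholder rather than real credentials:

```shell
# A sample newer-format Docker config (normally ~/.docker/config.json,
# as written by `docker login`; the auth value is a placeholder):
cat > config.json <<'EOF'
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "dXNlcjpwYXNz",
      "email": "user@example.com"
    }
  }
}
EOF

# Strip the intermediate "auths" object, leaving its keys as keys of
# the root object -- the older .dockercfg format Beanstalk expects:
python3 -c '
import json
cfg = json.load(open("config.json"))
json.dump(cfg["auths"], open("test.dockercfg", "w"), indent=2)
'
cat test.dockercfg
```

From there the resulting file gets pushed to S3 (e.g. with `aws s3 cp test.dockercfg s3://devops/test.dockercfg`) so the Dockerrun file can reference it.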
I tried configuring things as described by AWS. It didn’t
work. Deploying a new version meant to pull from a private Docker Hub
repo resulted in errors in the logs like:
[Instance: i-1234345] Command failed on instance. Return code: 1
Output: (TRUNCATED)…px/test-service not found: does not exist or no
pull access Failed to pull Docker image 500px/test-service:latest:
Error response from daemon: repository 500px/test-service not found:
does not exist or no pull access. Check snapshot logs for
details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh
failed. For more detail, check /var/log/eb-activity.log using console
or EB CLI.
Welp. That didn’t work.
I tried re-doing things a few times because you never know… and that
didn’t help.
Now to actually sort things out. We’ve got an instance ID, which leads
us to an EC2 instance we can SSH into (note that we’ll connect as the
ec2-user).
ssh ec2-user@<public DNS>
And have a look at the Docker config file:
cat ~/.dockercfg
Everything looks fine: the proper user is there. Yet the `docker pull` fails. What about the newer config location?
cat ~/.docker/config.json
Well, those are the old credentials from testing against the ECR
repo. Let’s just try that pull ourselves again:
docker pull 500px/test-service
Failing, as expected. What if we remove the old creds…
rm ~/.docker/config.json
docker pull 500px/test-service
Oh. So that worked.
So it turns out that this Elastic Beanstalk application was started
from the AWS sample application. It was then changed to pull a Docker
image from an ECR repository. That apparently created a Docker config
file with the correct credentials in the location newer versions of
Docker expect, ~/.docker/config.json. Switching to the image from
Docker Hub later required specifying a key at S3 containing the Docker
Hub credentials:
cat Dockerrun.aws.json
[…]
  "Authentication": {
    "Bucket": "devops",
    "Key": "test.dockercfg"
  },
[…]
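For context, the Authentication block sits alongside the image definition. A minimal version-1 Dockerrun.aws.json might look like the sketch below; the bucket, key, and image mirror the values from this post, while the port is a placeholder assumption:

```shell
# Sketch of a minimal single-container Dockerrun.aws.json (version 1).
# Bucket/key/image match this post; the container port is a placeholder.
cat > Dockerrun.aws.json <<'EOF'
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "500px/test-service:latest",
    "Update": "true"
  },
  "Authentication": {
    "Bucket": "devops",
    "Key": "test.dockercfg"
  },
  "Ports": [
    { "ContainerPort": "8080" }
  ]
}
EOF

# Sanity-check that the file is valid JSON before deploying:
python3 -m json.tool Dockerrun.aws.json
```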
Beanstalk then apparently pulls the credentials object from S3 and
stores it on the EC2 instance at the old Docker config location,
~/.dockercfg. When the Beanstalk deploy scripts
(/opt/elasticbeanstalk/hooks/appdeploy/) tried to pull the image,
Docker preferred the new config location and, when authentication
there failed, did not fall back to the old location. This all makes
sense but
definitely led to a bit of pain. Ideally the EC2 instance would be
cleaned up after each deploy, removing any credentials (or other
temporary data) to avoid such an issue.
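The precedence at the heart of the problem can be modeled in a few lines of shell. This is a sketch of the behavior observed here, not Docker’s actual implementation:

```shell
# Model of the credential lookup order observed in this post: the
# newer location wins outright; the legacy location is only consulted
# when the newer file is absent (not when auth in it fails).
docker_config_path() {
  if [ -f "$HOME/.docker/config.json" ]; then
    echo "$HOME/.docker/config.json"
  elif [ -f "$HOME/.dockercfg" ]; then
    echo "$HOME/.dockercfg"
  fi
}
```

With only ~/.dockercfg present (the file Beanstalk writes), the legacy path is used; as soon as a stale ~/.docker/config.json exists, it shadows the Beanstalk-written credentials entirely.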
So, I suppose the point here is that even with a system like Elastic
Beanstalk providing a great deal of automation to ease your
deployments, there may still be issues. But with a little
investigation, nothing is hidden from you, and simple debugging
practices will still serve you well.