ACME Certificate for Internal Server using Route 53

This procedure results in a TLS certificate issued by an ACME CA for an internal server using an AWS Route 53 Hosted Zone that is designated for ACME DNS verification.

Consider a company that has a domain registration of company.com with a legacy DNS provider that does not have an API for dynamic record creation. A private AWS Route 53 Hosted Zone exists for internal.company.com that is inaccessible from the public internet; there is no zone delegation from company.com to this zone. A TLS certificate is desired for app.internal.company.com using DNS verification. A public AWS Route 53 Hosted Zone exists for aws.company.com and an AWS IAM user has permission to add TXT records to that zone.

A static CNAME is created manually in the legacy DNS system for _acme-challenge.app.internal.company.com pointing to _acme-challenge.app.aws.company.com. When acme.sh is executed using the DNS verification method, it creates the challenge TXT record in the AWS Route 53 Hosted Zone for aws.company.com; the ACME CA follows the CNAME and queries it there.
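
Before running acme.sh, it's worth confirming the redirect is visible from the public internet. A quick check with dig, using the example hostnames above, might look like this:

dig +short _acme-challenge.app.internal.company.com CNAME
# should print: _acme-challenge.app.aws.company.com.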

First, let’s get set up to run acme.sh:

  • Create an acme_home directory at ~/.acme.sh.docker or another location of your choice (the sketch after this list does this). A directory named ~/.acme.sh.docker is suggested instead of the default location of ~/.acme.sh because the Docker container and a local installation of acme.sh can’t share the same configuration files.

  • Create an AWS IAM user that has permission to update AWS Route 53 only. An example Terraform module shows the necessary resources. The example below shows retrieving the AWS API access key using the 1Password CLI when performing an interactive experiment. It’s worth noting that the current version of acme.sh stores the AWS credentials in $acme_home/account.conf for later use. This is suboptimal and I hope an option to disable it is added.
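
Putting the setup together, a minimal sketch might look like the following; the 1Password item and field names in the op:// path are placeholders for whatever your vault actually uses:

# dedicated acme.sh home for the Docker container
acme_home="$HOME/.acme.sh.docker"
mkdir -p "$acme_home"

# pull the limited IAM user's credentials from 1Password
# (the op:// path is hypothetical; adjust to your vault)
export AWS_ACCESS_KEY_ID="$(op read 'op://Private/acme-route53/access-key-id')"
export AWS_SECRET_ACCESS_KEY="$(op read 'op://Private/acme-route53/secret-access-key')"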

Modify the values in the following script and execute it to register with an ACME CA:
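
The exact script will vary, but a minimal sketch of the registration step, assuming the official neilpang/acme.sh Docker image and Let’s Encrypt as the CA, could be:

# account email and CA are example values; substitute your own
acme_home="$HOME/.acme.sh.docker"
email="admin@company.com"

docker run --rm -it \
  -v "$acme_home":/acme.sh \
  neilpang/acme.sh \
  --register-account -m "$email" --server letsencrypt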

Then evaluate and modify the following to create a new TLS certificate:
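
As a sketch under the same assumptions: the dns_aws hook reads the AWS credentials from the environment, and --challenge-alias tells acme.sh to place the challenge TXT record under the alias name that the CNAME points at:

acme_home="$HOME/.acme.sh.docker"
hostname="app.internal.company.com"

docker run --rm -it \
  -v "$acme_home":/acme.sh \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  neilpang/acme.sh \
  --issue --dns dns_aws \
  -d "$hostname" \
  --challenge-alias app.aws.company.com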

If the procedure completes successfully, the key, certificate, and chain content will be located in $acme_home/$hostname.
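
One quick way to sanity-check the result is to list the directory and inspect the certificate; note that recent acme.sh versions may append a suffix such as _ecc to the directory name for ECC keys:

ls "$acme_home/$hostname"
openssl x509 -in "$acme_home/$hostname/fullchain.cer" -noout -subject -dates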

SSH to private EC2 instances using AWS Session Manager

I like launching AWS EC2 instances on inaccessible private subnets for safety. That makes connecting with Ansible over SSH a pain. Instead of using a bastion/jump host or bringing up a VPN, here’s a handy procedure that supports a dynamic collection of client configurations. It uses the AWS CLI to set up an SSH proxy. As long as the instance already has an Instance Profile with an attached policy that allows AWS Systems Manager Session Manager, this works great.

Install the AWS Systems Manager Session Manager plugin for AWS CLI by running brew install --cask session-manager-plugin on macOS. Installation instructions for lesser operating systems may be found in the AWS docs.
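
Running the plugin directly is a quick way to confirm the installation; it should print a short message saying the plugin was installed successfully:

session-manager-plugin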

Given two clients, I configure sections for them in my ~/.ssh/config with a suffix following the AWS EC2 InstanceId:

Host i-*.client-a
	User ec2-user
	IdentityFile ~/.ssh/client-a-servers
	ProxyCommand sh -c "aws ssm start-session --profile client-a --region us-east-1 --target $(echo %h | sed s/\.client-a//) --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

Host i-*.client-b
	User ec2-user
	IdentityFile ~/.ssh/client-b-servers
	ProxyCommand sh -c "aws ssm start-session --profile client-b --region us-southwest-8 --target $(echo %h | sed s/\.client-b//) --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

With that in place, I can connect to either client’s instances by adding a suffix to the AWS EC2 InstanceId. For example: ssh i-hexhexhexhexhexhe.client-b

Within my Ansible inventory and configuration, I can override the username, private key, and other SSH options when needed.
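
For example, an inventory entry might carry the suffixed InstanceId in ansible_host so the ProxyCommand above kicks in; the group name, host alias, and InstanceId here are made up:

[client_b_servers]
app1 ansible_host=i-0123456789abcdef0.client-b ansible_user=ec2-user ansible_ssh_private_key_file=~/.ssh/client-b-servers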

I suppose I could also create a subshell to resolve tokens in the host value to dynamically choose the profile and region.
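
A sketch of that idea, assuming the AWS profile name always matches the host suffix and sticking to a single region:

Host i-*.*
	User ec2-user
	ProxyCommand sh -c "aws ssm start-session --profile $(echo %h | cut -d. -f2) --region us-east-1 --target $(echo %h | cut -d. -f1) --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

The region could probably be derived too, for example with aws configure get region --profile <profile>, at the cost of an even longer ProxyCommand.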