I like launching AWS EC2 instances on inaccessible private subnets for safety. That makes connecting with Ansible over SSH a pain. Instead of using a bastion/jump host or bringing up a VPN, here’s a handy procedure that supports a dynamic collection of client configurations: it uses the AWS CLI to set up an SSH proxy. As long as the instance already has an instance profile with an attached policy that permits AWS Systems Manager Session Manager, this works great.
Install the AWS Systems Manager Session Manager plugin for the AWS CLI by running brew install --cask session-manager-plugin
on macOS. Installation instructions for lesser operating systems may be found in the AWS docs.
Given two clients, I configure sections for them in my ~/.ssh/config
with a suffix following the AWS EC2 InstanceId:
```
Host i-*.client-a
    User ec2-user
    IdentityFile ~/.ssh/client-a-servers
    ProxyCommand sh -c "aws ssm start-session --profile client-a --region us-east-1 --target $(echo %h | sed s/\.client-a//) --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

Host i-*.client-b
    User ec2-user
    IdentityFile ~/.ssh/client-b-servers
    ProxyCommand sh -c "aws ssm start-session --profile client-b --region us-southwest-8 --target $(echo %h | sed s/\.client-b//) --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```
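The only subtle part of those ProxyCommand lines is the sed expression, which strips the client suffix from the hostname SSH passes in as %h, leaving the bare instance ID that aws ssm expects as --target. A quick sketch of that step in isolation (the instance ID below is made up):

```shell
# SSH expands %h to the full host from the command line, e.g. "i-0123456789abcdef0.client-a".
host="i-0123456789abcdef0.client-a"

# Strip the ".client-a" suffix, leaving only the EC2 instance ID.
target=$(echo "$host" | sed 's/\.client-a//')

echo "$target"   # prints i-0123456789abcdef0
```

Because the Host pattern and the sed expression use the same suffix, each client section only ever sees (and strips) its own suffix.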
With that in place, I can connect to either client’s instances by adding a suffix to the AWS EC2 InstanceId. For example: ssh i-hexhexhexhexhexhe.client-b
Within my Ansible inventory and configuration, I can override the username, private key, and other SSH options when needed.
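For illustration, a hypothetical inventory group might look like the following — the group name, host aliases, and instance IDs are all made up, and the vars simply mirror (or override) what the SSH config above already provides:

```
[client_b_servers]
web1 ansible_host=i-0123456789abcdef0.client-b
db1  ansible_host=i-0fedcba9876543210.client-b

[client_b_servers:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=~/.ssh/client-b-servers
```

Since ansible_host carries the suffixed name, Ansible's SSH connection plugin picks up the matching Host block and its ProxyCommand automatically.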
I suppose I could also create a subshell to resolve tokens in the host value to dynamically choose the profile and region.
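As an untested sketch of that idea: if the suffix after the first dot is always the client name, and the AWS profile is named to match, plain shell parameter expansion could split %h into a target and a profile, collapsing the per-client sections into a single Host block. Everything here is hypothetical, including the assumption that the region can be resolved from the profile's AWS config rather than hard-coded:

```shell
# Sketch: split a host like "i-0123456789abcdef0.client-a" (SSH's %h)
# into the SSM target and the AWS profile. Passed as $1 for illustration.
host="$1"
target="${host%%.*}"    # everything before the first dot -> instance ID
profile="${host#*.}"    # everything after the first dot  -> client / profile name

echo "target=$target profile=$profile"
```

A ProxyCommand could then invoke a script like this under a single `Host i-*.*` pattern, passing $target to --target and $profile to --profile.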