SSH certificates allow system administrators to let people SSH into machines without having to manage authorized keys on the servers.
In summary, you create a key pair to be used as a Certificate Authority (CA), and add the public key of that key pair to the server:
TrustedUserCAKeys /etc/ssh/my-root-ca.pub
Then, usually, a system administrator or an automated system creates certificates for the users that need to access the servers.
Those certificates are created with the CA’s private key, the user’s public key, a list of principals and a validity period.
That means that to SSH into a machine, the user needs both their private SSH key and the certificate. The certificate should also match their user (principal) and be used within the time frame in which it is valid.
All those characteristics can help system administrators manage user keys by, well, not managing them at all. They can create short-lived (short being used loosely here) certificates automatically and send them to the users, or use some system that creates a new certificate for each access (and those are in fact short lived, a couple of minutes only).
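For illustration, signing a certificate that is only valid for five minutes could look like this (the paths and the key ID here are placeholders; the full walkthrough below uses its own):
# sketch: sign a certificate that expires in 5 minutes
ssh-keygen -s /path/to/ca \
  -n carlos \
  -I some-unique-id \
  -V +5m \
  /path/to/carlos.pub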
This approach solves a couple of problems: you no longer have to distribute and clean up authorized keys on every server, and access expires automatically when the certificate does.
There is, though, a minor inconvenience: you’ll need to pass both your private key and the certificate when SSHing:
ssh -i /tmp/cert -i ~/.ssh/id_ed25519 host.foo.bar
Things like gcloud, okta, and I'm sure others, work around that by having an SSH wrapper, so you do:
wrapper ssh host.foo.bar
And the wrapper:
- fetches a certificate (from an API, for instance)
- saves it with the right permissions
- calls ssh passing it as a parameter
This avoids the hassle of having to manually call the API (or download a cert from a webpage), give it the right perms (0600), and then finally SSH into the target server.
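A minimal sketch of such a wrapper might look like the script below. The CA endpoint URL and its API are entirely hypothetical, just to illustrate the flow of "get cert, fix permissions, exec ssh":
#!/usr/bin/env bash
# hypothetical wrapper, invoked as: wrapper ssh host.foo.bar
set -euo pipefail

[ "${1:-}" = "ssh" ] && shift   # drop the "ssh" subcommand
host="$1"

cert="$(mktemp)"
# assumption: an internal CA service signs our public key and returns a certificate
curl -sf --data-binary @"$HOME/.ssh/id_ed25519.pub" \
  "https://ca.internal.example.com/sign" > "$cert"
chmod 0600 "$cert"

# call the real ssh with both the certificate and the private key
exec ssh -i "$cert" -i "$HOME/.ssh/id_ed25519" "$host"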
It's quite simple: you can have as many CAs as you want, down to one per server if you want to.
Permissions like "which users can sudo" should probably still be managed by some configuration management tool.
The main thing here is managing which machines a given user can SSH into.
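For example (the file names here are hypothetical), you could trust one CA on production servers and a different one on staging, simply by pointing TrustedUserCAKeys at different files on each group of machines:
# on production servers:
echo "TrustedUserCAKeys /etc/ssh/prod-ca.pub" > /etc/ssh/sshd_config.d/ca.conf
# on staging servers:
echo "TrustedUserCAKeys /etc/ssh/staging-ca.pub" > /etc/ssh/sshd_config.d/ca.conf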
Let’s try this out, shall we?
We can do it inside Docker, so we don’t risk locking anyone out while playing with it:
mkdir -p /tmp/ca
cd /tmp/ca
docker run -v $PWD:/tmp/ca -p 2222:22 -it --rm ubuntu
And let’s install OpenSSH Server in it, and create the CA’s key pair:
# within the container:
apt update
apt install -y --no-install-recommends openssh-server
cd /tmp/ca
ssh-keygen -f ca -t ed25519
Then, we need to set it up and restart the SSH server:
# within the container:
# trust the CA key and disable password authentication
echo "TrustedUserCAKeys /tmp/ca/ca.pub
PasswordAuthentication no" > /etc/ssh/sshd_config.d/ca.conf
useradd carlos --create-home
# restart ssh
service ssh restart
Let's first copy our public key to the server. In my case, I have it on my SSH agent, so it looks like this:
ssh-add -L > /tmp/ca/carlos.pub
chmod 0644 /tmp/ca/carlos.pub
# Or, you could copy the public key directly, like so:
cp ~/.ssh/id_ed25519.pub /tmp/ca/carlos.pub
Now, we can create a certificate using our public key, and the CA’s private key:
# within the container:
ssh-keygen \
-s /tmp/ca/ca \
-n carlos \
-I id-1 \
-V +1w \
/tmp/ca/carlos.pub
We then copy /tmp/ca/carlos-cert.pub to our host machine (in this setup it's already there, since /tmp/ca is a shared volume), and use both the certificate and the user's private key to SSH:
ssh \
-i /tmp/ca/carlos-cert.pub \
-i ~/.ssh/id_ed25519 \
-F /dev/null \
-o UserKnownHostsFile=/dev/null \
-p 2222 \
carlos@localhost id
You might need to pass the path to your private key as well; in my case, ssh gets it from the SSH agent, so the second -i line above is optional.
Since we're pointing UserKnownHostsFile at /dev/null, SSH shows the trust-on-first-use (TOFU) host key prompt every time. We can avoid that warning by issuing a host certificate, instructing the server to advertise it and our client to trust anything signed by our CA automatically.
First, let’s create a host certificate:
# within the container:
ssh-keygen -s /tmp/ca/ca \
-I "id-1" \
-h \
-z 1 \
/etc/ssh/ssh_host_ed25519_key.pub
Notice the -h, which makes this a host certificate instead of a user certificate. You can also set a validity period, principals (e.g. the host name) and more.
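For instance, the same signing step with a one-year validity and the host names as principals would look like this (just an illustration, host.foo.bar being a placeholder; localhost is included so our test still works):
# within the container (a variant, for illustration only):
ssh-keygen -s /tmp/ca/ca \
  -I "id-1" \
  -h \
  -n localhost,host.foo.bar \
  -V +52w \
  -z 1 \
  /etc/ssh/ssh_host_ed25519_key.pub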
Then, we set up the server to advertise it and restart the SSH server, like so:
# within the container:
echo "HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub" >> /etc/ssh/sshd_config.d/ca.conf
service ssh restart
Finally, on our client, we can create a new known hosts file that trusts the CA's public key:
echo "@cert-authority * CONTENTS_OF_/tmp/ca/ca.pub" > known_hosts
And then we SSH, passing it as a parameter:
ssh \
-i /tmp/ca/carlos-cert.pub \
-F /dev/null \
-o UserKnownHostsFile=./known_hosts \
-p 2222 \
carlos@localhost
And, sure enough, it works.
Just wanted to show you what said certificates look like:
# ssh-keygen -L -f /tmp/ca/carlos-cert.pub
/tmp/ca/carlos-cert.pub:
Type: ecdsa-sha2-nistp256-cert-v01@openssh.com user certificate
Public key: ECDSA-CERT SHA256:1mEjon99blahciC4T1Mqj6I06FeFWtl/NGwXXBzwSfk
Signing CA: ED25519 SHA256:dmfXhQCffjhtvyIwiFr1Elx/L5EO7/EvbpgCknL1Xg0 (using ssh-ed25519)
Key ID: "id-1"
Serial: 1
Valid: from 2022-03-04T01:00:00 to 2022-03-11T01:01:40
Principals:
carlos
Critical Options:
force-command /usr/bin/id
Extensions:
permit-X11-forwarding
permit-agent-forwarding
permit-port-forwarding
permit-pty
permit-user-rc
Things to notice:
- This certificate was created passing the -O force-command="/usr/bin/id" option, thus it has a critical option. In the code example above that is not the case.
- If you use force-command in your certificates, it is also a good idea to disable user-rc. If user-rc is permitted, the users might get around the force-command restrictions. You can disable it by passing -O no-user-rc.
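If we wanted the certificate from our walkthrough to behave like that, the signing step could look something like this (a sketch reusing the paths from earlier):
# within the container:
ssh-keygen -s /tmp/ca/ca \
  -n carlos \
  -I id-2 \
  -V +1w \
  -O force-command="/usr/bin/id" \
  -O no-user-rc \
  /tmp/ca/carlos.pub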
That's it. No sorcery required 🙂
Key points:
- users can SSH into servers using a certificate signed by a CA whose public key is set via TrustedUserCAKeys on the servers
- servers can present their own certificate to clients through the HostCertificate option
- clients can trust the CA by adding a @cert-authority entry to their known_hosts file to avoid TOFU warnings and man in the middle attacks