Terraform Linux Provider
In a previous post, I introduced Terraform as a tool for managing infrastructure. I alluded to how Terraform works on state in an abstract sense, and in this post I’ll introduce an unorthodox application of that idea. Because everything in linux is a file, linux can be managed by manipulating the state of the filesystem. And what did I say Terraform was good at? That’s right - managing state. So we can use Terraform to model linux’s configuration and manage it.
Creating a Provider Plugin
Actually, I’m not going to cover the general basics of writing a Terraform provider; that much is covered well in the guide. The Golang code is frankly quite boring - read in a data structure parsed from the HCL and make some API calls. This post will stay a bit more conceptual and show the basics of what you can do with the provider I’ve written.
Connecting to the Host
First, in order to modify the filesystem, we need a way to connect to the machine. The SFTP protocol includes a type of RPC that enables managing a remote filesystem, but I’m not convinced it adds much benefit over plain SSH. I’m working with linux systems that have a shell, and I already know enough of the shell to do what I want. When something goes wrong, I want to try repeating it via a shell command. Sending readable text instead of a binary format will only be a drop in the bucket in terms of bytes on the network.
```hcl
provider "linux" {
  address         = "127.0.0.2:22"
  private_key_pem = file("/home/user/.ssh/id_rsa")
}
```
That’s all we’ll need to configure this provider - the required information for establishing an SSH connection. This does assume that the public key already exists on the host, but many cloud providers support preloading such a key. Here’s an example using the TLS and DigitalOcean providers:
```hcl
provider "tls" {}

resource "tls_private_key" "infra" {
  algorithm = "RSA"
}

provider "digitalocean" {}

resource "digitalocean_ssh_key" "infra" {
  name       = "infra"
  public_key = tls_private_key.infra.public_key_openssh
}

resource "digitalocean_droplet" "main" {
  name = "main"
  # ...
  ssh_keys = [digitalocean_ssh_key.infra.id]
}

provider "linux" {
  address         = "${digitalocean_droplet.main.ipv4_address}:22"
  private_key_pem = tls_private_key.infra.private_key_pem
}
```
Now this is starting to look like something I can bring up and tear down with low effort and high reproducibility.
Working with Files
Simply creating a cloud VM is the 21st century equivalent of watching grass grow. Let’s put some software on it! Say I want to run NGINX with a custom configuration file.
```hcl
resource "linux_package" "nginx" {
  name = "nginx"
}

resource "linux_file" "nginx_conf" {
  src      = "file://${path.module}/nginx.conf"
  dest     = "/etc/nginx/nginx.conf"
  checksum = "c53249c2d61dbbba2771079549c3bd7bb17de6923290a3dd2fa42eb334d24cce"

  depends_on = [linux_package.nginx]
}
```
This is the equivalent of running a `dnf install` and then `scp`ing the file to the remote host. Want to set permissions (`chmod`/`chown`) or even set an SELinux context (`chcon`)? No problem, there are resource attributes for those too.
Now of course on its own, this isn’t particularly powerful - any other infrastructure tool could do the same thing. The power comes from being able to combine configuration management of the system with that of the cloud provider. And if you’re already bullish on the way Terraform handles state, you may enjoy seeing it applied to this area as well. Like any other resource, Terraform knows what to do when the definition is modified or deleted.
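As a rough sketch of what that combination can look like, a file on the host could be rendered directly from another provider’s outputs. The `content` attribute here is an assumption on my part, mirroring the resources from the examples above:

```hcl
# Hypothetical sketch: write a cloud resource's output into a file on the host.
# `content` is an assumed attribute name for inline file contents.
resource "linux_file" "app_env" {
  content = "PUBLIC_IP=${digitalocean_droplet.main.ipv4_address}\n"
  dest    = "/etc/app/env"
}
```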
Other than packages and files, the provider also implements resources for a few other linux “primitives”:
- directories
- users
- groups
- systemd services
With just these, you can get pretty far in deploying software.
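Pulling those primitives together, a minimal deployment of a single-binary service might look like the following sketch. The resource and attribute names follow the `linux_*` pattern from the examples above but are assumptions, not the provider’s documented schema:

```hcl
# Illustrative sketch only; resource/attribute names are assumed.
resource "linux_group" "app" {
  name = "app"
}

resource "linux_user" "app" {
  name  = "app"
  group = linux_group.app.name
}

resource "linux_directory" "app_data" {
  path  = "/var/lib/app"
  owner = linux_user.app.name
}

resource "linux_systemd_service" "app" {
  name    = "app.service"
  enabled = true
  started = true

  depends_on = [linux_user.app, linux_directory.app_data]
}
```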
Uses
I am certainly not suggesting this as a good idea for all use cases. I have really liked using it for managing just a few servers with a handful of easy-to-run (read: binary plus configuration file or two) services.
Obviously this goes against the immutable server idea that is extremely popular today, e.g. what tools like Docker or Packer produce. I certainly don’t disagree about the troubles of snowflake servers or configuration drift - I have seen them firsthand. But that idea does come at a cost. Among other things, pushing hundreds of megabytes or even several gigabytes of image on every commit can be wasteful. And maybe having to put dozens of install steps into a Dockerfile means the program was poorly designed, or that our systems are bloated in the first place.
Anyway, check out the source if you’re interested. There’s a cool example in there that configures the linux firewall to run securely behind Cloudflare. And for what it’s worth, this blog and all other parts of my site run on servers fully managed with Terraform.
A lot of folks are looking for ways to easily host third-party apps, be it traditional approaches like sharing Ansible playbooks or platforms like Sandstorm. I’m trying to see if Terraform makes sense for me.