It might be a little confusing, but CoreOS is not actually the name of an operating system. CoreOS is the name of the company that develops a set of tools for the container ecosystem. The name of the operating system that runs on each of the hosts in a CoreOS cluster is Container Linux. Realizing this made it a lot easier to find the information I was looking for while trying to understand how all the tools work together.

As I mentioned before, Container Linux is a Linux distribution. Its selling point is that it contains the bare minimum it needs to operate. It is designed to run applications inside containers, so it doesn’t provide things that other Linux distributions provide (browser, office suite, GUI, etc…). Stripping the things that are not needed saves some disk space and probably some memory and CPU cycles (assuming some daemons included in most distributions will not be running). Is it worth changing the distribution we are used to just for a few extra resources? Probably not, but let’s talk about the things we would get if we decided to do it.

  • Automatic software updates – In other distributions, the system remains the same until a system administrator updates it. Container Linux constantly updates the underlying system (including the kernel) with security and stability patches.
  • Cluster configuration – Allows you to declaratively configure (partition disks, add users, etc…) all the machines in your cluster.
  • Kubernetes – CoreOS makes it easy to build a Kubernetes cluster in most cloud providers.

Container Linux is just a part of the puzzle, but it changes the way we think about managing clusters of machines. With Container Linux, all machines in the cluster work together to achieve whatever tasks they were assigned (running different containers in different configurations).

Running Container Linux

Most cloud platforms allow you to start hosts with Container Linux installed on them with a few clicks. If you want to run it in virtual machines or on bare metal, there are guides in the CoreOS docs for starting from scratch.
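
For example, on Google Cloud something like this should be enough to get a Container Linux host (a rough sketch; the instance name and zone are arbitrary, and coreos-cloud is the project that hosts the public Container Linux images):

gcloud compute instances create coreos-test \
  --zone us-central1-c \
  --image-project coreos-cloud \
  --image-family coreos-stable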

Since I’ve been playing with Terraform, I’m going to use it to create my instances. If you are not familiar with Terraform, you can read my introduction to Terraform post to get familiar with it.

To create a single CoreOS instance I used a configuration like this one:

// Configure Google Cloud
provider "google" {
  credentials = "${file("credentials.json")}"
  project = "ncona-17504"
  version = "~> 1.13"
}

// CoreOS machines
resource "google_compute_instance" "us-central1-c--f1-micro" {
  name = "us-central1-c--f1-micro"
  machine_type = "f1-micro"
  zone = "us-central1-c"

  boot_disk {
    initialize_params {
      image = "coreos-stable"
    }
  }

  network_interface {
    network = "default"
    access_config = {}
  }
}
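
To apply it, the standard Terraform workflow is enough (a quick sketch; it assumes the configuration above is saved as main.tf and that credentials.json sits next to it):

# Download the Google provider declared in the configuration
terraform init

# Preview the changes and create the instance
terraform plan
terraform apply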

After applying this configuration, the instance will be created and available. This instance is not very special; you can probably SSH into it and run some commands on it, but nothing too exciting.

Provisioning

One of the selling points of CoreOS is how easy it makes provisioning machines for container management clusters. I’m not going to go into much depth in this post, but I’ll show some simple provisioning examples.

CoreOS uses a provisioning system called Ignition. What it allows you to do is very basic: configure partitions, create files and create users. This might not sound like much, but it can be used to achieve most things.

Ignition config files are usually generated from Container Linux config files. This is a Container Linux config file that adds a user to a CoreOS instance:

passwd:
  users:
    - name: adrian
      ssh_authorized_keys:
        - my_public_key
      groups: [sudo, docker]

This file can be transformed into an Ignition file using the CoreOS config transpiler (ct):

ct --in-file config.yml --out-file config.ign

The resulting Ignition file looks something like this:

{
  "ignition": {
    "config": {},
    "timeouts": {},
    "version": "2.1.0"
  },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "groups":["sudo","docker"],
        "name": "adrian",
        "sshAuthorizedKeys": [
          "my_public_key"
        ]
      }
    ]
  },
  "storage": {},
  "systemd": {}
}

To tell the CoreOS instance to use this Ignition file, pass it in the metadata field when creating the instance. In Terraform it would be something like this:

metadata {
  user-data = "${file("provisioning/ignition/config.ign")}"
}
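
For context, this is roughly where the block lives; the rest of the resource stays exactly as shown earlier (the comment stands in for the attributes that were omitted):

resource "google_compute_instance" "us-central1-c--f1-micro" {
  // name, machine_type, zone, boot_disk and network_interface as before

  metadata {
    user-data = "${file("provisioning/ignition/config.ign")}"
  }
}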

Adding this to your instance would create the specified user and allow SSH access with the given key.
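
Assuming my_public_key was replaced with a real public key and you have the matching private key, you should be able to log in as the new user (the address below is just a placeholder for the instance’s external IP):

ssh adrian@<instance-external-ip>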
