As discussed in the last post, Demystifying Infrastructure as Code, organizations moving their DevOps initiatives forward need to learn and use IaC. Configuration Management deserves the same attention: it prevents environment drift, enables rapid deployments, provides scalable solutions, and maintains system integrity. Applying source control, continuous integration, testing, and other software development practices to it allows for sustainable automation at the operational level.

Configuration Management

Configuration Management is the practice of ensuring the desired state of software installation and system configuration. For example, an agent or a system that can SSH into a server or virtual machine can make sure that packages are installed and that configuration files are in place so the software starts up without direct human intervention. Some tools even validate the state of existing configuration and revert manual changes to preserve system integrity.

Below are some examples of configuration management tools and basic implementation ideas.

Ansible

Ansible executes playbooks to build infrastructure and to install and configure the systems in place. It’s a write-once style of system that doesn’t depend on persistent state management. It uses SSH instead of agents to manage software, and it involves writing a lot of YAML. The steps to get started are:

  • Install
  • Inventory
  • Verify
  • Connect
  • Write YAML and deploy
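
An inventory is just a list of the hosts Ansible manages, grouped however you like. A minimal sketch (the group and host names here are illustrative):

[webservers]
app1.example.com
app2.example.com

With the inventory in place and SSH connectivity verified, a playbook like the following installs packages, manages services, and templates configuration files: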

---
- hosts: webservers          # host group name assumed for illustration
  tasks:
    - yum: name={{ item }} state=installed
      with_items:
        - app_server
        - acme_software

    - service: name=app_server state=running enabled=yes

    - template: src=/opt/code/templates/foo.j2 dest=/etc/foo.conf
      notify:
        - restart app server

  handlers:
    - name: restart app server
      service: name=app_server state=restarted
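
To verify connectivity and then run the playbook, the commands look like this (the inventory and playbook file names are assumptions):

ansible all -i inventory -m ping
ansible-playbook -i inventory site.yml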

Puppet

Puppet is a centralized system with a client-server model and agents on the target systems that manage the software and configuration. It uses a Ruby-based declarative state language, meaning you write code describing how you want the system state to be and let Puppet do the work of implementing it. Simple manifests are quite readable, as the example below shows, but as complexity grows you will likely need to create classes and build a hierarchy using a YAML configuration called Hiera data (sketched after the example). That lets you group systems like “web servers” or “external api” and apply chunks of code to those systems, or create a common class that adds a specific set of user credentials to all systems.

  • Install the server
  • Install an agent
  • Create a database
  • Configure and deploy!

include nginx

nginx::resource::server { 'www.puppetlabs.com':
  www_root => '/var/www/www.puppetlabs.com',
}
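
As a sketch of the Hiera side, the hierarchy can key on a node fact such as its role, with a YAML data file per group. The role fact, file names, and keys below are all hypothetical:

# hiera.yaml (Hiera 5) - the lookup hierarchy
---
version: 5
hierarchy:
  - name: "Per-role data"
    path: "roles/%{facts.role}.yaml"
  - name: "Common defaults"
    path: "common.yaml"

# data/roles/web_server.yaml - values applied to all web servers
nginx::worker_processes: 4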

Chef

Chef provides both a chef-solo mode and a client-server model that allows integration with centralized management. It’s even more Ruby-centric than Puppet, and if you’re familiar with Ruby, working in Chef will feel pretty comfortable. Recipes are procedural rather than declarative, meaning the steps and the order of any installation and configuration matter.

  • Install workstation software
  • Create repo, example, framework, cookbook
  • (Install, config chef server, infra, and/or habitat)
  • Run the client to deploy local

cookbook 'nginx', '~> 12.0.12', :supermarket

user 'human' do
  comment 'human name'
  uid 1234
  gid 'groupname'
  home '/home/human'
  shell '/bin/bash'
  password '$1$JJj...'
end
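
A local run, assuming Chef Workstation is installed, looks roughly like this (the cookbook name is illustrative):

chef generate cookbook my_cookbook
chef-client --local-mode --runlist 'recipe[my_cookbook]'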

SaltStack

Salt was initially developed as a fast remote execution engine and grew into full configuration management. It’s language agnostic and supports both agent (minion) and agentless modes. The example code below is applied from the Salt server by running state apply commands against state files on the server’s file system, and those runs can be automated via cron, Jenkins, or another scheduler as needed. Here are the basic steps to get started:

  • Install server and minions, configure to connect
  • Run commands to query
  • Run commands to install/configure
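
Querying from the Salt server uses the salt command against connected minions, for example:

salt '*' test.ping
salt '*' grains.items

Installing and configuring happens through state files like the ones below, which install vim and manage its configuration, then install nginx and keep its service running: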

vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://vimrc
    - mode: 644
    - user: root
    - group: root

nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
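
To apply the states above, assuming they are saved as vim.sls and nginx.sls under the Salt server’s file roots:

salt '*' state.apply vim
salt 'web*' state.apply nginx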

Containers

It’s important to include container systems and tools when doing any kind of IaC or Configuration Management solution. These capabilities sit in a space that overlaps with, and in some cases completely usurps, the need for some of the other tools. Bundling configuration and infrastructure together solves a specific problem for some developers: they don’t have to learn as much of the underlying infrastructure to make their software work.

Packer

Packer is a HashiCorp tool like Terraform, so it also uses the HashiCorp Configuration Language (HCL). The formatting will feel similar to writing plain Terraform, and the steps to install and configure are similar to other HashiCorp tools as well.

  • Install
  • Write templates
  • Authenticate
  • Initialize, format, validate
  • Build

source "amazon-ebs" "ubuntu" {
  ami_name      = "learn-packer-linux-aws"
  instance_type = "t2.micro"
  region        = "us-west-2"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }
  ssh_username = "ubuntu"
}
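
The source block above only defines an image source; a build block references it to produce the AMI, following the same tutorial pattern:

build {
  name    = "learn-packer"
  sources = [
    "source.amazon-ebs.ubuntu"
  ]
}

Then the workflow mirrors Terraform’s:

packer init .
packer fmt .
packer validate .
packer build .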

Docker

Docker is commonly used by developers to pull down images from a Docker repository with software already built and configured, jump-starting past the installs and configuration. A Dockerfile can be used to extend an existing image with additional software, or to build your own image that the rest of your team can replicate.

  • Install Docker
  • Build a Dockerfile and app
  • Docker build and run

# syntax=docker/dockerfile:1
FROM node:12-alpine
RUN apk add --no-cache python2 g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
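
Building and running the image is then two commands (the image tag is illustrative; the app listens on port 3000 per the EXPOSE line):

docker build -t getting-started .
docker run -dp 3000:3000 getting-started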


Kubernetes

Kubernetes isn’t easy, simple, or even always well understood, but most folks can learn how to utilize a cluster without having to know how to build one. Here are the steps to get something running locally, along with the commands to deploy an app to the cluster.

  • Install minikube
  • Start and query cluster
  • Create a deployment
  • Deploy app
  • Manage cluster

kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
minikube service hello-node
minikube addons enable metrics-server
kubectl get pod,svc -n kube-system
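
The kubectl create deployment command above is the imperative route; the same thing can be expressed declaratively as a manifest and applied with kubectl apply -f. A minimal sketch of the equivalent Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
        - name: echoserver
          image: k8s.gcr.io/echoserver:1.4
          ports:
            - containerPort: 8080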

Helm

Helm is used to define, install, and update the applications you build on Kubernetes, much like a package manager on a Linux system. The following steps assume you already have a Kubernetes cluster, which you can install via the minikube commands above.

  • Install Helm
  • Initialize
  • Install Chart
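
A chart is a directory of templates plus metadata; the snippet below is a minimal ConfigMap template that would live in the chart’s templates/ directory: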

apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-configmap
data:
  myvalue: "Hello World"
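
Scaffolding, installing, and upgrading a chart then looks like this (the chart and release names are illustrative):

helm create mychart
helm install example ./mychart
helm upgrade example ./mychart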