Saltstack: How To Deploy EC2 instances with Salt Cloud

Perhaps this topic has been covered many times before, and may be old hat to the broader Saltstack community, but while searching for a tutorial to help me learn to use Salt Cloud effectively, I didn’t find a good resource to walk me through the quirks of the process.  This is my attempt at an informative tutorial on deploying EC2 instances with Salt Cloud.  Salt Cloud is an amazingly fast way to deploy infrastructure, and it isn’t limited to Amazon EC2: it supports many popular cloud providers, so you can apply these concepts to your favorite flavor of cloud.

Before we begin, I have to mention that this tutorial requires some knowledge of Amazon’s EC2 service.  If you haven’t deployed anything with EC2, it’s beneficial to get familiar with it before you begin.  In addition, some basic knowledge of Saltstack will be important as well.

Requirements:

 

  • Understanding of EC2 key pairs and security groups
  • EC2 access key ID and secret access key
  • Saltstack basic knowledge

Create a Salt Master

The first thing we’ll need is a Salt Master.  This is where we’ll create our Salt Cloud profiles, store our Salt states, and deploy our EC2 instances from.  We’ll create the Salt Master using Amazon’s EC2 console (which I won’t cover here).  These are the specs for the VM you should create:

  • Ubuntu 14.04 (the flavor we’ll use for this tutorial)
  • t2.small instance type (the master’s size depends on the size of the clusters you’ll deploy; for now we’ll keep it small)

Once your instance is deployed, log into it using your Key Pair.

Set perms on your private key:

chmod 400 ~/aws.pem

Now use it to log in to your new Salt Master:

ssh -i ~/aws.pem ubuntu@salt_master_ip

Now let’s install the Salt tools we’ll need.

First install python-software-properties (it provides the add-apt-repository command), then add the Saltstack repo:

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:saltstack/salt

Now run:

sudo apt-get update

And finally install the Salt Master and Salt Cloud tools:

sudo apt-get install python-software-properties salt-master salt-cloud

Now, from our local computer, let’s send the private key from our AWS key pair up to the Salt Master:

scp -i ~/aws.pem ~/aws.pem ubuntu@salt_master_ip:~/

The key is now in the ubuntu user’s home directory; let’s move it into /etc/salt.

On the Salt Master:

sudo mv ~/aws.pem /etc/salt/

Configure Salt Cloud Providers

Now we’re ready to configure our EC2 provider on the Salt Master.  Make sure you’re logged into the Salt Master, and create a file called:

/etc/salt/cloud.providers.d/ec2-us-west-2.conf

Your file should follow this format.  Keep in mind this is YAML, so formatting is important: use spaces, not tabs.

ec2-us-west-2-public:
  minion:
    master: HOSTNAME_OF_YOUR_SALT_MASTER
  id: YOUR_AWS_ACCESS_KEY_ID
  key: 'YOUR_AWS_SECRET_ACCESS_KEY'
  private_key: /etc/salt/aws.pem      # private key from the AWS key pair we'll be using
  keyname: YOUR_AWS_KEY_PAIR_NAME     # name of the AWS key pair we'll be using
  ssh_interface: public_ips
  securitygroup: YOUR_AWS_SECURITY_GROUP
  location: us-west-2
  availability_zone: us-west-2a
  provider: ec2
  del_root_vol_on_destroy: True
  del_all_vols_on_destroy: True
  rename_on_destroy: True

A finished file will look something like this (the id and key below are not real):

ec2-us-west-2-public:
  minion:
    master: hostname.of-salt-master.com
  id: XW20F7XJXUU4K2ALM4BX
  key: 'MeFFsm1EVD0Ky8VgVPh3IEsRIgrD413RL7Xm2Y9Hn'
  private_key: /etc/salt/aws.pem
  keyname: salt-cloud-deployed
  ssh_interface: public_ips
  securitygroup: salt-cloud-security-group
  location: us-west-2
  availability_zone: us-west-2a
  provider: ec2
  del_root_vol_on_destroy: True
  del_all_vols_on_destroy: True
  rename_on_destroy: True

**In this example we are only creating one EC2 ‘provider’ using a single file, /etc/salt/cloud.providers.d/ec2-us-west-2.conf, but we could easily add another file containing credentials for a different EC2 region. For instance, /etc/salt/cloud.providers.d/ec2-us-east-1.conf could be another provider file that describes a US East (us-east-1) setup.
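
For example, a second provider file for us-east-1 might look like the sketch below. This is a hypothetical file: the id, key, keyname, and security group are placeholders you’d replace with your own values, and any profiles that use this provider would also need us-east-1 AMI IDs.

```yaml
# /etc/salt/cloud.providers.d/ec2-us-east-1.conf (hypothetical)
ec2-us-east-1-public:
  minion:
    master: hostname.of-salt-master.com
  id: YOUR_AWS_ACCESS_KEY_ID
  key: 'YOUR_AWS_SECRET_ACCESS_KEY'
  private_key: /etc/salt/aws.pem
  keyname: salt-cloud-deployed
  ssh_interface: public_ips
  securitygroup: salt-cloud-security-group
  location: us-east-1
  availability_zone: us-east-1a
  provider: ec2
  del_root_vol_on_destroy: True
  del_all_vols_on_destroy: True
  rename_on_destroy: True
```

Profiles can then point at ec2-us-east-1-public in their provider field.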

Configure Salt Cloud Profiles

Alright, now let’s take note of the name we gave our provider, ‘ec2-us-west-2-public’, and create instance profiles for it. Create this file:

/etc/salt/cloud.profiles.d/ec2_us_west-2.conf

**It’s important to note that files in this directory need the .conf extension or salt-cloud will not recognize them. These are also YAML, so mind your formatting.

Your file should follow this format:

profile_name:
  provider: ec2-us-west-2-public      # the provider we created above
  image: ami-9abea4fb                 # AMI ID
  size: t2.nano                       # instance type
  ssh_username: ubuntu                # default user for the AMI
  tag: {'Environment': 'production'}  # tags to organize your instances
  sync_after_install: grains          # misc Salt Cloud option; see the Salt Cloud docs

A finished profile will look like this:

ec2_west_nano_prod:
  provider: ec2-us-west-2-public
  image: ami-9abea4fb
  size: t2.nano
  ssh_username: ubuntu
  tag: {'Environment': 'production'}
  sync_after_install: grains

ec2_west_micro_prod:
  provider: ec2-us-west-2-public
  image: ami-9abea4fb
  size: t2.micro
  ssh_username: ubuntu
  tag: {'Environment': 'production'}
  sync_after_install: grains

ec2_west_nano_dev:
  provider: ec2-us-west-2-public
  image: ami-9abea4fb
  size: t2.nano
  ssh_username: ubuntu
  tag: {'Environment': 'dev'}
  sync_after_install: grains

ec2_west_micro_dev:
  provider: ec2-us-west-2-public
  image: ami-9abea4fb
  size: t2.micro
  ssh_username: ubuntu
  tag: {'Environment': 'dev'}
  sync_after_install: grains
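
The comments at the bottom of this post mention that profiles can also take block device mappings to resize or add EBS volumes. A hypothetical profile using that tip (the profile name, the 100 GB size, and the device name are illustrative):

```yaml
ec2_west_micro_prod_bigdisk:
  provider: ec2-us-west-2-public
  image: ami-9abea4fb
  size: t2.micro
  ssh_username: ubuntu
  tag: {'Environment': 'production'}
  sync_after_install: grains
  block_device_mappings:
    - DeviceName: /dev/sda1
      Ebs.VolumeSize: 100
      Ebs.VolumeType: gp2
```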

Spin ’Em Up

We’ve created 4 profiles that we’ll use to spin up instances.  These profiles are using the Ubuntu 14.04 AMI.  Now, let’s try spinning up a server with one of these profiles.

The command looks like this:

salt-cloud -p profile_name name_of_new_instance

Using one of the profiles we just created, it looks like this:

salt-cloud -p ec2_west_nano_dev saltcloud_nano_test

Once that completes, your new instance will now be a minion that you can control with your Salt Master. Run a test.ping:

salt '*' test.ping

Your new instance should show up in the list:

saltcloud_nano_test:
  True

Awesome, it worked! You will also see this new instance in your EC2 control panel in your Instances section.

There’s one more thing left to do, and that is to update the hostname of the server. Run this, and you’ll see that the host is different from the Salt minion ID:

salt 'saltcloud_nano_test' grains.item host

The result will look something like this (your host will likely be different):

saltcloud_nano_test:
    ----------
    host:
        ip-172-39-38-65

Unfortunately, salt-cloud doesn’t update the hostname during the spin up of a new server, so we need to create a process to do this manually. Let’s create a Salt State:

mkdir -p /srv/salt/update_hostname

Create:

/srv/salt/update_hostname/init.sls

And add:

/opt/update_hostname.pl:
  file.managed:
    - source: salt://update_hostname/update_hostname.pl
    - mode: 775

update_hostname:
  cmd.run:
    - name: /opt/update_hostname.pl
    - require:
      - file: /opt/update_hostname.pl

Then put this quick Perl script in the same directory. The script only supports Ubuntu and Debian.  Create:

/srv/salt/update_hostname/update_hostname.pl

Contents:

#!/usr/bin/perl
use strict;
use warnings;

# Pull the minion id out of /etc/salt/minion and write it to /etc/hostname.
my $config = `cat /etc/salt/minion`;
my ($id) = $config =~ /^id:\s*(\S+)/m
    or die "No id found in /etc/salt/minion\n";
print "$id\n";
open(my $hn, '>', '/etc/hostname') or die "Cannot open /etc/hostname: $!\n";
print $hn "$id\n";
close $hn;

Now let’s run our state:

salt 'saltcloud_nano_test' state.sls update_hostname

I like to reboot the minion after I run the state against it just to make sure all of the services start with the new hostname:

salt 'saltcloud_nano_test' system.reboot

Let’s check to make sure our hostname change worked once the instance reboots:

salt 'saltcloud_nano_test' grains.item host

We should see:

saltcloud_nano_test:
    ----------
    host:
        saltcloud_nano_test

Excellent! Now that we know how to spin up a server using Salt Cloud, let’s destroy this test instance:

salt-cloud -d saltcloud_nano_test

And just like that, you’ve created and destroyed an instance on EC2 with Salt Cloud. This is very useful, but what if we want to spin up more than one instance?  We’ll accomplish that with a cloud map file, which lets us deploy multiple instances in one command.

Let’s create a cloud map file:

/etc/salt/cloud.maps.d/infra.ec2

Add:

ec2_west_nano_dev:
  - devweb
ec2_west_micro_dev:
  - devdb
ec2_west_nano_prod:
  - prodweb
ec2_west_micro_prod:
  - proddb

This file contains the profile and instance names in YAML format.  Now, let’s create all of these instances with this command:

salt-cloud -m /etc/salt/cloud.maps.d/infra.ec2

What you’ll see is Salt Cloud creating every instance defined in your cloud map file, with a single command.  Cloud maps let you stand up complex infrastructure setups in one shot and give your orchestration toolbox a huge power-up.  Salt Cloud coupled with Saltstack will help you wrangle your Amazon EC2 infrastructure with ease.
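
Map files also aren’t limited to one instance per profile; each profile key takes a list of names. A hypothetical scaled-out map might look like this:

```yaml
# /etc/salt/cloud.maps.d/infra_scaled.ec2 (hypothetical)
ec2_west_nano_dev:
  - devweb1
  - devweb2
  - devweb3
ec2_west_micro_dev:
  - devdb
```

Running salt-cloud -m against a map with the -P flag added will create the instances in parallel.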

It’s worth mentioning that this post was inspired by Bastien Kim’s post on the same topic. His post covers many of these areas and includes some instruction on how to work with AWS.
Thanks for reading!


  • APOS

    How do you define custom RAM allocation for an ec2 instance in cloud.profiles.d? For the love of God I can’t find a way to do this, so any help would be appreciated.

    • Eric

      Hi APOS,

      I’m replying quite late to this, and I’m sure by now you’ve figured this out, but for the benefit of newer readers: the ‘size’ field in a Salt Cloud profile can only correspond to a predefined instance type on the cloud provider. So it is, unfortunately, limited to whatever VM types are available to the user.

      One issue I had during this process was wanting to create instances that had different default hard drive sizes. The way I solved that was to create custom AMIs that had the HD size I wanted, then I could specify this AMI ‘image’ in my Salt Cloud profile and assign any instance type to it. I realize this doesn’t solve your problem, but it may give an idea on how to approach it.

      Best,
      Eric

      • APOS

        No worries on the late reply! Actually, still haven’t found a proper way of allocating custom RAM to an ec2 instance by way of cloud.profiles.d, but as far as creating instances with different default hard drive sizes goes, there’s an alternative to using custom AMIs with set HD sizes. You can use block device mappings inside your cloud profile file instead.

        If you want to add additional drives (e.g. SSD volumes):

        block_device_mappings:
          - DeviceName: /dev/sdf
            Ebs.VolumeSize: 400
            Ebs.VolumeType: gp2
          - DeviceName: /dev/sdg
            Ebs.VolumeSize: 150
            Ebs.VolumeType: gp2

        Or if you just want to alter the default size of your primary drive:

        block_device_mappings:
          - DeviceName: /dev/sda1
            Ebs.VolumeSize: 200
            Ebs.VolumeType: gp2

        According to the salt-docs: you can choose between standard (magnetic disk), gp2 (SSD), or io1 (provisioned IOPS). (default=standard)

        • Eric

          Nice tip APOS, Thanks!