Software Development Tips and Tricks


Swift 4 titleized String Extension

Swift 4.2

Rails provides the titleize inflector (capitalizes all the words in a string), and I needed one for Swift too. The code snippet below adds a computed property to the String type and follows these rules:

  • The first letter of the first word of the string is capitalized
  • The first letters of the remaining words in the string are capitalized, except for those considered “small words”

Create a file in your Xcode project named String+Titleized.swift and paste in the following String extension:
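The original snippet isn't reproduced here, so below is a sketch of such an extension. The SMALL_WORDS list is illustrative; tailor it to your language and taste:

```swift
import Foundation

extension String {
  // Words to leave lowercase unless they begin the phrase.
  // These are illustrative Spanish articles and conjunctions.
  private static let SMALL_WORDS = ["de", "del", "el", "la", "las",
                                    "los", "un", "una", "y", "o"]

  var titleized: String {
    let words = self.lowercased().components(separatedBy: " ")
    let titleizedWords = words.enumerated().map { (index, word) -> String in
      if index > 0 && String.SMALL_WORDS.contains(word) {
        return word // small words stay lowercase
      }
      return word.capitalized
    }
    return titleizedWords.joined(separator: " ")
  }
}
```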

Configure SMALL_WORDS to your liking. In this example I was titleizing Spanish phrases, so my SMALL_WORDS contains various definite articles and conjunctions. An example of the usage and output:
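Assuming a SMALL_WORDS list that contains the Spanish article de, usage looks like:

```swift
let phrase = "el palacio de bellas artes"
print(phrase.titleized)
// El Palacio de Bellas Artes
```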

Note: This post is a Swift 4.2 version of this one written in Swift 2.


Leveraging Instance Size Flexibility with EC2 Reserved Instances

Determining which EC2 reserved instances to purchase in AWS can be a daunting task, especially given the fact that you’re signing up for a long(ish)-term commitment that costs you (or your employer) real money. It wasn’t until after several months of working with reserved instances and reading up on them that I became comfortable with their concepts and learned about a quite useful feature known as Instance Size Flexibility.

But first, we need to cover what this post is not about, and that is how to choose what type of instance you need to run a given application (web server, continuous integration build server, database, etc.). There are plenty of tutorials out there. Once you’ve become comfortable with your choice of instance types (I gravitate towards the T, M, and R types), you can begin thinking about saving on your EC2 compute costs by purchasing reserved instances.

I will admit to being a bit confused the first time I began purchasing reserved instances, and I attribute that to the fact that, well, they are a bit confusing. Standard reserved instances. Convertible reserved instances. Zonal reserved instances. No upfront payment. Partial upfront payment. Reserved instance marketplace. There’s a lot to take in, and on top of that, it is a bit nerve-wracking making a choice that you might have to live with (and pay) for a while. In fact, even after spending quite some time reading through everything, it still took me a few billing cycles to realize how reserved instances really worked.

While I can’t help you get over that initial intimidation factor, what I can do is share a bit of wisdom I gathered from How Reserved Instances Are Applied, with specific attention paid to How Regional Reserved Instances Are Applied.

With some exceptions, you can purchase a number of nano (or other size) reserved instances for a given instance type, and those reservations can be applied to larger (or smaller) instances in that same family. Note that there are exceptions (I told you it was confusing), as this feature does not apply to:

  • Reserved Instances that are purchased for a specific Availability Zone
  • bare metal instances
  • Reserved Instances with dedicated tenancy
  • Reserved Instances for Windows, Windows with SQL Standard, Windows with SQL Server Enterprise, Windows with SQL Server Web, RHEL, and SLES

But that’s okay, because my favorite type of machine, a shared tenancy instance running Ubuntu 16.04 or 18.04 LTS, is supported.

Instance Size Flexibility works like this. Each instance size is assigned a normalization factor, with the small size being given the unit factor of 1. A nano instance has a normalization factor of 0.25. That is, for the purposes of instance size flexibility and reserved instances, a single reservation for a small instance is the equivalent of 4 nano instances, and vice versa, 4 nano reserved instances are the equivalent of a single small reserved instance.

AWS publishes the normalization factors in the How Reserved Instances Are Applied documentation, but we’ll provide it here as well:

Instance size    Normalization factor
nano             0.25
micro            0.5
small            1
medium           2
large            4
xlarge           8
2xlarge          16
4xlarge          32
8xlarge          64
16xlarge         128

Using Instance Size Flexibility In Your Account

Now let’s take advantage of our knowledge about normalization factors and see how we can apply them to our account (and our bill). We’re going to leverage the Ruby programming language and the AWS SDK for Ruby. If you’ve never used Ruby before, do yourself a favor and invest some time with it. You’ll be glad you did.

Let’s get started.

We’re going to be applying the instance size flexibility normalization factors, so let’s declare a Hash of their values.
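A sketch of such a Hash (values per the normalization factor table above; the constant name is my own):

```ruby
# Normalization factors for Instance Size Flexibility
NORMALIZATION = {
  'nano'     => 0.25,
  'micro'    => 0.5,
  'small'    => 1,
  'medium'   => 2,
  'large'    => 4,
  'xlarge'   => 8,
  '2xlarge'  => 16,
  '4xlarge'  => 32,
  '8xlarge'  => 64,
  '16xlarge' => 128
}.freeze
```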

Using Bundler to pull in our AWS SDK gem, we will retrieve all of our instances in a given region (remember that this feature is scoped to the zones in a given region). I am using us-east-2 in this example, also known as US East Ohio.
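A sketch of the retrieval using the aws-sdk-ec2 gem (the variable names here are illustrative, not the post's exact code):

```ruby
# Gemfile: gem 'aws-sdk-ec2'
require 'aws-sdk-ec2'

# Credentials are read from ~/.aws/credentials; scope to the us-east-2 region
ec2 = Aws::EC2::Resource.new(region: 'us-east-2')

# One type string per instance, e.g. ["t2.nano", "m4.xlarge", ...]
instance_types = ec2.instances.map(&:instance_type)
```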

Note that the above uses ~/.aws/credentials. If you do not have this file you will need to configure your access key ID and secret access key.

Let’s iterate over our instances (filtering out Windows instances since they are not eligible for Instance Size Flexibility) and create a hash of the various classes. In the end we want our hash to contain, as its keys, all of the classes (types) of instances we have, and the values to be a list of the sizes of those classes.
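A sketch of this step, using a hardcoded list of instance types in place of the SDK results so it stands alone:

```ruby
# `instance_types` stands in for the type strings retrieved from EC2
# (Windows instances already filtered out)
instance_types = ['t2.nano', 't2.nano', 't2.nano', 't2.nano',
                  't2.small', 't2.small', 't2.small', 't2.large',
                  'm4.xlarge', 'm4.xlarge', 'm4.xlarge', 'm4.xlarge',
                  'm4.2xlarge', 'm4.2xlarge']

classes = Hash.new { |h, k| h[k] = [] }
instance_types.each do |type|
  klass, size = type.split('.') # "t2.nano" => ["t2", "nano"]
  classes[klass] << size
end
```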

For example, if we had 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances, our hash would look like this: {"t2"=>["nano", "nano", "nano", "nano", "small", "small", "small", "large"], "m4"=>["xlarge", "xlarge", "xlarge", "xlarge", "2xlarge", "2xlarge"]}.

Now we’re going to determine how many equivalent small instances we have. This is done by adding our normalization factors for each of the instance sizes.
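A sketch of the summation (restating the inputs so the snippet stands alone):

```ruby
# Normalization factors and the class => sizes hash from the previous steps
NORMALIZATION = { 'nano' => 0.25, 'small' => 1, 'large' => 4,
                  'xlarge' => 8, '2xlarge' => 16 }.freeze
classes = { 't2' => %w[nano nano nano nano small small small large],
            'm4' => %w[xlarge xlarge xlarge xlarge 2xlarge 2xlarge] }

# Sum each size's normalization factor to get small-instance equivalents
small_equivalents = classes.map do |klass, sizes|
  [klass, sizes.sum { |size| NORMALIZATION[size] }]
end.to_h
```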

Using our previous example of 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances, we walk through the math of 0.25 + 0.25 + 0.25 + 0.25 + 1 + 1 + 1 + 4 for our t2 instances and 8 + 8 + 8 + 8 + 16 + 16 for the m4 instances. This results in a Hash that looks like this: {"t2"=>8, "m4"=>64}. To be clear, the interpretation is that we have, for the purposes of Instance Size Flexibility with reserved instances, the equivalent of 8 t2.small and 64 m4.small instances in us-east-2. Put another way, if we purchased 8 t2.small reserved instances and 64 m4.small reserved instances in us-east-2, we would have 100% coverage of our EC2 costs with reserved instances.

Now, let’s take it a step further and see what the equivalence would be for the other sizes. In other words, we know we have the equivalent of 8 t2.small and 64 m4.small instances, but what if we wanted to know how many equivalent nano instances we had? This loop will create a row for each class and size:
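A sketch of such a loop, again restating the inputs so it runs on its own:

```ruby
NORMALIZATION = { 'nano' => 0.25, 'small' => 1, 'medium' => 2,
                  'large' => 4, 'xlarge' => 8, '2xlarge' => 16 }.freeze
small_equivalents = { 't2' => 8, 'm4' => 64 }

# For each class, divide the small-instance equivalent by each size's factor
small_equivalents.each do |klass, smalls|
  NORMALIZATION.each do |size, factor|
    puts format('%-12s %g', "#{klass}.#{size}", smalls / factor)
  end
end
```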

Again, taking our previous example, we would expect to see 32 t2.nano instances and 256 m4.nano instances. That’s right. If we purchased 32 t2.nano and 256 m4.nano instances we would have the equivalent of our 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances. Now, there doesn’t happen to be such a thing as an m4.nano instance, and we’ve corrected for this in our published example code.


Creating Strong Passwords with DuckDuckGo

Over the past year I’ve been taking online privacy more seriously and began looking at alternative search engines such as DuckDuckGo and Startpage. In addition, when creating strong passwords I turn to tools such as KeePass and Strong Password Generator. Earlier today I duckducked strong password, and a smile formed on my face when I saw this:

Well now how cool is that? Very cool.

Even cooler, however, is using DuckDuckGo’s pwgen feature to create passwords of varying strengths and lengths. Duckduck pwgen strong 16 to get something like:

If you prefer a “lower strength” password, you can use the low parameter, for example, pwgen low 24. Or, just average strength with pwgen 32 (the strength parameter is omitted).

From looking at the difference in output between low, average, and high strength passwords, it appears that:

  • low strength passwords are created from the character set [a-zA-Z]
  • medium strength passwords include numbers, increasing the set to [0-9a-zA-Z]
  • high strength passwords include symbols in the set [!@#$%^&*()] (note that the brackets are not in the set, this is regular expression bracket notation)

Instant Answers

This DuckDuckGo feature uses instant answers, an increasingly common feature of search engines. Each DuckDuckGo instant answer has an entry page, and the password generator is (aptly) named Password. You can even review the Perl source code on Github: Password.pm

Closing Thoughts

To be honest, I think this is a pretty cool feature. Now we could argue as to what constitutes a “strong” password, but we won’t. We could discuss entropy, passwords vs. passphrases, and so on. But we won’t. For a quick way to generate a pretty doggone good password, though, just duckduck one.


Ansible 2.7 Deprecation Warning – apt and squash_actions

Ansible 2.7 was released recently, and with it came a new deprecation warning for the apt module:

TASK [Install base packages] ****************************************** 
Thursday 18 October 2018  15:35:52 +0000 (0:00:01.648)       0:06:25.667 ****** 
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via 
squash_actions is deprecated. Instead of using a loop to supply multiple items 
and specifying `name: {{ item }}`, please use `name: [u'htop', u'zsh', u's3cmd']` and remove 
the loop. This feature will be removed in version 2.11. Deprecation warnings 
can be disabled by setting deprecation_warnings=False in ansible.cfg.

Our apt task was:

- name: Install base packages
  apt:
    name: "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - htop
    - zsh
    - s3cmd

Very standard.

The new style with Ansible 2.7 should look like:

- name: Install base packages
  apt:
    name: "{{ packages }}"
    state: present
    update_cache: yes
  vars:
    packages:
      - htop
      - zsh
      - s3cmd

The change is self-explanatory (and is alluded to in the deprecation warning): rather than loop over a list and applying the apt module, provide the module with a list of items to process.

You can read up on the documentation for apt in Ansible 2.7 here.


Updating From Such a Repository Can’t Be Done Securely

I recently came across the (incredibly frustrating) error message Updating from such a repository can't be done securely while trying to run apt-get update on an Ubuntu 18.04 LTS installation. Everything had been working fine on Ubuntu 16.04.5. It turns out that the newer version of apt (1.6.3) on Ubuntu 18.04.1 is stricter about signed repositories than the apt (1.2.27) on Ubuntu 16.04.5.

Here’s an example of the error while trying to communicate with the Wazuh repository:

Reading package lists... Done
E: Failed to fetch https://packages.wazuh.com/apt/dists/xenial/InRelease  403  Forbidden [IP: 443]
E: The repository 'https://packages.wazuh.com/apt xenial InRelease' is no longer signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

After searching around, we found that this issue has already been reported to the Wazuh project, but the solution of adding [trusted=yes] did not work for a repository that had already been added in /etc/apt. After continued searching, the following solution was finally hit upon:

deb [allow-insecure=yes allow-downgrade-to-insecure=yes] https://packages.wazuh.com/apt xenial main

That is, rather than using [trusted=yes] one can use [allow-insecure=yes allow-downgrade-to-insecure=yes]. Running apt-get update afterwards shows that the InRelease section is ignored, and Release is picked up:

Ign:7 https://packages.wazuh.com/apt xenial InRelease
Hit:8 https://packages.wazuh.com/apt xenial Release

Note that this is obviously a temporary solution, and should only be applied to a misbehaving repository! If you’re so inclined, upvote the Wazuh GitHub issue, as a fix at the repository level would be nice.


GeoIP2 and NGINX

There are times when you want to configure your website to explicitly disallow access from certain countries, or only allow access from a given set of countries. While not completely precise, use of the MaxMind GeoIP databases to look up a web client’s country-of-origin and have the web server respond accordingly is a popular technique.

There are a number of NGINX tutorials on how to use the legacy GeoIP database and the ngx_http_geoip_module, and as it happens the default Ubuntu nginx package includes the ngx_http_geoip_module. Unfortunately the GeoIP databases will no longer be updated, and MaxMind has migrated to GeoIP2. Moreover, after January 2, 2019, the GeoIP databases will no longer be available.

This leaves us in a bind. Luckily, while the Ubuntu distribution of NGINX doesn’t come with GeoIP2 support, we can add it by building from source. Which is exactly what we’ll do! In this tutorial we’re going to build nginx from the ground up, modeling its configuration options after those that are used by the canonical nginx packages available from Ubuntu 16.04. You’ll want to go through this tutorial on a fresh installation of Ubuntu 16.04 or later; we’ll be using an EC2 instance created from the AWS Quick Start Ubuntu Server 16.04 LTS (HVM), SSD Volume Type AMI.

If you’re a fan of NGINX and hosting secure webservers, check out our latest post on configuring NGINX with support for TLS 1.3

Getting Started

Since we’re going to be building binaries, we’ll need the build-essential package which is a metapackage that installs applications such as make, gcc, etc.
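Something like:

```shell
sudo apt-get update
sudo apt-get install -y build-essential
```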

Now, to install all of the prerequisite libraries we’ll need to compile NGINX:
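Typical build dependencies for an Ubuntu-style NGINX build (your configure options may require more or fewer of these):

```shell
sudo apt-get install -y libpcre3-dev zlib1g-dev libssl-dev
```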

Using the GeoIP2 database with NGINX requires the ngx_http_geoip2_module, which in turn requires the libmaxminddb development packages from MaxMind:
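One way to get them is from the MaxMind PPA:

```shell
sudo add-apt-repository -y ppa:maxmind/ppa
sudo apt-get update
sudo apt-get install -y libmaxminddb0 libmaxminddb-dev mmdb-bin
```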

Getting the Sources

Now let’s go and download NGINX. We’ll be using the latest dot-release of the 1.15 series, 1.15.3. I prefer to compile things in /usr/local/src, so:
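For example:

```shell
cd /usr/local/src
sudo wget http://nginx.org/download/nginx-1.15.3.tar.gz
sudo tar xzf nginx-1.15.3.tar.gz
```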

We also need the source for the GeoIP2 NGINX module:
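The module lives on GitHub:

```shell
cd /usr/local/src
sudo git clone https://github.com/leev/ngx_http_geoip2_module.git
```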

Now, to configure and compile.
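A trimmed-down sketch of the configure invocation; the Ubuntu packages use many more flags, so treat these paths and options as assumptions to adapt:

```shell
cd /usr/local/src/nginx-1.15.3
sudo ./configure \
  --prefix=/usr/share/nginx \
  --sbin-path=/usr/share/nginx/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --modules-path=/usr/lib/nginx/modules \
  --with-http_ssl_module \
  --add-dynamic-module=../ngx_http_geoip2_module
```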

You will want to make sure that the ngx_http_geoip2_module will be compiled; you should see ngx_http_geoip2_module was configured at the end of the configure output.

Now, run sudo make. NGINX, for all its power, is a compact and light application, and compiles in under a minute. If everything compiles properly, you can run sudo make install.

A few last things to complete our installation:

  • creating a symlink from /usr/sbin/nginx to /usr/share/nginx/sbin/nginx
  • creating a symlink from /usr/share/nginx/modules to /usr/lib/nginx/modules
  • creating the /var/lib/nginx/body directory
  • installing an NGINX systemd service file
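The first three items can be accomplished with a few commands (paths per the list above):

```shell
sudo ln -s /usr/share/nginx/sbin/nginx /usr/sbin/nginx
sudo ln -s /usr/lib/nginx/modules /usr/share/nginx/modules
sudo mkdir -p /var/lib/nginx/body
```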

For the Systemd service file, place the following in /lib/systemd/system/nginx.service:
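A minimal sketch of such a unit file, loosely modeled on Ubuntu's packaged nginx unit; adjust the PID file path to match your configure options:

```
[Unit]
Description=nginx - high performance web server
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf
ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID

[Install]
WantedBy=multi-user.target
```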

and reload systemd with sudo systemctl daemon-reload. You should now be able to check the status of nginx:

We’ll be starting it momentarily!


On to testing! We’re going to use HTTP (rather than HTTPS) in this example.

While we’ve installed the libraries that interact with the GeoIP2 database, we haven’t yet installed the database itself. This can be accomplished by installing the geoipupdate package from the MaxMind PPA:

# sudo apt-get install -y geoipupdate

Now run sudo geoipupdate -v:

It’s a good idea to periodically update the GeoIP2 databases with geoipupdate. This is typically accomplished with a cron job like:

# crontab -l
30 0 * * 6 /usr/bin/geoipupdate -v | /usr/bin/logger

Note: Use of logger here is optional; we just like to see the output of the geoipupdate invocation in /var/log/syslog.

Nginx Configuration

Now that nginx is built and installed and we have a GeoIP2 database in /usr/share/GeoIP, we can finally get to the task of restricting access to our website. Here is our basic nginx.conf:

load_module modules/ngx_http_geoip2_module.so;

worker_processes auto;

events {
  worker_connections  1024;
}

http {
  sendfile      on;
  include       mime.types;
  default_type  application/octet-stream;
  keepalive_timeout  65;

  geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
    $geoip2_data_country_code country iso_code;
  }

  map $geoip2_data_country_code $allowed_country {
    default no;
    US yes;
  }

  server {
    listen       80;
    server_name  localhost;

    if ($allowed_country = no) {
      return 403;
    }

    location / {
        root   html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
  }
}

Let’s walk through the relevant directives one at a time.

load_module modules/ngx_http_geoip2_module.so;

Since we built nginx with ngx_http_geoip2_module as a dynamic module, we need to load it explicitly with the load_module directive.

Looking up the ISO country code from the GeoIP2 database utilizes our geoip2 module:

geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
  $geoip2_data_country_code country iso_code;
}

The country code of the client IP address will be placed in the NGINX variable $geoip2_data_country_code. From this value we determine what to set $allowed_country to:

map $geoip2_data_country_code $allowed_country {
  default no;
  US yes;
}

map in the NGINX configuration file is a bit like a switch statement (here sketched in Swift’s switch syntax):

switch geoip2_data_country_code {
  case 'US':
    allowed_country = "yes"
  default:
    allowed_country = "no"
}

If we wanted to allow IPs from the United States, Mexico, and Canada the map directive would look like:

map $geoip2_data_country_code $allowed_country {
  default no;
  US yes;
  MX yes;
  CA yes;
}

geoip2 and map by themselves do not restrict access to the site. This is accomplished through the if statement which is located in the server block:

if ($allowed_country = no) {
  return 403;
}

This is pretty self-explanatory. If $allowed_country is no then return a 403 Forbidden.

If you haven’t done so already, start nginx with systemctl start nginx and give the configuration a go. It’s quite easy to test your nginx configuration by disallowing your country, restarting nginx (systemctl restart nginx), and trying to access your site.

Credits and Disclaimers

NGINX and associated logos belong to NGINX Inc. MaxMind, GeoIP, minFraud, and related trademarks belong to MaxMind, Inc.

The following resources were invaluable in developing this tutorial:


Ubuntu 18.04 on AWS

Ubuntu 18.04 Bionic Beaver was released several months ago now, and is currently (as of this writing) not available as a Quick Start AMI on AWS. But that’s okay, it is easy to create your own AMI based on 18.04. We’ll show you how!

Some assumptions, though. We’re going to assume you know your way around the AWS EC2 console, and have launched an instance or two in your time. If you haven’t, AWS itself has a Getting Started guide just for you.

Starting with 16.04

First, create an Ubuntu Server 16.04 EC2 instance in AWS with ami-0552e3455b9bc8d50, which is found under the Quick Start menu. A t2.micro instance is fine as we’re only going to be using it to build an 18.04 AMI.

Once the instance is available, ssh to it.

Notice that the OS is Ubuntu 16.04.5. We’re now going to upgrade it to 18.04.1 with do-release-upgrade. First, run sudo apt-get update, followed by sudo do-release-upgrade.

The upgrade script will detect that you are connected via an SSH session, and warn that performing an upgrade in such a manner is “risky.” We’ll take the risk and type y at the prompt.

This session appears to be running under ssh. It is not recommended
to perform a upgrade over ssh currently because in case of failure it
is harder to recover.

If you continue, an additional ssh daemon will be started at port
Do you want to continue?

Continue [yN]

You’ll get another warning about firewalls and iptables. Continue here as well!

To continue please press [ENTER]

Terrific, another warning! We’re about to do some serious downloading, and hopefully it won’t take 6 hours.

You have to download a total of 173 M. This download will take about
21 minutes with a 1Mbit DSL connection and about 6 hours with a 56k

Fetching and installing the upgrade can take several hours. Once the
download has finished, the process cannot be canceled.

 Continue [yN]  Details [d]

Of course, press y to continue, and confirm that we also want to remove obsolete packages.

Remove obsolete packages?

28 packages are going to be removed.

 Continue [yN]  Details [d]

At this point the installation and upgrade of packages should actually begin. There is a good chance you’ll be interrupted by a couple of screens asking which versions of the GRUB and ssh configuration files you want to use. I typically keep the currently installed version of a configuration file, as it is likely I’ve made edits (through Ansible, of course) to a given file. Rather than do diffs or merges at this point, I’ll wait until the upgrade is complete to review the files.

Once the upgrade is completed you’ll be prompted to reboot.

System upgrade is complete.

Restart required

To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.

Continue [yN]

After the reboot is completed, login (via ssh) and you should be greeted with

Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1020-aws x86_64)

Terrific! We have a pristine Ubuntu 18.04.1 LTS instance on Linux 4.15. We’re going to use this instance to make a template (AMI) from which to create more.

To start this process, stop the instance in the EC2 console. Once the instance is stopped, right-click on it and under the Image menu, select Create Image.

AWS will pop up a dialog indicating Create Image request received, with a link for viewing the pending image. Click on this link, and at this point you can name the AMI, as well as refer to it by its AMI ID.

Wait until the Status of the AMI is available before continuing!

Creating An 18.04.1 LTS Instance

Go back to the EC2 console and delete (terminate) the t2.micro instance we created, as it is no longer needed. Then, click Launch Instance and select My AMIs. You should see your new Ubuntu 18.04.1 LTS AMI. Select it and configure your instance (type, storage, security groups, etc.) and launch it!

Once your instance is available, ssh to it and see that you’ve just created an Ubuntu 18.04.1 Bionic Beaver server in AWS, and you have an AMI available to build as many as you like!


Not authorized to send Apple events to System Events

As others have written about, Apple appears to be making a hash out of the ability to automate tasks on macOS 10.14 (Mojave). I get it. In the age of hyper-connectivity there is a continuous assault on our computing environments by bad actors looking to extort some cash or otherwise ruin our day. Something needs to safeguard our systems against random scripts aiming to misbehave. Enter Mojave’s enhanced privacy protection controls and event sandboxing.

With earlier versions of Mojave the new event sandboxing mechanism was considerably flaky. In some versions (notably Beta 4), you’d intermittently be presented with Not authorized to send Apple events to System Events when attempting to execute AppleScript applications. As of Mojave Beta 8 (18A371a) I have found that the authorization functionality is at least consistent in prompting you for permission.

As a test, open a Terminal window and enter the following:

osascript -e 'tell application "Finder"' -e 'set _b to bounds of window of desktop' -e 'end tell'

You will get different results depending upon your current automation privacy settings. If you’ve never given permission to Terminal to send events to Finder you’ll see an authorization dialog like this:

If you’ve already given permission (as shown in the example Privacy panel below), the AppleScript tell will succeed and you’ll see something like -2560, 0, 1920, 1440 (the bounds coordinates).

But wait, there’s more! If you had previously given permission, and then revoked it by unchecking the permission in the Privacy panel, you’ll get execution error: Not authorized to send Apple events to Finder. (-1743).

Automation Implications

A lot of folks (myself included) write AppleScript applications that perform some set of tasks. That is sort of the whole point of automation. Again, I get it. If my machine is compromised with a rogue application or script, it could do some serious damage. But that will always be the case.

Now I can imagine how this will be resolved, and it’s going to include me forking over some money to get a verified publisher certificate. My certificate will be signed by Apple, and that signing certificate will be trusted by the OS a priori, and I’ll have to sign my scripts somehow, and so on. That’s the only way I can see this panning out, unless the plan is to literally break the automation capabilities with AppleScript. If you envision a different solution, please leave a comment!


Ansible Vault IDs

There are times when you’ll want not only separate vault files for development, staging, and production, but also separate passwords for those individual vaults. Enter vault ids, a feature of Ansible 2.4 and later.

I had a bit of trouble getting this configured correctly, so I wanted to share my setup in hopes you find it useful as well.

First, we’ll create three separate files that contain our vault passwords. These files should not be checked into revision control, but instead reside in your protected home directory or some other secure location. These files will contain plaintext passwords that will be used to encrypt and decrypt your Ansible vaults. Our files are as follows:

  • ~/.vault-pass.common
  • ~/.vault-pass.staging
  • ~/.vault-pass.production

As you can already guess we’re going to have three separate passwords for our vaults, one each for common credentials we want to encrypt (for example, an API key that is used to communicate with a third party service and is used for all environments), and our staging and production environments. We’ll keep it simple for the contents of each password file:

Obligatory Warning: Do not use these passwords in your environment; instead create strong passwords for each. To create a strong password you might try something like:
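One option (an assumption on my part; any strong generator will do) is openssl:

```shell
# 24 random bytes, base64-encoded, written to the common vault password file
openssl rand -base64 24 > ~/.vault-pass.common
```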

Once you’ve created your three vault password files, add the following to the [defaults] section of your ansible.cfg:

vault_identity_list = common@~/.vault-pass.common, staging@~/.vault-pass.staging, production@~/.vault-pass.production

It’s important to note that your ansible.cfg vault identity list will be consulted when you execute your Ansible playbooks. If the first password doesn’t open a given vault, Ansible moves on to the next one, until one of them works (or none does).

Encrypting Your Vaults

To encrypt your vault file you must now explicitly choose which id to encrypt with. For example, to encrypt a file named common_vault with our common vault id:

# ansible-vault encrypt --encrypt-vault-id common common_vault
Encryption successful

Run head -1 on the resulting file and notice that the vault id used to encrypt is in the header:
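The 1.2 vault format records the format version, cipher, and vault id label, so the header looks along the lines of:

```
# head -1 common_vault
$ANSIBLE_VAULT;1.2;AES256;common
```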

If you are in the same directory as your ansible.cfg file, go ahead and view it with ansible-vault view common_vault. Your first identity file (.vault-pass.common) will be consulted for the password. If, however, you are not in the same directory with your ansible.cfg file, you’ll be prompted for the vault password. To make this global, you’ll want to place the vault_identity_list in your ~/.ansible.cfg file.

Repeat the process for other vault files, making sure to specify the id you want to encrypt with. For a staging vault file:

# ansible-vault encrypt --encrypt-vault-id staging staging_vault
Encryption successful

For a production vault file:

# ansible-vault encrypt --encrypt-vault-id production production_vault
Encryption successful

Now you can view any of these files without providing your vault password, since ansible.cfg will locate the right password. The same goes for running ansible-playbook! Take care, though: when you decrypt a file and intend to re-encrypt it, you must again provide an id with the --encrypt-vault-id option!

A Bug, I Think

I haven’t filed this with the Ansible team, but I think this might be a bug. If you are in the same directory as your ansible.cfg (or the identity list is in .ansible.cfg), using --ask-vault to require a password on the command line will ignore the password if it can find it in your vault_identity_list password files. I find this to be counterintuitive: if you explicitly request a password prompt, the password entered should be the one that is attempted, and none other. For example:

# ansible-vault --ask-vault view common_vault
Vault password:

If I type anything other than the actual password for the common identity, I should get an error. Instead Ansible will happily find the password in ~/.vault-pass.common and view the file anyway.

Some Additional Thoughts

I wanted to take a moment to address a comment posted on this article, which can be summarized as:

What’s the point of encrypting services passwords in a vault which you check in to a repository, then pass around a shared vault-passwords file that decrypts them outside of the repository, rather than simply sharing a properties file that has the passwords to the services? It just seems like an extra layer of obfuscation rather than actually more secure.

First, to be clear, a “shared vault-passwords file” is not passed around – either the development operations engineer(s) or a secured build server is permitted to have the vault passwords. Second, with this technique, you have a minimal number of passwords that are stored in plain text. True, these passwords are capable of unlocking any vaults encrypted with them, but this is true of any master password. Finally, I disagree with the assertion that this is an “extra layer of obfuscation.” If that were the case, any encryption scheme with a master password (which is what an Ansible vault password file is) could be considered obfuscation. In the end, this technique is used to accomplish these goals:

  • permit separate sets of services passwords for different environments, i.e., staging and production
  • allow for submitting those services passwords in an encrypted format into a repository (the key here is that these are submitted to a known location alongside the rest of the configuration)
  • allow for decryption of those vaults in a secured environment such as a development operations user account or build server account


Ansible and AWS – Part 5

In Part 5 of our series, we’ll explore provisioning users and groups with Ansible on our AWS servers.

Anyone who has had to add users to an operating environment knows how complex things can get in a hurry. LDAP, Active Directory, and other technologies are designed to provide a centralized repository of users, groups, and access rules. Or, for Linux systems, you can skip that complexity and provision users directly on the server. If you have a lot of servers, Ansible can easily be used to add and delete users and provision access controls.

Now, if you come from an “enterprise” background you might protest and assert that LDAP is the only way to manage users across your servers. You’re certainly entitled to your opinion. But if you’re managing a few dozen or so machines, there’s nothing wrong (in my book) with straight up Linux user provisioning.

Regardless of the technology used, thought must still be given to how your users will be organized and what permissions they will be given. For example, you might have operations personnel that require sudo access on all servers. Some of your developers may be given the title architect, which affords them sudo on certain servers as well. Or, you might have a test group that is granted sudo access on test servers, but not on staging servers. And so on. The point is, neither LDAP, Active Directory, nor Ansible negates your responsibility to give thought to how users and groups are organized and to set a policy around it.

So, let’s put together a team that we’ll give different privileges on different systems. Our hypothetical team looks like this:
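For illustration, assume a lineup along these lines (only alice’s membership in architects is referenced later in the post; the other names and assignments are hypothetical):

  • alice – architects
  • bob – operations
  • carol – developers
  • dave – tptesters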


We’ve decided that access on a given server (or environment) will follow these rules:

  • production – Only architects and operations get access to the environment, and they get sudo access
  • staging – All users except tptesters get access to the environment; only architects, operations, and developers get sudo access
  • test – All users get access to the environment, and with the exception of tptesters, they get sudo access
  • operations – Only operations get access to their environment, and they get sudo access

Now, let’s look at how we can enforce these rules with Ansible!

users Role

We’re going to introduce Ansible roles in this post. This is by no means a complete tutorial on roles; for that, you might want to check out the Ansible documentation.

Note: git clone https://github.com/iachievedit/ansible-helloworld to get the example repository for this series, and switch to part4 (git checkout part4) to pick up where we left off.

Let’s get started by creating our roles directory in ansible-helloworld.

# git clone https://github.com/iachievedit/ansible-helloworld
# cd ansible-helloworld
# git checkout part4
# mkdir roles

Now we’re going to use the command ansible-galaxy to create a template (not to be confused with a Jinja2 template!) for our first role.

# cd roles
# ansible-galaxy init users
- users was created successfully

Drop into the users directory that was just created and you’ll see:

# cd users
# ls
README.md files     meta      templates vars
defaults  handlers  tasks     tests

We’ll be working in three directories: vars, files, and tasks. In roles/users/vars/main.yml, add the following:
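As a sketch (the group names come from the rules above; the user names other than alice are hypothetical), the vars file might contain:

```yaml
# roles/users/vars/main.yml
usergroups:
  - architects
  - operations
  - developers
  - tptesters

users:
  - name:  alice
    group: architects
  - name:  bob
    group: operations
  - name:  carol
    group: developers
  - name:  dave
    group: tptesters
```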

Recall that in previous tutorials our variable definitions were simple key-value pairs (like HOSTNAME: helloworld). In this post we’re going to take advantage of the fact that variables can be complex types that include lists and dictionaries.

Now, let’s create our users role tasks. We’ll start with creating our groups. In roles/users/tasks/main.yml:
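A sketch of the group-creation task, using Ansible’s group module with the loop keyword:

```yaml
# roles/users/tasks/main.yml
- name: Create user groups
  group:
    name:  "{{ item }}"
    state: present
  loop: "{{ usergroups }}"
```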

There’s another new Ansible keyword in use here, loop. loop will take the items in the list given by usergroups and iterate over them, with each item being “plugged in” to item. The Python equivalent might look like:

Loops are powerful constructs in roles and playbooks, so be sure to review the Ansible documentation to see all that can be accomplished with them. Also check out Chris Torgalson’s Untangling Ansible’s Loops, a great overview of Ansible loops and how to leverage them in your playbooks. His post also uses loops and various constructs to provision users, so definitely compare and contrast the different approaches!

Our next Ansible task will create the users and place them in the appropriate group.
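A sketch of the user-creation task, assuming the users variable from our vars file:

```yaml
- name: Create users
  user:
    name:  "{{ item.name }}"
    group: "{{ item.group }}"
    state: present
  loop: "{{ users }}"
```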

Here it’s important to note that users is being looped over (that is, every item in the list users), and that we’re using dot notation to access values inside item. For the first entry in the list, item would have:

item.name = alice
item.group = architects

Now, we could have chosen to allow for multiple groups for each user, in which case we might have defined something like:
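A sketch of a multi-group definition, where groups becomes a list (the memberships shown are illustrative):

```yaml
users:
  - name: alice
    groups:
      - architects
      - operations
```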

That looks pretty good so we’ll stick with that for the final product.

With our user definitions in hand, let’s create an appropriate task to create them in the correct environment. There are two more keywords to introduce: block and when. Let’s take a look:
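A sketch of what this might look like for production (the task name and tag are assumptions):

```yaml
- name: Provision production users
  block:
    - name: Create production users
      user:
        name:   "{{ item.name }}"
        groups: "{{ item.groups }}"
        state:  present
      loop: "{{ users }}"
  when:
    - "'production' in group_names"
    - "'architects' in item.groups or 'operations' in item.groups"
  tags:
    - production
```

When a when is attached to a block, it is inherited by each task inside; with a loop in play, it is evaluated per item, which is what lets us filter users here.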

The block keyword allows us to group a set of tasks together, and is frequently used in conjunction with some type of “scoping” keyword. In this example, we’re using the when keyword to execute the block when a certain condition is met. The tags keyword is another “scoping” keyword that is useful with a block.

Our when conditional indicates that the block will run only if the following conditions are met:

  • the host is in the production group (as defined in ansible_hosts)
  • the user is in either the architects or operations group

The syntax for specifying this logic looks a little contrived, but it’s quite simple and uses in to return true if a given value is in the specified list. 'production' in group_names is true if the group_names list contains the value production. Likewise for item.groups, but in this case we use the or conditional to add the user to the server if their groups value contains either architects or operations.

We’re not quite done! We want our architects and operations groups to have sudo access on the production servers, so we add the following to our block:
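Inside the block, something along these lines (the sudoers file path and NOPASSWD policy are assumptions):

```yaml
    - name: Grant sudo on production
      lineinfile:
        path:     /etc/sudoers.d/production
        line:     "{{ item.name }} ALL=(ALL) NOPASSWD:ALL"
        create:   yes
        mode:     '0440'
        validate: 'visudo -cf %s'
      loop: "{{ users }}"
```

The validate argument has visudo check the file before it is written, a good safety net when editing sudoers.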

Combining everything together, for production we have:
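A sketch of the complete production block (task names, tag, sudoers path, and NOPASSWD policy are assumptions):

```yaml
- name: Provision production users
  block:
    - name: Create production users
      user:
        name:   "{{ item.name }}"
        groups: "{{ item.groups }}"
        state:  present
      loop: "{{ users }}"

    - name: Grant sudo on production
      lineinfile:
        path:     /etc/sudoers.d/production
        line:     "{{ item.name }} ALL=(ALL) NOPASSWD:ALL"
        create:   yes
        mode:     '0440'
        validate: 'visudo -cf %s'
      loop: "{{ users }}"
  when:
    - "'production' in group_names"
    - "'architects' in item.groups or 'operations' in item.groups"
  tags:
    - production
```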

SSH Keys

Users on our servers will gain access through SSH keys. To add them:
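A sketch of the task:

```yaml
- name: Add SSH authorized keys
  authorized_key:
    user: "{{ item.name }}"
    key:  "{{ lookup('file', 'ssh_keys/' + item.name) }}"
  loop: "{{ users }}"
```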

Another new module! authorized_key will edit the ~/.ssh/authorized_keys file of the given user and add the key specified in the key parameter. The lookup function fetches the key contents from a file at the path ssh_keys/{{ item.name }}, where item.name expands to our user’s name.

Note that the lookup function searches the files directory in our role. That is, we have the following:
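For example, the role’s files directory might be laid out like this (user names other than alice are hypothetical):

```
roles/users/files/
└── ssh_keys/
    ├── alice
    ├── bob
    ├── carol
    └── dave
```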


We do not encrypt public keys (if someone complains you didn’t encrypt their public key, slap them, it’ll make you feel better).


It was years into my career before I realized there was more to life than ksh. No joke, I didn’t realize there was anything but! Today there are a variety of shells: bash, zsh, and fish, just to name a few. I’ve also learned that an individual’s shell of choice is often as sacrosanct as their choice of editor. So let’s add the ability to set the user’s shell of preference.

First, we need to specify the list of shells we’re going to support. In roles/users/vars/main.yml we’ll add:
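A sketch, assuming we support zsh, fish, and ksh:

```yaml
shells:
  - zsh
  - fish
  - ksh
```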

bash is already present on our Ubuntu system, so no need to explicitly add it.

Now, in our role task, we add the following before any users are created.
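A sketch using the apt module, assuming an Ubuntu target:

```yaml
- name: Install supported shells
  apt:
    name:  "{{ item }}"
    state: present
  loop: "{{ shells }}"
```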

This will ensure that all of the shell options we give users are properly installed on the server.

Back to roles/users/vars/main.yml, let’s set the shells of our users:
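A sketch of the updated users variable (the binary paths are typical Ubuntu locations, and names other than alice are hypothetical):

```yaml
users:
  - name:   alice
    groups: [architects]
    shell:  /usr/bin/zsh
  - name:   bob
    groups: [operations]
    shell:  /usr/bin/fish
  - name:   carol
    groups: [developers]
    shell:  /bin/ksh
  - name:   dave
    groups: [tptesters]
    shell:  /bin/bash
```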

A different shell for everyone!

Then, again in our role task, we update any addition of a user to include their shell:
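A sketch of the updated user task:

```yaml
- name: Create users
  user:
    name:   "{{ item.name }}"
    groups: "{{ item.groups }}"
    shell:  "{{ item.shell }}"
    state:  present
  loop: "{{ users }}"
```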

Quite simple and elegant.

Editor’s Prerogative: Since this is my blog, and you’re reading it, I’ll give you my personal editor preference. Emacs (or an editor with Emacs keybindings, like Sublime Text) for writing (prose or code), and Vim for editing configuration files. No joke.


Our production environment had a simple rule: only architects and operations are allowed to log in, and both get sudo access. Our staging environment is a bit more complicated: all users except tptesters get access to the environment, but only architects, operations, and developers get sudo access. Moreover, we want a single lineinfile task that uses with_items to add the appropriate lines. Unfortunately this isn’t as easy as it sounds, as a with_items in the lineinfile task interferes with our other loop tasks. So we create a separate task specifically for our sudoers updates, and in the end have:
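A sketch under these assumptions (the staging inventory group name, sudoers path, and NOPASSWD policy are all assumptions):

```yaml
- name: Provision staging users
  block:
    - name: Create staging users
      user:
        name:   "{{ item.name }}"
        groups: "{{ item.groups }}"
        shell:  "{{ item.shell }}"
        state:  present
      loop: "{{ users }}"

    - name: Add SSH authorized keys
      authorized_key:
        user: "{{ item.name }}"
        key:  "{{ lookup('file', 'ssh_keys/' + item.name) }}"
      loop: "{{ users }}"
  when:
    - "'staging' in group_names"
    - "'tptesters' not in item.groups"

- name: Grant sudo on staging
  lineinfile:
    path:     /etc/sudoers.d/staging
    line:     "%{{ item }} ALL=(ALL) NOPASSWD:ALL"
    create:   yes
    mode:     '0440'
    validate: 'visudo -cf %s'
  with_items:
    - architects
    - operations
    - developers
  when: "'staging' in group_names"
```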

Again, note that we first use a block to create our users and authorized_keys updates for the staging group, only doing so for architects, operations, developers, and testers. The second task adds the appropriate lines in the sudoers file.

Deleting Users (or Groups)

We have a way to add users; we’ll also need a way to remove them (my telecom background comes through when I say “deprovision”).

In roles/users/vars/main.yml we’ll add a new variable, deletedusers, which contains a list of user names that are to be removed from a server. While we’re at it, let’s add a section for groups that we want to delete as well.
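A sketch (the specific names are hypothetical):

```yaml
deletedusers:
  - mallory

deletedgroups:
  - contractors
```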

We can then update our user task:
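Sketches of the deletion tasks:

```yaml
- name: Delete users
  user:
    name:  "{{ item }}"
    state: absent
  loop: "{{ deletedusers }}"

- name: Delete groups
  group:
    name:  "{{ item }}"
    state: absent
  loop: "{{ deletedgroups }}"
```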

As with the users, we’ll loop over deletedusers and use the absent state to remove each user from the system. Finally, any groups that require deletion can likewise be removed with state: absent in the group task.

One last note on Ansible’s user module: we’ve only scratched the surface of its capabilities. There are a variety of parameters that can be set, such as expires, home, and, of particular interest, remove. Specifying remove: yes will delete the user’s home directory (and the files in it), along with their mail spool. If you truly want to nuke the user from orbit, specify remove: yes in your user task for deletion.


If you go and look at the part5 branch of the GitHub repository, you’ll see that we’ve heavily refactored the main.yml file to rely on include statements. Like good code, a single playbook or Ansible task file shouldn’t be too incredibly long. In the end, our roles/users/tasks/main.yml looks like this:
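A sketch of the refactored file (the included task file names are hypothetical):

```yaml
# roles/users/tasks/main.yml
- include_tasks: shells.yml
- include_tasks: production.yml
- include_tasks: staging.yml
- include_tasks: test.yml
- include_tasks: operations.yml
- include_tasks: deletions.yml
```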

Hopefully this post has given you some thoughts on how to leverage Ansible for adding and deleting users on your servers. In a future post we might look at how to do the same thing, but with using LDAP and Ansible together.

This Series

Each post in this series builds upon the last. If you missed something, here are the previous posts. We’ve also put everything in this GitHub repository, with branches that contain all of the changes from one part to the next.

To get the most out of walking through this tutorial on your own, download the repository and check out part4 to build your way through to part5.