iAchieved.it

Software Development Tips and Tricks


Creating Strong Passwords with DuckDuckGo

Over the past year I’ve been taking online privacy more seriously and have begun looking at alternative search engines such as DuckDuckGo and Startpage. In addition, when creating strong passwords I turn to tools such as KeePass and Strong Password Generator. Earlier today I duckducked strong password, and a smile formed on my face when I saw this:

Well now how cool is that? Very cool.

Even cooler, however, is using DuckDuckGo’s pwgen feature to create passwords of varying strengths and lengths. Duckduck pwgen strong 16 to get something like:

If you prefer a “lower strength” password, you can use the low parameter, for example, pwgen low 24. Or, just average strength with pwgen 32 (the strength parameter is omitted).

From looking at the difference in output between low, average, and high strength passwords, it appears that:

  • low strength passwords are created from the character set [a-zA-Z]
  • average strength passwords include numbers, increasing the set to [0-9a-zA-Z]
  • high strength passwords include symbols in the set [!@#$%^&*()] (note that the brackets are not in the set, this is regular expression bracket notation)

Instant Answers

This DuckDuckGo feature uses instant answers, an increasingly common feature of search engines. Each DuckDuckGo instant answer has an entry page, and the password generator is (aptly) named Password. You can even review the Perl source code on Github: Password.pm

Closing Thoughts

To be honest, I think this is a pretty cool feature. Now we could argue as to what constitutes a “strong” password, but we won’t. We could discuss entropy, passwords vs. passphrases, and so on. But we won’t. For a quick way to generate a pretty doggone good password, though, just duckduck one.


Ansible 2.7 Deprecation Warning – apt and squash_actions

Ansible 2.7 was released recently, and with it came a new deprecation warning for the apt module:

TASK [Install base packages] ****************************************** 
Thursday 18 October 2018  15:35:52 +0000 (0:00:01.648)       0:06:25.667 ****** 
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via 
squash_actions is deprecated. Instead of using a loop to supply multiple items 
and specifying name: {{ item }}, please use name: [u'htop', u'zsh', u's3cmd'] and remove 
the loop. This feature will be removed in version 2.11. Deprecation warnings 
can be disabled by setting deprecation_warnings=False in ansible.cfg.

Our apt task was:

- name:  Install base packages
  apt:
    name:  "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - htop
    - zsh
    - s3cmd

Very standard.

The new style with Ansible 2.7 should look like:

- name:  Install base packages
  apt:
    name:  "{{ packages }}"
    state: present
    update_cache:  yes
  vars:
    packages:
      - htop
      - zsh
      - s3cmd

The change is self-explanatory (and is alluded to in the deprecation warning): rather than looping over a list and applying the apt module to each item, provide the module with a list of items to process.

You can read up on the documentation for apt in Ansible 2.7 here.


Updating From Such a Repository Can’t Be Done Securely

I recently came across the (incredibly frustrating) error message Updating from such a repository can't be done securely while trying to run apt-get update on an Ubuntu 18.04 LTS installation. Everything was working fine on Ubuntu 16.04.5. It turns out that the newer version of apt (1.6.3) on Ubuntu 18.04.1 is stricter about signed repositories than the apt on Ubuntu 16.04.5 (1.2.27).

Here’s an example of the error while trying to communicate with the Wazuh repository:

Reading package lists... Done
E: Failed to fetch https://packages.wazuh.com/apt/dists/xenial/InRelease  403  Forbidden [IP: 13.35.78.27 443]
E: The repository 'https://packages.wazuh.com/apt xenial InRelease' is no longer signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

After searching around, we found that this issue had already been reported to the Wazuh project, but the suggested solution of adding [trusted=yes] did not work for a repository that had already been added in /etc/apt. After continued searching, we finally hit upon the following solution:

deb [allow-insecure=yes allow-downgrade-to-insecure=yes] https://packages.wazuh.com/apt xenial main

That is, rather than using [trusted=yes] one can use [allow-insecure=yes allow-downgrade-to-insecure=yes]. Running apt-get update afterwards shows that the InRelease section is ignored, and Release is picked up:

Ign:7 https://packages.wazuh.com/apt xenial InRelease
Hit:8 https://packages.wazuh.com/apt xenial Release

Note that this is obviously a temporary solution, and should only be applied to a misbehaving repository! If you’re so inclined, upvote the Wazuh GitHub issue, as a fix at the repository level would be nice.


GeoIP2 and NGINX

There are times when you want to configure your website to explicitly disallow access from certain countries, or to only allow access from a given set of countries. While not completely precise, using the MaxMind GeoIP databases to look up a web client’s country of origin and having the web server respond accordingly is a popular technique.

There are a number of NGINX tutorials on how to use the legacy GeoIP database and the ngx_http_geoip_module, and as it happens the default Ubuntu nginx package includes the ngx_http_geoip_module. Unfortunately the GeoIP databases will no longer be updated, and MaxMind has migrated to GeoIP2. Moreover, after January 2, 2019, the GeoIP databases will no longer be available.

This leaves us in a bind. Luckily, while the Ubuntu distribution of NGINX doesn’t come with GeoIP2 support, we can add it by building from source. Which is exactly what we’ll do! In this tutorial we’re going to build nginx from the ground up, modeling its configuration options after those that are used by the canonical nginx packages available from Ubuntu 16.04. You’ll want to go through this tutorial on a fresh installation of Ubuntu 16.04 or later; we’ll be using an EC2 instance created from the AWS Quick Start Ubuntu Server 16.04 LTS (HVM), SSD Volume Type AMI.

Getting Started

Since we’re going to be building binaries, we’ll need the build-essential package, a metapackage that installs applications such as make, gcc, etc.
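
If it isn’t already present, a quick sketch of installing it:

# sudo apt-get update
# sudo apt-get install -y build-essential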

Now, to install the prerequisite libraries we’ll need to compile NGINX:
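
The exact list depends on which modules you enable; a sketch covering the usual suspects for a stock build like this one (PCRE for rewrites, zlib for gzip, OpenSSL for TLS) is:

# sudo apt-get install -y libpcre3 libpcre3-dev zlib1g-dev libssl-dev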

Using the GeoIP2 database with NGINX requires the ngx_http_geoip2_module, which in turn requires the libmaxminddb development packages from MaxMind:
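
A sketch of installing them, assuming the ppa:maxmind/ppa archive (libmaxminddb is the library the module links against):

# sudo add-apt-repository -y ppa:maxmind/ppa
# sudo apt-get update
# sudo apt-get install -y libmaxminddb0 libmaxminddb-dev mmdb-bin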

Getting the Sources

Now let’s go and download NGINX. We’ll be using the latest dot-release of the 1.15 series, 1.15.3. I prefer to compile things in /usr/local/src, so:
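
A sketch of fetching and unpacking the source (nginx.org hosts the release tarballs):

# cd /usr/local/src
# sudo wget http://nginx.org/download/nginx-1.15.3.tar.gz
# sudo tar xzf nginx-1.15.3.tar.gz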

We also need the source for the GeoIP2 NGINX module:
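
The module is developed on GitHub; a sketch of cloning it alongside the NGINX source (assuming the leev/ngx_http_geoip2_module repository):

# cd /usr/local/src
# sudo git clone https://github.com/leev/ngx_http_geoip2_module.git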

Now, to configure and compile.
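
An abbreviated sketch of the configure step; the real invocation mirrors the much longer set of flags used by Ubuntu’s packaged nginx, but these are the paths and the dynamic-module flag that matter for this tutorial:

# cd /usr/local/src/nginx-1.15.3
# sudo ./configure \
    --prefix=/usr/share/nginx \
    --conf-path=/etc/nginx/nginx.conf \
    --pid-path=/run/nginx.pid \
    --modules-path=/usr/lib/nginx/modules \
    --http-client-body-temp-path=/var/lib/nginx/body \
    --with-http_ssl_module \
    --add-dynamic-module=/usr/local/src/ngx_http_geoip2_module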

You will want to make sure that the ngx_http_geoip2_module will be compiled, and should see nginx_geoip2_module was configured at the end of the configure output.

Now, run sudo make. NGINX, for all its power, is a compact and light application, and compiles in under a minute. If everything compiles properly, you can run sudo make install.

A few last things to complete our installation (a command sketch follows this list):

  • creating a symlink from /usr/sbin/nginx to /usr/share/nginx/sbin/nginx
  • creating a symlink from /usr/share/nginx/modules to /usr/lib/nginx/modules
  • creating the /var/lib/nginx/body directory
  • installing an NGINX systemd service file
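
A shell sketch of the first three steps, using the paths listed above (the service file itself follows below):

# sudo ln -s /usr/share/nginx/sbin/nginx /usr/sbin/nginx
# sudo ln -s /usr/lib/nginx/modules /usr/share/nginx/modules
# sudo mkdir -p /var/lib/nginx/body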

For the Systemd service file, place the following in /lib/systemd/system/nginx.service:
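
A sketch modeled on the unit file that ships with Ubuntu’s packaged nginx (an assumption rather than a copy of the original listing):

[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target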

and reload systemd with sudo systemctl daemon-reload. You should now be able to check the status of nginx:
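
For example (the unit is loaded but not yet started at this point):

# sudo systemctl status nginx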

We’ll be starting it momentarily!

Testing

On to testing! We’re going to use HTTP (rather than HTTPS) in this example.

While we’ve installed the libraries that interact with the GeoIP2 database, we haven’t yet installed the database itself. This can be accomplished by installing the geoipupdate package from the MaxMind PPA:

# sudo apt-get install -y geoipupdate

Now run sudo geoipupdate -v:

It’s a good idea to periodically update the GeoIP2 databases with geoipupdate. This is typically accomplished with a cron job like:

# crontab -l
30 0 * * 6 /usr/bin/geoipupdate -v | /usr/bin/logger

Note: Use of logger here is optional; we just like to see the output of the geoipupdate invocation in /var/log/syslog.

Nginx Configuration

Now that nginx is built and installed and we have a GeoIP2 database in /usr/share/GeoIP, we can finally get to the task of restricting access to our website. Here is our basic nginx.conf:

load_module modules/ngx_http_geoip2_module.so;

worker_processes auto;

events {
  worker_connections  1024;
}

http {
  sendfile      on;
  include       mime.types;
  default_type  application/octet-stream;
  keepalive_timeout  65;

  geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
    $geoip2_data_country_code country iso_code;
  }

  map $geoip2_data_country_code $allowed_country {
    default no;
    US yes;
  }

  server {
    listen       80;
    server_name  localhost;

    if ($allowed_country = no) {
      return 403;
    }

    location / {
        root   html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
  }
}

Let’s walk through the relevant directives one at a time.

load_module modules/ngx_http_geoip2_module.so;

Since we built nginx with ngx_http_geoip2_module as a dynamic module, we need to load it explicitly with the load_module directive.

Looking up the ISO country code from the GeoIP2 database utilizes our geoip2 module:

geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
  $geoip2_data_country_code country iso_code;
}

The country code of the client IP address will be placed in the NGINX variable $geoip2_data_country_code. From this value we determine what to set $allowed_country to:

map $geoip2_data_country_code $allowed_country {
  default no;
  US yes;
}

map in the NGINX configuration file is a bit like a switch statement (I’ve chosen the Swift syntax of switch):

switch geoip2_data_country_code {
  case "US":
    allowed_country = "yes"
  default:
    allowed_country = "no"
}

If we wanted to allow IPs from the United States, Mexico, and Canada the map directive would look like:

map $geoip2_data_country_code $allowed_country {
  default no;
  US yes;
  MX yes;
  CA yes;
}

geoip2 and map by themselves do not restrict access to the site. This is accomplished through the if statement which is located in the server block:

if ($allowed_country = no) {
  return 403;
}

This is pretty self-explanatory. If $allowed_country is no then return a 403 Forbidden.

If you haven’t done so already, start nginx with systemctl start nginx and give the configuration a go. It’s quite easy to test your nginx configuration by disallowing your country, restarting nginx (systemctl restart nginx), and trying to access your site.

Credits and Disclaimers

NGINX and associated logos belong to NGINX Inc. MaxMind, GeoIP, minFraud, and related trademarks belong to MaxMind, Inc.

The following resources were invaluable in developing this tutorial:


Ubuntu 18.04 on AWS

Ubuntu 18.04 Bionic Beaver was released several months ago now, and is currently (as of this writing) not available as a Quick Start AMI on AWS. But that’s okay; it is easy to create your own AMI based on 18.04. We’ll show you how!

Some assumptions, though. We’re going to assume you know your way around the AWS EC2 console, and have launched an instance or two in your time. If you haven’t, AWS itself has a Getting Started guide just for you.

Starting with 16.04

First, create an Ubuntu Server 16.04 EC2 instance in AWS with ami-0552e3455b9bc8d50, which is found under the Quick Start menu. A t2.micro instance is fine as we’re only going to be using it to build an 18.04 AMI.

Once the instance is available, ssh to it.

Notice that the OS is Ubuntu 16.04.5. We’re now going to upgrade it to 18.04.1 with do-release-upgrade. First, run sudo apt-get update, followed by sudo do-release-upgrade.

The upgrade script will detect that you are connected via an SSH session, and warn that performing an upgrade in such a manner is “risky.” We’ll take the risk and type y at the prompt.

This session appears to be running under ssh. It is not recommended
to perform a upgrade over ssh currently because in case of failure it
is harder to recover.

If you continue, an additional ssh daemon will be started at port
'1022'.
Do you want to continue?

Continue [yN]

You’ll get another warning about firewalls and iptables. Continue here as well!

To continue please press [ENTER]

Terrific, another warning! We’re about to do some serious downloading, and hopefully it won’t take 6 hours.

You have to download a total of 173 M. This download will take about
21 minutes with a 1Mbit DSL connection and about 6 hours with a 56k
modem.

Fetching and installing the upgrade can take several hours. Once the
download has finished, the process cannot be canceled.

 Continue [yN]  Details [d]

Of course, press y to continue, and confirm that we also want to remove obsolete packages.

Remove obsolete packages?


28 packages are going to be removed.

 Continue [yN]  Details [d]

At this point the installation and upgrade of packages should actually begin. There is a good chance that you’ll be interrupted by a couple of screens asking which versions of the GRUB and ssh configuration files you want to use. I typically keep the currently installed version of a configuration file, as it is likely I’ve made edits (through Ansible of course) to a given file. Rather than do diffs or merges at this point, I’ll wait until the upgrade is complete to review the files.

Once the upgrade is completed you’ll be prompted to reboot.

System upgrade is complete.

Restart required

To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.

Continue [yN]

After the reboot is completed, log in (via ssh) and you should be greeted with

Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1020-aws x86_64)

Terrific! We have a pristine Ubuntu 18.04.1 LTS instance on Linux 4.15. We’re going to use this instance to make a template (AMI) from which to create more.

To start this process, stop the instance in the EC2 console. Once the instance is stopped, right-click on it and under the Image menu, select Create Image.

AWS will pop up a dialog indicating Create Image request received, with a link for viewing the pending image. Click on this link, and at this point you can name the AMI, as well as refer to it by its AMI ID.

Wait until the Status of the AMI is available before continuing!

Creating An 18.04.1 LTS Instance

Go back to the EC2 console and delete (terminate) the t2.micro instance we created, as it is no longer needed. Then, click Launch Instance and select My AMIs. You should see your new Ubuntu 18.04.1 LTS AMI. Select it and configure your instance (type, storage, security groups, etc.) and launch it!

Once your instance is available, ssh to it and see that you’ve just created an Ubuntu 18.04.1 Bionic Beaver server in AWS, and you have an AMI available to build as many as you like!


Not authorized to send Apple events to System Events

As others have written about, Apple appears to be making a hash out of the ability to automate tasks on macOS 10.14 (Mojave). I get it. In the age of hyper-connectivity there is a continuous assault on our computing environments by bad actors looking to extort some cash or otherwise ruin our day. Something needs to safeguard our systems against random scripts aiming to misbehave. Enter Mojave’s enhanced privacy protection controls and event sandboxing.

With earlier versions of Mojave the new event sandboxing mechanism was considerably flaky. In some versions (notably Beta 4), you’d intermittently be presented with Not authorized to send Apple events to System Events when attempting to execute AppleScript applications. As of Mojave Beta 8 (18A371a) I have found that the authorization functionality is at least consistent in prompting you for permission.

As a test, open a Terminal window and enter the following:

osascript -e 'tell application "Finder"' -e 'set _b to bounds of window of desktop' -e 'end tell'

You will get different results depending upon your current automation privacy settings. If you’ve never given permission to Terminal to send events to Finder you’ll see an authorization dialog like this:

If you’ve already given permission (as shown in the example Privacy panel below), the AppleScript tell will succeed and you’ll see something like -2560, 0, 1920, 1440 (the bounds coordinates).

But wait, there’s more! If you had previously given permission, and then revoked it by unchecking the permission in the Privacy panel, you’ll get execution error: Not authorized to send Apple events to Finder. (-1743).

Automation Implications

A lot of folks (myself included) write AppleScript applications that perform some set of tasks. That is sort of the whole point of automation. Again, I get it. If my machine is compromised with a rogue application or script, it could do some serious damage. But that will always be the case.

Now I can imagine how this will be resolved, and it’s going to include me forking over some money to get a verified publisher certificate. My certificate will be signed by Apple, and that signing certificate will be trusted by the OS a priori, and I’ll have to sign my scripts somehow, and so on. That’s the only way I can see this panning out, unless the plan is to literally break the automation capabilities with AppleScript. If you envision a different solution, please leave a comment!


Ansible Vault IDs

There are times when you’ll want not only separate vault files for development, staging, and production, but also separate passwords for those individual vaults. Enter vault ids, a feature of Ansible 2.4 (and later).

I had a bit of trouble getting this configured correctly, so I wanted to share my setup in hopes you find it useful as well.

First, we’ll create three separate files that contain our vault passwords. These files should not be checked into revision control, but instead reside in your protected home directory or some other secure location. These files will contain plaintext passwords that will be used to encrypt and decrypt your Ansible vaults. Our files are as follows:

  • ~/.vault-pass.common
  • ~/.vault-pass.staging
  • ~/.vault-pass.production

As you can already guess, we’re going to have three separate passwords for our vaults: one for common credentials we want to encrypt (for example, an API key that is used to communicate with a third-party service and is shared across all environments), and one each for our staging and production environments. We’ll keep it simple for the contents of each password file:
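
For illustration only, each file might contain nothing more than a single throwaway value (these are hypothetical; see the warning that follows):

# echo 'commonpassword'     > ~/.vault-pass.common
# echo 'stagingpassword'    > ~/.vault-pass.staging
# echo 'productionpassword' > ~/.vault-pass.production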

Obligatory Warning: Do not use these passwords in your environment but instead create strong passwords for each. To create a strong password instead you might try something like:
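
Any decent generator will do; one hedged example using OpenSSL (not necessarily what was originally suggested):

# openssl rand -base64 32 > ~/.vault-pass.common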

Once you’ve created your three vault password files, add the following to the [defaults] section of your ansible.cfg:

vault_identity_list = common@~/.vault-pass.common, staging@~/.vault-pass.staging, production@~/.vault-pass.production

It’s important to note here that your ansible.cfg vault identity list will be consulted when you execute your Ansible playbooks. If the first password won’t open the vault, Ansible will move on to the next one, until one of them works (or none does).

Encrypting Your Vaults

To encrypt a vault file you must now explicitly choose which id to encrypt with. For example, given a common_vault file containing a variable such as:

vault_common:

we will encrypt it with our common vault id, like this:

# ansible-vault encrypt --encrypt-vault-id common common_vault
Encryption successful

Run head -1 on the resulting file and notice that the vault id used to encrypt is in the header:
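
With a vault id, the encrypted file uses the 1.2 header format and carries the id label, so you should see something along these lines:

# head -1 common_vault
$ANSIBLE_VAULT;1.2;AES256;common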

If you are in the same directory as your ansible.cfg file, go ahead and view it with ansible-vault view common_vault. Your first identity file (.vault-pass.common) will be consulted for the password. If, however, you are not in the same directory with your ansible.cfg file, you’ll be prompted for the vault password. To make this global, you’ll want to place the vault_identity_list in your ~/.ansible.cfg file.

Repeat the process for other vault files, making sure to specify the id you want to encrypt with:

For a staging vault file:

For a production vault file:
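
A sketch of both commands, assuming vault files named staging_vault and production_vault:

# ansible-vault encrypt --encrypt-vault-id staging staging_vault
Encryption successful
# ansible-vault encrypt --encrypt-vault-id production production_vault
Encryption successful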

Now you can view any of these files without providing your vault password, since ansible.cfg will locate the right password. The same goes for running ansible-playbook! Take care, though: if you decrypt a file and intend to re-encrypt it, you must again provide an id with the --encrypt-vault-id option!

A Bug, I Think

I haven’t filed this with the Ansible team, but I think this might be a bug. If you are in the same directory as your ansible.cfg (or the identity list is in .ansible.cfg), using --ask-vault to require a password on the command line will ignore the password if it can find it in your vault_identity_list password files. I find this to be counterintuitive: if you explicitly request a password prompt, the password entered should be the one that is attempted, and none other. For example:

# ansible-vault --ask-vault view common_vault
Vault password:

If I type anything other than the actual password for the common identity, I should get an error. Instead Ansible will happily find the password in ~/.vault-pass.common and view the file anyway.


Ansible and AWS – Part 5

In Part 5 of our series, we’ll explore provisioning users and groups with Ansible on our AWS servers.

Anyone who has had to add users to an operating environment knows how complex things can get in a hurry. LDAP, Active Directory, and other technologies are designed to provide a centralized repository of users, groups, and access rules. Or, for Linux systems, you can skip that complexity and provision users directly on the server. If you have a lot of servers, Ansible can easily be used to add and delete users and provision access controls.

Now, if you come from an “enterprise” background you might protest and assert that LDAP is the only way to manage users across your servers. You’re certainly entitled to your opinion. But if you’re managing a few dozen or so machines, there’s nothing wrong (in my book) with straight up Linux user provisioning.

Regardless of the technology used, thought must still be given to how your users will be organized and what permissions they will be given. For example, you might have operations personnel that require sudo access on all servers. Some of your developers may be given the title architect, which affords them the luxury of sudo as well on certain servers. Or, you might have a test group that is granted sudo access on test servers, but not on staging servers. And so on. The point is, neither LDAP, Active Directory, nor Ansible negates your responsibility to give thought to how users and groups are organized and to set a policy around it.

So, let’s put together a team that we’ll give different privileges on different systems. Our hypothetical team looks like this:

Name   Group
alice  architects
bob    developers
carol  operations
dave   testers
eve    tptesters

We’ve decided that access on a given server (or environment) will follow these rules:

Environment   Access
production    Only architects and operations get access to the environment, and they get sudo access
staging       All users except tptesters get access to the environment; only architects, operations, and developers get sudo access
test          All users get access to the environment, and with the exception of tptesters, they get sudo access
operations    Only operations get access to their environment, and they get sudo access

Now, let’s look at how we can enforce these rules with Ansible!

users Role

We’re going to introduce Ansible roles in this post. This is by no means a complete tutorial on roles; for that you might want to check out the Ansible documentation.

Note: git clone https://github.com/iachievedit/ansible-helloworld to get the example repository for this series, and switch to part4 (git checkout part4) to pick up where we left off.

Let’s get started by creating our roles directory in ansible-helloworld.

# git clone https://github.com/iachievedit/ansible-helloworld
# cd ansible-helloworld
# git checkout part4
# mkdir roles

Now we’re going to use the command ansible-galaxy to create a template (not to be confused with a Jinja2 template!) for our first role.

# cd roles
# ansible-galaxy init users
- users was created successfully

Drop in to the users directory that was just created and you’ll see:

# cd users
# ls
README.md files     meta      templates vars
defaults  handlers  tasks     tests

We’ll be working in three directories: vars, files, and tasks. In roles/users/vars/main.yml add the following:
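
A sketch of what those definitions might look like, with group memberships matching the team table above (the structure itself is an assumption):

usergroups:
  - architects
  - developers
  - operations
  - testers
  - tptesters

users:
  - name:  alice
    group: architects
  - name:  bob
    group: developers
  - name:  carol
    group: operations
  - name:  dave
    group: testers
  - name:  eve
    group: tptesters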

Recall in previous tutorials our variables definitions were simple tag-value pairs (like HOSTNAME: helloworld). In this post we’re going to take advantage of the fact that variables can be complex types that include lists and dictionaries.

Now, let’s create our users role tasks. We’ll start with creating our groups. In roles/users/tasks/main.yml:
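
A sketch of the group-creation task, using the loop keyword discussed next:

- name: Create user groups
  group:
    name:  "{{ item }}"
    state: present
  loop: "{{ usergroups }}"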

There’s another new Ansible keyword in use here, loop. loop will take the items in the list given by usergroups and iterate over them, with each item being “plugged in” to item. The Python equivalent might look like:
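
Roughly (a sketch, with a hypothetical helper standing in for the group module):

for item in usergroups:
    create_group(item)  # create_group is a stand-in for what the group module does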

Loops are powerful constructs in roles and playbooks, so make sure to review the Ansible documentation to see what all can be accomplished with them. Also check out Chris Torgalson’s Untangling Ansible’s Loops, a great overview of Ansible loops and how to leverage them in your playbooks. It also turns out his post uses loops and various constructs to provision users, so definitely compare and contrast the different approaches!

Our next Ansible task will create the users and place them in the appropriate group.
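
A sketch of that task, using the user module and the same loop construct:

- name: Create users
  user:
    name:  "{{ item.name }}"
    group: "{{ item.group }}"
    state: present
  loop: "{{ users }}"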

Here it’s important to note that users is being looped over (that is, every item in the list users), and that we’re using a dot-notation to access values inside item. For the first entry in the list, item would have:

item.name = alice
item.group = architects

Now, we could have chosen to allow for multiple groups for each user, in which case we might have defined something like:
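
That is, something along these lines, with groups as a list rather than a single group (a sketch):

users:
  - name:   alice
    groups: [architects]
  - name:   bob
    groups: [developers]
  - name:   carol
    groups: [operations]
  - name:   dave
    groups: [testers]
  - name:   eve
    groups: [tptesters]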

That looks pretty good so we’ll stick with that for the final product.

With our user definitions in hand, let’s create an appropriate task to create them in the correct environment. There are two more keywords to introduce: block and when. Let’s take a look:
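
A sketch of what the production block might look like; the exact layout in the original repository may differ, but the when conditions described below are the important part:

- name: Provision production users
  block:
    - name: Create production users
      user:
        name:   "{{ item.name }}"
        groups: "{{ item.groups }}"
        state:  present
      loop: "{{ users }}"
      when: "'architects' in item.groups or 'operations' in item.groups"
  when: "'production' in group_names"
  tags:
    - users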

The block keyword allows us to group a set of tasks together, and is frequently used in conjunction with some type of “scoping” keyword. In this example, we’re using the when keyword to execute the block when a certain condition is met. The tags keyword is another “scoping” keyword that is useful with a block.

Our when conditional indicates that the block will run only if the following conditions are met:

  • the host is in the production group (as defined in ansible_hosts)
  • the user is in either the architects or operations group

The syntax for specifying this logic looks a little contrived, but it’s quite simple and uses in to return true if a given value is in the specified list. 'production' in group_names is true if the group_names list contains the value production. Likewise for item.groups, but in this case we use the or conditional to add the user to the server if their groups value contains either architects or operations.

We’re not quite done! We want our architects and operations groups to have sudo access on the production servers, so we add the following to our block:
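
A hedged sketch using lineinfile to drop an entry per group into a sudoers include file (the path and NOPASSWD choice are assumptions):

    - name: Grant sudo to architects and operations
      lineinfile:
        path:     /etc/sudoers.d/90-ansible-users
        line:     "%{{ item }} ALL=(ALL) NOPASSWD:ALL"
        create:   yes
        mode:     0440
        validate: 'visudo -cf %s'
      loop:
        - architects
        - operations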

Combining everything together, for production we have:

SSH Keys

Users on our servers will gain access through SSH keys. To add them:
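
A sketch of that task (the files/ layout it relies on is shown a bit further down):

- name: Install SSH keys
  authorized_key:
    user: "{{ item.name }}"
    key:  "{{ lookup('file', 'ssh_keys/' + item.name) }}"
  loop: "{{ users }}"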

Another new module! authorized_key will edit the ~/.ssh/authorized_keys file of the given user and add the key specified in the key parameter. The lookup function fetches the key contents from a file whose path is built from ssh_keys/ and item.name, which expands to our user’s name.

Note that the lookup function searches the files directory in our role. That is, we have the following:

roles/users/files/ssh_keys/alice
                           bob
                           carol
                           dave
                           eve

We do not encrypt public keys (if someone complains you didn’t encrypt their public key, slap them, it’ll make you feel better).

Shells

It was years into my career before I realized there was more to life than ksh. No joke, I didn’t realize there was anything but! Today there are a variety of shells, bash, zsh and fish just to name a few. I’ve also learned that an individual’s shell of choice is often as sacrosanct as their choice of editor. So let’s add the ability to set the user’s shell of preference.

First, we need to specify the list of shells we’re going to support. In roles/users/vars/main.yml we’ll add:
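
As a sketch, it might simply enumerate the shells to install (the membership of this list is an assumption; bash is excluded for the reason below):

shells:
  - zsh
  - fish
  - ksh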

bash is already present on our Ubuntu system, so no need to explicitly add it.

Now, in our role task, we add the following before any users are created.
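
A sketch of that task; with Ansible 2.7 the apt module takes the list directly:

- name: Install shells
  apt:
    name:  "{{ shells }}"
    state: present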

This will ensure all of the shell options we’ve given users are properly installed on the server.

Back to roles/users/vars/main.yml, let’s set the shells of our users:
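
For example (the shell assignments and paths here are invented for illustration):

users:
  - name:   alice
    groups: [architects]
    shell:  /bin/bash
  - name:   bob
    groups: [developers]
    shell:  /usr/bin/zsh
  - name:   carol
    groups: [operations]
    shell:  /usr/bin/fish
  - name:   dave
    groups: [testers]
    shell:  /bin/ksh
  - name:   eve
    groups: [tptesters]
    shell:  /bin/sh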

A different shell for everyone!

Then, again in our role task, we update any addition of a user to include their shell:
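
A sketch of the updated task, now passing shell through to the user module:

- name: Create users
  user:
    name:   "{{ item.name }}"
    groups: "{{ item.groups }}"
    shell:  "{{ item.shell }}"
    state:  present
  loop: "{{ users }}"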

Quite simple and elegant.

Editor’s Prerogative: Since this is my blog, and you’re reading it, I’ll give you my personal editor preference. Emacs (or an editor with Emacs keybindings, like Sublime Text) for writing (prose or code), and Vim for editing configuration files. No joke.

Staging

Our production environment had a simple rule: only architects and operations are allowed to log in, and both get sudo access. Our staging environment is a bit more complicated: all users except tptesters get access to the environment, but only architects, operations, and developers get sudo access. Moreover, we want a single lineinfile task that uses with_items to add the appropriate lines. Unfortunately this isn’t as easy as it sounds, as having with_items in the lineinfile task interferes with our loop tasks. So, we create a separate task specifically for our sudoers updates, and in the end have:
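
A sketch of that arrangement; the filtering expressions are assumptions about how the rules above might be expressed:

- name: Provision staging users
  block:
    - name: Create staging users
      user:
        name:   "{{ item.name }}"
        groups: "{{ item.groups }}"
        shell:  "{{ item.shell }}"
        state:  present
      loop: "{{ users }}"
      when: "'tptesters' not in item.groups"
    - name: Install SSH keys
      authorized_key:
        user: "{{ item.name }}"
        key:  "{{ lookup('file', 'ssh_keys/' + item.name) }}"
      loop: "{{ users }}"
      when: "'tptesters' not in item.groups"
  when: "'staging' in group_names"

- name: Grant sudo on staging
  lineinfile:
    path:     /etc/sudoers.d/90-ansible-users
    line:     "%{{ item }} ALL=(ALL) NOPASSWD:ALL"
    create:   yes
    mode:     0440
    validate: 'visudo -cf %s'
  with_items:
    - architects
    - operations
    - developers
  when: "'staging' in group_names"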

Again, note that we first use a block to create our users and apply the authorized_key updates for the staging group, only doing so for architects, operations, developers, and testers. The second task adds the appropriate lines to the sudoers file.

Deleting Users (or Groups)

We have a way to add users; we’ll also need a way to remove them (my telecom background comes through when I say “deprovision”).

In roles/users/vars/main.yml we’ll add a new variable deletedusers which contains a list of user names that are to be removed from a server. While we’re at it, let’s add a section for groups that we want to delete as well.
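
A sketch with placeholder names (the real values would be whichever accounts and groups you’re retiring):

deletedusers:
  - mallory

deletedgroups:
  - contractors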

We can then update our user task:
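
A sketch of the removal tasks described next:

- name: Remove deleted users
  user:
    name:  "{{ item }}"
    state: absent
  loop: "{{ deletedusers }}"

- name: Remove deleted groups
  group:
    name:  "{{ item }}"
    state: absent
  loop: "{{ deletedgroups }}"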

As with the users, we’ll loop over deletedusers and use the absent state to remove each user from the system. Finally, any groups that require deletion can be removed as well with state: absent on the group task.

One last note on Ansible’s user task: we’ve only scratched the surface of its capabilities. There are a variety of parameters that can be set such as expires, home, and of particular interest, remove. Specifying remove: yes will delete the user’s home directory (and the files in it), along with their mail spool. If you truly want to be sure and nuke the user from orbit, specify remove: yes in your user task for deletion.

Recap

If you go and look at the part5 branch of the GitHub repository, you’ll see that we’ve heavily refactored the main.yml file to rely on include statements. Like good code, a single playbook or Ansible task file shouldn’t be too incredibly long. In the end, our roles/users/tasks/main.yml looks like this:
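
Structurally it reduces to a handful of include statements, something like this (the task file names here are assumptions):

- include_tasks: shells.yml
- include_tasks: groups.yml
- include_tasks: production.yml
- include_tasks: staging.yml
- include_tasks: test.yml
- include_tasks: operations.yml
- include_tasks: deletions.yml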

Hopefully this post has given you some thoughts on how to leverage Ansible for adding and deleting users on your servers. In a future post we might look at how to do the same thing, but with using LDAP and Ansible together.

This Series

Each post in this series is building upon the last. If you missed something, here are the previous posts. We’ve also put everything on this Github repository with branches that contain all of the changes from one part to the next.

To get the most out of walking through this tutorial on your own, download the repository and check out part4 to build your way through to part5.


Ansible and AWS – Part 4

So far in our series we’ve covered some fundamental Ansible basics. So fundamental, in fact, that we really haven’t shared anything that hasn’t been written or covered before. In this post I hope to change that with an example of creating an AWS RDS database (MySQL-powered) solely within an Ansible playbook.

If you’re new to this series, we’re building up an Ansible playbook one step at a time, starting with Part 1. Check out the Github repository that accompanies this series to come up to speed. The final result of this post will be on the part4 branch.

Prerequisites

First, some prerequisites, you’ll need to:

  • generate a MySQL password
  • pick a MySQL administrator username
  • create an IAM user for RDS access

I love this page for one-liner password creation. Here’s one that works nicely on macOS:

date +%s | shasum -a 256 | base64 | head -c 32 ; echo

Whatever you generate will go in your vault file, and for this post we’ll create a new vault file for our staging group.

IAM User

We’re going to use an AWS IAM user with programmatic access to create the RDS database for us. I’m just going to name mine RDSAdministrator and directly attach the policy AmazonRDSFullAccess. Capture your access key and secret access key for use in the vault.

Reorganizing Our Variables

I mentioned in Part 2 that like code, playbooks will invariably be refactored. We’re going to refactor some bits now!

There’s a special group named all (we’ve used it before), and we’re going to use it with our RDS IAM credentials. This is worth paying particularly close attention to.

In group_vars we’ll create two directories, all and staging. staging.yml that we had before will be renamed to staging/vars.yml. all will have a special file named all.yml and both all and staging directories will contain a vault file, so in the end we have something like this:

group_vars/
          |
          +-all/
          |    |
          |    |-all.yml
          |    +-vault
          |
          +-staging/
                   |
                   |-vars.yml
                   +-vault

This is important. The use of all and all.yml is another bit of magic. Don’t rename all.yml to vars.yml; it will not work!

Let’s recap what variables we’re placing in each file and why:

  • AWS_RDS_ACCESS_KEY (group_vars/all/all.yml): IAM access credentials to RDS will apply to all groups (staging, production, etc.)
  • AWS_RDS_SECRET_KEY (group_vars/all/all.yml): IAM access credentials to RDS will apply to all groups (staging, production, etc.)
  • AWS_RDS_SECURITY_GROUP (group_vars/all/all.yml): our security group allowing access to created MySQL databases will apply to all Ansible groups
  • OPERATING_ENVIRONMENT (group_vars/staging/vars.yml): the OPERATING_ENVIRONMENT is set to staging for all servers in that group
  • MYSQL_ADMIN_USERNAME (group_vars/staging/vars.yml): staging servers will utilize the same MySQL credentials
  • MYSQL_ADMIN_PASSWORD (group_vars/staging/vars.yml): staging servers will utilize the same MySQL credentials
  • HOSTNAME (host_vars//vars.yml): each host has its own hostname, and thus this goes into the host_vars
  • AWS_S3_ACCESS_KEY (host_vars//vars.yml): IAM access credentials to read S3 buckets is currently limited to a single host
  • AWS_S3_SECRET_KEY (host_vars//vars.yml): IAM access credentials to read S3 buckets is currently limited to a single host

This is our current organization. Over time we may decide to limit the RDS IAM credentials to a specific host, or move the S3 IAM credentials to the staging group.

RDS

Amazon’s Relational Database Service is a powerful tool for instantiating a hosted relational database. With several different types of database engines to choose from, I expect increased usage of RDS, especially by smaller shops that would rather spend their budget on applications than on managing infrastructure items such as databases.

If you’ve never used RDS before, I highly suggest you go through the AWS Management Console and create one by hand before using an Ansible playbook to create one.

MySQL Access Security Group

We’re going to cheat here a bit, only because I haven’t had the opportunity to work with EC2 security groups with Ansible. But to continue on we’re going to need a security group that we’ll apply to our database instance to allow traffic on port 3306 from our EC2 instances.

I’ve created a group called mysql-sg and set the Inbound rule to accept traffic on port 3306 from any IP in the 172.31.0.0/16 subnet (which covers my account’s default VPC address range).

mysql_security_group

Note: If you have abandoned the default VPC in favor of a highly organized set of VPCs for staging, production, etc. environments, you’ll want to adjust this.

Pip and Boto

The Ansible AWS tasks rely on the boto package for interfacing with the AWS APIs (which are quite extensive). We will want to utilize the latest boto package available and install it through pip rather than apt-get install. The following tasks accomplish this for us:
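
A sketch of those tasks: python-pip via apt, then the boto package via Ansible’s pip module:

- name: Install pip
  apt:
    name:  python-pip
    state: present

- name: Install boto
  pip:
    name:  boto
    state: latest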

rds Module

Finally, we’ve come to the all-important RDS module! Our previous Ansible modules have been straightforward with just a couple of parameters (think apt, hostname, etc.). The rds module requires a bit more configuration.

Here’s the task and then we’ll discuss each of the parameters.
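
A sketch of the task; the instance name and region match those used elsewhere in this post, and the variables come from the vault and group_vars files described above:

- name: Create staging RDS instance
  rds:
    command:             create
    aws_access_key:      "{{ AWS_RDS_ACCESS_KEY }}"
    aws_secret_key:      "{{ AWS_RDS_SECRET_KEY }}"
    region:              us-east-2
    instance_name:       staging-mysql
    db_engine:           MySQL
    instance_type:       db.t2.micro
    size:                10
    username:            "{{ MYSQL_ADMIN_USERNAME }}"
    password:            "{{ MYSQL_ADMIN_PASSWORD }}"
    publicly_accessible: no
    vpc_security_groups: "{{ AWS_RDS_SECURITY_GROUP }}"
    wait:                yes
    wait_timeout:        900
  register: rds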

This is the minimal set of parameters I’ve found to be required to get a functioning RDS database up and running.

The aws_access_key and aws_secret_key parameters are self-explanatory, and we provide the RDS IAM credentials from our encrypted vault.

The command parameter used here is create, i.e., we are creating a new RDS database. db_engine is set to MySQL since we’re going to use MySQL as our RDS database engine. Let’s take a look at the rest:

  • region – like many AWS services, RDS databases are located in specific regions. us-east-2 is the region we’re using in this series. For a complete list of regions available for RDS, see Amazon’s documentation.
  • instance_name – our RDS instance needs a name
  • instance_type – the RDS instance type. db.t2.micro is a small free-tier eligible type. For a complete list of instance classes (types), see Amazon’s documentation.
  • username – our MySQL administrator username
  • password – our MySQL administrator password
  • size – the size, in GB, of our database. 10GB is the smallest database size one can create (try going lower)

The next two parameters, publicly_accessible and vpc_security_groups, dictate who can connect to our RDS database instance. We don’t want our database to be publicly accessible; only instances within our VPC should be able to communicate with this database, so we set publicly_accessible to no. However, without specifying a security group for our instance, no one will be able to talk to it. Hence, we specify that the security group created above is to be applied with the vpc_security_groups parameter.

Creating an RDS database takes time. For our db.t2.micro instance, from initial creation to availability took about 10 minutes. To prevent an error such as Timeout waiting for RDS resource staging-mysql we increased our wait_timeout from 300 seconds (5 minutes) to 900 seconds (15 minutes).

register

You’ll notice we snuck a new parameter into our rds task called register. register is the Ansible method of storing the result of a task in order to use it later in the playbook. The rds task provides a considerable amount of detailed output that is useful to us, not the least of which is the endpoint name of the RDS database we just created. That’s important to know! We’ll use this to create a database in our new MySQL instance with the mysql_db task.

Like the rds task, mysql_db has a few dependencies. For it to function you’ll want to add the mysql-python pip package, which in turn requires libmysqlclient-dev (not including libmysqlclient-dev will throw a mysql_config not found error). This can be accomplished with:
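
A sketch of those tasks:

- name: Install MySQL client packages
  apt:
    name:  "{{ item }}"
    state: present
  with_items:
    - libmysqlclient-dev
    - mysql-client

- name: Install mysql-python
  pip:
    name:  mysql-python
    state: present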

Including mysql-client was not, strictly speaking, required, but it’s a handy package to have installed on a server that accesses a database.

Creating a MySQL database with Ansible

Now that we’ve created our RDS instance, let’s create a database in that instance with Ansible’s mysql_db task.
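
A sketch of the task; the database name helloworld is an assumption, and rds.instance.endpoint comes from the register discussed above:

- name: Create application database
  mysql_db:
    login_host:     "{{ rds.instance.endpoint }}"
    login_user:     "{{ MYSQL_ADMIN_USERNAME }}"
    login_password: "{{ MYSQL_ADMIN_PASSWORD }}"
    name:           helloworld
    state:          present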

Remember that register: rds on the rds task is what allows us to use rds.instance.endpoint (for grins you might put in a debug task to see all of the information the rds task returns). We provide our MySQL username and password (this is to login to the RDS instance), and the name of the database we want to create. present is the default for the state parameter, but I like to include it anyway to be explicit in the playbook as to what I’m trying to accomplish.

There are a lot of parameters for the mysql_db module, so make sure and check out the Ansible documentation.

Configuration Information

Undoubtedly we’re going to provide our software with access to the database, and knowing its endpoint is important. Let’s save this information to /etc/environment by adding the following line to templates/environment.j2:
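
The variable name here is an assumption; a single line along these lines does the job:

MYSQL_HOST={{ rds.instance.endpoint }}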

Note that since we’re going to be referencing the output of the rds task in this template we will want to move writing out /etc/environment to the end of the playbook.

If desired, we can also create a .my.cnf file for the ubuntu user that provides automated access to the database. Now, it’s up to you to decide whether or not you’re comfortable with the idea of the password included in .my.cnf; if you aren’t, simply remove it. To create .my.cnf we’ll introduce one additional new module, blockinfile.
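
A sketch of the blockinfile task (ownership and mode choices are assumptions):

- name: Create .my.cnf
  blockinfile:
    path:   /home/ubuntu/.my.cnf
    create: yes
    owner:  ubuntu
    group:  ubuntu
    mode:   0600
    block: |
      [client]
      host="{{ rds.instance.endpoint }}"
      user="{{ MYSQL_ADMIN_USERNAME }}"
      password="{{ MYSQL_ADMIN_PASSWORD }}"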

blockinfile allows you to add blocks of text to either an existing file, or if you specify create: yes as shown here, will create the file and add the provided content (given in the block parameter).

You can see the result of running this task:

ubuntu@helloworld:~$ cat .my.cnf
# BEGIN ANSIBLE MANAGED BLOCK
[client]
host="staging-mysql.c8tzlhizx66z.us-east-2.rds.amazonaws.com"
user="mysqladmin"
password="<Our MySQL administrator password>"
# END ANSIBLE MANAGED BLOCK

ubuntu@helloworld:~$ mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 83
Server version: 5.6.39-log MySQL Community Server (GPL)

Recap

We’ve covered a lot of ground in this post! Several new modules, refactoring our variables, using Amazon’s RDS service. Whew! Here’s a quick recap of the new modules that you’ll want to study:

  • rds – you can create, modify, and delete RDS instances in your AWS account through this powerful task
  • register – the register keyword allows you to capture the output of a task and use that output in subsequent tasks in your playbook
  • mysql_db – once your RDS instance is created, add a new database to it with the mysql_db task
  • blockinfile – another great task to have in your Ansible arsenal, blockinfile can add blocks of text to existing files or create new ones

This Series

Each post in this series is building upon the last. If you missed something, here are the previous posts. We’ve also put everything on this Github repository with branches that contain all of the changes from one part to the next.


Ansible and AWS – Part 3

This is the third article in a series looking at utilizing Ansible with AWS. So far the focus has been Ansible itself, and we’ll continue that in this post with a look at the Ansible Vault. The final version of the Ansible playbook is on the part3 branch of this repository.

You will frequently come across occasions where you want to submit something that needs to be kept secret to a code repository. API keys, passwords, credentials, or other sensitive information that really is a part of the configuration of a server or application, and should be revision controlled but shouldn’t be exposed to “the world,” are great candidates for things to lock up in an Ansible vault. I’ll illustrate several different applications of a vault, and then share a great technique for diffing vault files.

Vault It!

Any file can be encrypted (vaulted) with the ansible-vault command. now.txt contains:

Now is the time for all good men to come to the aid of their country.

Let’s encrypt it with ansible-vault:

# ansible-vault encrypt now.txt
New Vault password:

Type in a good password and you’ll be prompted to confirm it. If the passwords match you’ll see Encryption successful. Now, cat the file:

# cat now.txt
$ANSIBLE_VAULT;1.1;AES256
30323261363061623735613964633862613232393261323638396435373561396237313161336531
...
33363033636132353034306130383064306264613935353133386664613062613438383361663462
33393438396236383230

You’ll notice that ansible-vault did not change the name of your file or add an extension. Thus from looking at the filename you can’t tell this file is a “vault” file.

To edit the file, use ansible-vault edit now.txt. You’ll be prompted for the password (I hope you remember it!), and if provided successfully ansible-vault will open the file with the default editor (on my Mac it’s vi but you can set the EDITOR shell variable explicitly). Make any changes you like and then save them in your editor. ansible-vault will reencrypt the file for you automatically.

ansible-vault view is another handy command; if you just want to look at the contents of the file and not edit them use ansible-vault view FILENAME and supply the password.

Practical Use

The most practical use of the vault capability is to protect sensitive information such as API keys, etc. We’re going to use it for two applications: encrypt AWS IAM credentials for accessing an S3 bucket, and an SSL certificate key.

First though, let’s talk about where to put the actual variables. We could put them in vars.yml and then encrypt the entire file. This is overkill. Better is to create a file called vault (so we know it’s a vault!) and put our values there. But, we’re going to use a little indirection technique like this:

vault:
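
A sketch with placeholder values; the VAULT_ naming convention is explained just below:

VAULT_AWS_ACCESS_KEY: AKIAIOSFODNN7EXAMPLE
VAULT_AWS_SECRET_KEY: wJalrXUtnFK7MDENGbPxRfiCYEXAMPLEKEY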

By the way, to be clear, the vault file here is located in our host_vars directory for the target host, alongside our vars.yml for that host.

vars.yml:
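
And the corresponding indirection in vars.yml, assuming the names above:

AWS_ACCESS_KEY: "{{ VAULT_AWS_ACCESS_KEY }}"
AWS_SECRET_KEY: "{{ VAULT_AWS_SECRET_KEY }}"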

Again, this is the vars.yml file we created previously for our host.

Our playbooks and template files will reference AWS_ACCESS_KEY and AWS_SECRET_KEY which in turn will get their values from the vault. In this manner you can quickly view your vars.yml and know which values you get from the vault (hence the prefix VAULT_). NB: This is purely a convention I’ve come to find to be extremely useful!

The Real Thing

To access S3 buckets we’re going to use the s3cmd package, so let’s add it to our playbook apt task:
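
A sketch of the apt task with s3cmd added (the rest of the package list is omitted for brevity):

- name: Install base packages
  apt:
    name:  "{{ item }}"
    state: present
  with_items:
    - s3cmd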

Then, in our AWS console we’ll create a new IAM user named S3Reader and give it Programmatic access. For this user we can Attach existing policies directly and use the existing policy AmazonS3ReadOnlyAccess. You should be given both an access key and a secret key. We’ll plug these both in to our vault file and then encrypt it with ansible-vault encrypt.

Even though s3cmd is installed, it needs a configuration file .s3cfg. We’re going to create one with our credentials (the ones just obtained), again through the use of the template task. Create a file in the templates directory called s3cfg.j2 (make note we aren’t putting a dot in front of the file) with the following content:

[default]
access_key={{ AWS_ACCESS_KEY }}
secret_key={{ AWS_SECRET_KEY }}

Now, add the template task that will create the .s3cfg file:
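
A sketch of that template task (the mode is an assumption):

- name: Create .s3cfg
  template:
    src:   s3cfg.j2
    dest:  /home/ubuntu/.s3cfg
    owner: ubuntu
    group: ubuntu
    mode:  0600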

Note the use of the owner and group parameters for this task. The file is being created in the /home/ubuntu/ directory, and since our Ansible playbook is issuing commands as the root user (the become_user directive in the playbook we’ve glossed over thus far), it would have, by default, set the owner and group of the file to root. We don’t want that, so we explicitly call out that the owner and group should be set to ubuntu.

Execute your playbook, this time including --ask-vault to ensure you get prompted for your vault password (we’ll show you in a minute how to set this in a configuration file).

# ansible-playbook playbook.yml --ask-vault

If everything was done correctly you’ll have a new file .s3cfg in your /home/ubuntu directory that has your S3 credentials (the ones you placed in the vault file). Indeed, we can now run the s3cmd ls:

ubuntu@helloworld:~$ s3cmd ls
2016-10-01 14:06  s3://elasticbeanstalk-us-east-1-124186967073
2017-10-04 00:20  s3://iachievedit-cbbucket
2015-12-21 23:47  s3://iachievedit-repos
...

Nice.

key.pem

If you run a website you’re undoubtedly familiar with SSL certificates and the need to have one (or more). You’re also familiar with the private key for your certificate (unless you started your career with Let’s Encrypt and it’s abstracted away some of the “magic”). That private key is, strictly speaking, a part of the server configuration, and we want everything on our server to be revision controlled, including that private key. So let’s vault it.

I have a private key named key.pem and the contents look like what you’d expect:

# cat key.pem
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDDXnM+DZ0tTn9M
38CRP2dwOuRJhH3xRc3WcTmL1S+EIxdFGIffy0+YVXU9Uzbn0XDl0TgtfZwBcMeU
...
wVumMxo0FpnH9Wb3QtVwlv1zxaW2VakRvdiZ31PkUWK7oCkFJPVj0yGe/MDsCfQX
lCDhi/ncV+DL/FWXgPafUIA=
-----END PRIVATE KEY-----

We don’t want to submit this as-is to revision control, so let’s encrypt it with ansible-vault and put it in a directory called files.

# ansible-vault encrypt $HOME/key.pem
# cd ansible-helloworld
# mkdir files
# cp $HOME/key.pem files/

Note here that you will want to use the same vault password as you did for vault. Also, the directory files (like templates) is a special name to Ansible. Don’t call it file or my_files, etc., just files!

Our key.pem will be copied to /etc/ssl/private, so let’s create a task for that in the playbook:
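
A sketch of that copy task; since the source is vaulted, Ansible will decrypt it as part of the copy (ownership and mode values are assumptions):

- name: Install SSL private key
  copy:
    src:   key.pem
    dest:  /etc/ssl/private/key.pem
    owner: root
    group: root
    mode:  0600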

If you run the playbook now and then check /etc/ssl/private/key.pem on your server you’ll see that Ansible decrypted the file and stored the decrypted contents; precisely what we wanted!

Your Vault Password

If you grow tired of supplying the vault password to --ask-vault each time you run a playbook, or even when using ansible-vault edit you can save the password in plain text in your home directory or other secure location. This is an example of a file you would not submit to revision control!

# export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible_vault_pass.txt
# ansible-playbook playbook.yml

The file ~/.ansible_vault_pass.txt contains the vault password in the clear. Add the environment variable ANSIBLE_VAULT_PASSWORD_FILE to your shellrc file (e.g., .zshrc, .bashrc, etc.) to make things even simpler.

Vault Diffs

I found this one weird trick just the other day and it’s a lifesaver if you are responsible for reviewing changes to API keys, passwords, etc. Courtesy of Mark Longair on Stack Overflow.

We have two “vaulted” files in our repository, a file named vault in our host_vars directory and key.pem. We’d like to easily diff them for changes, so following along with the Stack Overflow post, create a .gitattributes file with:

host_vars/*/vault diff=ansible-vault merge=binary
files/key.pem diff=ansible-vault merge=binary

Then, set the diff driver for files with that diff=ansible-vault attribute:

# git config --global diff.ansible-vault.textconv "ansible-vault view"

With this, if you make changes to your vaulted files, a git diff will automatically show you the actual diff vs. the encrypted diff (which is, obviously, unreadable).

Final Thoughts

Credentials, API keys, passwords, private keys. It’s been hammered in your head to not submit those to revision control systems, and rightfully so if they aren’t encrypted. But these pieces of information are critical to the successful build of a server or environment, and part of the charm that Ansible brings is the ability to rebuild a server (or sets of servers) and not worry about whether or not you missed a step. With the Ansible vault, however, you can encrypt this information and include it alongside the rest of your Ansible playbook; supply the vault password and watch the magic happen.

This Series

Each post in this series is building upon the last. If you missed something, here are the previous posts. We’ve also put everything on this Github repository with branches that contain all of the changes from one part to the next.