iAchieved.it

Software Development Tips and Tricks


Updating Yarn’s Apt Key on Ubuntu

If you’re one of those unfortunate souls that run into the following error when running apt update

you are not alone. Fortunately the fix is easy, but it’s buried in the comments, so here it is without a lot of wading:
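Assuming you're hitting the familiar expired/missing Yarn signing key, the fix is to re-import Yarn's public GPG key from its published location:

curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -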

Rerun apt update (or the apt-get equivalent), and you should be golden.


Swift 4 titleized String Extension


Rails provides the titleize inflector (capitalizes all the words in a string), and I needed one for Swift too. The code snippet below adds a titleized computed property to String via an extension and follows these rules:

  • The first letter of the first word of the string is capitalized
  • The first letter of each remaining word in the string is capitalized, except for words considered “small words”

Create a file in your Xcode project named String+Titleized.swift and paste in the following String extension:
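A sketch of what that extension might look like; the SMALL_WORDS list here is illustrative, not the original:

import Foundation

extension String {
  // Words that stay lowercase unless they begin the phrase.
  // Illustrative list only; tune it for your language.
  private static let SMALL_WORDS = ["a", "de", "del", "el", "la", "los", "las", "y", "o", "en", "un", "una"]

  var titleized: String {
    let words = self.lowercased().components(separatedBy: .whitespaces)
    var titleizedWords: [String] = []
    for (index, word) in words.enumerated() {
      // Always capitalize the first word; otherwise skip "small words"
      if index == 0 || !String.SMALL_WORDS.contains(word) {
        titleizedWords.append(word.capitalized)
      } else {
        titleizedWords.append(word)
      }
    }
    return titleizedWords.joined(separator: " ")
  }
}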

Configure SMALL_WORDS to your liking. In this example I was titleizing Spanish phrases, so my SMALL_WORDS contains various definite articles and conjunctions. An example of the usage and output:
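With the sketch above, usage might look like this (the phrase is only an illustration):

let phrase = "el perro y el gato"
print(phrase.titleized) // El Perro y el Gato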

Note: This post is a Swift 4.2 version of this one written in Swift 2.


Leveraging Instance Size Flexibility with EC2 Reserved Instances

Determining which EC2 reserved instances to purchase in AWS can be a daunting task, especially given the fact that you’re signing up for a long(ish)-term commitment that costs you (or your employer) real money. It wasn’t until after several months of working with reserved instances and reading up on them that I became comfortable with the concepts and learned about a quite useful feature known as Instance Size Flexibility.

But first, we need to cover what this post is not about, and that is how to choose what type of instance you need to run a given application (web server, continuous integration build server, database, etc.). There are plenty of tutorials out there. Once you’ve become comfortable with your choice of instance types (I gravitate towards the T, M, and R types), you can begin thinking about saving on your EC2 compute costs by purchasing reserved instances.

I will admit to being a bit confused the first time I began purchasing reserved instances, and I attribute that to the fact that, well, they are a bit confusing. Standard reserved instances. Convertible reserved instances. Zonal reserved instances. No upfront payment. Partial upfront payment. Reserved instance marketplace. There’s a lot to take in, and on top of that, it is a bit nerve-wracking making a choice that you might have to live with (and pay) for a while. In fact, even after spending quite some time reading through everything, it still took me a few billing cycles to realize how reserved instances really worked.

While I can’t help you get over that initial intimidation factor, what I can do is share a bit of wisdom I gathered from How Reserved Instances Are Applied, with specific attention paid to How Regional Reserved Instances Are Applied.

With some exceptions, you can purchase a number of nano (or other size) reserved instances for a given instance type, and those reservations can be applied to larger (or smaller) instances in that same family. Note that there are exceptions (I told you it was confusing), as this feature does not apply to:

  • Reserved Instances that are purchased for a specific Availability Zone
  • bare metal instances
  • Reserved Instances with dedicated tenancy
  • Reserved Instances for Windows, Windows with SQL Standard, Windows with SQL Server Enterprise, Windows with SQL Server Web, RHEL, and SLES

But that’s okay, because my favorite type of machine, a shared tenancy instance running Ubuntu 16.04 or 18.04 LTS, is supported.

Instance Size Flexibility works like this. Each instance size is assigned a normalization factor, with the small size being given the unit factor of 1. A nano instance has a normalization factor of 0.25. That is, for the purposes of instance size flexibility and reserved instances, a single reservation for a small instance is the equivalent of 4 nano instances, and vice versa, 4 nano reserved instances are the equivalent of a single small reserved instance.

AWS publishes the normalization factors in the How Reserved Instances Are Applied documentation, but we’ll provide it here as well:

Instance size    Normalization factor
nano             0.25
micro            0.5
small            1
medium           2
large            4
xlarge           8
2xlarge          16
4xlarge          32
8xlarge          64
9xlarge          72
10xlarge         80
12xlarge         96
16xlarge         128
18xlarge         144
24xlarge         192
32xlarge         256

Using Instance Size Flexibility In Your Account

Now let’s take advantage of our knowledge about normalization factors and see how we can apply them to our account (and our bill). We’re going to leverage the Ruby programming language and the AWS SDK for Ruby. If you’ve never used Ruby before, do yourself a favor and invest some time with it. You’ll be glad you did.

Let’s get started.

We’re going to be applying the instance size flexibility normalization factors, so let’s declare a Hash of their values.
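A sketch of that Hash, keyed by instance size (the constant name is mine):

NORMALIZATION_FACTORS = {
  'nano'     => 0.25,
  'micro'    => 0.5,
  'small'    => 1,
  'medium'   => 2,
  'large'    => 4,
  'xlarge'   => 8,
  '2xlarge'  => 16,
  '4xlarge'  => 32,
  '8xlarge'  => 64,
  '9xlarge'  => 72,
  '10xlarge' => 80,
  '12xlarge' => 96,
  '16xlarge' => 128,
  '18xlarge' => 144,
  '24xlarge' => 192,
  '32xlarge' => 256
}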

Using Bundler to pull in our AWS SDK gem, we will retrieve all of our instances in a given region (remember that this feature is scoped to the zones in a given region). I am using us-east-2 in this example, also known as US East Ohio.
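A minimal sketch, assuming the aws-sdk-ec2 gem in your Gemfile and credentials in ~/.aws/credentials:

require 'aws-sdk-ec2'  # Gemfile: gem 'aws-sdk-ec2'

REGION = 'us-east-2'   # US East (Ohio)
ec2 = Aws::EC2::Resource.new(region: REGION)

# All running instances in the region
instances = ec2.instances.select { |i| i.state.name == 'running' }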

Note that the above uses ~/.aws/credentials. If you do not have this file you will need to configure your access key ID and secret access key.

Let’s iterate over our instances (filtering out Windows instances since they are not eligible for Instance Size Flexibility) and create a hash of the various classes. In the end we want our hash to contain, as its keys, all of the classes (types) of instances we have, and the values to be a list of the sizes of those classes.
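One way to build that hash, splitting each instance type (e.g. t2.nano) into its class and size:

instance_classes = {}
instances.each do |instance|
  next if instance.platform == 'windows'           # Windows isn't eligible
  klass, size = instance.instance_type.split('.')  # "t2.nano" => ["t2", "nano"]
  (instance_classes[klass] ||= []) << size
end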

For example, if we had 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances, our hash would look like this: {"t2"=>["nano", "nano", "nano", "nano", "small", "small", "small", "large"], "m4"=>["large", "large", "large", "large", "2xlarge", "2xlarge"]}.

Now we’re going to determine how many equivalent small instances we have. This is done by adding our normalization factors for each of the instance sizes.
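Continuing the sketch, using the NORMALIZATION_FACTORS and instance_classes hashes from above:

small_equivalents = {}
instance_classes.each do |klass, sizes|
  # Sum the normalization factors of every instance in this class
  small_equivalents[klass] = sizes.map { |size| NORMALIZATION_FACTORS[size] }.reduce(0, :+)
end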

Using our previous example of 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances, we’re walking through the math of 0.25 + 0.25 + 0.25 + 0.25 + 1 + 1 + 1 + 4 for our t2 instances and 8 + 8 + 8 + 8 + 16 + 16 for the m4 instances. This results in a Hash that looks like this: {"t2"=>8, "m4"=>64}. To be clear, the interpretation of this is that we have, for the purposes of Instance Size Flexibility with reserved instances, the equivalent of 8 t2.small and 64 m4.small instances in us-east-2. Put another way, if we purchased 8 t2.small and 64 m4.small reserved instances in us-east-2, we would have 100% reserved instance coverage of our EC2 usage.

Now, let’s take it a step further and see what the equivalence would be for the other sizes. In other words, we know we have the equivalent of 8 t2.small and 64 m4.small instances, but what if we wanted to know how many equivalent nano instances we had? This loop will create a row for each class and size:
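A sketch of that loop (keeping in mind, as noted below, that not every class/size combination really exists):

small_equivalents.each do |klass, small_count|
  NORMALIZATION_FACTORS.each do |size, factor|
    # How many instances of this size equal our small-instance total?
    puts "#{klass}.#{size}: #{small_count / factor}"
  end
end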

Again, taking our previous example, we would expect to see 32 t2.nano instances and 256 m4.nano instances. That’s right. If we purchased 32 t2.nano and 256 m4.nano instances we would have the equivalent of our 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances. Now, there doesn’t happen to be such a thing as an m4.nano instance, and we’ve corrected for this in our published example code.


Creating Strong Passwords with DuckDuckGo

Over the past year I’ve been taking online privacy more seriously and have begun looking at alternative search engines such as DuckDuckGo and Startpage. In addition, when creating strong passwords I turn to tools such as KeePass and Strong Password Generator. Earlier today I duckducked strong password, and a smile formed on my face when I saw this:

Well now how cool is that? Very cool.

Even cooler, however, is using DuckDuckGo’s pwgen feature to create passwords of varying strengths and lengths. Duckduck pwgen strong 16 to get something like:

If you prefer a “lower strength” password, you can use the low parameter, for example, pwgen low 24. Or, just average strength with pwgen 32 (the strength parameter is omitted).

From looking at the difference in output between low, average, and high strength passwords, it appears that:

  • low strength passwords are created from the character set [a-zA-Z]
  • medium strength passwords include numbers, increasing the set to [0-9a-zA-Z]
  • high strength passwords include symbols in the set [!@#$%^&*()] (note that the brackets are not in the set, this is regular expression bracket notation)

Instant Answers

This DuckDuckGo feature uses instant answers, an increasingly common feature of search engines. Each DuckDuckGo instant answer has an entry page, and the password generator is (aptly) named Password. You can even review the Perl source code on Github: Password.pm

Closing Thoughts

To be honest, I think this is a pretty cool feature. Now we could argue as to what constitutes a “strong” password, but we won’t. We could discuss entropy, passwords vs. passphrases, and so on. But we won’t. For a quick way to generate a pretty doggone good password, though, just duckduck one.


Ansible 2.7 Deprecation Warning – apt and squash_actions

Ansible 2.7 was released recently, and with it came a new deprecation warning for the apt module:

TASK [Install base packages] ****************************************** 
Thursday 18 October 2018  15:35:52 +0000 (0:00:01.648)       0:06:25.667 ****** 
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via 
squash_actions is deprecated. Instead of using a loop to supply multiple items 
and specifying name: {{ item }}, please use name: [u'htop', u'zsh', u's3cmd'] and remove 
the loop. This feature will be removed in version 2.11. Deprecation warnings 
can be disabled by setting deprecation_warnings=False in ansible.cfg.

Our apt task was:

- name:  Install base packages
  apt:
    name:  "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - htop
    - zsh
    - s3cmd

Very standard.

The new style with Ansible 2.7 should look like:

- name:  Install base packages
  apt:
    name:  "{{ packages }}"
    state: present
    update_cache:  yes
  vars:
    packages:
      - htop
      - zsh
      - s3cmd

The change is self-explanatory (and is alluded to in the deprecation warning): rather than looping over a list and applying the apt module to each item, provide the module with the entire list of packages to process.

You can read up on the documentation for apt in Ansible 2.7 here.


Updating From Such a Repository Can’t Be Done Securely

I recently came across the (incredibly frustrating) error message Updating from such a repository can't be done securely while trying to run apt-get update on an Ubuntu 18.04 LTS installation. Everything was working fine on Ubuntu 16.04.5. It turns out that the newer version of apt (1.6.3) on Ubuntu 18.04.1 is stricter with regard to signed repositories than the apt (1.2.27) on Ubuntu 16.04.5.

Here’s an example of the error while trying to communicate with the Wazuh repository:

Reading package lists... Done
E: Failed to fetch https://packages.wazuh.com/apt/dists/xenial/InRelease  403  Forbidden [IP: 13.35.78.27 443]
E: The repository 'https://packages.wazuh.com/apt xenial InRelease' is no longer signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

After searching around, we found that this issue has already been reported to the Wazuh project, but the solution of adding [trusted=yes] did not work for a repository that had already been added in /etc/apt. After continued searching, the following solution was finally hit upon:

deb [allow-insecure=yes allow-downgrade-to-insecure=yes] https://packages.wazuh.com/apt xenial main

That is, rather than using [trusted=yes] one can use [allow-insecure=yes allow-downgrade-to-insecure=yes]. Running apt-get update afterwards shows that the InRelease section is ignored, and Release is picked up:

Ign:7 https://packages.wazuh.com/apt xenial InRelease
Hit:8 https://packages.wazuh.com/apt xenial Release

Note that this is obviously a temporary solution, and should only be applied to a misbehaving repository! If you’re so inclined, upvote the Wazuh GitHub issue, as a fix at the repository level would be nice.


GeoIP2 and NGINX

There are times when you want to configure your website to explicitly disallow access from certain countries, or only allow access from a given set of countries. While not completely precise, use of the MaxMind GeoIP databases to look up a web client’s country-of-origin and have the web server respond accordingly is a popular technique.

There are a number of NGINX tutorials on how to use the legacy GeoIP database and the ngx_http_geoip_module, and as it happens the default Ubuntu nginx package includes the ngx_http_geoip_module. Unfortunately the GeoIP databases will no longer be updated, and MaxMind has migrated to GeoIP2. Moreover, after January 2, 2019, the GeoIP databases will no longer be available.

This leaves us in a bind. Luckily, while the Ubuntu distribution of NGINX doesn’t come with GeoIP2 support, we can add it by building from source. Which is exactly what we’ll do! In this tutorial we’re going to build nginx from the ground up, modeling its configuration options after those that are used by the canonical nginx packages available from Ubuntu 16.04. You’ll want to go through this tutorial on a fresh installation of Ubuntu 16.04 or later; we’ll be using an EC2 instance created from the AWS Quick Start Ubuntu Server 16.04 LTS (HVM), SSD Volume Type AMI.

Getting Started

Since we’re going to be building binaries, we’ll need the build-essential package which is a metapackage that installs applications such as make, gcc, etc.
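Installing it is straightforward:

sudo apt-get update
sudo apt-get install -y build-essential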

Now, to install all of the prerequisite libraries we’ll need to compile NGINX:
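A typical set for an NGINX build (PCRE for rewrite rules, zlib for gzip, OpenSSL for TLS) would be:

sudo apt-get install -y libpcre3-dev zlib1g-dev libssl-dev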

Using the GeoIP2 database with NGINX requires the ngx_http_geoip2_module, which in turn requires the MaxMind development packages:
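The libmaxminddb library and headers are available from MaxMind’s PPA; a sketch:

# add-apt-repository may require the software-properties-common package
sudo add-apt-repository -y ppa:maxmind/ppa
sudo apt-get update
sudo apt-get install -y libmaxminddb0 libmaxminddb-dev mmdb-bin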

Getting the Sources

Now let’s go and download NGINX. We’ll be using the latest dot-release of the 1.15 series, 1.15.3. I prefer to compile things in /usr/local/src, so:
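For example:

cd /usr/local/src
sudo wget https://nginx.org/download/nginx-1.15.3.tar.gz
sudo tar xzf nginx-1.15.3.tar.gz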

We also need the source for the GeoIP2 NGINX module:
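The module is developed on GitHub as leev/ngx_http_geoip2_module:

cd /usr/local/src
sudo git clone https://github.com/leev/ngx_http_geoip2_module.git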

Now, to configure and compile.
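The full Ubuntu-style configure line is long; a trimmed-down sketch using the paths this tutorial assumes (prefix under /usr/share/nginx, modules in /usr/lib/nginx/modules, body temp files in /var/lib/nginx/body) looks like:

cd /usr/local/src/nginx-1.15.3
sudo ./configure \
  --prefix=/usr/share/nginx \
  --sbin-path=/usr/share/nginx/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --modules-path=/usr/lib/nginx/modules \
  --pid-path=/run/nginx.pid \
  --http-client-body-temp-path=/var/lib/nginx/body \
  --with-http_ssl_module \
  --add-dynamic-module=/usr/local/src/ngx_http_geoip2_module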

You will want to make sure that the ngx_http_geoip2_module will be compiled; you should see ngx_http_geoip2_module was configured near the end of the configure output.

Now, run sudo make. NGINX, for all its power, is a compact and light application, and compiles in under a minute. If everything compiles properly, you can run sudo make install.

A few last things to complete our installation (example commands follow the list):

  • creating a symlink from /usr/sbin/nginx to /usr/share/nginx/sbin/nginx
  • creating a symlink from /usr/share/nginx/modules to /usr/lib/nginx/modules
  • creating the /var/lib/nginx/body directory
  • installing an NGINX systemd service file
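A sketch of those steps (the service file itself is covered next):

sudo ln -s /usr/share/nginx/sbin/nginx /usr/sbin/nginx
sudo ln -s /usr/lib/nginx/modules /usr/share/nginx/modules
sudo mkdir -p /var/lib/nginx/body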

For the Systemd service file, place the following in /lib/systemd/system/nginx.service:
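A minimal unit file sketch; note that PIDFile assumes nginx was configured with --pid-path=/run/nginx.pid, as in the configure sketch above:

[Unit]
Description=nginx - high performance web server
After=network.target

[Service]
Type=forking
# Must match the pid path nginx was configured with
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/usr/sbin/nginx -s stop

[Install]
WantedBy=multi-user.target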

and reload systemd with sudo systemctl daemon-reload. You should now be able to check the status of nginx:
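The check itself is simply:

sudo systemctl status nginx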

We’ll be starting it momentarily!

Testing

On to testing! We’re going to use HTTP (rather than HTTPS) in this example.

While we’ve installed the libraries that interact with the GeoIP2 database, we haven’t yet installed the database itself. This can be accomplished by installing the geoipupdate package from the MaxMind PPA:

# sudo apt-get install -y geoipupdate

Now run sudo geoipupdate -v:

It’s a good idea to periodically update the GeoIP2 databases with geoipupdate. This is typically accomplished with a cron job like:

# crontab -l
30 0 * * 6 /usr/bin/geoipupdate -v | /usr/bin/logger

Note: Use of logger here is optional; we just like to see the output of the geoipupdate invocation in /var/log/syslog.

Nginx Configuration

Now that nginx is built and installed and we have a GeoIP2 database in /usr/share/GeoIP, we can finally get to the task of restricting access to our website. Here is our basic nginx.conf:

load_module modules/ngx_http_geoip2_module.so;

worker_processes auto;

events {
  worker_connections  1024;
}

http {
  sendfile      on;
  include       mime.types;
  default_type  application/octet-stream;
  keepalive_timeout  65;

  geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
    $geoip2_data_country_code country iso_code;
  }

  map $geoip2_data_country_code $allowed_country {
    default no;
    US yes;
  }

  server {
    listen       80;
    server_name  localhost;

    if ($allowed_country = no) {
      return 403;
    }

    location / {
        root   html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
  }
}

Let’s walk through the relevant directives one at a time.

load_module modules/ngx_http_geoip2_module.so;

Since we built nginx with ngx_http_geoip2_module as a dynamic module, we need to load it explicitly with the load_module directive.

Looking up the ISO country code from the GeoIP2 database utilizes our geoip2 module:

geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
  $geoip2_data_country_code country iso_code;
}

The country code of the client IP address will be placed in the NGINX variable $geoip2_data_country_code. From this value we determine what to set $allowed_country to:

map $geoip2_data_country_code $allowed_country {
  default no;
  US yes;
}

map in the NGINX configuration file is a bit like a switch statement (I’ve chosen the Swift syntax of switch):

switch geoip2_data_country_code {
  case "US":
    allowed_country = "yes"
  default:
    allowed_country = "no"
}

If we wanted to allow IPs from the United States, Mexico, and Canada the map directive would look like:

map $geoip2_data_country_code $allowed_country {
  default no;
  US yes;
  MX yes;
  CA yes;
}

geoip2 and map by themselves do not restrict access to the site. This is accomplished through the if statement which is located in the server block:

if ($allowed_country = no) {
  return 403;
}

This is pretty self-explanatory. If $allowed_country is no then return a 403 Forbidden.

If you haven’t done so already, start nginx with systemctl start nginx and give the configuration a go. It’s quite easy to test your nginx configuration by disallowing your country, restarting nginx (systemctl restart nginx), and trying to access your site.

Credits and Disclaimers

NGINX and associated logos belong to NGINX Inc. MaxMind, GeoIP, minFraud, and related trademarks belong to MaxMind, Inc.

The following resources were invaluable in developing this tutorial:


Ubuntu 18.04 on AWS

Ubuntu 18.04 Bionic Beaver was released several months ago now, and is currently (as of this writing) not available as a Quick Start AMI on AWS. But that’s okay, it is easy to create your own AMI based on 18.04. We’ll show you how!

Some assumptions, though. We’re going to assume you know your way around the AWS EC2 console, and have launched an instance or two in your time. If you haven’t, AWS itself has a Getting Started guide just for you.

Starting with 16.04

First, create an Ubuntu Server 16.04 EC2 instance in AWS with ami-0552e3455b9bc8d50, which is found under the Quick Start menu. A t2.micro instance is fine as we’re only going to be using it to build an 18.04 AMI.

Once the instance is available, ssh to it.

Notice that the OS is Ubuntu 16.04.5. We’re now going to upgrade it to 18.04.1 with do-release-upgrade. First, run sudo apt-get update, followed by sudo do-release-upgrade.

The upgrade script will detect that you are connected via an SSH session, and warn that performing an upgrade in such a manner is “risky.” We’ll take the risk and type y at the prompt.

This session appears to be running under ssh. It is not recommended
to perform a upgrade over ssh currently because in case of failure it
is harder to recover.

If you continue, an additional ssh daemon will be started at port
'1022'.
Do you want to continue?

Continue [yN]

You’ll get another warning about firewalls and iptables. Continue here as well!

To continue please press [ENTER]

Terrific, another warning! We’re about to do some serious downloading, and hopefully it won’t take 6 hours.

You have to download a total of 173 M. This download will take about
21 minutes with a 1Mbit DSL connection and about 6 hours with a 56k
modem.

Fetching and installing the upgrade can take several hours. Once the
download has finished, the process cannot be canceled.

 Continue [yN]  Details [d]

Of course, press y to continue, and confirm that we also want to remove obsolete packages.

Remove obsolete packages?


28 packages are going to be removed.

 Continue [yN]  Details [d]

At this point the installation and upgrade of packages should actually begin. There is a good chance that you’ll be interrupted by a couple of screens asking which versions of the GRUB and ssh configuration files you want to use. I typically keep the currently installed version of a configuration file, as it is likely I’ve made edits (through Ansible of course) to a given file. Rather than do diffs or merges at this point, I’ll wait until the upgrade is complete to review the files.

Once the upgrade is completed you’ll be prompted to reboot.

System upgrade is complete.

Restart required

To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.

Continue [yN]

After the reboot is completed, login (via ssh) and you should be greeted with

Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1020-aws x86_64)

Terrific! We have a pristine Ubuntu 18.04.1 LTS instance on Linux 4.15. We’re going to use this instance to make a template (AMI) from which to create more.

To start this process, stop the instance in the EC2 console. Once the instance is stopped, right-click on it and under the Image menu, select Create Image.

AWS will pop up a dialog indicating Create Image request received, with a link for viewing the pending image. Click on this link, and at this point you can name the AMI, as well as refer to it by its AMI ID.

Wait until the Status of the AMI is available before continuing!

Creating An 18.04.1 LTS Instance

Go back to the EC2 console and delete (terminate) the t2.micro instance we created, as it is no longer needed. Then, click Launch Instance and select My AMIs. You should see your new Ubuntu 18.04.1 LTS AMI. Select it and configure your instance (type, storage, security groups, etc.) and launch it!

Once your instance is available, ssh to it and see that you’ve just created an Ubuntu 18.04.1 Bionic Beaver server in AWS, and you have an AMI available to build as many as you like!


Not authorized to send Apple events to System Events

As others have written about, Apple appears to be making a hash out of the ability to automate tasks on macOS 10.14 (Mojave). I get it. In the age of hyper-connectivity there is a continuous assault on our computing environments by bad actors looking to extort some cash or otherwise ruin our day. Something needs to safeguard our systems against random scripts aiming to misbehave. Enter Mojave’s enhanced privacy protection controls and event sandboxing.

With earlier versions of Mojave the new event sandboxing mechanism was considerably flaky. In some versions (notably Beta 4), you’d intermittently be presented with Not authorized to send Apple events to System Events when attempting to execute AppleScript applications. As of Mojave Beta 8 (18A371a) I have found that the authorization functionality is at least consistent in prompting you for permission.

As a test, open a Terminal window and enter the following:

osascript -e 'tell application "Finder"' -e 'set _b to bounds of window of desktop' -e 'end tell'

You will get different results depending upon your current automation privacy settings. If you’ve never given permission to Terminal to send events to Finder you’ll see an authorization dialog like this:

If you’ve already given permission (as shown in the example Privacy panel below), the AppleScript tell will succeed and you’ll see something like -2560, 0, 1920, 1440 (the bounds coordinates).

But wait, there’s more! If you had previously given permission, and then revoked it by unchecking the permission in the Privacy panel, you’ll get execution error: Not authorized to send Apple events to Finder. (-1743).

Automation Implications

A lot of folks (myself included) write AppleScript applications that perform some set of tasks. That is sort of the whole point of automation. Again, I get it. If my machine is compromised with a rogue application or script, it could do some serious damage. But that will always be the case.

Now I can imagine how this will be resolved, and it’s going to include me forking over some money to get a verified publisher certificate. My certificate will be signed by Apple, and that signing certificate will be trusted by the OS a priori, and I’ll have to sign my scripts somehow, and so on. That’s the only way I can see this panning out, unless the plan is to literally break the automation capabilities with AppleScript. If you envision a different solution, please leave a comment!


Ansible Vault IDs

There are times when you’ll want not only separate vault files for development, staging, and production, but also separate passwords for those individual vaults. Enter vault ids, a feature of Ansible 2.4 (and later).

I had a bit of trouble getting this configured correctly, so I wanted to share my setup in hopes you find it useful as well.

First, we’ll create three separate files that contain our vault passwords. These files should not be checked into revision control, but instead reside in your protected home directory or some other secure location. These files will contain plaintext passwords that will be used to encrypt and decrypt your Ansible vaults. Our files are as follows:

  • ~/.vault-pass.common
  • ~/.vault-pass.staging
  • ~/.vault-pass.production

As you can already guess we’re going to have three separate passwords for our vaults, one each for common credentials we want to encrypt (for example, an API key that is used to communicate with a third party service and is used for all environments), and our staging and production environments. We’ll keep it simple for the contents of each password file:
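For example (placeholder passwords only, per the warning below):

echo 'commonpassword'     > ~/.vault-pass.common
echo 'stagingpassword'    > ~/.vault-pass.staging
echo 'productionpassword' > ~/.vault-pass.production
chmod 600 ~/.vault-pass.*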

Obligatory Warning: Do not use these passwords in your environment but instead create strong passwords for each. To create a strong password instead you might try something like:
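One option among many:

openssl rand -base64 32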

Once you’ve created your three vault password files, add the following to the [defaults] section of your ansible.cfg:

vault_identity_list = common@~/.vault-pass.common, staging@~/.vault-pass.staging, production@~/.vault-pass.production

It’s important to note here that your ansible.cfg vault identity list will be consulted when you execute your Ansible playbooks. If the first password won’t open the vault, Ansible will move on to the next one, until one of them works (or none of them does).

Encrypting Your Vaults

To encrypt a vault file you must now explicitly choose which id to encrypt with. For example, given a file named common_vault containing the variable:

vault_common:

we will encrypt it with our common vault id, like this:

# ansible-vault encrypt --encrypt-vault-id common common_vault
Encryption successful

Run head -1 on the resulting file and notice that the vault id used to encrypt is in the header:
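It should look something like this (the trailing common is the vault id label):

# head -1 common_vault
$ANSIBLE_VAULT;1.2;AES256;common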

If you are in the same directory as your ansible.cfg file, go ahead and view it with ansible-vault view common_vault. Your first identity file (.vault-pass.common) will be consulted for the password. If, however, you are not in the same directory with your ansible.cfg file, you’ll be prompted for the vault password. To make this global, you’ll want to place the vault_identity_list in your ~/.ansible.cfg file.

Repeat the process for other vault files, making sure to specify the id you want to encrypt with:

For a staging vault file:
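Assuming the file is named staging_vault (the name is illustrative):

# ansible-vault encrypt --encrypt-vault-id staging staging_vault
Encryption successful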

For a production vault file:
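And for a hypothetical production_vault:

# ansible-vault encrypt --encrypt-vault-id production production_vault
Encryption successful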

Now you can view any of these files without providing your vault password, since ansible.cfg will locate the right password. The same goes for running ansible-playbook! Take care, though: if you decrypt a file and intend to re-encrypt it, you must provide an id with the --encrypt-vault-id option!

A Bug, I Think

I haven’t filed this with the Ansible team, but I think this might be a bug. If you are in the same directory as your ansible.cfg (or the identity list is in .ansible.cfg), using --ask-vault to require a password on the command line will ignore the password if it can find it in your vault_identity_list password files. I find this to be counterintuitive: if you explicitly request a password prompt, the password entered should be the one that is attempted, and none other. For example:

# ansible-vault --ask-vault view common_vault
Vault password:

If I type anything other than the actual password for the common identity, I should get an error. Instead Ansible will happily find the password in ~/.vault-pass.common and view the file anyway.

Some Additional Thoughts

I wanted to take a moment to address a comment posted on this article, which can be summarized as:

What’s the point of encrypting services passwords in a vault which you check in to a repository, then pass around a shared vault-passwords file that decrypts them outside of the repository, rather than simply sharing a properties file that has the passwords to the services? It just seems like an extra layer of obfuscation rather than actually more secure.

First, to be clear, a “shared vault-passwords file” is not passed around – either the development operations engineer(s) or a secured build server is permitted to have the vault passwords. Second, with this technique, you have a minimal number of passwords that are stored in plain text. True, these passwords are capable of unlocking any vaults encrypted with them, but this is true of any master password. Finally, I disagree with the assertion that this is an “extra layer of obfuscation.” If that were the case, any encryption scheme that had a master password (which is what utilizing an Ansible vault password file is) could be considered obfuscation. In the end, this technique is used to accomplish these goals:

  • permit separate sets of services passwords for different environments, i.e., staging and production
  • allow for submitting those services passwords in an encrypted format into a repository (the key here is that these are submitted to a known location alongside the rest of the configuration)
  • allow for decryption of those vaults in a secured environment such as a development operations user account or build server account