Category Archives: DevOps

Migrating Your Nameservers from GoDaddy to AWS Route 53

I can already hear the rabble shouting, “Why would you use GoDaddy as your domain registrar?!” A fair question, but sometimes we don’t get to choose our domain registrar (e.g., we inherited it) and aren’t in a position to change it. That doesn’t mean, however, that GoDaddy has to provide DNS management for your domain.

In this post I’ll show you how to change your domain’s nameservers from GoDaddy to AWS Route 53. Reader beware: we are changing a domain’s nameservers from one provider (GoDaddy) to another (AWS). This is not (I repeat, not) the same as transferring your DNS entries. I joke below about GoDaddy’s repeated warnings that changing nameservers is risky, but unless your zone’s records have been populated in the new environment, you will be in for calamity when your hostnames no longer resolve.

Getting AWS Route 53 Ready

Our first step is to create a hosted zone in AWS Route 53. Log in to your AWS account, go to the Route 53 dashboard, and click Create hosted zone.

Our zone will be for the domain sonorasecurity.com and it will be a public hosted zone, since we want DNS queries from the public Internet for our domain to be resolved. Once you’ve filled in this information, click Create hosted zone.
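
If you prefer the command line, the hosted zone can also be created with the AWS CLI. Here’s a minimal sketch, assuming your credentials are already configured and using an arbitrary caller reference:

aws route53 create-hosted-zone \
  --name sonorasecurity.com \
  --caller-reference "sonorasecurity-$(date +%s)"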

Once the zone is created you’ll see two records created automatically: an NS record and an SOA record.

We’re interested in the NS entry and the fully-qualified domain names listed. In this example there are four nameservers listed:

  • ns-1528.awsdns-63.org
  • ns-95.awsdns-11.com
  • ns-658.awsdns-18.net
  • ns-1724.awsdns-23.co.uk

We’re going to now use these values in our GoDaddy account to change the nameservers for our domain sonorasecurity.com.

Updating Our Nameservers

Before we update our domain’s nameservers, let’s verify that they are currently hosted by GoDaddy. In a shell type dig +short NS <your_domain_name>. For example:

dig +short NS sonorasecurity.com
ns07.domaincontrol.com.
ns08.domaincontrol.com.

So far so good. Now log in to the GoDaddy account that manages the domain and go to the DNS Management page. Type in your domain name and select it in the dropdown box.

Scroll down and find the Nameservers section and next to Using default nameservers click the Change button.

Here is where it becomes comical how many times GoDaddy implores us not to change our nameservers. The first page warns you that Changing nameservers is risky. While that is true if you don’t know what you’re doing, you’re a professional, so click Enter my own nameservers (advanced).

You’ll be presented with a form for entering our AWS nameserver FQDNs. It is important not to include the trailing period after each FQDN (GoDaddy will give you an Unexpected Error Occurred message if you try).

Enter all of the nameservers listed in the AWS NS record and click Save.

Once again we get a warning about our risky behavior! Yes, yes. Check Yes, I consent and click Continue.

After clicking Continue you will likely see a banner at the top of the DNS management page indicating a change is in progress. Once completed you’ll see your nameservers listed, and GoDaddy indicating that “We can’t display your DNS information because your nameservers aren’t managed by us.”

Now, in a terminal you can type dig +short NS <your_domain_name> and you should see your nameservers updated, like this:

dig +short NS sonorasecurity.com
ns-1528.awsdns-63.org.
ns-1724.awsdns-23.co.uk.
ns-658.awsdns-18.net.
ns-95.awsdns-11.com.

And there you have it, your domain’s DNS entries can now be managed with AWS Route 53!

Encrypting Existing S3 Buckets

Utilizing encryption everywhere, particularly in cloud environments, is a solid idea that just makes good sense. AWS S3 makes it easy to create buckets whose objects are encrypted by default, but what if you didn’t initially configure it that way and already have objects uploaded?

It’s easy enough to change the default encryption setting of the bucket. Select the Default Encryption box and choose one of the encryption options. I prefer the simplicity of choosing the AWS-managed keys for AES-256. Click Save.

You can now see that the default encryption setting for the bucket is AES-256. That is, any new objects uploaded to the bucket will automatically be encrypted.
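
The same default can be set from the command line. Here’s a minimal sketch using the AWS CLI, with a hypothetical bucket named my-bucket:

aws s3api put-bucket-encryption \
  --bucket my-bucket \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'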

Now, we talked about new objects uploaded to the bucket, but what about existing objects? That’s where the catch is: changing the default encryption of the bucket does not affect existing objects!

To remedy this, one must copy all of the objects in the S3 bucket “onto” themselves. Yes, that’s really how it is done. This can be accomplished easily using the application s3cmd, which can be installed with apt-get on Debian-based systems or brew on macOS. For more installation options see S3tools.org.

With s3cmd cp you provide the source and destination buckets; in this case they are one and the same. Make sure to include the --recursive option (similar to using cp -R to copy directories).
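
A sketch of the copy-over, again with a hypothetical bucket named my-bucket:

s3cmd cp --recursive s3://my-bucket/ s3://my-bucket/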

Reloading an existing object’s overview in the S3 console shows that the object is now encrypted!

And remember: future objects uploaded to this S3 bucket will be encrypted automatically, and you only need to do the copy-over once.

TLS 1.3 with NGINX and Ubuntu 18.04 LTS

OpenSSL 1.1.1 is now available on Ubuntu 18.04 LTS with the release of 18.04.3. This backport of OpenSSL 1.1.1 makes it possible to run TLS 1.3 on your Ubuntu 18.04 LTS NGINX-powered webserver. To add TLS 1.3 support to your existing NGINX installation, first upgrade your Ubuntu 18.04 LTS server to 18.04.3, then find the ssl_protocols directive in your NGINX configuration and add TLSv1.3 at the end:
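
The original snippet isn’t reproduced here; assuming your configuration already allows TLSv1.2, the directive would end up looking something like:

ssl_protocols TLSv1.2 TLSv1.3;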

Restart NGINX with systemctl restart nginx.

It really is as simple as that! If your browser supports TLS 1.3 (and all major browsers do as of November 2019 with the notable exception of Microsoft Edge) it will negotiate to it. As of this writing (November 2019), you would not want to disable TLSv1.2. Odds are you will break tools such as cURL and other HTTPS agents accessing your site. Here’s an example of what that looks like for curl on macOS 10.14.6 (Mojave):

In other words, the stock macOS 10.14.6 curl cannot establish a connection with a webserver running only TLS 1.3.

Enabling 0-RTT

There are a lot of compelling features in TLS 1.3, one of them being 0-RTT for performance gains when establishing a connection to the webserver. NGINX enables TLS 1.3 0-RTT if the configuration parameter ssl_early_data is set to on. If you are using the stock NGINX provided by Ubuntu 18.04 LTS, 0-RTT is not supported. Let’s upgrade to the version provided by the NGINX PPA and enable it.

Go back to your NGINX configuration and place the ssl_early_data directive near all of the other ssl_ directives, like this:
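
A minimal sketch of how those directives might sit together:

ssl_protocols TLSv1.2 TLSv1.3;
ssl_early_data on;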

Now, all that being said, 0-RTT is not something you will want to enable without careful consideration. The “early” in SSL early data refers to the client using a pre-shared key from a previous session to send application data in its very first flight, before the handshake completes, which is also what makes replay attacks a concern. This is a great post outlining the benefits, and risks, of enabling 0-RTT.


Auditing Shared Account Usage

Occasionally you find yourself in a situation where utilizing a shared account cannot be avoided. One such scenario is managing the deployment of NodeJS applications with Shipit and PM2. Here’s how the scenario typically works:

Alice, Bob, and Carol are three developers working on NodeJS applications that need to be deployed to their staging server. They’ve decided on PM2 as their process manager and Shipit as their deployment tool. Their shipitfile.js contains a block for the staging server, and it looks something like:

staging: {
  servers: [
    {
      host: 'apps.staging.iachieved.it',
      user: 'deployer',
    },
  ],
  environment: 'staging',
  branch: 'develop',
},

As we can see, the deployer user will be used to deploy our application and, by extension, will be the user that pm2 runs the application under. Alice, Bob, and Carol all have their SSH public keys in /home/deployer/.ssh/authorized_keys so they can deploy. Makes sense.

Unfortunately, what this also means is that Alice, Bob, or Carol can ssh to the staging server as the deployer user. Even though deployer is an unprivileged user, we really don’t want that. Moreover, by default, we can’t trace who deployed, or whether someone is misusing the deployer account. Let’s take a look at how we can address this.

Creating a Deployment Group

The first thing we want to do is create a security group for those who are authorized to perform deployments. I’ll be using an Active Directory security group in this example, but a standard Unix group would work as well. We can use getent to see the members of the group; getent will also come in handy later to determine whether someone attempting to deploy is authorized.

# getent group "application deployment@iachieved.it"
application deployment@iachieved.it:*:1068601118:alice@iachieved.it,bob@iachieved.it

SSH authorized_keys command

Until I started researching this problem of auditing and restricting shared account usage, I was unaware of the command option in the SSH authorized_keys file. One learns something new every day. The command option forces a specified command to run whenever a client authenticates with the given key, regardless of what the client asked to execute. Consider that we put the following entry in the deployer user’s ~/.ssh/authorized_keys file:

ssh-rsa AAAA...sCBR alice

and this is Alice’s public key. We would expect that Alice would be able to ssh deployer@apps.iachieved.it and get a shell. But what if we wanted to intercept this SSH and run a script instead? Let’s try it out:

deployer@apps.iachieved.it:~/.ssh/authorized_keys:

command="/usr/bin/logger -p auth.INFO Not permitted" ssh-rsa AAAA...sCBR alice

When Alice tries to ssh as the deployer user, we get an entry in auth.log:

Jul  5 22:30:58 apps deployer: Not permitted

and Alice sees Connection to apps closed..

Well that’s no good! We do want Alice to be able to use the deployer account to deploy code.

A Wrapper Script

First, we want Alice to be able to deploy code with the deployer user, but we also want to:

  • know that it was Alice
  • ensure Alice is an authorized deployer
  • not allow Alice to get a shell

Let’s look at how we can create a script, executed on each SSH invocation, that meets all of these criteria.

Step 1, let’s log and execute whatever Alice was attempting to do.

/usr/local/bin/deploy.sh:
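
The original script isn’t reproduced here, so what follows is a minimal sketch of what deploy.sh might look like: it refuses plain shell sessions, logs the attempted command via logger (as in the earlier authorized_keys example), and otherwise executes what the user asked for.

#!/bin/bash
# Sketch of /usr/local/bin/deploy.sh

# No command requested: the user is trying to open an interactive shell
if [ -z "$SSH_ORIGINAL_COMMAND" ]; then
  logger -p auth.info "$SSH_REMOTE_USER attempted to open a shell"
  exit 1
fi

# Log who ran what, then run it
logger -p auth.info "$SSH_REMOTE_USER executed $SSH_ORIGINAL_COMMAND"
eval "$SSH_ORIGINAL_COMMAND"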

SSH_ORIGINAL_COMMAND will be set automatically by sshd, but we need to provide SSH_REMOTE_USER, so in the authorized_keys file:

command="export SSH_REMOTE_USER=alice@iachieved.it;/usr/local/bin/deploy.sh" ssh-rsa AAAA...sCBR alice

Note that we explicitly set SSH_REMOTE_USER to alice@iachieved.it. The takeaway here is that this associates any attempt by Alice to use the deployer account with her userid. We then execute deploy.sh, which logs the invocation. If Alice tries to get a shell with ssh deployer@apps, the connection will still be closed, as SSH_ORIGINAL_COMMAND is null. But let’s say she runs ssh deployer@apps ls /:

alice@iachieved.it@apps ~> ssh deployer@apps ls /
bin
boot
dev
etc

In /var/log/auth.log we see:

Jul  6 13:43:25 apps sshd[18554]: Accepted publickey for deployer from ::1 port 48832 ssh2: RSA SHA256:thZna7v6go5EzcZABkieCmaZzp+6WSlYx37a3uPOMSs
Jul  6 13:43:25 apps sshd[18554]: pam_unix(sshd:session): session opened for user deployer by (uid=0)
Jul  6 13:43:25 apps systemd-logind[945]: New session 54 of user deployer.
Jul  6 13:43:25 apps systemd: pam_unix(systemd-user:session): session opened for user deployer by (uid=0)
Jul  6 13:43:26 apps deployer: alice@iachieved.it executed ls /

What is important here is that we can trace what Alice is executing.

Managing a Deployment Security Group

Left as is, this technique is much preferable to a free-for-all with the deployer user, but more can be done using security groups to gain finer control over who can use the account at any given time. Let’s add an additional check to the /usr/local/bin/deploy.sh script with the uig function introduced in the last post, as sketched below.
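
A sketch of that check, assuming the uig function is sourced or defined in the script and returns success when the user is a member of the group:

# Additional check in /usr/local/bin/deploy.sh (sketch)
DEPLOY_GROUP="application deployment@iachieved.it"

if uig "$SSH_REMOTE_USER" "$DEPLOY_GROUP"; then
  logger -p auth.info "$SSH_REMOTE_USER is in $DEPLOY_GROUP and executed $SSH_ORIGINAL_COMMAND"
  eval "$SSH_ORIGINAL_COMMAND"
else
  logger -p auth.info "$SSH_REMOTE_USER is not in $DEPLOY_GROUP and is not authorized to execute $SSH_ORIGINAL_COMMAND"
  exit 1
fi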

The authorized_keys file gets updated; let’s also add Bob’s and Carol’s keys for an additional positive (Bob) and negative (Carol) test:
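
The key material below is elided placeholder text; each entry simply sets SSH_REMOTE_USER to the appropriate userid before invoking the wrapper:

command="export SSH_REMOTE_USER=alice@iachieved.it;/usr/local/bin/deploy.sh" ssh-rsa AAAA... alice
command="export SSH_REMOTE_USER=bob@iachieved.it;/usr/local/bin/deploy.sh" ssh-rsa AAAA... bob
command="export SSH_REMOTE_USER=carol@iachieved.it;/usr/local/bin/deploy.sh" ssh-rsa AAAA... carol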

Since Bob is a member of the application deployment@iachieved.it group, he can proceed:

Jul  6 20:09:26 apps sshd[21886]: Accepted publickey for deployer from ::1 port 49148 ssh2: RSA SHA256:gs3j1xHvwJcSMBXxaqag6Pb7A595HVXIz2fMoCX2J/I
Jul  6 20:09:26 apps sshd[21886]: pam_unix(sshd:session): session opened for user deployer by (uid=0)
Jul  6 20:09:26 apps systemd-logind[945]: New session 79 of user deployer.
Jul  6 20:09:26 apps systemd: pam_unix(systemd-user:session): session opened for user deployer by (uid=0)
Jul  6 20:09:27 apps deployer: bob@iachieved.it is in application deployment@iachieved.it and executed ls /

Now, Carol’s turn to try ssh deployer@apps ls /:

Jul  6 20:15:37 apps deployer: carol@iachieved.it is not in application deployment@iachieved.it and is not authorized to execute ls /

Poor Carol.

Closing Thoughts

For some teams, having to manage a deployment security group and a bespoke authorized_keys file may be overkill. But if you’re in an environment with enhanced audit controls and accountability requirements, the ability to apply safeguards and auditing to code deployments may be a welcome addition.


A Script for Testing Membership in a Unix Group

Sometimes you just need a boolean test for a given question. In this post we’ll look at answering the question, “Is this user in a given group?” Seems simple enough.

It’s easy to see what groups a user is a member of in a shell:

alice@iachieved.it@darthvader:~$ id
uid=1068601116(alice@iachieved.it) gid=1068601115(iachievedit@iachieved.it) groups=1068601115(iachievedit@iachieved.it),1068600513(domain users@iachieved.it),1068601109(linux ssh@iachieved.it),1068601118(application deployment@iachieved.it)

Note that Alice is an Active Directory domain user. We want to test whether or not she is a member of the application deployment@iachieved.it group. We can see this with our eyes in the terminal, but a little scripting is in order. We’ll skip error checking for this first example.

uig.sh:
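
The original script isn’t reproduced here; the following is a minimal sketch built on getent, with no error checking. Note that getent group lists a group’s supplementary members, which is sufficient for the Active Directory groups we care about here.

#!/bin/bash
# uig.sh - is user $1 in group $2?
USERNAME="$1"
GROUPNAME="$2"

# Pull the member list (field 4 of the getent output), one member per line,
# and look for an exact match on the username
if getent group "$GROUPNAME" | cut -d: -f4 | tr ',' '\n' | grep -qFx "$USERNAME"; then
  echo "User $USERNAME IS in group $GROUPNAME"
else
  echo "User $USERNAME IS NOT in group $GROUPNAME"
fi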

Let’s take it out for a spin.

alice@iachieved.it@darthvader:~$ ./uig.sh `whoami` "linux administrators@iachieved.it"
User alice@iachieved.it IS NOT in group linux administrators@iachieved.it

Now let’s test whether or not Alice is in the application deployment@iachieved.it group:

alice@iachieved.it@darthvader:~$ ./uig.sh `whoami` "application deployment@iachieved.it"
User alice@iachieved.it IS in group application deployment@iachieved.it

Terrific. This will come in handy in the next blog post.

Let’s clean this up into a function that can be sourced in a script or shell:
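
A sketch of that function, returning success (exit status 0) when the user is in the group and failure otherwise:

uig () {
  # uig <user> <group>
  getent group "$2" | cut -d: -f4 | tr ',' '\n' | grep -qFx "$1"
}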

alice@iachieved.it@darthvader:~$ uig `whoami` "linux ssh@iachieved.it"
alice@iachieved.it@darthvader:~$ echo $?
0

Or, the invocation we’re most likely to use:
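
That is, something along these lines in a script (a sketch, using the deployment group from the examples above):

if uig "$(whoami)" "application deployment@iachieved.it"; then
  echo "Authorized, proceeding"
else
  echo "Not authorized"
  exit 1
fi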

TLS 1.3 with NGINX and Ubuntu 18.10

TLS 1.3 is on its way to a webserver near you, but it may be a while before major sites begin supporting it. It takes a bit of time for a new version of anything to take hold, and even longer if it’s the first new version of a protocol in nearly 10 years.

Fortunately you don’t have to wait to start experimenting with TLS 1.3; all you need is OpenSSL 1.1.1 and open source NGINX 1.15 (currently the mainline version), and you’re good to go.

OpenSSL

OpenSSL 1.1.1 is the first version to support TLS 1.3 and its ciphers:

  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
  • TLS_AES_128_GCM_SHA256
  • TLS_AES_128_CCM_8_SHA256
  • TLS_AES_128_CCM_SHA256

Since 1.1.1 is available out-of-the-box in Ubuntu 18.10 Cosmic Cuttlefish (as well as FreeBSD 12.0 and Alpine 3.9), we’ll be using it for this tutorial. Note that 18.10 is not an LTS release; the decision was made to port OpenSSL 1.1.1 to 18.04 (Bionic Beaver), but it did not make it into 18.04.2. We like to make things easy on ourselves, so we launched a publicly available ubuntu-cosmic-18.10-amd64-server-20181018 AMI in AWS.

NGINX

NGINX hardly needs an introduction, so we’ll skip straight to its support for TLS 1.3, which came all the way back in version 1.13.0 (August 2017), well before the protocol was finalized. Combined with OpenSSL 1.1.1, the current open source mainline version (1.15) of NGINX is fully capable of supporting TLS 1.3, including 0-RTT.

Current Browser Support for TLS 1.3

TLS 1.3 will be a moving target for months to come, but as of this writing (February 23, 2019), here’s a view of browser support for it. As you can see, it’s pretty limited at this point, with only the Chrome, Brave, and Firefox browsers capable of establishing a connection with a TLS 1.3-only webserver.

OS                   Browser                       TLS 1.3 Support   Negotiated Cipher
macOS 10.14.3        Chrome 72.0.3626.109          Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3        Firefox 65.0.1                Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3        Brave 0.59.35                 Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3        Safari 12.0.3 (14606.4.5)     No                NA
macOS 10.14.4        Safari 12.1                   Yes               TLS_AES_256_GCM_SHA384
iOS 12.2 (Beta)      Safari                        Yes               TLS_AES_256_GCM_SHA384
Windows 10.0.17134   IE 11.345.17134.0             No                NA
Windows 10.0.17134   Edge 17.17134                 No                NA
Ubuntu 18.10         curl/7.61.0                   Yes               TLS_AES_256_GCM_SHA384
Ubuntu 18.04.2       curl/7.58.0                   No                NA

Note: An astute reader might notice iOS 12.2 (currently in Beta) indeed supports TLS 1.3 and our webserver confirms it!

Testing It Out

To test things out, we’ll turn to our favorite automation tool, Ansible, and our tls13_nginx_cosmic repository of playbooks.

We happened to use an EC2 instance running Ubuntu 18.10, as well as Let’s Encrypt and Digital Ocean‘s Domain Records API. That’s a fair number of dependencies, but an enterprising DevOps professional should be able to take our example playbooks and scripts and modify them to suit their needs.

Rather than return HTML content (content-type: text/html), we return text/plain with interesting information from NGINX itself. This is facilitated by the Lua programming language and the Lua NGINX module. The magic is here in our nginx.conf:
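
The original configuration isn’t shown here; the following is a minimal sketch of the idea using content_by_lua_block and NGINX’s built-in $ssl_protocol and $ssl_cipher variables:

location / {
  default_type text/plain;
  content_by_lua_block {
    ngx.say("TLS version:  ", ngx.var.ssl_protocol)
    ngx.say("Cipher:       ", ngx.var.ssl_cipher)
  }
}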

This results in output similar to:
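
Based on the sketch above (and the negotiated ciphersuite noted below), the response body would look something like:

TLS version:  TLSv1.3
Cipher:       TLS_AES_256_GCM_SHA384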

In all of our tests thus far, TLS_AES_256_GCM_SHA384 was chosen as the ciphersuite.

Qualys SSL Assessment

Now let’s look at what Qualys SSL Server Test has to say about our site.

Not an A+, but notice in our nginx.conf we are not configuring HSTS or OCSP. Our standard Let’s Encrypt certificate is also hampering our score here.

Here’s what Qualys has to say about our server configuration:

The highlight here is that TLS 1.3 is supported by our server, whereas TLS 1.2 is not. This was done on purpose to not allow a connecting client to use anything but TLS 1.3. You definitely would not do this in practice as of February 2019, as the Qualys Handshake Simulation shows. Only Chrome 70 was able to connect to our server.

Closing Thoughts

As a DevOps practitioner, and someone who manages dozens of webservers professionally, I’m quite excited about the release and adoption of TLS 1.3. It will, no doubt, take quite some time before a majority of browsers and sites support it.

If you’re interested more about TLS 1.3 in general, there are a lot of great resources out there. Here are just a few:

Wikipedia has a good rundown of TLS 1.3 features and changes from TLS 1.2.

The folks at NGINX recently hosted a webinar on R17, the latest NGINX Plus version. TLS 1.3 and its benefits were covered in more detail.

Here’s a great tutorial on deploying modern TLS configurations (including 1.3) from Probely.

And, last but not least, Cloudflare has a number of in-depth TLS 1.3 articles.

Updating Yarn’s Apt Key on Ubuntu

If you’re one of those unfortunate souls who runs into the following error when running apt update

you are not alone. Fortunately the fix is easy, but it’s buried in the comments, so here it is without a lot of wading:
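
The original one-liner isn’t reproduced here; re-adding Yarn’s published signing key does the trick, something along these lines:

curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -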

Rerun apt update (or the apt-get equivalent), and you should be golden.

Leveraging Instance Size Flexibility with EC2 Reserved Instances

Determining which EC2 reserved instances to purchase in AWS can be a daunting task, especially given the fact that you’re signing up for a long(ish)-term commitment that costs you (or your employer) real money. It wasn’t until after several months of working with reserved instances and reading up on them that I became comfortable with their concepts and learned about a quite useful feature known as Instance Size Flexibility.

But first, we need to cover what this post is not about, and that is how to choose what type of instance you need to run a given application (web server, continuous integration build server, database, etc.). There are plenty of tutorials out there. Once you’ve become comfortable with your choice of instance types (I gravitate towards the T, M, and R types), you can begin thinking about saving on your EC2 compute costs by purchasing reserved instances.

I will admit to being a bit confused the first time I began purchasing reserved instances, and I attribute that to the fact that, well, they are a bit confusing. Standard reserved instances. Convertible reserved instances. Zonal reserved instances. No upfront payment. Partial upfront payment. Reserved instance marketplace. There’s a lot to take in, and on top of that, it is a bit nerve-wracking making a choice that you might have to live with (and pay) for a while. In fact, even after spending quite some time reading through everything, it still took me a few billing cycles to realize how reserved instances really worked.

While I can’t help you get over that initial intimidation factor, what I can do is share a bit of wisdom I gathered from How Reserved Instances Are Applied, with specific attention paid to How Regional Reserved Instances Are Applied.

With some exceptions, you can purchase a number of nano (or other size) reserved instances for a given instance type, and those reservations can be applied to larger (or smaller) instances in that same family. Note that there are exceptions (I told you it was confusing), as this feature does not apply to:

  • Reserved Instances that are purchased for a specific Availability Zone
  • bare metal instances
  • Reserved Instances with dedicated tenancy
  • Reserved Instances for Windows, Windows with SQL Standard, Windows with SQL Server Enterprise, Windows with SQL Server Web, RHEL, and SLES

But that’s okay, because my favorite type of machine, a shared tenancy instance running Ubuntu 16.04 or 18.04 LTS, is supported.

Instance Size Flexibility works like this. Each instance size is assigned a normalization factor, with the small size being given the unit factor of 1. A nano instance has a normalization factor of 0.25. That is, for the purposes of instance size flexibility and reserved instances, a single reservation for a small instance is the equivalent of 4 nano instances, and vice versa, 4 nano reserved instances are the equivalent of a single small reserved instance.

AWS publishes the normalization factors in the How Reserved Instances Are Applied documentation, but we’ll provide it here as well:

Instance size Normalization factor
nano 0.25
micro 0.5
small 1
medium 2
large 4
xlarge 8
2xlarge 16
4xlarge 32
8xlarge 64
9xlarge 72
10xlarge 80
12xlarge 96
16xlarge 128
18xlarge 144
24xlarge 192
32xlarge 256

Using Instance Size Flexibility In Your Account

Now let’s take advantage of our knowledge about normalization factors and see how we can apply them to our account (and our bill). We’re going to leverage the Ruby programming language and the AWS SDK for Ruby. If you’ve never used Ruby before, do yourself a favor and invest some time with it. You’ll be glad you did.

Let’s get started.

We’re going to be applying the instance size flexibility normalization factors, so let’s declare a Hash of their values.
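
The original snippet isn’t shown; a sketch of such a Hash, matching the table above:

NORMALIZATION_FACTORS = {
  'nano'     => 0.25,
  'micro'    => 0.5,
  'small'    => 1,
  'medium'   => 2,
  'large'    => 4,
  'xlarge'   => 8,
  '2xlarge'  => 16,
  '4xlarge'  => 32,
  '8xlarge'  => 64,
  '9xlarge'  => 72,
  '10xlarge' => 80,
  '12xlarge' => 96,
  '16xlarge' => 128,
  '18xlarge' => 144,
  '24xlarge' => 192,
  '32xlarge' => 256
}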

Using Bundler to pull in our AWS SDK gem, we will retrieve all of our instances in a given region (remember that this feature is scoped to the zones in a given region). I am using us-east-2 in this example, also known as US East Ohio.
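
A sketch of the retrieval, assuming the aws-sdk-ec2 gem is in your Gemfile:

require 'aws-sdk-ec2'

ec2 = Aws::EC2::Resource.new(region: 'us-east-2')
instances = ec2.instances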

Note that the above uses ~/.aws/credentials. If you do not have this file you will need to configure your access key ID and secret access key.

Let’s iterate over our instances (filtering out Windows instances since they are not eligible for Instance Size Flexibility) and create a hash of the various classes. In the end we want our hash to contain, as its keys, all of the classes (types) of instances we have, and the values to be a list of the sizes of those classes.
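
A sketch of that loop (the variable names are ours, not necessarily those of the original code):

classes = Hash.new { |h, k| h[k] = [] }

instances.each do |instance|
  next if instance.platform == 'windows'                     # not eligible for Instance Size Flexibility
  instance_class, size = instance.instance_type.split('.')   # e.g. "t2.nano" => ["t2", "nano"]
  classes[instance_class] << size
end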

For example, if we had 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances, our hash would look like this: {"t2"=>["nano", "nano", "nano", "nano", "small", "small", "small", "large"], "m4"=>["large", "large", "large", "large", "2xlarge", "2xlarge"]}.

Now we’re going to determine how many equivalent small instances we have. This is done by adding our normalization factors for each of the instance sizes.
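
A sketch of that calculation:

small_equivalents = {}

classes.each do |instance_class, sizes|
  small_equivalents[instance_class] = sizes.sum { |size| NORMALIZATION_FACTORS[size] }
end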

Using our previous example of 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances, we’re walking through the math of 0.25 + 0.25 + 0.25 + 0.25 + 1 + 1 + 1 + 4 for our t2 instances and 8 + 8 + 8 + 8 + 16 + 16 for the m4 instances. This results in a Hash that looks like this: {"t2"=>8, "m4"=>64}. To be clear, the interpretation of this is that we have, for the purposes of Instance Size Flexibility with reserved instances, the equivalent of 8 t2.small and 64 m4.small instances in us-east-2. Put another way, if we purchased 8 t2.small reserved instances and 64 m4.small instances in us-east-2, we would have 100% coverage of our EC2 costs with a reserved instance.

Now, let’s take it a step further and see what the equivalence would be for the other sizes. In other words, we know we have the equivalent of 8 t2.small and 64 m4.small instances, but what if we wanted to know how many equivalent nano instances we had? This loop will create a row for each class and size:
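
A sketch of that loop, dividing each class’s small-equivalent count by the normalization factor of each size:

small_equivalents.each do |instance_class, small_count|
  NORMALIZATION_FACTORS.each do |size, factor|
    puts "#{instance_class}.#{size}: #{small_count / factor}"
  end
end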

Again, taking our previous example, we would expect to see 32 t2.nano instances and 256 m4.nano instances. That’s right. If we purchased 32 t2.nano and 256 m4.nano instances we would have the equivalent of our 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances. Now, there doesn’t happen to be such a thing as an m4.nano instance, and we’ve corrected for this in our published example code.


Ansible 2.7 Deprecation Warning – apt and squash_actions

Ansible 2.7 was released recently, and along with it came a new deprecation warning for the apt module:

TASK [Install base packages] ******************************************
Thursday 18 October 2018  15:35:52 +0000 (0:00:01.648)       0:06:25.667 ******
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via
squash_actions is deprecated. Instead of using a loop to supply multiple items
and specifying `name: {{ item }}`, please use `name: [u'htop', u'zsh', u's3cmd']` and remove
the loop. This feature will be removed in version 2.11. Deprecation warnings
can be disabled by setting deprecation_warnings=False in ansible.cfg.

Our apt task was:

- name:  Install base packages
  apt:
    name:  "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - htop
    - zsh
    - s3cmd

Very standard.

The new style with Ansible 2.7 should look like:

- name:  Install base packages
  apt:
    name:  "{{ packages }}"
    state: present
    update_cache:  yes
  vars:
    packages:
      - htop
      - zsh
      - s3cmd

The change is self-explanatory (and is alluded to in the deprecation warning): rather than looping over a list and applying the apt module to each item, provide the module with the list of packages to process.

You can read up on the documentation for apt in Ansible 2.7 here.

Updating From Such a Repository Can’t Be Done Securely

I recently came across the (incredibly frustrating) error message Updating from such a repository can't be done securely while trying to run apt-get update on an Ubuntu 18.04 LTS installation. Everything was working fine on Ubuntu 16.04.5. It turns out that the newer version of apt (1.6.3) on Ubuntu 18.04.1 is stricter with regard to signed repositories than the apt (1.2.27) on Ubuntu 16.04.5.

Here’s an example of the error while trying to communicate with the Wazuh repository:

Reading package lists... Done
E: Failed to fetch https://packages.wazuh.com/apt/dists/xenial/InRelease  403  Forbidden [IP: 13.35.78.27 443]
E: The repository 'https://packages.wazuh.com/apt xenial InRelease' is no longer signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

After searching around, we found that this issue has already been reported to the Wazuh project, but the solution of adding [trusted=yes] did not work for a repository that had already been added in /etc/apt. After continued searching, the following solution was finally hit upon:

deb [allow-insecure=yes allow-downgrade-to-insecure=yes] https://packages.wazuh.com/apt xenial main

That is, rather than using [trusted=yes] one can use [allow-insecure=yes allow-downgrade-to-insecure=yes]. Running apt-get update afterwards shows that the InRelease section is ignored, and Release is picked up:

Ign:7 https://packages.wazuh.com/apt xenial InRelease
Hit:8 https://packages.wazuh.com/apt xenial Release

Note that this is obviously a temporary solution, and should only be applied to a misbehaving repository! If you’re so inclined, upvote the Wazuh GitHub issue, as a fix at the repository level would be nice.