iAchieved.it

Software Development Tips and Tricks


Fixing A Module Compiled Cannot Be Imported Errors

This was an annoying error message the first time I ran into it. I had just downloaded Xcode 10.2 beta 2 and opened a project I had been working on in Xcode 10.1, and was greeted by the error Module compiled with Swift 4.2.1 cannot be imported by the Swift 5.0 compiler. After a bit of searching I realized that I needed to rebuild my Carthage modules, and had forgotten that xcode-select needed to be employed as well.

So if you run into this like I did, the fix is simple: choose which Xcode you need to work with, use xcode-select to select it for command-line builds, and run carthage update.

For example, if I need to switch to the beta version of Xcode:

sudo xcode-select -s /Applications/Xcode-beta.app
carthage update

I include a nice Makefile in my projects that streamlines switching back and forth:
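
# A sketch of such a Makefile, assuming the default /Applications
# install paths; recipe lines must be indented with tabs.

xcode:
	sudo xcode-select -s /Applications/Xcode.app
	carthage update

xcode-beta:
	sudo xcode-select -s /Applications/Xcode-beta.app
	carthage update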

If I need to switch to using Xcode-beta (which I frequently do since my phone is usually running a beta build), I can just type make xcode-beta.

Now, of course, when I open my project back up in Xcode I get Module compiled with Swift 5.0 cannot be imported by the Swift 4.2.1 compiler, but that’s okay: a simple make xcode will get me going!


TLS 1.3 Support Coming with Safari 12.1

With the landing of macOS 10.14.4 Beta (18E194d), Safari has versioned up from 12.0.3 to 12.1, and brings with it support for TLS 1.3!

Pointing Safari 12.1 at tls13.iachieved.it (which only runs TLS 1.3) returns

Your User Agent is: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1 Safari/605.1.15

with TLS_AES_256_GCM_SHA384 selected as the cipher.

Note: If your current browser doesn’t support TLS 1.3 it will not be able to connect to tls13.iachieved.it.

Interested in standing up your own TLS 1.3 webserver, or seeing which browsers and clients support TLS 1.3? Come check out our instructions on configuring NGINX to do just that.


TLS 1.3 with NGINX and Ubuntu 18.10

TLS 1.3 is on its way to a webserver near you, but it may be a while before major sites begin supporting it. It takes a bit of time for a new version of anything to take hold, and even longer if it’s the first new version of a protocol in nearly 10 years.

Fortunately you don’t have to wait to start experimenting with TLS 1.3; all you need is OpenSSL 1.1.1 and open source NGINX 1.15 (currently the mainline version), and you’re good to go.

OpenSSL

OpenSSL 1.1.1 is the first version to support TLS 1.3 and its ciphers:

  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
  • TLS_AES_128_GCM_SHA256
  • TLS_AES_128_CCM_8_SHA256
  • TLS_AES_128_CCM_SHA256

Since 1.1.1 is available out-of-the-box in Ubuntu 18.10 Cosmic Cuttlefish (as well as FreeBSD 12.0 and Alpine 3.9), we’ll be using it for this tutorial. Note that 18.10 is not an LTS release; the decision was made to backport OpenSSL 1.1.1 to 18.04 (Bionic Beaver), but it did not make it into 18.04.2. We like to make things easy on ourselves, and launched a publicly available ubuntu-cosmic-18.10-amd64-server-20181018 AMI in AWS.

NGINX

NGINX hardly needs an introduction, so we’ll skip straight to its support for TLS 1.3, which came all the way back in version 1.13.0 (August 2017), well before the protocol was finalized. Combined with OpenSSL 1.1.1, the current open source version (1.15), NGINX is fully capable of supporting TLS 1.3, including 0-RTT.
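
Enabling it in a server block comes down to the ssl_protocols directive. A minimal sketch (the certificate paths here are placeholders):

listen 443 ssl;
ssl_protocols TLSv1.3;
ssl_certificate     /path/to/fullchain.pem;
ssl_certificate_key /path/to/privkey.pem;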

Current Browser Support for TLS 1.3

TLS 1.3 will be a moving target for months to come, but as of this writing (February 23, 2019), here’s a view of browser support for it. As you can see, it’s pretty limited at this point, with only the Chrome, Brave, and Firefox browsers capable of establishing a connection with a TLS 1.3-only webserver.

OS                  Browser                     TLS 1.3 Support  Negotiated Cipher
macOS 10.14.3       Chrome 72.0.3626.109        Yes              TLS_AES_256_GCM_SHA384
macOS 10.14.3       Firefox 65.0.1              Yes              TLS_AES_256_GCM_SHA384
macOS 10.14.3       Brave 0.59.35               Yes              TLS_AES_256_GCM_SHA384
macOS 10.14.3       Safari 12.0.3 (14606.4.5)   No               NA
macOS 10.14.4       Safari 12.1                 Yes              TLS_AES_256_GCM_SHA384
iOS 12.2 (Beta)     Safari                      Yes              TLS_AES_256_GCM_SHA384
Windows 10.0.17134  IE 11.345.17134.0           No               NA
Windows 10.0.17134  Edge 17.17134               No               NA
Ubuntu 18.10        curl/7.61.0                 Yes              TLS_AES_256_GCM_SHA384
Ubuntu 18.04.2      curl/7.58.0                 No               NA

Note: An astute reader might notice iOS 12.2 (currently in Beta) indeed supports TLS 1.3 and our webserver confirms it!

Testing It Out

To test things out, we’ll turn to our favorite automation tool, Ansible, and our tls13_nginx_cosmic repository of playbooks.

We happened to use an EC2 instance running Ubuntu 18.10, as well as Let’s Encrypt and DigitalOcean’s Domain Records API. That’s a fair number of dependencies, but an enterprising DevOps professional should be able to take our example playbooks and scripts and modify them to suit their needs.

Rather than return HTML content (content-type: text/html), we return text/plain with interesting information from NGINX itself. This is facilitated by the Lua programming language and the Lua NGINX module. The magic is in our nginx.conf:
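
# A sketch of the relevant location block; the exact configuration
# lives in the tls13_nginx_cosmic repository.
location / {
  default_type text/plain;
  content_by_lua_block {
    ngx.say("Your User Agent is: ", ngx.var.http_user_agent)
    ngx.say("TLS protocol: ", ngx.var.ssl_protocol)
    ngx.say("Cipher: ", ngx.var.ssl_cipher)
  }
}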

This results in output similar to:
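
Your User Agent is: curl/7.61.0
TLS protocol: TLSv1.3
Cipher: TLS_AES_256_GCM_SHA384

(Output shown here follows the sketch above; the exact fields depend on your nginx.conf.)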

In all of our tests thus far, TLS_AES_256_GCM_SHA384 was chosen as the ciphersuite.

Qualys SSL Assessment

Now let’s look at what Qualys SSL Server Test has to say about our site.

Not an A+, but notice in our nginx.conf we are not configuring HSTS or OCSP. Our standard Let’s Encrypt certificate is also hampering our score here.

Here’s what Qualys has to say about our server configuration:

The highlight here is that TLS 1.3 is supported by our server, whereas TLS 1.2 is not. This was done on purpose to not allow a connecting client to use anything but TLS 1.3. You definitely would not do this in practice as of February 2019, as the Qualys Handshake Simulation shows: only Chrome 70 was able to connect to our server.

Closing Thoughts

As a DevOps practitioner, and someone who manages dozens of webservers professionally, I’m quite excited about the release and adoption of TLS 1.3. It will, no doubt, take quite some time before a majority of browsers and sites support it.

If you’re interested more about TLS 1.3 in general, there are a lot of great resources out there. Here are just a few:

Wikipedia has a good rundown of TLS 1.3 features and changes from TLS 1.2.

The folks at NGINX recently hosted a webinar on R17, the latest NGINX Plus version, in which TLS 1.3 and its benefits were covered in more detail.

Here’s a great tutorial on deploying modern TLS configurations (including 1.3) from Probely.

And, last but not least, Cloudflare has a number of in-depth TLS 1.3 articles.


Updating Yarn’s Apt Key on Ubuntu

If you’re one of those unfortunate souls that run into the following error when running apt update
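
W: GPG error: https://dl.yarnpkg.com/debian stable InRelease: The following signatures were invalid: EXPKEYSIG 23E7166788B63E1E Yarn Packaging <yarn@dan.cx>
E: The repository 'https://dl.yarnpkg.com/debian stable InRelease' is not signed.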

you are not alone (the exact key ID and wording may differ). Fortunately the fix is easy, but it’s buried in the comments, so here it is without a lot of wading:
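
# Re-import Yarn's signing key (assuming the standard Yarn apt repository)
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -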

Rerun apt update (or the apt-get equivalent), and you should be golden.


Swift 4 titleized String Extension

Swift 4.2

Rails provides the titleize inflector (capitalizes all the words in a string), and I needed one for Swift too. The code snippet below adds a computed property to String and follows these rules:

  • The first letter of the first word of the string is capitalized
  • The first letters of the remaining words in the string are capitalized, except for words that are considered “small words”

Create a file in your Xcode project named String+Titleized.swift and paste in the following String extension:
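
// A sketch of such an extension; SMALL_WORDS here is seeded with
// Spanish articles and conjunctions as an example.
import Foundation

extension String {
  private static let SMALL_WORDS = ["de", "del", "el", "la", "los", "las", "un", "una", "y", "o", "en"]

  var titleized: String {
    return self.lowercased()
      .components(separatedBy: " ")
      .enumerated()
      .map { (index, word) -> String in
        // Capitalize the first word, and every word thereafter
        // unless it is a "small word"
        if index > 0 && String.SMALL_WORDS.contains(word) {
          return word
        }
        return word.prefix(1).uppercased() + String(word.dropFirst())
      }
      .joined(separator: " ")
  }
}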

Configure SMALL_WORDS to your liking. In this example I was titleizing Spanish phrases, so my SMALL_WORDS contains various definite articles and conjunctions. An example of the usage and output:
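
With the sketch above:

let phrase = "el primer día de clases"
print(phrase.titleized) // El Primer Día de Clases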

Note: This post is a Swift 4.2 version of this one written in Swift 2.


Leveraging Instance Size Flexibility with EC2 Reserved Instances

Determining which EC2 reserved instances to purchase in AWS can be a daunting task, especially given the fact that you’re signing up for a long(ish)-term commitment that costs you (or your employer) real money. It wasn’t until after several months of working with reserved instances and reading up that I became comfortable with their concepts and learned about a quite useful feature known as Instance Size Flexibility.

But first, we need to cover what this post is not about, and that is how to choose what type of instance you need to run a given application (web server, continuous integration build server, database, etc.). There are plenty of tutorials out there. Once you’ve become comfortable with your choice of instance types (I gravitate towards the T, M, and R types), you can begin thinking about saving on your EC2 compute costs by purchasing reserved instances.

I will admit to being a bit confused the first time I began purchasing reserved instances, and I attribute that to the fact that, well, they are a bit confusing. Standard reserved instances. Convertible reserved instances. Zonal reserved instances. No upfront payment. Partial upfront payment. Reserved instance marketplace. There’s a lot to take in, and on top of that, it is a bit nerve-wracking making a choice that you might have to live with (and pay) for a while. In fact, even after spending quite some time reading through everything, it still took me a few billing cycles to realize how reserved instances really worked.

While I can’t help you get over that initial intimidation factor, what I can do is share a bit of wisdom I gathered from How Reserved Instances Are Applied, with specific attention paid to How Regional Reserved Instances Are Applied.

With some exceptions, you can purchase a number of nano (or other size) reserved instances for a given instance type, and those reservations can be applied to larger (or smaller) instances in that same family. Note that there are exceptions (I told you it was confusing), as this feature does not apply to:

  • Reserved Instances that are purchased for a specific Availability Zone
  • bare metal instances
  • Reserved Instances with dedicated tenancy
  • Reserved Instances for Windows, Windows with SQL Standard, Windows with SQL Server Enterprise, Windows with SQL Server Web, RHEL, and SLES

But that’s okay, because my favorite type of machine, a shared tenancy instance running Ubuntu 16.04 or 18.04 LTS, is supported.

Instance Size Flexibility works like this. Each instance size is assigned a normalization factor, with the small size being given the unit factor of 1. A nano instance has a normalization factor of 0.25. That is, for the purposes of instance size flexibility and reserved instances, a single reservation for a small instance is the equivalent of 4 nano instances, and vice versa, 4 nano reserved instances are the equivalent of a single small reserved instance.

AWS publishes the normalization factors in the How Reserved Instances Are Applied documentation, but we’ll provide it here as well:

Instance size  Normalization factor
nano           0.25
micro          0.5
small          1
medium         2
large          4
xlarge         8
2xlarge        16
4xlarge        32
8xlarge        64
9xlarge        72
10xlarge       80
12xlarge       96
16xlarge       128
18xlarge       144
24xlarge       192
32xlarge       256

Using Instance Size Flexibility In Your Account

Now let’s take advantage of our knowledge about normalization factors and see how we can apply them to our account (and our bill). We’re going to leverage the Ruby programming language and the AWS SDK for Ruby. If you’ve never used Ruby before, do yourself a favor and invest some time with it. You’ll be glad you did.

Let’s get started.

We’re going to be applying the instance size flexibility normalization factors, so let’s declare a Hash of their values.
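
# Normalization factors, mirroring the table above (the names in
# this sketch are ours; the published example code may differ)
NORMALIZATION_FACTORS = {
  'nano'     => 0.25,
  'micro'    => 0.5,
  'small'    => 1,
  'medium'   => 2,
  'large'    => 4,
  'xlarge'   => 8,
  '2xlarge'  => 16,
  '4xlarge'  => 32,
  '8xlarge'  => 64,
  '9xlarge'  => 72,
  '10xlarge' => 80,
  '12xlarge' => 96,
  '16xlarge' => 128,
  '18xlarge' => 144,
  '24xlarge' => 192,
  '32xlarge' => 256
}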

Using Bundler to pull in our AWS SDK gem, we will retrieve all of our instances in a given region (remember that this feature is scoped to the zones in a given region). I am using us-east-2 in this example, also known as US East Ohio.
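
Assuming a Gemfile that pulls in the aws-sdk-ec2 gem, a sketch:

require 'aws-sdk-ec2'

ec2 = Aws::EC2::Resource.new(region: 'us-east-2')
instances = ec2.instances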

Note that the above uses ~/.aws/credentials. If you do not have this file you will need to configure your access key ID and secret access key.

Let’s iterate over our instances (filtering out Windows instances since they are not eligible for Instance Size Flexibility) and create a hash of the various classes. In the end we want our hash to contain, as its keys, all of the classes (types) of instances we have, and the values to be a list of the sizes of those classes.
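
A sketch of that iteration:

classes = {}
instances.each do |instance|
  # Windows instances are not eligible for Instance Size Flexibility
  next if instance.platform.to_s.downcase == 'windows'
  clazz, size = instance.instance_type.split('.')
  (classes[clazz] ||= []) << size
end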

For example, if we had 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances, our hash would look like this: {"t2"=>["nano", "nano", "nano", "nano", "small", "small", "small", "large"], "m4"=>["large", "large", "large", "large", "2xlarge", "2xlarge"]}.

Now we’re going to determine how many equivalent small instances we have. This is done by adding our normalization factors for each of the instance sizes.
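
Something like:

small_equivalents = {}
classes.each do |clazz, sizes|
  small_equivalents[clazz] = sizes.sum { |size| NORMALIZATION_FACTORS[size] }
end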

Using our previous example of 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances, we’re walking through the math of 0.25 + 0.25 + 0.25 + 0.25 + 1 + 1 + 1 + 4 for our t2 instances and 8 + 8 + 8 + 8 + 16 + 16 for the m4 instances. This results in a Hash that looks like this: {"t2"=>8, "m4"=>64}. To be clear, the interpretation of this is that we have, for the purposes of Instance Size Flexibility with reserved instances, the equivalent of 8 t2.small and 64 m4.small instances in us-east-2. Put another way, if we purchased 8 t2.small and 64 m4.small reserved instances in us-east-2, we would have 100% reserved instance coverage of our EC2 costs.

Now, let’s take it a step further and see what the equivalence would be for the other sizes. In other words, we know we have the equivalent of 8 t2.small and 64 m4.small instances, but what if we wanted to know how many equivalent nano instances we had? This loop will create a row for each class and size:
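
# A sketch; dividing a class's small-equivalents by a size's
# normalization factor yields the equivalent count at that size
NORMALIZATION_FACTORS.each_key do |size|
  small_equivalents.each do |clazz, smalls|
    puts "#{clazz}.#{size}: #{smalls / NORMALIZATION_FACTORS[size]}"
  end
end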

Again, taking our previous example, we would expect to see 32 t2.nano instances and 256 m4.nano instances. That’s right. If we purchased 32 t2.nano and 256 m4.nano instances we would have the equivalent of our 4 t2.nano, 3 t2.small instances, 1 t2.large, 4 m4.xlarge instances, and 2 m4.2xlarge instances. Now, there doesn’t happen to be such a thing as an m4.nano instance, and we’ve corrected for this in our published example code.


Creating Strong Passwords with DuckDuckGo

Over the past year I’ve been taking online privacy more seriously and began looking at alternative search engines such as DuckDuckGo and Startpage. In addition, when creating strong passwords I turn to tools such as KeePass and Strong Password Generator. Earlier today I duckducked strong password, and a smile formed on my face when I saw this:

Well now how cool is that? Very cool.

Even cooler, however, is using DuckDuckGo’s pwgen feature to create passwords of varying strengths and lengths. Duckduck pwgen strong 16 to get something like:

If you prefer a “lower strength” password, you can use the low parameter, for example, pwgen low 24. Or, just average strength with pwgen 32 (the strength parameter is omitted).

From looking at the difference in output between low, average, and high strength passwords, it appears that:

  • low strength passwords are created from the character set [a-zA-Z]
  • average strength passwords include numbers, increasing the set to [0-9a-zA-Z]
  • high strength passwords include symbols in the set [!@#$%^&*()] (note that the brackets are not in the set; this is regular expression bracket notation)

Instant Answers

This DuckDuckGo feature uses instant answers, an increasingly common feature of search engines. Each DuckDuckGo instant answer has an entry page, and the password generator is (aptly) named Password. You can even review the Perl source code on GitHub: Password.pm

Closing Thoughts

To be honest, I think this is a pretty cool feature. Now we could argue as to what constitutes a “strong” password, but we won’t. We could discuss entropy, passwords vs. passphrases, and so on. But we won’t. For a quick way to generate a pretty doggone good password, though, just duckduck one.


Ansible 2.7 Deprecation Warning – apt and squash_actions

Ansible 2.7 was released recently and along with it brought a new deprecation warning for the apt module:

TASK [Install base packages] ****************************************** 
Thursday 18 October 2018  15:35:52 +0000 (0:00:01.648)       0:06:25.667 ****** 
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via 
squash_actions is deprecated. Instead of using a loop to supply multiple items 
and specifying `name: {{ item }}`, please use `name: [u'htop', u'zsh', u's3cmd']` and remove
the loop. This feature will be removed in version 2.11. Deprecation warnings 
can be disabled by setting deprecation_warnings=False in ansible.cfg.

Our apt task was:

- name:  Install base packages
  apt:
    name:  "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - htop
    - zsh
    - s3cmd

Very standard.

The new style with Ansible 2.7 should look like:

- name:  Install base packages
  apt:
    name:  "{{ packages }}"
    state: present
    update_cache:  yes
  vars:
    packages:
      - htop
      - zsh
      - s3cmd

The change is self-explanatory (and is alluded to in the deprecation warning): rather than looping over a list and applying the apt module to each item, provide the module with the list of items to process.

You can read up on the documentation for apt in Ansible 2.7 here.


Updating From Such a Repository Can’t Be Done Securely

I recently came across the (incredibly frustrating) error message Updating from such a repository can't be done securely while trying to run apt-get update on an Ubuntu 18.04 LTS installation. Everything was working fine on Ubuntu 16.04.5. It turns out that the newer version of apt (1.6.3) on Ubuntu 18.04.1 is stricter with regard to signed repositories than the one in Ubuntu 16.04.5 (apt 1.2.27).

Here’s an example of the error while trying to communicate with the Wazuh repository:

Reading package lists... Done
E: Failed to fetch https://packages.wazuh.com/apt/dists/xenial/InRelease  403  Forbidden [IP: 13.35.78.27 443]
E: The repository 'https://packages.wazuh.com/apt xenial InRelease' is no longer signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

After searching around, we found that this issue has already been reported to the Wazuh project, but the solution of adding [trusted=yes] did not work for a repository that had already been added in /etc/apt. After continued searching, the following solution was finally hit upon:

deb [allow-insecure=yes allow-downgrade-to-insecure=yes] https://packages.wazuh.com/apt xenial main

That is, rather than using [trusted=yes] one can use [allow-insecure=yes allow-downgrade-to-insecure=yes]. Running apt-get update afterwards shows that the InRelease section is ignored, and Release is picked up:

Ign:7 https://packages.wazuh.com/apt xenial InRelease
Hit:8 https://packages.wazuh.com/apt xenial Release

Note that this is obviously a temporary solution, and should only be applied to a misbehaving repository! If you’re so inclined, upvote the Wazuh GitHub issue, as a fix at the repository level would be nice.


GeoIP2 and NGINX

There are times when you want to configure your website to explicitly disallow access from certain countries, or only allow access from a given set of countries. While not completely precise, use of the MaxMind GeoIP databases to look up a web client’s country-of-origin and have the web server respond accordingly is a popular technique.

There are a number of NGINX tutorials on how to use the legacy GeoIP database and the ngx_http_geoip_module, and as it happens the default Ubuntu nginx package includes the ngx_http_geoip_module. Unfortunately the GeoIP databases will no longer be updated, and MaxMind has migrated to GeoIP2. Moreover, after January 2, 2019, the GeoIP databases will no longer be available.

This leaves us in a bind. Luckily, while the Ubuntu distribution of NGINX doesn’t come with GeoIP2 support, we can add it by building from source. Which is exactly what we’ll do! In this tutorial we’re going to build nginx from the ground up, modeling its configuration options after those that are used by the canonical nginx packages available from Ubuntu 16.04. You’ll want to go through this tutorial on a fresh installation of Ubuntu 16.04 or later; we’ll be using an EC2 instance created from the AWS Quick Start Ubuntu Server 16.04 LTS (HVM), SSD Volume Type AMI.

If you’re a fan of NGINX and hosting secure webservers, check out our latest post on configuring NGINX with support for TLS 1.3.

Getting Started

Since we’re going to be building binaries, we’ll need the build-essential package, a metapackage that installs applications such as make and gcc:
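
sudo apt-get update
sudo apt-get install -y build-essential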

Now, to install all of the prerequisite libraries we’ll need to compile NGINX:
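
# PCRE, zlib, and OpenSSL development headers
sudo apt-get install -y libpcre3-dev zlib1g-dev libssl-dev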

Using the GeoIP2 database with NGINX requires the ngx_http_geoip2_module, which in turn requires the libmaxminddb development packages from MaxMind:
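
# libmaxminddb from the MaxMind PPA (package names as of this writing)
sudo add-apt-repository -y ppa:maxmind/ppa
sudo apt-get update
sudo apt-get install -y libmaxminddb0 libmaxminddb-dev mmdb-bin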

Getting the Sources

Now let’s go and download NGINX. We’ll be using the latest dot-release of the 1.15 series, 1.15.3. I prefer to compile things in /usr/local/src, so:
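
cd /usr/local/src
sudo wget http://nginx.org/download/nginx-1.15.3.tar.gz
sudo tar xzf nginx-1.15.3.tar.gz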

We also need the source for the GeoIP2 NGINX module:
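
sudo git clone https://github.com/leev/ngx_http_geoip2_module.git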

Now, to configure and compile.
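
An abbreviated configure sketch with the options that matter for our layout and for GeoIP2 (the Ubuntu packages use a much longer option list):

cd nginx-1.15.3
sudo ./configure \
  --prefix=/usr/share/nginx \
  --sbin-path=/usr/share/nginx/sbin/nginx \
  --modules-path=/usr/lib/nginx/modules \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --http-client-body-temp-path=/var/lib/nginx/body \
  --with-http_ssl_module \
  --add-dynamic-module=../ngx_http_geoip2_module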

You will want to make sure that the ngx_http_geoip2_module will be compiled, and should see ngx_http_geoip2_module was configured at the end of the configure output.

Now, run sudo make. NGINX, for all its power, is a compact and light application, and compiles in under a minute. If everything compiles properly, you can run sudo make install.

A few last things to complete our installation:

  • creating a symlink from /usr/sbin/nginx to /usr/share/nginx/sbin/nginx
  • creating a symlink from /usr/share/nginx/modules to /usr/lib/nginx/modules
  • creating the /var/lib/nginx/body directory
  • installing an NGINX systemd service file

For the Systemd service file, place the following in /lib/systemd/system/nginx.service:
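
# A unit modeled on the one shipped with Ubuntu's nginx package
[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target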

and reload systemd with sudo systemctl daemon-reload. You should now be able to check the status of nginx:
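
sudo systemctl status nginx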

We’ll be starting it momentarily!

Testing

On to testing! We’re going to use HTTP (rather than HTTPS) in this example.

While we’ve installed the libraries that interact with the GeoIP2 database, we haven’t yet installed the database itself. This can be accomplished by installing the geoipupdate package from the MaxMind PPA:

# sudo apt-get install -y geoipupdate

Now run sudo geoipupdate -v:

It’s a good idea to periodically update the GeoIP2 databases with geoipupdate. This is typically accomplished with a cron job like:

# crontab -l
30 0 * * 6 /usr/bin/geoipupdate -v | /usr/bin/logger

Note: Use of logger here is optional; we just like to see the output of the geoipupdate invocation in /var/log/syslog.

Nginx Configuration

Now that nginx is built and installed and we have a GeoIP2 database in /usr/share/GeoIP, we can finally get to the task of restricting access to our website. Here is our basic nginx.conf:

load_module modules/ngx_http_geoip2_module.so;

worker_processes auto;

events {
  worker_connections  1024;
}

http {
  sendfile      on;
  include       mime.types;
  default_type  application/octet-stream;
  keepalive_timeout  65;

  geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
    $geoip2_data_country_code country iso_code;
  }

  map $geoip2_data_country_code $allowed_country {
    default no;
    US yes;
  }

  server {
    listen       80;
    server_name  localhost;

    if ($allowed_country = no) {
      return 403;
    }

    location / {
        root   html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
  }
}

Let’s walk through the relevant directives one at a time.

load_module modules/ngx_http_geoip2_module.so;

Since we built nginx with ngx_http_geoip2_module as a dynamic module, we need to load it explicitly with the load_module directive.

Looking up the ISO country code from the GeoIP2 database utilizes our geoip2 module:

geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
  $geoip2_data_country_code country iso_code;
}

The country code of the client IP address will be placed in the NGINX variable $geoip2_data_country_code. From this value we determine what to set $allowed_country to:

map $geoip2_data_country_code $allowed_country {
  default no;
  US yes;
}

map in the NGINX configuration file is a bit like a switch statement (I’ve chosen the Swift syntax of switch):

switch geoip2_data_country_code {
case "US":
  allowed_country = "yes"
default:
  allowed_country = "no"
}

If we wanted to allow IPs from the United States, Mexico, and Canada the map directive would look like:

map $geoip2_data_country_code $allowed_country {
  default no;
  US yes;
  MX yes;
  CA yes;
}

geoip2 and map by themselves do not restrict access to the site. This is accomplished through the if statement which is located in the server block:

if ($allowed_country = no) {
  return 403;
}

This is pretty self-explanatory. If $allowed_country is no then return a 403 Forbidden.

If you haven’t done so already, start nginx with systemctl start nginx and give the configuration a go. It’s quite easy to test your nginx configuration by disallowing your country, restarting nginx (systemctl restart nginx), and trying to access your site.

Credits and Disclaimers

NGINX and associated logos belong to NGINX Inc. MaxMind, GeoIP, minFraud, and related trademarks belong to MaxMind, Inc.

The following resources were invaluable in developing this tutorial: