iAchieved.it

Software Development Tips and Tricks


TLS 1.3 with NGINX and Ubuntu 18.04 LTS

OpenSSL 1.1.1 is now available in Ubuntu 18.04 LTS with the release of 18.04.3. This backport of OpenSSL 1.1.1 opens up the ability to run TLS 1.3 on your Ubuntu 18.04 LTS NGINX-powered webserver. To add TLS 1.3 support to your existing NGINX installation, first upgrade your Ubuntu 18.04 LTS server to 18.04.3, then find the ssl_protocols directive in your NGINX configuration and add TLSv1.3 at the end:
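For example (a sketch; the surrounding server block and certificate directives are unchanged and omitted here):

```nginx
# Keep TLSv1.2 enabled alongside TLSv1.3
ssl_protocols TLSv1.2 TLSv1.3;
```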

Restart NGINX with systemctl restart nginx.

It really is as simple as that! If your browser supports TLS 1.3 (and as of November 2019 all major browsers do, with the notable exception of Microsoft Edge), it will negotiate it. As of this writing (November 2019), you would not want to disable TLSv1.2: odds are you would break tools such as cURL and other HTTPS agents accessing your site. Here’s an example of what that looks like for curl on macOS 10.14.6 (Mojave):

In other words, the stock macOS 10.14.6 curl cannot establish a connection with a webserver running only TLS 1.3.

Enabling 0-RTT

There are a lot of compelling features in TLS 1.3, one of them being 0-RTT, which improves performance when establishing a connection to the webserver. NGINX enables TLS 1.3 0-RTT if the configuration parameter ssl_early_data is set to on. If you are using the stock NGINX provided by Ubuntu 18.04 LTS, 0-RTT is not supported. Let’s upgrade to the version provided by the NGINX PPA and enable it.

Go back to your NGINX configuration and place the ssl_early_data directive near the other ssl_ directives, like this:
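A sketch of what that section might look like (certificate paths and the rest of the ssl_ directives omitted):

```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_early_data on;
```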

Now, all that being said, 0-RTT is not something you will want to enable without careful consideration. The “early” in SSL early data comes from the fact that a client holding a pre-shared key from a previous session can send application data in its very first flight, before the handshake completes. This is a great post outlining the benefits, and risks (notably replay attacks), of enabling 0-RTT.


OpenWrt SNMP Interface Descriptions

If you’re familiar with configuring network gear, you know that a very useful best practice is providing “plain English” descriptions of your device’s ports. For example, on my Cisco SF500-48MP switch port 24 is the “uplink port” to the gateway router. I make this clear in the port’s description:

sw01#show interfaces description fa1/2/24
Port      Description
-------   -----------
fa1/2/24  Uplink to Internet Gateway

By doing so, the ifAlias OID for this interface is set:

snmpget -c public -v2c sw01.iachieved.it IF-MIB::ifAlias.24 
IF-MIB::ifAlias.24 = STRING: Uplink to Internet Gateway

What is particularly nice about this is that a network monitoring tool such as Observium will display the ifAlias string as part of the description of the port. Like I said, this becomes very useful, particularly when trying to track down where ports lead.

In the previous post we installed SNMP on an OpenWrt router and surfaced it in Observium. By default the snmpd package doesn’t present any information for ifAlias, but we can fix that with snmpset.

Permitting snmpset Access

snmpset will make use of the SNMP private community on our OpenWrt router (note: in a production environment you might consider SNMP v3 with authentication, or at the very least changing your community strings). By default the OpenWrt SNMP configuration only permits use of the private community from localhost (i.e., the router itself). We’ll change that to permit access from our private subnet:

Find this section in your /etc/config/snmpd file:

config com2sec private
    option secname rw
    option source localhost
    option community private

and change the option source like this:

option source 192.168.77.0/24

Obviously you’ll use the appropriate subnet in your configuration.

Restart snmpd on the router with /etc/init.d/snmpd restart.

Updating ifAlias

To update the appropriate ifAlias entries we need to see the ifDescr list. This can be obtained by walking ifDescr with snmpwalk:

snmpwalk  -c public -v2c gw.gw01.chcgil01.iachieved.it ifDescr
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth1
IF-MIB::ifDescr.3 = STRING: eth0
IF-MIB::ifDescr.5 = STRING: wlan0
IF-MIB::ifDescr.6 = STRING: wlan1
IF-MIB::ifDescr.7 = STRING: br-lan
IF-MIB::ifDescr.8 = STRING: eth0.1
IF-MIB::ifDescr.9 = STRING: eth1.2
IF-MIB::ifDescr.10 = STRING: eth0.100
IF-MIB::ifDescr.11 = STRING: eth1.3
IF-MIB::ifDescr.12 = STRING: eth1.4

In our Chicago router example let’s label the three interfaces that are OSPF links to other routers:

  • eth1.2 is a link to gw01.dnvrco01
  • eth1.3 is a link to gw01.atlaga01
  • eth1.4 is a link to gw01.dllstx01

From the output of ifDescr we can see that

  • eth1.2 will map to ifAlias.9
  • eth1.3 will map to ifAlias.11
  • eth1.4 will map to ifAlias.12

So let’s set those ifAlias strings!

# snmpset -c private -v2c gw.gw01.chcgil01.iachieved.it ifAlias.9 string "OSPF Link to gw01.dnvrco01"
IF-MIB::ifAlias.9 = STRING: OSPF Link to gw01.dnvrco01
# snmpset -c private -v2c gw.gw01.chcgil01.iachieved.it ifAlias.11 string "OSPF Link to gw01.atlaga01"
IF-MIB::ifAlias.11 = STRING: OSPF Link to gw01.atlaga01
# snmpset -c private -v2c gw.gw01.chcgil01.iachieved.it ifAlias.12 string "OSPF Link to gw01.dllstx01"
IF-MIB::ifAlias.12 = STRING: OSPF Link to gw01.dllstx01

The Catch

The problem with this approach is its lack of persistence – reboot your router and watch those interface descriptions bite the dust. No worries, though: the fix is simple.

Go back to /etc/config/snmpd and change your private community to once again accept interaction only from localhost (in other words, what it was originally!):

config com2sec private
    option secname rw
    option source localhost
    option community private

Restart snmpd with /etc/init.d/snmpd restart.

On the router we’re going to edit /etc/rc.local and before exit 0 put:

# Wait for snmpd to accept connections
/bin/sleep 5

/usr/bin/snmpset -c private -v2c localhost ifAlias.9 string "OSPF Link to gw01.dnvrco01" > /tmp/snmpset.log
/usr/bin/snmpset -c private -v2c localhost ifAlias.11 string "OSPF Link to gw01.atlaga01" >> /tmp/snmpset.log
/usr/bin/snmpset -c private -v2c localhost ifAlias.12 string "OSPF Link to gw01.dllstx01" >> /tmp/snmpset.log

I have not optimized the /bin/sleep at this point, but without it snmpset will be talking to an snmpd daemon that isn’t ready. Trust me.

You can now reboot the router and the custom interface descriptions will survive.

Wrapping Up

Why did we go to all the trouble of creating descriptions (aliases) for our OpenWrt interfaces? Again, monitoring tools such as Observium will take those descriptions and apply them to your UI.

At a glance I can quickly see, for example, that eth1.2 is the interface being used for OSPF with gw01.dnvrco01. That information is incredibly useful when working with dozens (or more) links.


Recognizing OpenWrt as an OS in Observium

Observium is a great application for monitoring network equipment of all types (e.g., routers, switches, firewalls). Its power comes in large part from the amount of information network gear exposes over SNMP, and from Observium’s ability to intelligently parse and display the returned data.

This intelligence can only go so far, however, when a given piece of gear has either an incomplete implementation of SNMP or returns values that aren’t indicative of the equipment. Take, for example, OpenWrt. It is, in a word, awesome: capable of turning a $250 Linksys home router into a participant in an OSPF area. Pretty nifty.

Due to its open nature there are a number of SNMP options for OpenWrt, but only one of them will give you a suitable view in Observium, and that is the snmpd package. Let’s install it (note that I’m using the OpenWrt shell vs. LuCI):

# opkg update
# opkg install snmpd

Unfortunately, if you add your device now, Observium will recognize it as a generic Linux machine. That’s because, by default, the OpenWrt snmpd package does not return information in the sysDescr OID that Observium’s OS detection routines can use.

For reference, here is what you can expect Observium to display with snmpd not configured properly:

Let’s take a look directly at the sysDescr OID with snmpget, which is available on your Observium host by installing the snmp package (if you’re using a Debian variant). There is a little dance to be done to get snmpget to resolve MIB names properly:

# apt-get install snmp snmp-mibs-downloader
# printf "[snmp]\nmibs +ALL\n" > /etc/snmp/snmp.conf 
# download-mibs
# snmpget -v2c -c public <HOSTNAME> sysDescr.0

For our router:

snmpget -v2c -c public gw.gw01.chcgil01.iachieved.it sysDescr.0
.1.3.6.1.2.1.1.1.0 = STRING: Linux chcgil 4.14.131 #0 SMP Thu Jun 27 12:18:52 2019 armv7l

sysDescr as is will cause Observium to detect the router as a generic Linux OS. We want more. Here’s how to do it: open the /etc/config/snmpd file in OpenWrt and find this block:

config system                   
        option sysLocation      'office'
        option sysContact       'bofh@example.com'
        option sysName          'HeartOfGold'     
#       option sysServices      72                
#       option sysDescr         'adult playground'
#       option sysObjectID      '1.2.3.4'

sysLocation can be set to a locale name and Observium will automatically map it properly. Since this router is in Chicago we’ll put Chicago there. Likewise, sysName will be changed to gw01.chcgil01 as this router is Gateway #1 in Chicago Site #1. What we’re particularly interested in changing here is sysDescr. Uncomment the line and change it to OpenWrt. Here’s what our final config system block looks like:

config system                   
        option sysLocation      'Chicago'
        option sysContact       'admin@iachieved.it'
        option sysName          'gw01.chcgil01'     
        option sysDescr         'OpenWrt'

Restart snmpd:

# /etc/init.d/snmpd restart

And check snmpget again:

# snmpget -v2c -c public gw.gw01.chcgil01.iachieved.it sysDescr.0
SNMPv2-MIB::sysDescr.0 = STRING: OpenWrt

Perfect. Add the device to Observium and watch it fill in the rest.

Notice that the Tux logo has been replaced with the OpenWrt logo as Observium correctly identifies this device as running OpenWrt.


The Best Fudgy Brownies Ever

This is an experimental blog post, because in the sea of posts about Ansible and Swift is a brownie recipe. To be fair, this is not just any brownie recipe, but the best one ever for fudgy brownies.

Why am I writing this post here on iAchieved.it? Because the world needs to know how to make these brownies, and I’m sick of recipe blogs that are ninety percent rambling (we’re not going to go into the history of the brownie here) and instructions on how to fold. If you don’t know when to fold versus whisk, this blog isn’t for you. I’m not even going to bother with fancypants WordPress plugins for recipes. Well, not for now at least.

This recipe makes a 13×9 pan of awesome fudgy brownies. I am not joking.

  • 1/3 cup Dutched cocoa powder
  • 2 ounces unsweetened chocolate, chopped
  • 1/2 cup plus 2 tablespoons boiling water
  • 4 tablespoons unsalted butter, melted
  • 1/2 cup plus 2 tablespoons vegetable oil
  • 2 eggs
  • 2 egg yolks
  • 2 teaspoons vanilla
  • 2 1/2 cups sugar
  • 1 3/4 cups unbleached all-purpose flour
  • 3/4 teaspoon salt
  • 6 ounces bittersweet chocolate chips

In a large mixing bowl (the kind you might, you know, make brownies in) toss in the dutched cocoa powder and chopped unsweetened chocolate. Pour in the boiling water and whisk until smooth. If you’re lucky I might add a picture here.

At this point whisk in the melted butter and vegetable oil, but be prepared for it to look a little weird. The chocolate and the oils will not want to get happy together, but don’t worry, that’s when you whisk in the eggs and extra yolks and everyone comes together like a nice chocolate pudding. By all means, shove your finger in there for a taste, but there’s no sugar yet so it’ll be nasty.

Whisk in the vanilla (I grew up in Texas, so it goes without saying my preference is Mexican vanilla), and in 1/2 cup batches whisk in the sugar until thoroughly combined and glossy. Note: If you’ve ever done any baking you know that there are always cautions regarding overmixing. We will now invoke this caution because you’re going to start adding the flour. Do not overmix after this! We’re making brownies, not bread.

Fold in the flour and salt. I usually do this a half cup at a time and don’t fret if there are some streaks of flour in the batter. It’ll sit a bit and hydrate while you go for that second glass of wine, so relax.

Once you’ve folded in all of the flour and it looks like brownie batter, fold in those bittersweet chocolate chips. Fold, not mix. Remember, boxed brownies have a lot of chemicals that act like guard rails. No rails here, so don’t get crazy.

Pour all of the batter into a greased (PAM, people) 13×9 and bake for about 32 to 35 minutes in a 350 degree oven. I usually rotate it after 15 minutes, but that’s only because Memaw did.

Let cool for a bit if you have self-control, but if you’re like me, by all means tear into there and enjoy molten fudgy brownie goodness. Here’s what you can expect to indulge in!


Auditing Shared Account Usage

Occasionally you find yourself in a situation where utilizing a shared account cannot be avoided. One such scenario is managing the deployment of NodeJS applications with Shipit and PM2. Here’s how the scenario typically works:

Alice, Bob, and Carol are three developers working on NodeJS applications that need to be deployed to their staging server. They’ve decided on the use of PM2 as their process manager, and ShipIt as their deployment tool. Their shipitfile.js file contains a block for the staging server, and it looks something like:

staging: {
      servers: [
        {
          host: 'apps.staging.iachieved.it',
          user: 'deployer',
        },
      ],
      environment:  'staging',
      branch: 'develop',
    },

As we can see the deployer user will be used to deploy our application, and by extension, will be the user that pm2 runs the application under. Alice, Bob, and Carol all have their SSH keys put in /home/deployer/.ssh/authorized_keys so they can deploy. Makes sense.

Unfortunately this also means Alice, Bob, or Carol can ssh to the staging server as the deployer user. Even though deployer is an unprivileged user, we really don’t want that. Moreover, by default, we can’t trace who deployed, or whether someone is misusing the deployer user. Let’s take a look at how we can address this.

Creating a Deployment Group

The first thing we want to do is to create a security group for those that are authorized to perform deployments. I’ll be using an Active Directory security group in this example, but a standard Unix group would work as well. We can use getent to see the members of the group. getent will come in handy to help determine whether someone attempting to deploy is authorized.

# getent group "application deployment@iachieved.it"
application deployment@iachieved.it:*:1068601118:alice@iachieved.it,bob@iachieved.it

SSH authorized_keys command

Until I started researching this problem of auditing and restricting shared account usage, I was unaware of the command option in the SSH authorized_keys file. One learns something new every day. The command option executes a specified command immediately upon SSHing via a given key, in place of whatever the user asked for. Suppose we put the following entry in the deployer user’s ~/.ssh/authorized_keys file:

ssh-rsa AAAA...sCBR alice

and this is Alice’s public key. We would expect that Alice would be able to ssh deployer@apps.iachieved.it and get a shell. But what if we wanted to intercept this SSH and run a script instead? Let’s try it out:

deployer@apps.iachieved.it:~/.ssh/authorized_keys:

command="/usr/bin/logger -p auth.INFO Not permitted" ssh-rsa AAAA...sCBR alice

When Alice tries to ssh as the deployer user, we get an entry in auth.log:

Jul  5 22:30:58 apps deployer: Not permitted

and Alice sees “Connection to apps closed.”

Well that’s no good! We do want Alice to be able to use the deployer account to deploy code.

A Wrapper Script

First, we want Alice to be able to deploy code with the deployer user, but we also want to:

  • know that it was Alice
  • ensure Alice is an authorized deployer
  • not allow Alice to get a shell

Let’s look at how we can create a script to execute each SSH invocation that will meet all of these criteria.

Step 1, let’s log and execute whatever Alice was attempting to do.

/usr/local/bin/deploy.sh:
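The script itself isn’t shown in the post; a minimal sketch that satisfies the description (the log wording is assumed from the auth.log excerpt below) might be:

```shell
#!/bin/bash
# /usr/local/bin/deploy.sh (sketch -- the original isn't reproduced here)
# sshd sets SSH_ORIGINAL_COMMAND; the authorized_keys entry sets SSH_REMOTE_USER.
if [ -z "$SSH_ORIGINAL_COMMAND" ]; then
  # An interactive shell attempt: log it and do nothing, closing the connection.
  logger -p auth.INFO "Not permitted"
else
  # Log who ran what, then run it.
  logger -p auth.INFO "$SSH_REMOTE_USER executed $SSH_ORIGINAL_COMMAND"
  eval "$SSH_ORIGINAL_COMMAND"
fi
```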

SSH_ORIGINAL_COMMAND will be set automatically by sshd, but we need to provide SSH_REMOTE_USER, so in the authorized_keys file:

command="export SSH_REMOTE_USER=alice@iachieved.it;/usr/local/bin/deploy.sh" ssh-rsa AAAA...sCBR alice

Note that we explicitly set SSH_REMOTE_USER to alice@iachieved.it. The takeaway here is that it associates any attempt by Alice to use the deployer account to her userid. We then execute deploy.sh which logs the invocation. If Alice tries to ssh and get a shell with ssh deployer@apps the connection will still be closed, as SSH_ORIGINAL_COMMAND is null. But, let’s say she runs ssh deployer@apps ls /:

alice@iachieved.it@apps ~> ssh deployer@apps ls /
bin
boot
dev
etc

In /var/log/auth.log we see:

Jul  6 13:43:25 apps sshd[18554]: Accepted publickey for deployer from ::1 port 48832 ssh2: RSA SHA256:thZna7v6go5EzcZABkieCmaZzp+6WSlYx37a3uPOMSs
Jul  6 13:43:25 apps sshd[18554]: pam_unix(sshd:session): session opened for user deployer by (uid=0)
Jul  6 13:43:25 apps systemd-logind[945]: New session 54 of user deployer.
Jul  6 13:43:25 apps systemd: pam_unix(systemd-user:session): session opened for user deployer by (uid=0)
Jul  6 13:43:26 apps deployer: alice@iachieved.it executed ls /

What is important here is that we can trace what Alice is executing.

Managing a Deployment Security Group

Left as is, this technique is much preferred to a free-for-all with the deployer user, but more can be done using security groups for finer control over who can use the account at any given time. Let’s add an additional check to the /usr/local/bin/deploy.sh script with the uig function introduced in the last post.
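Pulling that together, the revised script might look like this (a sketch: the original isn’t shown, the uig function is inlined here for completeness, and the log wording is assumed from the auth.log excerpts that follow):

```shell
#!/bin/bash
# /usr/local/bin/deploy.sh, now with a group-membership check (sketch)
GROUP="application deployment@iachieved.it"

# uig: succeed (return 0) if user $1 appears in the member list of group $2
uig () {
  getent group "$2" | cut -d: -f4 | tr ',' '\n' | grep -Fqx -- "$1"
}

if [ -z "$SSH_ORIGINAL_COMMAND" ]; then
  logger -p auth.INFO "Not permitted"
elif uig "$SSH_REMOTE_USER" "$GROUP"; then
  logger -p auth.INFO "$SSH_REMOTE_USER is in $GROUP and executed $SSH_ORIGINAL_COMMAND"
  eval "$SSH_ORIGINAL_COMMAND"
else
  logger -p auth.INFO "$SSH_REMOTE_USER is not in $GROUP and is not authorized to execute $SSH_ORIGINAL_COMMAND"
fi
```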

The authorized_keys file gets updated, and let’s add Bob and Carol’s keys for our additional positive (Bob) and negative (Carol) test:
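The updated authorized_keys file might look like this (key material and comments are placeholders, mirroring the abbreviated keys shown earlier):

```
command="export SSH_REMOTE_USER=alice@iachieved.it;/usr/local/bin/deploy.sh" ssh-rsa AAAA...sCBR alice
command="export SSH_REMOTE_USER=bob@iachieved.it;/usr/local/bin/deploy.sh" ssh-rsa AAAA...gJQI bob
command="export SSH_REMOTE_USER=carol@iachieved.it;/usr/local/bin/deploy.sh" ssh-rsa AAAA...mNop carol
```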

Since Bob is a member of the application deployment@iachieved.it group, he can proceed:

Jul  6 20:09:26 apps sshd[21886]: Accepted publickey for deployer from ::1 port 49148 ssh2: RSA SHA256:gs3j1xHvwJcSMBXxaqag6Pb7A595HVXIz2fMoCX2J/I
Jul  6 20:09:26 apps sshd[21886]: pam_unix(sshd:session): session opened for user deployer by (uid=0)
Jul  6 20:09:26 apps systemd-logind[945]: New session 79 of user deployer.
Jul  6 20:09:26 apps systemd: pam_unix(systemd-user:session): session opened for user deployer by (uid=0)
Jul  6 20:09:27 apps deployer: bob@iachieved.it is in application deployment@iachieved.it and executed ls /

Now, Carol’s turn to try ssh deployer@apps ls /:

Jul  6 20:15:37 apps deployer: carol@iachieved.it is not in application deployment@iachieved.it and is not authorized to execute ls /

Poor Carol.

Closing Thoughts

For some teams the idea of having to manage a deployment security group and bespoke authorized_keys file may be overkill. If you’re in an environment with enhanced audit controls and accountability the ability to implement safeguards and audits to code deployments may be a welcome addition.


A Script for Testing Membership in a Unix Group

Sometimes you just need a boolean test for a given question. In this post we’ll look at answering the question, “Is this user in a given group?” Seems simple enough.

It’s easy to see what groups a user is a member of in a shell:

alice@iachieved.it@darthvader:~$ id
uid=1068601116(alice@iachieved.it) gid=1068601115(iachievedit@iachieved.it) groups=1068601115(iachievedit@iachieved.it),1068600513(domain users@iachieved.it),1068601109(linux ssh@iachieved.it),1068601118(application deployment@iachieved.it)

Note that Alice is an Active Directory domain user. We want to test whether or not she is a member of the application deployment@iachieved.it group. We can see this with our eyes in the terminal, but a little scripting is in order. We’ll skip error checking for this first example.

uig.sh:
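The script body isn’t reproduced in the post; here is a getent-based sketch that produces the output shown below (error checking skipped, as noted):

```shell
#!/bin/bash
# uig.sh: is user $1 a member of group $2?  (sketch; no error checking)
U="$1"
G="$2"
# getent prints group_name:passwd:gid:member1,member2,...
# so split out field 4 and look for an exact match on the username.
if getent group "$G" | cut -d: -f4 | tr ',' '\n' | grep -Fqx -- "$U"; then
  echo "User $U IS in group $G"
else
  echo "User $U IS NOT in group $G"
fi
```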

Let’s take it out for a spin.

alice@iachieved.it@darthvader:~$ ./uig.sh `whoami` "linux administrators@iachieved.it"
User alice@iachieved.it IS NOT in group linux administrators@iachieved.it

Now let’s test whether or not Alice is in the application deployment@iachieved.it group:

alice@iachieved.it@darthvader:~$ ./uig.sh `whoami` "application deployment@iachieved.it"
User alice@iachieved.it IS in group application deployment@iachieved.it

Terrific. This will come in handy in the next blog post.

Let’s clean this up into a function that can be sourced in a script or shell:
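The function isn’t shown in the post; a sketch matching the exit-code behavior below (0 when the user is in the group):

```shell
# uig: return 0 (success) if user $1 is in group $2, nonzero otherwise (sketch)
uig () {
  getent group "$2" | cut -d: -f4 | tr ',' '\n' | grep -Fqx -- "$1"
}
```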

alice@iachieved.it@darthvader:~$ uig `whoami` "linux ssh@iachieved.it"
alice@iachieved.it@darthvader:~$ echo $?
0

Or, the invocation we’re most likely to use:
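That invocation isn’t shown either; presumably it is a guard along these lines (the uig definition is repeated so the snippet stands alone):

```shell
# Guard a step on group membership (assumed usage)
uig () {
  getent group "$2" | cut -d: -f4 | tr ',' '\n' | grep -Fqx -- "$1"
}

if ! uig "$(whoami)" "application deployment@iachieved.it"; then
  echo "$(whoami) is not an authorized deployer" >&2
fi
```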


macOS 10.15 Catalina Adds Additional Filesystem Restrictions

macOS 10.15 (Catalina) has added additional privacy restrictions that require user intervention before applications can access certain portions of the filesystem. Not only that, but taking screenshots for this post required permissions to be explicitly granted to Skitch. If you find that Skitch only gives you a screenshot of your Mac’s background, this is the post for you. The fact that macOS 10.15 introduces new security and privacy safeguards is unsurprising, as we were introduced to stricter automation controls in Catalina’s predecessor.

Let’s look at what happens when we try to cd ~/Documents in the macOS 10.15 Terminal app:

We’re greeted by a dialog box presented by Finder requesting specific permission for Terminal to access files in the Documents folder:

The dialog reads, “Terminal” would like to access files in your Documents folder. Once this permission is granted, Terminal can access the contents of ~/Documents. If the permission isn’t granted, trying to ls ~/Documents results in something along the lines of:

# ls ~/Documents
ls: Documents: Operation not permitted

Updating which filesystem access permissions have been granted for a given application can be accomplished in the Security & Privacy preference panel. In this example Terminal has been granted access to the Documents folder whereas iTerm has not been granted any access.

It was an added bonus while writing this post that even taking a screenshot with Skitch on Catalina prompted a dialog requesting explicit access. The end result is that I’ve now authorized Skitch to capture my screen:

There has been much speculation about the real intent of Apple’s WWDC 2019 announcement of Sign in with Apple. What is clear (to me, at least) is that Apple is positioning itself as the guardian of security and privacy (in contrast to Facebook and Google, which clearly are not interested in safeguarding either), whether through Sign in with Apple or tighter controls around what applications can access on your own Mac.


Fixing A Module Compiled Cannot Be Imported Errors

This was an annoying error message the first time I ran into it. I had just downloaded Xcode 10.2 beta 2 and opened a project I had been working on in Xcode 10.1 and was greeted by Module compiled with Swift 4.2.1 cannot be imported by the Swift 5.0 compiler. After a bit of searching I realized that I needed to rebuild my Carthage modules, and had forgotten that xcode-select needed to be employed as well.

So if you run into this like I did, the fix is simple: choose which Xcode you need to work with, use xcode-select to select it for command-line builds, and run carthage update.

For example, if I need to switch to the beta version of Xcode:

sudo xcode-select -s /Applications/Xcode-beta.app
carthage update

I include a nice Makefile in my projects that streamlines switching back and forth:
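The Makefile itself isn’t reproduced in the post; mine is along these lines (application paths assumed):

```make
xcode:
	sudo xcode-select -s /Applications/Xcode.app
	carthage update

xcode-beta:
	sudo xcode-select -s /Applications/Xcode-beta.app
	carthage update
```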

If I need to switch to using Xcode-beta (which I frequently do since my phone is usually running a beta build), I can just type make xcode-beta.

Now, of course, when I open my project back up in Xcode I get Module compiled with Swift 5.0 cannot be imported by the Swift 4.2.1 compiler., but that’s okay, a simple make xcode will get me going!


TLS 1.3 Support Coming with Safari 12.1

With the landing of macOS 10.14.4 Beta (18E194d), Safari has versioned up from 12.0.3 to 12.1, and includes with it support for TLS 1.3!

Pointing Safari 12.1 at tls13.iachieved.it (which only runs TLS 1.3) returns

Your User Agent is: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1 Safari/605.1.15 with TLS_AES_256_GCM_SHA384 selected as the cipher. Note: If your current browser doesn’t support TLS 1.3 it will not be able to connect with tls13.iachieved.it.

Interested in standing up your own TLS 1.3 webserver, or seeing which browsers and clients support TLS 1.3? Come check out our instructions on configuring NGINX to do just that.


TLS 1.3 with NGINX and Ubuntu 18.10

TLS 1.3 is on its way to a webserver near you, but it may be a while before major sites begin supporting it. It takes a bit of time for a new version of anything to take hold, and even longer if it’s the first new version of a protocol in nearly 10 years.

Fortunately you don’t have to wait to start experimenting with TLS 1.3; all you need is OpenSSL 1.1.1 and open source NGINX 1.15 (currently the mainline version), and you’re good to go.

OpenSSL

OpenSSL 1.1.1 is the first version to support TLS 1.3 and its ciphers:

  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
  • TLS_AES_128_GCM_SHA256
  • TLS_AES_128_CCM_8_SHA256
  • TLS_AES_128_CCM_SHA256

Since 1.1.1 is available out-of-the-box in Ubuntu 18.10 Cosmic Cuttlefish (as well as FreeBSD 12.0 and Alpine 3.9), we’ll be using it for this tutorial. Note that 18.10 is not an LTS release; the decision was made to backport OpenSSL 1.1.1 to 18.04 (Bionic Beaver), but it did not make it into 18.04.2. We like to make things easy on ourselves, so we launched a publicly available ubuntu-cosmic-18.10-amd64-server-20181018 AMI in AWS.

NGINX

NGINX hardly needs an introduction, so we’ll skip straight to its support for TLS 1.3, which arrived all the way back in version 1.13.0, well before the protocol was finalized. Combined with OpenSSL 1.1.1, the current open source mainline version (1.15) of NGINX is fully capable of supporting TLS 1.3, including 0-RTT.

Current Browser Support for TLS 1.3

TLS 1.3 will be a moving target for months to come, but as of this writing (February 23, 2019), here’s a view of browser support for it. As you can see, it’s pretty limited at this point, with only the Chrome, Brave, and Firefox browsers capable of establishing a connection with a TLS 1.3-only webserver.

OS                  Browser                     TLS 1.3 Support   Negotiated Cipher
macOS 10.14.3       Chrome 72.0.3626.109        Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3       Firefox 65.0.1              Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3       Brave 0.59.35               Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3       Safari 12.0.3 (14606.4.5)   No                NA
macOS 10.14.4       Safari 12.1                 Yes               TLS_AES_256_GCM_SHA384
iOS 12.2 (Beta)     Safari                      Yes               TLS_AES_256_GCM_SHA384
Windows 10.0.17134  IE 11.345.17134.0           No                NA
Windows 10.0.17134  Edge 17.17134               No                NA
Ubuntu 18.10        curl/7.61.0                 Yes               TLS_AES_256_GCM_SHA384
Ubuntu 18.04.2      curl/7.58.0                 No                NA

Note: An astute reader might notice that iOS 12.2 (currently in beta) indeed supports TLS 1.3, and our webserver confirms it!

Testing It Out

To test things out, we’ll turn to our favorite automation tool, Ansible, and our tls13_nginx_cosmic repository of playbooks.

We happened to use an EC2 instance running Ubuntu 18.10, as well as Let’s Encrypt and Digital Ocean‘s Domain Records API. That’s a fair number of dependencies, but an enterprising DevOps professional should be able to take our example playbooks and scripts and modify them to suit their needs.

Rather than return HTML content (content-type: text/html), we return text/plain with interesting information from NGINX itself. This is facilitated by the Lua programming language and the NGINX Lua module. The magic is in our nginx.conf:
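The original nginx.conf isn’t reproduced here; a minimal sketch of the Lua-powered location might look like this (the $ssl_protocol and $ssl_cipher variables are standard ngx_http_ssl_module fare):

```nginx
location / {
    default_type text/plain;
    content_by_lua_block {
        ngx.say("TLS version: ", ngx.var.ssl_protocol)
        ngx.say("Cipher:      ", ngx.var.ssl_cipher)
    }
}
```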

This results in output similar to:

In all of our tests thus far, TLS_AES_256_GCM_SHA384 was chosen as the ciphersuite.

Qualys SSL Assessment

Now let’s look at what Qualys SSL Server Test has to say about our site.

Not an A+, but notice in our nginx.conf we are not configuring HSTS or OCSP. Our standard Let’s Encrypt certificate is also hampering our score here.

Here’s what Qualys has to say about our server configuration:

The highlight here is that TLS 1.3 is supported by our server, whereas TLS 1.2 is not. This was done on purpose to keep connecting clients from using anything but TLS 1.3. You definitely would not do this in practice as of February 2019; as the Qualys Handshake Simulation shows, only Chrome 70 was able to connect to our server.

Closing Thoughts

As a DevOps practitioner, and someone who manages dozens of webservers professionally, I’m quite excited about the release and adoption of TLS 1.3. It will, no doubt, take quite some time before a majority of browsers and sites support it.

If you’re interested more about TLS 1.3 in general, there are a lot of great resources out there. Here are just a few:

Wikipedia has a good rundown of TLS 1.3 features and changes from TLS 1.2.

The folks at NGINX recently hosted a webinar on R17, the latest NGINX Plus release, in which TLS 1.3 and its benefits were covered in more detail.

Here’s a great tutorial on deploying modern TLS configurations (including 1.3) from Probely.

And, last but not least, Cloudflare has a number of in-depth TLS 1.3 articles.