Using Apple’s New WeatherKit REST API

In late March 2020 Apple purchased the Dark Sky app, and along with it the Dark Sky API. The Dark Sky API is no longer accepting sign-ups and will go offline on March 31, 2023. If you’re a developer who needs a reliable weather API, Apple now provides WeatherKit. Typical “Kit” APIs are available only as libraries on Apple devices, but in this case WeatherKit also has a REST web service available. Let’s look at how to use it.

The WeatherKit API does require an Apple developer account and access to the Developer Console to configure. Though the developer account costs $99 a year, that includes 500,000 WeatherKit API calls per month. For me personally that is a much better deal than a service such as AccuWeather or OpenWeather.


To get started with WeatherKit we need to do some provisioning in the Developer Console. The first step will be to create a key.

Select Certificates, Identifiers & Profiles and then Keys. Click the blue circle with white cross to add a new key. We’ll call the key myweatherapp. Check the box next to WeatherKit.

Click Continue and then Register on the Register a New Key screen.

Make note on this screen that you’re about to download a new key, and once you’ve done so you won’t be able to download it again. This key is necessary to access the WeatherKit REST API (you’ll be signing tokens with it), so keep it in a safe place.

Download your key, and also make note of the Key ID. We’re going to use it later. Look in your Downloads folder for AuthKey_KEYID.p8. In our case the filename is AuthKey_9U5ZXJ4Y65.p8.

Once you’ve downloaded your key, click Done.

Now that we have our key, it’s time to provision our service identifier. Click on Identifiers and once again, the blue circle with white cross. Choose Services ID and Continue.

We’re going to use the reverse domain name notation for the service, i.e., it.iachieved.myweatherapp.

Click Continue and then Register.

Preparing the Keys

The downloaded .p8 file is a plain-text file containing an elliptic curve private key in PKCS#8 format. It is not encrypted. We’re going to want the key in PEM format, so let’s convert it with openssl:

openssl pkcs8 -nocrypt -in AuthKey_9U5ZXJ4Y65.p8 -out AuthKey_9U5ZXJ4Y65.pem

NB: The option -nocrypt is required!

We need the public key component as well for verifying JWT signatures, so obtain it with openssl:

openssl ec -in AuthKey_9U5ZXJ4Y65.pem -pubout >

You should now have two files: the private key and the public key.

Creating and Signing a JWT

Let’s recall how accessing a REST API with a JSON Web Token works.

Apple runs the WeatherKit API service, and in the Developer console you created a key. Apple kept a copy of the public key which it will use to verify JWT signatures. Your application is going to construct a JWT and sign it with the private key. This signed JWT will be presented as a bearer token to the API. If Apple can validate your signature and that your token contents identify provisioned WeatherKit services, you’re golden.

We’ll first create a JWT by hand using an online JWT debugger.

The JWT to access the WeatherKit API must contain the following header elements:

  • alg – ES256
  • kid – the Key ID obtained when creating your key
  • id – your Developer Account Team ID concatenated with a period, and then your Services Identifier (the reverse domain name)
  • typ – JWT

The JWT payload must contain the following:

  • iss – your Developer Account Team ID
  • sub – the Services identifier (the reverse domain name)
  • iat – the standard “issued at” timestamp in Unix epoch time
  • exp – an expiration timestamp in Unix epoch time

When I need a quick copy-paste of the iat and exp I use this Python one-liner:

python -c'import time; n=int(time.time()); print("\"iat\": %d," % n); print("\"exp\": %d," % (n+3600))'
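Putting the header and payload requirements together, here is a short Python sketch that builds the unsigned signing input. The Team ID, Key ID, and Services ID values below are placeholders; the final step, an ES256 signature over this string made with your .p8 private key (e.g., via a JWT library), completes the token.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per RFC 7515."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Placeholder values -- substitute your own Team ID, Key ID, and Services ID
TEAM_ID = "ABCDE12345"
KEY_ID = "9U5ZXJ4Y65"
SERVICE_ID = "it.iachieved.myweatherapp"

now = int(time.time())
header = {"alg": "ES256", "kid": KEY_ID, "id": f"{TEAM_ID}.{SERVICE_ID}", "typ": "JWT"}
payload = {"iss": TEAM_ID, "sub": SERVICE_ID, "iat": now, "exp": now + 3600}

# The JWT signing input is base64url(header).base64url(payload); an ES256
# signature over this string appended as a third segment yields the full JWT.
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
print(signing_input)
```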

Once you’ve constructed your token’s contents it’s time to sign it. In an online debugger this is accomplished by pasting the public and private key contents into the Verify Signature inputs.

If successful, the debugger will display the encoded and signed token. This is your bearer token, presented to the WeatherKit API for authentication and authorization. Anyone can construct the same JWT contents, but unless they’re signed with the private key of your provisioned service they’re unusable. Thus it is important to keep your private key private!

Calling the API

With a bearer token in hand you can make calls to the WeatherKit API!

The first call we’ll make is to determine what WeatherKit API services are available for the GPS location 32.779167/-96.808891, which happens to be Dallas, Texas. In these examples <TOKEN> should be replaced with the bearer token.

curl "" -H 'Authorization: Bearer <TOKEN>'

This returns:


In other words, for this location, we can obtain the current weather, daily forecast, hourly forecast, forecast for the next hour, and weather alerts. We’ll just check the current weather.

curl "" 
     -H 'Authorization: Bearer <TOKEN>'

A few things to note here:

  • the route changed to /api/v1/weather/
  • the inclusion of a locale code (e.g., en_US)
  • the query parameter dataSets
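In code, the same request can be constructed like the sketch below. The weatherkit.apple.com host and route here follow the pattern described above, but treat the exact URL and query parameters as assumptions to verify against Apple’s REST documentation.

```python
from urllib.request import Request
from urllib.parse import urlencode

TOKEN = "<TOKEN>"  # the signed JWT bearer token
lat, lon = 32.779167, -96.808891

# Assumed route: /api/v1/weather/{locale}/{lat}/{lon}?dataSets=...
params = urlencode({"dataSets": "currentWeather", "timezone": "America/Chicago"})
url = f"https://weatherkit.apple.com/api/v1/weather/en_US/{lat}/{lon}?{params}"

req = Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
# urllib.request.urlopen(req) would perform the call; here we only build it.
print(req.full_url)
```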

The result of the call:


The units are returned in metric, and annoyingly the condition code will need to be mapped to make it user-friendly (“Partly Cloudy” instead of “PartlyCloudy”).
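Mapping those CamelCase condition codes to something readable is a one-liner. Here’s a sketch in Python:

```python
import re

def humanize(condition_code: str) -> str:
    """Insert a space before each interior capital: 'PartlyCloudy' -> 'Partly Cloudy'."""
    return re.sub(r"(?<!^)(?=[A-Z])", " ", condition_code)

print(humanize("PartlyCloudy"))  # Partly Cloudy
```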

A C++ JWT Signing Implementation

We used an online JWT debugger to quickly cobble together a usable bearer token, but if you’re building an actual application to make requests to the WeatherKit API you’re going to want to implement JWT signing in code.

Here’s an example in C++ utilizing the jwt-cpp header-only library.
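Below is a sketch of what that can look like with jwt-cpp. The Team ID, Key ID, and Services ID are placeholders, and in a real application you would read the PEM contents of your converted key from disk or a secrets store rather than hard-coding them.

```cpp
#include <chrono>
#include <iostream>
#include <string>
#include <jwt-cpp/jwt.h>

int main() {
  // Placeholder identifiers -- substitute your own
  const std::string team_id    = "ABCDE12345";
  const std::string key_id     = "9U5ZXJ4Y65";
  const std::string service_id = "it.iachieved.myweatherapp";

  // Contents of the PEM-converted private key (truncated here)
  const std::string private_key =
      "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n";

  // Build the header and payload described above, then sign with ES256
  auto token = jwt::create()
                   .set_type("JWT")
                   .set_key_id(key_id)
                   .set_header_claim("id", jwt::claim(team_id + "." + service_id))
                   .set_issuer(team_id)
                   .set_subject(service_id)
                   .set_issued_at(std::chrono::system_clock::now())
                   .set_expires_at(std::chrono::system_clock::now() + std::chrono::hours{1})
                   .sign(jwt::algorithm::es256("", private_key, "", ""));

  std::cout << token << std::endl;  // the bearer token
  return 0;
}
```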

WeatherKit REST API Documentation

The complete REST API documentation for WeatherKit can be found on Apple’s developer documentation site.

Practical Racket – argv

This is my first post featuring the Racket language. At some point we may start evangelizing and looking down our nose at people who don’t leave in the superfluous parentheses, but for now let’s keep it light. Even still, this is not a Racket or Lisp tutorial. If the sight of (sqrt (+ (* 3 4) (* 6 7))) scares you, run away!

Every time I learn a new language it is always the same:

  • learn the basic syntax
  • search on how to concatenate a string
  • search on how to open and read a file
  • search on how to read and write JSON

And so on. Invariably one also comes to “how do I read argv?” This is that simple post for Racket.

For the equivalent of Python’s sys.argv[1] or Perl’s $ARGV[0], we can do this:
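A minimal sketch of that looks like:

```racket
#lang racket

; Bind the first command-line argument to fname
(define fname (vector-ref (current-command-line-arguments) 0))
(printf "~a~n" fname)
```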

There’s a lot going on here if you’ve never read Lisp or Racket before. current-command-line-arguments returns a vector (not a list) of the arguments passed on the command line. We only want the first argument, and since Racket doesn’t include the filename of the script, the counting starts at zero. vector-ref returns that first element. We bind it to fname and there you have it.

That’s nice and all, but what if there’s no argument supplied on the command line? Let’s try this:
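One way to sketch it (the usage message and script name here are illustrative):

```racket
#lang racket

(define fname
  (cond
    ; No arguments supplied? Print usage and bail.
    [(zero? (vector-length (current-command-line-arguments)))
     (displayln "usage: argv.rkt <filename>")
     (exit 1)]
    ; Otherwise fname gets the first argument
    [else (vector-ref (current-command-line-arguments) 0)]))
```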

Trickier, indeed.

We’re still going to bind fname, and perhaps there is a bit of cheating with the use of (exit), but with the cond form we’ll test whether the vector is zero-length and, if so, bail. Otherwise the else clause is triggered and fname gets the value of that clause.

Getting Started with StarFive VisionFive

I’ve been excited to get my hands on a StarFive VisionFive and it finally arrived last week after being on backorder since late March of 2022. Unlike the HiFive which runs OSes suitable for a microcontroller, the VisionFive will boot into a fully functional Fedora distribution with the Xfce desktop environment.

To get started with the VisionFive you’ll need

  • a microSD Card (at least 16GB, preferably a lot more)
  • USB-C Power Supply (like one used for a Raspberry Pi 4)
  • Keyboard and Mouse
  • Monitor
  • HDMI cable

If you’ve never booted up a BeagleBone or Raspberry Pi “from scratch”, head on over to the VisionFive Quick Start Guide for a gentler introduction.

The Fedora image for the VisionFive can be found on GitHub. For whatever reason the download of the image kept stalling in Safari, so I resorted to using curl:

curl -O

The image is compressed with Zstandard. Compressed it is around 3.5GB; uncompressed it is around 13GB. Once you’ve downloaded the file, go ahead and verify its integrity with sha256sum:

sha256sum Fedora-riscv64-jh7100-developer-xfce-Rawhide-20211226-214100.n.0-sda.raw.zst
94c73c967e12c80192d7bf25147badd9f0ee1738dc9800d1c502f376df5d5e2f  Fedora-riscv64-jh7100-developer-xfce-Rawhide-20211226-214100.n.0-sda.raw.zst

The checksum for the 20211226 image is 94c73c967e12c80192d7bf25147badd9f0ee1738dc9800d1c502f376df5d5e2f. Now, decompress it:

zstd -d Fedora-riscv64-jh7100-developer-xfce-Rawhide-20211226-214100.n.0-sda.raw.zst

On macOS we’ll use diskutil and dd to write the image. If the microSD is mounted after inserting it, unmount it with diskutil unmountDisk. On my machine the SD card is presented as disk4 (as shown by diskutil list). Always verify where your disk is mounted before using dd!

/dev/disk4 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *16.0 GB    disk4
   1:             Windows_FAT_32 boot                    58.7 MB    disk4s1
   2:                      Linux                         16.0 GB    disk4s2

diskutil unmountDisk /dev/disk4

Now, let’s write the image to the disk. Take note here that we’re using /dev/rdisk4, i.e., the “raw” disk.

sudo time dd if=Fedora-riscv64-jh7100-developer-xfce-Rawhide-20211226-214100.n.0-sda.raw of=/dev/rdisk4 bs=1g

Once your disk is written insert it into the VisionFive and boot it up! I found that having all of the peripherals (keyboard, mouse, monitor, and Ethernet cable) is the best way to go. Be patient, this board is not as snappy as a Pi 4. But, in a few minutes you’ll be presented with a login screen. The default user is riscv and the password is starfive.

Setting Your Timezone

After booting Fedora I noticed the date was in the future. Typing date at the command line resulted in Mon Aug 22 04:52:18 AM CST 2022. Well, I’m in America/Chicago, so why is the clock ahead? Big dummy: CST in this context is China Standard Time, i.e., Asia/Shanghai.

Easily fixed with:

timedatectl set-timezone "America/Chicago"

Resize Your Root Partition

After booting your VisionFive you might notice that the root partition is smaller than the actual microSD size. In this example we’re using a 16GB microSD, but the root partition is only 11.4G. We can fix that! Using these general instructions we can resize the root partition without rebooting.

[riscv@fedora-starfive ~]$ sudo fdisk -l
Disk /dev/mmcblk0: 14.92 GiB, 16022241280 bytes, 31293440 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xae0e1c91

Device         Boot   Start      End  Sectors  Size Id Type
/dev/mmcblk0p2        69632   319487   249856  122M  c W95 FAT32 (LBA)
/dev/mmcblk0p3 *     319488  1320959  1001472  489M 83 Linux
/dev/mmcblk0p4      1320960 25319423 23998464 11.4G 83 Linux
[riscv@fedora-starfive ~]$ sudo fdisk /dev/mmcblk0

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/mmcblk0: 14.92 GiB, 16022241280 bytes, 31293440 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xae0e1c91

Device         Boot   Start      End  Sectors  Size Id Type
/dev/mmcblk0p2        69632   319487   249856  122M  c W95 FAT32 (LBA)
/dev/mmcblk0p3 *     319488  1320959  1001472  489M 83 Linux
/dev/mmcblk0p4      1320960 25319423 23998464 11.4G 83 Linux

Command (m for help): d
Partition number (2-4, default 4): 4

Partition 4 has been deleted.

Command (m for help): n
Partition type
   p   primary (2 primary, 0 extended, 2 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1,4, default 1): 4
First sector (2048-31293439, default 2048): 1320960
Last sector, +/-sectors or +/-size{K,M,G,T,P} (1320960-31293439, default 31293439):

Created a new partition 4 of type 'Linux' and of size 14.3 GiB.
Partition #4 contains a ext4 signature.

Do you want to remove the signature? [Y]es/[N]o: N

Command (m for help): w

The partition table has been altered.
Syncing disks.

Now use resize2fs on /dev/mmcblk0p4:

[riscv@fedora-starfive ~]$ sudo resize2fs /dev/mmcblk0p4
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/mmcblk0p4 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 2
The filesystem on /dev/mmcblk0p4 is now 3746560 (4k) blocks long.


The StarFive VisionFive is really cool. While not exactly usable as a daily driver and a bit sluggish compared to the Raspberry Pi, it is a fully-functional desktop computer with a dual-core RISC-V chip on it. I continue to be excited about the future of RISC-V as an open alternative to x86 or ARM.

Two years ago I was tinkering with RISC-V with the equivalent of an Arduino Duo. Today there are RISC-V-based SBCs running full-featured Linux. The Fedora distribution for the VisionFive has a lot of packages installed, including the gcc and g++ compilers, Perl, Python, Ruby, Lua, and even Go. Indeed, it is only a matter of time before one begins seeing other languages like NodeJS (we know, it’s not a language), Swift, Rust, and Racket. Exciting times!

swiftformat, Decent Swift Syntax, and Sublime Text

If you’re writing Swift code for iOS you’re most likely going to be doing so in Xcode. If you’re coding Swift to run on the server, you might want to check out an editor like Sublime Text. In this post we’ll show you how to use Sublime, the Decent Swift Syntax package and swiftformat together to write some nicely formatted Swift code.

Sublime Text

Installing Sublime is as easy as heading over to the Sublime Text website and clicking the download button appropriate for your platform. In this post we’ll be using macOS.

Once you’ve installed Sublime Text create a new file with File – New and then go to its Tools menu and select Command Palette. A search field will open. Start typing ‘package’ and select Package Control: Install Package.

The search field will change again to list available packages to install. Find and install Decent Swift Syntax. You’ll notice from here that files ending in .swift will be inferred as containing the Swift language, and Decent Swift Syntax will go to work.


The Decent Swift Syntax package relies on swiftformat to do the heavy lifting of reformatting your Swift code. To install it you will want to use brew:

brew install swiftformat

Two Spaces and a Gotcha

With the end of the spaces vs. tabs war, a new battlefront formed over whether to indent two spaces or four. Anyone with sense knows that the answer is two spaces, so at the top of your Swift file you can add // swiftformat:options --indent 2 to tell swiftformat to format your code accordingly.

Now here is an interesting gotcha you may find when writing closures. Let’s say our closure function signature is something like:
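For illustration, suppose the signature is the following (these names are hypothetical, not from any particular API):

```swift
typealias CompletionHandler = (_ status: Int, _ error: Error?) -> Void

func doRequest(completion: @escaping CompletionHandler) {
  // ... performs some work, then calls completion(status, error)
}
```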

And we’re writing some code that supplies the closure, like:
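Something like this, where doRequest is a hypothetical function taking a (status, error) completion handler whose arguments we haven’t gotten around to using yet:

```swift
doRequest { status, error in
  // TODO: check status and error
}
```

Save the file at this point and swiftformat will rewrite the unused arguments.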

If this is all you wrote and you saved your code, swiftformat would happily replace status and error with _ because they are unused arguments. Out of habit I save files often, so it came as a bit of a surprise when my arguments started disappearing. Although swiftformat has a stripunusedargs option, it doesn’t appear to permit you to turn it off.

Swift on Linux In 2021

It has been nearly 6 years since Apple first open sourced the Swift language and brought it to Linux. In that time we’ve seen the rise (and sometimes fall) of server-side frameworks such as Zewo, Kitura, and Vapor as well as porting Swift to SBC devices such as the Raspberry Pi and Beaglebone.

I recently checked in with some folks in the Swift ARM community to find out if there was an easy way to install the latest version of Swift on Ubuntu. FutureJones pointed me to a Debian-based repository he’s been working on. It’s a nicely put together repo that supports multiple flavors of Debian and Ubuntu as well as both x86 and ARM architectures! I’ve installed Swift 5.4 and the upcoming 5.6 release with great success.

Using the repository is a piece of cake, with an installer script that configures your apt repository list automatically. arkansas below is an Ubuntu VM running on Apple Silicon with Parallels.

Type curl -s | sudo bash to get started.

I want to use Swift 5.6 so I’ll select option 2 which will include the dev repository in my apt sources.

Now it’s time to install Swift through the provided swiftlang package. apt-get install swiftlang is all it takes.

Once installed let’s kick the tires a bit. First, typing swift in the terminal will bring up the REPL:

To really test things out let’s hit a REST API and destructure the response. For this we’ll use ReqRes and just grab a user with a GET request.

And now, some code!
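Here’s a sketch using URLSession and Codable. The User fields follow ReqRes’s documented response shape, and the semaphore simply keeps the command-line program alive until the request completes.

```swift
import Foundation

// ReqRes wraps the user in a "data" envelope
struct User: Codable {
  let id: Int
  let email: String
  let firstName: String
  let lastName: String

  enum CodingKeys: String, CodingKey {
    case id, email
    case firstName = "first_name"
    case lastName = "last_name"
  }
}

struct UserResponse: Codable {
  let data: User
}

let url = URL(string: "https://reqres.in/api/users/2")!
let semaphore = DispatchSemaphore(value: 0)

URLSession.shared.dataTask(with: url) { data, _, error in
  defer { semaphore.signal() }
  guard let data = data, error == nil else { return }
  if let response = try? JSONDecoder().decode(UserResponse.self, from: data) {
    let user = response.data
    print("\(user.firstName) \(user.lastName) <\(user.email)>")
  }
}.resume()

semaphore.wait()
```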

Yes You Can Run Homebrew on an M1 Mac

One of the reasons I took the plunge and bought an M1-based Mac is to test out its performance and suitability for development work. An essential developer application on the Mac is Homebrew, the “missing package manager for macOS.” Although you cannot install Homebrew today to manage ARM-compiled packages, you can install Homebrew in the Rosetta environment and leverage the x86 packages.

I can’t take credit for coming up with the idea (that goes to OSX Daily), but I have a few improvements to share. I’m going to use iTerm2, and so should you.

Right click on your iTerm application icon and select Duplicate. Rename iTerm copy to something like iTerm x86 or iTerm Rosetta.

Now, right click on your new iTerm icon and click on Get Info and then check Open using Rosetta.

Open your iTerm Rosetta application and install Homebrew! Once installed you should be able to use brew install in the iTerm Rosetta application and use those installed packages seamlessly between the two environments. You won’t, however, be able to use brew install in your arm64 iTerm application (you’ll get Error: Cannot install in Homebrew on ARM processor in Intel default prefix).

Keeping Track

If you’re working in both x86 and ARM environments on your M1 Mac it is easy to lose track which iTerm you are in. We can use a little zsh-foo to help us out. Add the following snippet to the end of your ~/.zshrc:
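One possibility is the sketch below. SetColors is an iTerm2-proprietary escape sequence, and the hex value is an arbitrary “Intel blue”; pick whatever color you like.

```shell
# ~/.zshrc -- set an "Intel blue" background when running under Rosetta.
# arch prints i386 in a Rosetta (x86) session on an M1 Mac.
if [[ "$(arch)" == "i386" ]]; then
  printf '\033]1337;SetColors=bg=1a3c6e\007'
fi
```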

This little snippet takes advantage of iTerm2’s custom escape codes by setting the background to Intel blue if the arch command returns i386 (which it does if running in Rosetta).

We can do one better, however, by changing our iTerm Rosetta icon. Create your own icon, or, right-click the image below and select Copy Image. Then right-click your iTerm Rosetta application and select Get Info. In the upper-left click on the icon until you see a highlight around it and then paste the new icon image (Command-V).

Launch your iTerm Rosetta application and it’s much easier to distinguish between it and your “native” version.

Detecting macOS Universal Binaries

Apple has transitioned between different instruction set architectures several times throughout its history: first from 680x0 to PowerPC, then from PowerPC to Intel x86. And now, in 2020, from Intel to ARM.

During the first transition 680x0 code ran in an emulator. In subsequent transitions Apple has utilized the translation application Rosetta. From Apple’s documentation, “Rosetta is meant to ease the transition to Apple silicon, giving you time to create a universal binary for your app. It is not a substitute for creating a native version of your app.”

So, how can you tell if an application is already a “universal binary” that provides both x86 and ARM instructions? Open Terminal and find the application’s executable code. For standard macOS applications it lives inside the application bundle under Contents/MacOS. For example, Safari’s executable is at /Applications/Safari.app/Contents/MacOS/Safari. Now, we’re going to use the file Unix command to give us information as to the contents.

% file /Applications/Safari.app/Contents/MacOS/Safari
/Applications/Safari.app/Contents/MacOS/Safari: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e]
/Applications/Safari.app/Contents/MacOS/Safari (for architecture x86_64): Mach-O 64-bit executable x86_64
/Applications/Safari.app/Contents/MacOS/Safari (for architecture arm64e): Mach-O 64-bit executable arm64e

From this you can see that the Safari binary contains executable code for both the x86_64 (Intel) architecture and arm64e (ARM).

As of this writing, November 24, 2020, a few notable applications are already shipping universal binaries, such as Google Chrome and iTerm2. Of course, Apple’s flagship applications such as Safari, Xcode, Numbers, etc. all support the new ARM instruction set.

I’ve written a quick Ruby script to iterate through the executables in /Applications. To run on your machine:

/usr/bin/ruby -e "$(curl -fsSL"
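A sketch of what such a script can look like is below. It assumes the standard bundle layout where the executable name matches the .app name, and relies on the macOS file command’s output format.

```ruby
#!/usr/bin/env ruby

# True if `file` reports a fat (universal) Mach-O binary
def universal?(file_output)
  file_output.include?("Mach-O universal binary")
end

Dir.glob("/Applications/*.app") do |app|
  name = File.basename(app, ".app")
  exe  = File.join(app, "Contents", "MacOS", name)
  next unless File.executable?(exe)   # skip bundles with differently-named executables

  out = `file -b "#{exe}"`
  puts "#{name}: #{universal?(out) ? 'universal' : 'single-architecture'}"
end
```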

macOS Bundle Install and OpenSSL Gem

From time to time you run into an issue that requires no end of Googling to sort through. That was my case with using bundler and the OpenSSL gem on macOS Big Sur. Your Gemfile has the following contents:

gem "openssl"

and when running bundle install you’re greeted with this nonsense:

extconf.rb:99:in `<main>': OpenSSL library could not be found. You might want to use --with-openssl-dir=<dir> option to
specify the prefix where OpenSSL is installed. (RuntimeError)
An error occurred while installing openssl (2.2.0), and Bundler cannot continue.

Using gem install “only” required the following incantation:

gem install openssl --install-dir vendor/bundle -- --with-openssl-dir=/usr/local/Cellar/openssl@1.1/1.1.1h

For the life of me, though, I couldn’t figure out how to apply the --with-openssl-dir to bundle!

Well, dear reader, here is how:

bundle config path vendor/bundle
bundle config build.openssl --with-openssl-dir=/usr/local/Cellar/openssl@1.1/1.1.1h

The first line is obligatory and is the “modern” way of setting bundle‘s install path to vendor/bundle (which odds are you want anyway). The build.openssl setting will use the remaining information to pass to gem when installing openssl.

It goes without saying that the exact path used is dependent on your environment; for my Mac the OpenSSL headers and libraries were hanging out in the brew cellar.

Hopefully this post saves someone an hour or so!

Migrating Your Nameservers from GoDaddy to AWS Route 53

I can already hear the rabble shouting, “Why would you use GoDaddy as your domain registrar?!” A fair question, but sometimes we don’t always get to choose our domain registrar (e.g., we inherited it) and aren’t in a position to change it. But that doesn’t mean GoDaddy has to provide DNS management for your domain.

In this post I’ll show you how to change your domain’s nameservers from GoDaddy to AWS’ Route 53. Reader Beware: In this post we are going to change a domain’s nameservers from one provider (GoDaddy) to another (AWS). Note that this is not (I repeat, not) the same as transferring your DNS entries. I joke about GoDaddy’s repeated warnings that changing “nameservers” is risky, but unless your zone files have been populated in the new environment, you will definitely be in for calamity when your hostnames no longer resolve.

Getting AWS Route 53 Ready

Our first step is to create a hosted zone in AWS Route 53. Login in to your AWS account and go to the Route 53 dashboard and click Create hosted zone.

Our zone will be for our domain, and it will be a public hosted zone, in that we want public Internet DNS queries for the domain to be resolved. Once you’ve filled in this information, click Create hosted zone.

Once the zone is created you’ll see two DNS entries automatically created, an NS entry and an SOA entry.

We’re interested in the NS entry and the fully-qualified domain names listed. In this example there are four nameservers listed:


We’re now going to use these values in our GoDaddy account to change the nameservers for our domain.

Updating Our Nameservers

Before we update our domain’s nameservers, let’s verify that they are currently hosted by GoDaddy. In a shell type dig +short NS <your_domain_name>. For example:

dig +short NS

So far so good. Now login to your GoDaddy account that manages the domain, and go over to the DNS Management page. Type in your domain name and select it in the dropdown box.

Scroll down and find the Nameservers section and next to Using default nameservers click the Change button.

Here is where it becomes comical how many times GoDaddy implores us not to try to change nameservers. The first page warns you that Changing nameservers is risky. While that is true if you don’t know what you’re doing, you’re a professional, so click on Enter my own nameservers (advanced).

You’ll be presented with a form for entering our AWS nameserver FQDNs. Here it is important to take note to not add the period after the FQDN (GoDaddy will give you an Unexpected Error Occurred message if you try).

Enter all of the nameservers listed in the AWS NS record and click Save.

Once again we get a warning about our risky behavior! Yes, yes. Check, Yes, I consent and click Continue.

After clicking Continue you will likely see a banner at the top of the DNS management page indicating a change is in progress. Once completed you’ll see your nameservers listed, and GoDaddy indicating that “We can’t display your DNS information because your nameservers aren’t managed by us.”

Now, in a terminal you can type dig +short NS <your_domain_name> and you should see your nameservers updated, like this:

dig +short NS

And there you have it, your domain’s DNS entries can now be managed with AWS Route 53!

Selecting and Prioritizing Business Projects

First off, this post is not about how to manage your business projects but is instead about how to decide what to embark on in the first place. If you’re a technology leader like myself you know all too well that technology is often called upon to formulate and then implement large portions of the product roadmap. It can often be overwhelming to sort through.

Business Stakeholders and the Voices

In any organization there are multiple stakeholders: those with a vested interest in the success of the business. Each of these stakeholders comes to the table with their view of what ingredients are needed to ensure that success, and those views are often translated into some type of project and desired outcome.

How do stakeholders form their opinions and develop their views? By listening to the voices and finding themselves in agreement with what the voices are saying. Let’s take a look at the voices.

  • the voice of the market
  • the voice of the customer
  • the voice of business operations
  • the voice of technology
  • the voice of regulation

The voice of the market. In what direction is the overall market headed? For example, if you were in the business of home monitoring solutions in the 2010s, did you see the emerging market of low-cost remote camera systems such as Blink? If so, were the market trends incorporated into your business roadmap?

The voice of the customer. What are your existing customers asking for? Often their asks will overlap with the overall market, but not always. Do you give a lot of time and attention to smaller customers, since in aggregate they make up the long tail? The old adage a bird in the hand is worth two in the bush may come into play here as you decide how to prioritize projects.

The voice of business operations. Sales, marketing, account management, human resources, finance; all of these are components of a mature company and each of them will bring to the table a list of projects that address business needs. Sales and marketing may be in need of a CRM platform; account management is looking for ways to maximize client retention; finance is best served when it has a modern accounting package and streamlined revenue collection methods. All of these are areas where technology can be leveraged to drive productivity and thereby increase gross margins.

The voice of technology. Yes, technology has its own voice as well! If you’re a technology leader today you’re probably accustomed to the need of “paying down technical debt” or migrating from one software development stack to another. Perhaps the authentication mechanism you’ve been using for years is no longer appropriate or you’ve outgrown the database technology you started with. All of these are examples of projects that require resources but with the desire to, like other business units, offer productivity and reliability.

The voice of regulation. Ah yes, the voice of regulation, or as I like to call it, the voice of avoiding heavy fines or going to jail. If you’re in a heavily-regulated environment such as banking or healthcare or are subject to the GDPR it can take a lot of an ongoing effort to ensure compliance with the myriad regulations. Whether it is deploying multi-factor authentication across all of your applications or achieving certifications such as ISO 27001 or HITRUST, the stakeholders that are listening to the voice of regulation value projects that steer the business clear of legal calamities.

How to Select and Prioritize Projects

I like to make the distinction between selecting which projects to work on versus prioritizing existing projects. We’ve all been in situations where “every project is top priority” (my “favorite” phrase is “priority zero,” to imply it’s even more important than top priority) and I’m certainly guilty of pressing on my teams to try to accomplish multiple Herculean efforts at a time.

To be clear, there are a lot of methods for selecting projects to invest in, but a good method will always involve clearly articulating the goal of the project, its value, and to whom. From there it is key to plot the project on a prioritization matrix that has effort/cost and value as its axes. A common prioritization matrix from Stagen categorizes projects based upon which quadrant they fall into.

For example, consider the following projects:

  • Refactoring existing web application logging for consumption in Elasticsearch
  • Implementing customer alert tracing for operations support
  • Developing a next-generation prototype of a 5G-based wildfire tracking device
  • Migrating an existing SharePoint to Office 365

Ask yourself the following questions for each project:

  • Who (or which business units) are requesting or championing the project?
  • What is the defined outcome of the project?
  • What is the value of the project to the business or interested parties?
  • What is the overall effort required for the project?

Let’s dig into some details.

Value is in the Eye of the Beholder

Remember that your stakeholders are listening to a myriad of voices, and that their value statements about projects are based in large part on what they are hearing. In other words, just like beauty, value is in the eye of the beholder. Keep that in mind when assessing the value of a project.

Migrating your SharePoint installation to Office 365 may not be at the top of the CEO’s priorities, but I guarantee you that the marketing department will be excited. They’ve been working with the on-prem version since 2013 and IT can’t keep up with the demands being placed on the installation. Even IT is saying we need to migrate this to the cloud. To those business units this might be a “high-value” project.

You’re in the business of tracking wildfires and 2020 has kept you on your toes. Leveraging LoRa’s low-power and low-bandwidth requirements your device is a market leader, but it is going to begin to face competition from 5G-ready devices. To maintain your edge the product owner and sales team are advocating investing in a next-generation device, and armed with customer testimonials and tradeshow intel they claim this project is critical to the future success of the business.

Most technology companies run some type of operations support team tasked with monitoring the technology platform and in many cases serving as tier one or tier two customer support. Support operations run more efficiently and have greater impact when they are equipped with tools that provide visibility into the technology.

And finally, the technology team has been wanting to peel off a developer or two to refactor the existing code base to leverage modern logging techniques for unified log search in an Elasticsearch and Kibana platform. To them this is a very valuable capability that will enable them to quickly troubleshoot and diagnose issues, find areas for improving performance, and so on. The compliance team might also champion such an effort to meet an audit requirement for maintaining a centralized logging environment.
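
As a sketch of what such a logging refactor might aim for, the snippet below emits JSON-structured log lines that a shipper such as Filebeat could forward into Elasticsearch for search in Kibana. This is illustrative only; the field names and the `webapp` logger name are my assumptions, not a required schema:

```python
# Illustrative sketch: JSON-structured logging suitable for ingestion
# into Elasticsearch. Field names here are assumptions, not a standard.
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "@timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("webapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")  # emits one JSON line to stdout
```

One JSON object per line is the design choice that matters here: it lets a log shipper tail the file and index each line as a document without any custom parsing.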

As you can see, each of these projects may be considered “high value” by those requesting them. I’ve rarely come across individuals who propose projects that aren’t, in some sense, worth doing. The goal in assigning value for the purposes of selecting and prioritizing is to take a step back and add some level of normalization.

Effort is Subjective

Like value, effort is also subjective and may depend on who is doing the work. I like to think of effort as a function of both the number of individuals required to accomplish the project and the fixed and ongoing costs associated with it. Take the example of migrating SharePoint: at first glance it may seem to involve only IT, but in reality it could also include training time and expenses to bring the overall organization up to speed on the new capabilities and differences from the older version.

Or take the next-generation prototype of our wildfire tracking device. Even prototypes can take up a considerable amount of time depending upon where you start. Will the prototype require a new printed circuit board? Are you deciding to include a new type of microcontroller? How much new code will be required vs. porting over an existing codebase for the firmware? If you pose the effort question to different people you could get answers ranging from high to low depending upon what they are mentally including in their assessment.

Putting it All Together

Now that you’ve listed out all your projects and written clear and concise justifications for them, it is time to plot them on the prioritization matrix and select which ones to put resources against. We’re going to stick with our project examples and imagine that we’ve gone through the exercise to come up with the following.

Project                              Value    Effort
SharePoint Migration to Office 365   Medium   Medium
Customer Alert Tracing               Medium   Low
Next Generation Prototype            High     High
Centralized Logging                  Medium   High

2×2 Prioritization

The prioritization matrix presented here sorts projects into five basic categories:

  • Selectively Invest
  • Do First
  • Work In
  • Delay
  • Ignore
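
A minimal sketch of the quadrant-to-category mapping, assuming Low/Medium/High ratings like those in the table above. The cutoffs are my own illustrative guesses chosen to match the examples in this article, not a definitive rule:

```python
# Illustrative sketch: map a project's value/effort ratings to one of
# the five prioritization categories. The cutoffs are assumptions.
SCALE = {"Low": 0, "Medium": 1, "High": 2}

def categorize(value: str, effort: str) -> str:
    v, e = SCALE[value], SCALE[effort]
    if v == 2 and e == 2:
        return "Selectively Invest"  # high value, high effort
    if v >= 1 and e <= 1:
        return "Do First"            # moderate-to-high value, low-to-medium effort
    if v == 1 and e == 2:
        return "Work In"             # worthwhile, but only as time permits
    if v == 0 and e == 2:
        return "Ignore"              # little value, significant effort
    return "Delay"                   # some value, but not worth acting on now

projects = {
    "SharePoint Migration to Office 365": ("Medium", "Medium"),
    "Customer Alert Tracing": ("Medium", "Low"),
    "Next Generation Prototype": ("High", "High"),
    "Centralized Logging": ("Medium", "High"),
}

for name, (value, effort) in projects.items():
    print(f"{name}: {categorize(value, effort)}")
```

Run against the example projects, this sketch buckets the prototype under Selectively Invest, the SharePoint migration and alert tracing under Do First, and centralized logging under Work In.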

Let’s review each.

Selectively Invest

High-value, high-effort projects are those you selectively invest in. The use of the word invest here is quite deliberate: to devote one’s time, effort, or energy to a particular undertaking with the expectation of a worthwhile result. Think about that for a moment. An organization’s time and energy are both limited and finite; you only have so much of either. To invest is to direct that time and energy at a given project with the expectation that the result will pay off.

Do First

Most would agree that eating every day is high value (some might say necessary if you want to continue your existence on this rock) and relatively easy to do. The Do First quadrant is all about taking advantage of the fact that some things take little effort but yield a much higher rate of return. You might call these the proverbial low-hanging fruit. In the overall context of selecting endeavors for the business to provide resources for, Do First projects have a moderate-to-high value and take low-to-medium effort to accomplish. Get them knocked out.

Work In

I like to call this the “time permitting” category. Do you have some spare cycles? I know, most technology organizations would say no! Day in and day out there are demands placed on IT that sometimes seem to outstrip their capacity to get things accomplished. Even still there are always periods of time where the larger projects are entering their completion phase and resources become freed up, but not enough resources to start that next boulder. Take advantage of this time to “work in” some of those projects that you haven’t been able to start on.

Delay and Ignore

To delay something means to postpone or defer action. These projects have value, but the effort required may outweigh it, or perhaps the value won’t be fully realized until several market cycles in the future. Don’t spend resources and energy on them now; kick the proverbial can down the road.

Projects to ignore are those that have little value (to the overall organization or in general) and require a significant amount of effort. It’s worth repeating that most people don’t propose activities that, in their view, are of limited value, and they may not realize that what they propose requires a lot of effort. When applying effort and value ratings to projects, keep in mind that both can be subjective.