swiftformat, Decent Swift Syntax, and Sublime Text

If you’re writing Swift code for iOS you’re most likely going to be doing so in Xcode. If you’re coding Swift to run on the server, you might want to check out an editor like Sublime Text. In this post we’ll show you how to use Sublime, the Decent Swift Syntax package and swiftformat together to write some nicely formatted Swift code.

Sublime Text

Installing Sublime is as easy as heading over to the Sublime Text website and clicking the download button appropriate for your platform. In this post we’ll be using macOS.

Once you’ve installed Sublime Text, create a new file with File > New, then go to the Tools menu and select Command Palette. A search field will open. Start typing ‘package’ and select Package Control: Install Package.

The search field will change again to list available packages to install. Find and install Decent Swift Syntax. From here on, files ending in .swift will be recognized as Swift source, and Decent Swift Syntax will go to work.


The Decent Swift Syntax package relies on swiftformat to do the heavy lifting of reformatting your Swift code. To install it you will want to use brew: brew install swiftformat

Two Spaces and a Gotcha

With the end of the spaces vs. tabs war, the new battlefront formed was whether to indent two spaces or four. Anyone with sense knows that the answer is two spaces, so at the top of your Swift file you can add // swiftformat:options --indent 2 in order to tell swiftformat to format your code accordingly.

Now here is an interesting gotcha you may find when writing closures. Let’s say our closure function signature is something like:
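The original snippet isn’t shown here, but a plausible signature might look like this (the function name and parameter types are illustrative; only the status and error names come from the discussion below):

```swift
// A hypothetical completion-handler API; name and types are illustrative
func checkStatus(completion: @escaping (_ status: Int, _ error: Error?) -> Void) {
  // ...
}
```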

And we’re writing some code that supplies the closure, like:
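For example, with a hypothetical checkStatus function taking such a closure:

```swift
// Neither closure argument is referenced yet; we're still writing the body
checkStatus { status, error in
  print("TODO")
}
```

Save the file at this point and swiftformat rewrites the closure as `{ _, _ in`, since neither argument is used.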

If this is all you wrote when you saved your code, swiftformat would happily replace status and error with underscores because they were unused arguments. Out of habit I save files often, so it came as a bit of a surprise when my arguments started disappearing. Although swiftformat has a stripunusedargs option, it doesn’t appear to permit you to turn it off.

Swift on Linux in 2021

It has been nearly six years since Apple first open sourced the Swift language and brought it to Linux. In that time we’ve seen the rise (and sometimes fall) of server-side frameworks such as Zewo, Kitura, and Vapor, as well as ports of Swift to single-board computers such as the Raspberry Pi and BeagleBone.

I recently checked in with some folks in the Swift ARM community to find out if there was an easy way to install the latest version of Swift on Ubuntu. FutureJones pointed me to a Debian-based repository he’s been working on at Swiftlang.xyz. A nicely put together repo, Swiftlang.xyz supports multiple flavors of Debian and Ubuntu OSes as well as both x86 and ARM architectures! I’ve installed Swift 5.4 and the upcoming 5.6 release with great success.

Using https://swiftlang.xyz/public/ is a piece of cake with an installer script to configure your apt repository list automatically. arkansas below is an Ubuntu VM running on Apple Silicon with Parallels.

Type curl -s https://swiftlang.xyz/install.sh | sudo bash to get started.

I want to use Swift 5.6 so I’ll select option 2 which will include the dev repository in my apt sources.

Now it’s time to install Swift through the provided swiftlang package. apt-get install swiftlang is all it takes.

Once installed let’s kick the tires a bit. First, typing swift in the terminal will bring up the REPL:

To really test things out let’s hit a REST API and destructure the response. For this we’ll use ReqRes and just grab a user with GET https://reqres.in/api/users/2.

And now, some code!
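Here’s a sketch of what that code might look like. The structure mirrors the JSON that ReqRes returns for a single user (a data object with id, email, first_name, and last_name fields); the exact decoding approach is my own.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking  // URLSession lives here on Linux
#endif

// Shape of the ReqRes single-user payload
struct User: Decodable {
  let id: Int
  let email: String
  let first_name: String
  let last_name: String
}

struct UserResponse: Decodable {
  let data: User
}

let url = URL(string: "https://reqres.in/api/users/2")!
let semaphore = DispatchSemaphore(value: 0)

URLSession.shared.dataTask(with: url) { data, _, error in
  defer { semaphore.signal() }
  guard let data = data, error == nil else { return }
  if let response = try? JSONDecoder().decode(UserResponse.self, from: data) {
    let user = response.data
    print("\(user.first_name) \(user.last_name) <\(user.email)>")
  }
}.resume()

semaphore.wait()  // keep the process alive until the request completes
```

The semaphore is there because a command-line Swift program will otherwise exit before the asynchronous request finishes.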

Yes You Can Run Homebrew on an M1 Mac

One of the reasons I took the plunge and bought an M1-based Mac is to test out its performance and suitability as a development machine. An essential developer application on the Mac is Homebrew, the “missing package manager for macOS.” Although you cannot install Homebrew today to manage ARM-compiled packages, you can install Homebrew in the Rosetta environment and leverage the x86 packages.

I can’t take credit for coming up with the idea (that goes to OSX Daily), but I have a few improvements to share. I’m going to use iTerm2, and so should you.

Right click on your iTerm application icon and select Duplicate. Rename iTerm copy to something like iTerm x86 or iTerm Rosetta.

Now, right click on your new iTerm icon and click on Get Info and then check Open using Rosetta.

Open your iTerm Rosetta application and install Homebrew! Once installed you should be able to use brew install in the iTerm Rosetta application and use those installed packages seamlessly between the two environments. You won’t, however, be able to use brew install in your arm64 iTerm application (you’ll get Error: Cannot install in Homebrew on ARM processor in Intel default prefix).

Keeping Track

If you’re working in both x86 and ARM environments on your M1 Mac it is easy to lose track of which iTerm you are in. We can use a little zsh-foo to help us out. Add the following snippet to the end of your ~/.zshrc:
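The snippet below is a reconstruction based on the description that follows; the hex value used for “Intel blue” is an assumption.

```shell
# Turn the iTerm2 background Intel blue when running under Rosetta.
# Uses iTerm2's proprietary OSC 1337 SetColors escape sequence.
if [[ "$(arch)" = "i386" ]]; then
  printf '\e]1337;SetColors=bg=0071c5\a'
fi
```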

This little snippet takes advantage of iTerm2’s custom escape codes by setting the background to Intel blue if the arch command returns i386 (which it does if running in Rosetta).

We can do one better, however, by changing our iTerm Rosetta icon. Create your own icon, or, right-click the image below and select Copy Image. Then right-click your iTerm Rosetta application and select Get Info. In the upper-left click on the icon until you see a highlight around it and then paste the new icon image (Command-V).

Launch your iTerm Rosetta application and it’s much easier to distinguish between it and your “native” version.

Detecting macOS Universal Binaries

Apple has transitioned between instruction set architectures several times throughout its history: first from 680x0 to PowerPC, then from PowerPC to Intel x86, and now, in 2020, from Intel to ARM.

During the first transition 680x0 code ran in an emulator. In subsequent transitions Apple has utilized the translation application Rosetta. From Apple’s documentation, “Rosetta is meant to ease the transition to Apple silicon, giving you time to create a universal binary for your app. It is not a substitute for creating a native version of your app.”

So, how can you tell if an application is already a “universal binary” that provides both x86 and ARM instructions? Open Terminal and find the application’s executable code. For standard macOS applications it is located in /Applications/Application.app/Contents/MacOS/. For example, Safari’s executable is at /Applications/Safari.app/Contents/MacOS/Safari. Now, we’re going to use the file Unix command to give us information as to the contents.

% file /Applications/Safari.app/Contents/MacOS/Safari
Safari.app/Contents/MacOS/Safari: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e]
Safari.app/Contents/MacOS/Safari (for architecture x86_64): Mach-O 64-bit executable x86_64
Safari.app/Contents/MacOS/Safari (for architecture arm64e): Mach-O 64-bit executable arm64e

From this you can see that the Safari binary contains executable code for both the x86_64 (Intel) architecture and arm64e (ARM).

As of this writing, November 24, 2020, a few notable applications are already shipping universal binaries, such as Google Chrome and iTerm2. Of course, Apple’s flagship applications such as Safari, Xcode, Numbers, etc. all support the new ARM instruction set.

I’ve written a quick Ruby script to iterate through the executables in /Applications. To run on your machine:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/iachievedit/detectUniversalBinary/main/detectUniversalBinary.rb)"

macOS Bundle Install and OpenSSL Gem

From time to time you run into an issue that requires no end of Googling to sort through. That was my case with using bundler and the OpenSSL gem on macOS Big Sur. Your Gemfile has the following contents:

gem "openssl"

and when running bundle install you’re greeted with this nonsense:

extconf.rb:99:in `<main>': OpenSSL library could not be found. You might want to use --with-openssl-dir=<dir> option to
specify the prefix where OpenSSL is installed. (RuntimeError)
An error occurred while installing openssl (2.2.0), and Bundler cannot continue.

Using gem install “only” required the following incantation:

gem install openssl --install-dir vendor/bundle -- --with-openssl-dir=/usr/local/Cellar/openssl@1.1/1.1.1h

For the life of me, though, I couldn’t figure out how to apply the --with-openssl-dir to bundle!

Well, dear reader, here is how:

bundle config path vendor/bundle
bundle config build.openssl --with-openssl-dir=/usr/local/Cellar/openssl@1.1/1.1.1h

The first line is obligatory and is the “modern” way of setting bundle’s install path to vendor/bundle (which odds are you want anyway). The build.openssl setting will use the remaining information to pass to gem when installing openssl.

It goes without saying that the exact path used is dependent on your environment; for my Mac the OpenSSL headers and libraries were hanging out in the brew cellar.

Hopefully this post saves someone an hour or so!

Migrating Your Nameservers from GoDaddy to AWS Route 53

I can already hear the rabble shouting, “Why would you use GoDaddy as your domain registrar?!” A fair question, but sometimes we don’t always get to choose our domain registrar (e.g., we inherited it) and aren’t in a position to change it. But that doesn’t mean GoDaddy has to provide DNS management for your domain.

In this post I’ll show you how to change your domain’s nameservers from GoDaddy to AWS’ Route 53. Reader Beware: In this post we are going to change a domain’s nameservers from one provider (GoDaddy) to another (AWS). Note that this is not (I repeat, not) the same as transferring your DNS entries. I joke about GoDaddy’s repeated warnings that changing “nameservers” is risky, but unless your zone files have been populated in the new environment, you will definitely be in for calamity when your hostnames no longer resolve.

Getting AWS Route 53 Ready

Our first step is to create a hosted zone in AWS Route 53. Log in to your AWS account, go to the Route 53 dashboard, and click Create hosted zone.

Our zone will be for the domain sonorasecurity.com and it will be a public hosted zone, in that we want public Internet DNS queries for our domain to be resolved. Once you’ve filled in this information, click Create hosted zone.

Once the zone is created you’ll see two DNS entries automatically created, an NS entry and an SOA entry.

We’re interested in the NS entry and the fully-qualified domain names listed. In this example there are four nameservers listed:

  • ns-1528.awsdns-63.org
  • ns-95.awsdns-11.com
  • ns-658.awsdns-18.net
  • ns-1724.awsdns-23.co.uk

We’re going to now use these values in our GoDaddy account to change the nameservers for our domain sonorasecurity.com.

Updating Our Nameservers

Before we update our domain’s nameservers, let’s verify that they are currently hosted by GoDaddy. In a shell type dig +short NS <your_domain_name>. For example:

dig +short NS sonorasecurity.com
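While GoDaddy is still providing DNS, the output will list GoDaddy’s domaincontrol.com nameservers. The specific hostnames vary by account; these are purely illustrative:

```shell
$ dig +short NS sonorasecurity.com
ns53.domaincontrol.com.
ns54.domaincontrol.com.
```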

So far so good. Now login to your GoDaddy account that manages the domain, and go over to the DNS Management page. Type in your domain name and select it in the dropdown box.

Scroll down and find the Nameservers section and next to Using default nameservers click the Change button.

Here is where it becomes comical how many times GoDaddy implores us not to try to change nameservers. The first page warns you that Changing nameservers is risky. While that is true if you don’t know what you’re doing, you’re a professional, so click on Enter my own nameservers (advanced).

You’ll be presented with a form for entering our AWS nameserver FQDNs. Here it is important to note that you must not add a trailing period after each FQDN (GoDaddy will give you an Unexpected Error Occurred message if you try).

Enter all of the nameservers listed in the AWS NS record and click Save.

Once again we get a warning about our risky behavior! Yes, yes. Check, Yes, I consent and click Continue.

After clicking Continue you will likely see a banner at the top of the DNS management page indicating a change is in progress. Once completed you’ll see your nameservers listed, and GoDaddy indicating that “We can’t display your DNS information because your nameservers aren’t managed by us.”

Now, in a terminal you can type dig +short NS <your_domain_name> and you should see your nameservers updated, like this:

dig +short NS sonorasecurity.com
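The output should now list the four AWS nameservers from the hosted zone’s NS record:

```shell
$ dig +short NS sonorasecurity.com
ns-1528.awsdns-63.org.
ns-95.awsdns-11.com.
ns-658.awsdns-18.net.
ns-1724.awsdns-23.co.uk.
```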

And there you have it, your domain’s DNS entries can now be managed with AWS Route 53!

Selecting and Prioritizing Business Projects

First off, this post is not about how to manage your business projects but is instead about how to decide what to embark on in the first place. If you’re a technology leader like myself you know all too well that technology is often called upon to formulate and then implement large portions of the product roadmap. It can often be overwhelming to sort through.

Business Stakeholders and the Voices

In any organization there are multiple stakeholders: those with a vested interest in the success of the business. Each of these stakeholders comes to the table with their view of what ingredients are needed to ensure that success, and those views are often translated into some type of project and desired outcome.

How do stakeholders form their opinions and develop their views? By listening to the voices and finding themselves in agreement with what the voices are saying. Let’s take a look at the voices.

  • the voice of the market
  • the voice of the customer
  • the voice of business operations
  • the voice of technology
  • the voice of regulation

The voice of the market. In what direction is the overall market headed? For example, if you were in the business of home monitoring solutions in the 2010s, did you see the emerging market of low-cost remote camera systems such as Blink? If so, were the market trends incorporated into your business roadmap?

The voice of the customer. What are your existing customers asking for? Often their asks will overlap with the overall market, but not always. Do you give a lot of time and attention to smaller customers, since in aggregate they make up the long tail? The old adage a bird in the hand is worth two in the bush may come into play here as you decide how to prioritize projects.

The voice of business operations. Sales, marketing, account management, human resources, finance; all of these are components of a mature company and each of them will bring to the table a list of projects that address business needs. Sales and marketing may be in need of a CRM platform; account management is looking for ways to maximize client retention; finance is best served when it has a modern accounting package and streamlined revenue collection methods. All of these are areas where technology can be leveraged to drive productivity and thereby increase gross margins.

The voice of technology. Yes, technology has its own voice as well! If you’re a technology leader today you’re probably accustomed to the need of “paying down technical debt” or migrating from one software development stack to another. Perhaps the authentication mechanism you’ve been using for years is no longer appropriate or you’ve outgrown the database technology you started with. All of these are examples of projects that require resources but with the desire to, like other business units, offer productivity and reliability.

The voice of regulation. Ah yes, the voice of regulation, or as I like to call it, the voice of avoiding heavy fines or going to jail. If you’re in a heavily-regulated environment such as banking or healthcare or are subject to the GDPR it can take a lot of an ongoing effort to ensure compliance with the myriad regulations. Whether it is deploying multi-factor authentication across all of your applications or achieving certifications such as ISO 27001 or HITRUST, the stakeholders that are listening to the voice of regulation value projects that steer the business clear of legal calamities.

How to Select and Prioritize Projects

I like to make the distinction between selecting which projects to work on versus prioritizing existing projects. We’ve all been in situations where “every project is top priority” (my “favorite” phrase is “priority zero” to imply it’s even more important than top priority) and I’m certainly guilty of pressing on my teams to try to accomplish multiple Herculean efforts at a time.

To be clear, there are many methods for selecting projects to invest in, but a good method will always involve clearly articulating the goal of the project, its value, and to whom that value accrues. From there it is key to plot the project on a prioritization matrix that has as its axes effort/cost and value. A common prioritization matrix from Stagen categorizes projects based upon which quadrant they fall into.

For example, consider the following projects:

  • Refactoring existing web application logging for consumption in Elasticsearch
  • Implementing customer alert tracing for operations support
  • Developing a next-generation prototype of a 5G-based wildfire tracking device
  • Migrating an existing SharePoint to Office 365

Ask yourself the following questions for each project:

  • Who (or which business units) are requesting or championing the project?
  • What is the defined outcome of the project?
  • What is the value of the project to the business or interested parties?
  • What is the overall effort required for the project?

Let’s dig into some details.

Value is in the Eye of the Beholder

Remember that your stakeholders are listening to a myriad of voices, and that their value statements about projects are based in large part on what they are hearing. In other words, just like beauty, value is in the eye of the beholder. Keep that in mind when assessing the value of a project.

Migrating your SharePoint installation to Office 365 may not be at the top of the CEO’s priorities, but I guarantee you that the marketing department will be excited. They’ve been working with the on-prem version since 2013 and IT can’t keep up with the demands being placed on the installation. Even IT is saying we need to migrate this to the cloud. To those business units this might be a “high-value” project.

You’re in the business of tracking wildfires and 2020 has kept you on your toes. Leveraging LoRa’s low-power and low-bandwidth requirements your device is a market leader but is going to begin to face competition from 5G-ready devices. To maintain your edge the product owner and sales team are advocating investing in a next-generation device, and armed with customer testimonials and tradeshow intel they claim this project is critical to the future success of the business.

Most technology companies run some type of operations support team tasked with monitoring the technology platform and in many cases serving as tier one or tier two customer support. Support operations run more efficiently and have greater impact when they are equipped with tools that provide visibility into the technology.

And finally, the technology team has been needing to peel off a developer or two and refactor the existing code base to leverage the latest logging techniques for unified log search capabilities in an Elasticsearch and Kibana platform. To them this is a very valuable capability that will enable them to quickly troubleshoot and diagnose issues, find areas for improving performance, and so on. The compliance team might also champion such an effort to meet that audit requirement for maintaining a centralized logging environment.

As you can see, each of these projects may be considered “high value” to those that are requesting it. I’ve rarely come across individuals that propose projects that aren’t, in some sense, worth doing. The goal in assigning value for the purposes of selecting and prioritizing is to take a step back and add some level of normalization.

Effort is Subjective

Like value, effort is also subjective and may depend on who is doing the effort. What I like to do is think of effort as a product of both the number of individuals required to accomplish the project along with the fixed and ongoing costs associated with it. Let’s take the example of migrating SharePoint. At first glance it may seem like it would take only IT, but in reality it could also include training time and expenses to bring the overall organization up-to-speed on the new capabilities or differences from the older version.

Or take the next-generation prototype of our wildfire tracking device. Even prototypes can take up a considerable amount of time depending upon where you start. Will the prototype require a new printed circuit board? Are you deciding to include a new type of microcontroller? How much new code will be required vs. porting over an existing codebase for the firmware? If you pose the effort question to different people you could get answers ranging from high to low depending upon what they are mentally including in their assessment.

Putting it All Together

Now that you’ve listed out all your projects and written clear and concise justifications for them, it is time to plot them on the prioritization matrix and select which ones to put resources against. We’re going to stick with our project examples and imagine that we’ve gone through the exercise to come up with the following.

Project                               Value    Effort
SharePoint Migration to Office 365    Medium   Medium
Customer Alert Tracing                Medium   Low
Next Generation Prototype             High     High
Centralized Logging                   Medium   High

2×2 Prioritization

The prioritization matrix presented here sorts projects into five basic categories:

  • Selectively Invest
  • Do First
  • Work In
  • Delay
  • Ignore

Let’s review each.

Selectively Invest

High value and high effort projects are those you invest in. The use of the word invest here is quite deliberate: to devote one’s time, effort, or energy to a particular undertaking with the expectation of a worthwhile result. Think about that for a moment. Time and energy for an organization are both limited and finite; you only have so much of either. To invest is to direct one’s time and energy at a given project with the expectation that the result will pay off.

Do First

Most would agree that eating every day is high value (some might say necessary if you want to continue your existence on this rock) and relatively easy to do. The Do First quadrant is all about taking advantage of the fact that some things take little effort but yield a much higher rate of return. You might call these the proverbial low-hanging fruit. In the overall context of selecting endeavors for the business to provide resources for, Do First projects have a moderate-to-high value and take low-to-medium effort to accomplish. Get them knocked out.

Work In

I like to call this the “time permitting” category. Do you have some spare cycles? I know, most technology organizations would say no! Day in and day out there are demands placed on IT that sometimes seem to outstrip their capacity to get things accomplished. Even still there are always periods of time where the larger projects are entering their completion phase and resources become freed up, but not enough resources to start that next boulder. Take advantage of this time to “work in” some of those projects that you haven’t been able to start on.

Delay and Ignore

To delay something means to postpone or defer action. These projects have value, but the effort required may outweigh it, or perhaps the value won’t be fully realized until several market cycles in the future. Don’t spend resources and energy on them now; kick the proverbial can down the road.

Projects to ignore are those that have little value (either to the overall organization or in general) and require a significant amount of effort. It’s important to recognize again that most people don’t propose activities that, in their view, are of limited value, and they may not realize how much effort their proposals require. When discussing the application of effort and value ratings to projects, keep in mind once again that both can be subjective.

Encrypting Existing S3 Buckets

Utilizing encryption everywhere, particularly in cloud environments, is a solid idea that just makes good sense. AWS S3 makes it easy to create buckets whose objects are encrypted by default, but what if you didn’t initially configure it that way and already have objects uploaded?

It’s easy enough to change the default encryption setting of the bucket. Select the Default Encryption box and choose one of the encryption options. I prefer the simplicity of choosing the AWS-managed keys for AES-256. Click Save.

You can now see that the default encryption setting for the bucket is AES-256. That is, any new objects uploaded to the bucket will automatically be encrypted.

Now, we talked about new objects uploaded to the bucket, but what about existing objects? That’s where the catch is: changing the default encryption of the bucket does not affect existing objects!

To remedy this one must copy all of the objects in the S3 bucket “onto” themselves. Yes, that’s really how it is done. This can be accomplished easily using the application s3cmd. s3cmd can be installed using apt-get on Debian-based systems, or brew on macOS. For more installation options of s3cmd see S3tools.org.

With s3cmd cp you provide the source and destination buckets. In this case the source and destination are the same. Make sure to include the --recursive option (similar to using cp -R to copy directories).
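For example, assuming a bucket named my-bucket (the name is illustrative):

```shell
# Copy every object in the bucket onto itself so each object
# picks up the bucket's new default encryption setting
s3cmd cp --recursive s3://my-bucket/ s3://my-bucket/
```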

Reloading an existing object’s overview in the S3 console shows that the object is now encrypted!

And remember: future objects uploaded to this S3 bucket will be encrypted automatically, and you only need to do the copy-over method once.

macOS Big Sur Battery Percentage In the Menu Bar

After upgrading to macOS Big Sur beta I noticed that the battery percentage disappeared from my menu bar. One would think getting it back would be as simple as clicking on the battery icon in the menu and selecting Show Percentage, and how could you fault someone for assuming that?

Yet that is a path that leads to both ruin and no menu option to show the battery percentage. Now, to do that you need to go to System Preferences, and find the new preference pane named Dock & Menu Bar:

Click on it and scroll down the left-hand side of the pane until you find the Battery item. Click to discover Show Percentage.

Select Show Percentage and Big Sur will happily once again provide you with what I consider a useful bit of information for determining how much battery you have left.

I2C with the SiFive HiFive1 Rev B

Hey kids! Today we’re going to take a look at the SiFive HiFive1 Rev B and Freedom Metal I2C API.

I am going to be using a classic EEPROM from National Semiconductor, the NM24C17. The NM24C17 is a 16-kilobit (2,048-byte) EEPROM that can be written to and read from using I2C.

If you have one of these EEPROMs lying around (and who doesn’t?) and want to use it with your HiFive1 board, you’ll also need:

  • the datasheet
  • a breadboard
  • breadboard wires
  • two 4.7kΩ pull-up resistors

You might also want to have handy a digital logic analyzer such as the Logic 8 from Saleae.

The I2C circuit is a simple one, but it is important to note that the NM24C17 EEPROM does not come with I2C pull-up resistors, so we need to add them in our circuit.

Assemble everything, connecting the HiFive1 I2C pins and providing power.

Reading the Datasheet

When working with I2C devices it is so important to read through the datasheet once or twice. Or ten times. Datasheets can be dense and intimidating, but I have rarely come across an issue I was troubleshooting that didn’t end up being caused by not reading the datasheet closely.


Okay, let’s start coding some I2C with Freedom Metal. We’ll start with a basic shell.

Working with I2C in Freedom Metal starts with including the <metal/i2c.h> header file and obtaining a pointer to the I2C device with metal_i2c_get_device. For the HiFive1 Rev B board there is only one device to get, and it’s at index 0. Once you have a pointer to the I2C device, initialize it with metal_i2c_init. We’ll configure our device for 100 kbits/sec (I2C “full speed”) and as the master.
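A minimal sketch of that basic shell, using the Freedom Metal calls just described:

```c
#include <stdio.h>
#include <metal/i2c.h>

struct metal_i2c *i2c_device;

int main(void) {
  // The HiFive1 Rev B has a single I2C controller, at index 0
  i2c_device = metal_i2c_get_device(0);
  if (i2c_device == NULL) {
    printf("Failed to get I2C device\n");
    return 1;
  }

  // 100 kbits/sec ("full speed"), acting as the bus master
  metal_i2c_init(i2c_device, 100000, METAL_I2C_MASTER);

  return 0;
}
```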

Now, let’s look at our first write function, which will be to write a sequence of bytes to the EEPROM at a given address. This code is very specific to the way the NM24C17 EEPROM functions. We will be using the metal_i2c_write function which takes as its arguments:

  • a pointer to the I2C controller device on the RISC-V chip
  • the address of the I2C bus device to talk to
  • the length of the message to send to the bus device
  • the message to send
  • a flag indicating whether or not to send the I2C stop bit

The first argument will be our struct metal_i2c* i2c_device variable, but the address of the EEPROM on the bus is interesting.

At first blush it appears the address would be 0xa0, since the first four bits to transmit are 1 0 1 0, and for the NM24C17 device the 3 page address bits appear to be all 0 (appear is the operative word). That leaves us with the R/W bit, which for a write would be 0. 1010 0000b, or 0xa0, right? Wrong. The R/W bit is not part of the device address here, which leaves us with 1010000b, which is 0x50.

The message to send to the EEPROM consists of two bytes: the memory address in the EEPROM to write to and the value to write. For simplicity we’ll just write the value 0xab at the address 0x00.

The final argument to metal_i2c_write is to indicate whether or not to signal an I2C stop bit upon completion. Since the stop bit is required for us to write this data to the EEPROM we will use METAL_I2C_STOP_ENABLE.
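Putting that together (continuing with the i2c_device pointer from initialization), the single-byte write looks something like this:

```c
// Write the value 0xab to EEPROM memory address 0x00
uint8_t writebuf[2];
writebuf[0] = 0x00;  // memory address within the EEPROM
writebuf[1] = 0xab;  // the value to store there

// 0x50 is the EEPROM's bus address (page block 0)
metal_i2c_write(i2c_device, 0x50, 2, writebuf, METAL_I2C_STOP_ENABLE);
```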

One thing I’ve found to be true about I2C is that if it works, it works. If it doesn’t, you better have a digital logic analyzer on hand to look at things.


Now let’s read the data that we wrote back in. This requires two function calls: metal_i2c_write and metal_i2c_read.
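A sketch of the write-then-read sequence:

```c
uint8_t readbuf[1];
readbuf[0] = 0x00;  // EEPROM address we want to read from

// First write the address with no stop bit...
metal_i2c_write(i2c_device, 0x50, 1, readbuf, METAL_I2C_STOP_DISABLE);
// ...then read one byte back; readbuf now holds the data
metal_i2c_read(i2c_device, 0x50, 1, readbuf, METAL_I2C_STOP_ENABLE);

printf("Read 0x%02x\n", readbuf[0]);
```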

Notice above that the first step to reading a byte from the EEPROM is to write out the address to be read from, followed by a read. There is only one stop bit in this sequence:

Notice the use of METAL_I2C_STOP_DISABLE; this instructs the I2C controller not to signal a stop bit at the conclusion of the write.

We make double use of the readbuf array by initializing it to the address in the EEPROM we want to read from, and then to hold the data read in.

Writing and Reading Multiple Bytes

Writing and reading one byte at a time to our EEPROM is a bit tedious, so let’s make use of the multiple-byte write.

In this example we send the device address (again, 0x50 for the EEPROM), followed by an address to write to in the EEPROM, and then up to 16 bytes of data.

Before we get to writing again, I’ve made mention that 0x50 is the I2C device address of the EEPROM. While it is, that isn’t the whole story. That is the address for page block 0 of the device, but it supports 8 page blocks. Selecting the page block is done with the lower nibble of the device address. For example, page block 1 can be addressed at 0x51, page block 2 at 0x52, and so on. Each page block is 2 kilobits, or 256 bytes.

At any rate, let’s stick with page block 0 for now and write 16 bytes:
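A sketch of the page write (the data values are arbitrary):

```c
uint8_t writebuf[17];
writebuf[0] = 0x00;        // starting address within page block 0
for (int i = 0; i < 16; i++) {
  writebuf[i + 1] = i;     // 16 bytes of data
}

// 17 bytes on the wire: 1 address byte followed by 16 data bytes
metal_i2c_write(i2c_device, 0x50, 17, writebuf, METAL_I2C_STOP_ENABLE);
```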

Of course, we are actually writing 17 bytes out to the EEPROM, the first of which is the address we want to write to.

Reading the data back in can look like this:
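As before, we write the starting address without a stop bit and then issue the read, this time for 16 bytes:

```c
uint8_t readbuf[16];
readbuf[0] = 0x00;  // starting address to read from

metal_i2c_write(i2c_device, 0x50, 1, readbuf, METAL_I2C_STOP_DISABLE);
metal_i2c_read(i2c_device, 0x50, 16, readbuf, METAL_I2C_STOP_ENABLE);

for (int i = 0; i < 16; i++) {
  printf("0x%02x ", readbuf[i]);
}
printf("\n");
```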

A Smarter API

After understanding the basics of reading and writing to the NM24C17 it’s time to write an API to encapsulate the nuts and bolts. Our header file looks like this:
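The header below is a hypothetical sketch of such an API; the type and function names are illustrative, not the actual project code.

```c
// nm24c17.h -- a hypothetical API sketch for the NM24C17 EEPROM
#ifndef NM24C17_H
#define NM24C17_H

#include <stddef.h>
#include <stdint.h>

struct metal_i2c;  // defined in <metal/i2c.h>

#define NM24C17_BASE_ADDR 0x50
#define NM24C17_PAGE_SIZE 16

// addr sits immediately before buf so a single metal_i2c_write can
// send the EEPROM address byte and the data in one transfer.
typedef struct {
  uint8_t addr;                   // starting address within the page block
  uint8_t buf[NM24C17_PAGE_SIZE]; // up to one page (16 bytes) of data
} nm24c17_io_t;

int nm24c17_write(struct metal_i2c *i2c, uint8_t page_block,
                  nm24c17_io_t *io, size_t len);
int nm24c17_read(struct metal_i2c *i2c, uint8_t page_block,
                 nm24c17_io_t *io, size_t len);

#endif
```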

Our structure is arranged such that the addr byte is positioned immediately prior to buf. This organization allows us to take advantage of C’s contiguous memory layout when writing to the EEPROM. For example:

Our device address is the base address of the EEPROM (0x50) ORed with the page block number. The length of the message to write is the length of the data buffer the user wants to write plus the address byte. Writing starts at the address byte and continues into the buffer.
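A hypothetical implementation of that write call (the names match the header sketch above and are illustrative):

```c
int nm24c17_write(struct metal_i2c *i2c, uint8_t page_block,
                  nm24c17_io_t *io, size_t len) {
  // Base address 0x50 ORed with the page block number (0-7)
  unsigned int device_addr = NM24C17_BASE_ADDR | page_block;

  // len data bytes plus the leading address byte, sent in one transfer;
  // this relies on addr sitting immediately before buf in memory
  return metal_i2c_write(i2c, device_addr, len + 1,
                         &io->addr, METAL_I2C_STOP_ENABLE);
}
```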

Remember the exhortation to read the datasheet, and read it several more times? Here is what happens if you start a write on an address not evenly divisible by 16. Notice that our write address starts at 0x22 in page block 1. Sixteen bytes are presumably written, but when trying to read them back in, the last two bytes read are 0xff. Hmm.

From the National Semiconductor datasheet definitions:

  • PAGE – 16 sequential addresses (one byte each) that may be programmed during a “Page Write” programming cycle.
  • PAGE BLOCK – 2,048 (2K) bits organized into 16 pages of addressable memory. (8 bits) x (16 bytes) x (16 pages) = 2,048 bits

But wait! The Fairchild-printed version of this EEPROM’s datasheet says a bit more:

To minimize write cycle time, NM24C16/17 offer Page Write feature, by which, up to a maximum of 16 contiguous bytes locations can be programmed all at once (instead of 16 individual byte writes). To facilitate this feature, the memory array is organized in terms of “Pages.” A Page consists of 16 contiguous byte locations starting at every 16-Byte address boundary (for example, starting at array address 0x00, 0x10, 0x20 etc.)

Like I said, always read the datasheet and sometimes you have to read two of them to get the whole story.


There are four basic functions needed to use I2C with Freedom Metal:

  • metal_i2c_get_device – obtain a pointer to the underlying I2C device on the microcontroller
  • metal_i2c_init – initialize the I2C device speed and mode
  • metal_i2c_write – address and send data on the bus
  • metal_i2c_read – address and read data from the bus

It really is that simple!

Get the Code

I’ve recently started using PlatformIO to develop on my HiFive1. If you haven’t checked it out I recommend doing so; it’s very easy to install and you can start making use of your board right away without having to manually download toolchains and JTAG interfaces. I’ve posted a PlatformIO-based project on GitHub for working with the NM24C17.