Category Archives: Hacking

Using Apple’s New WeatherKit REST API

In late March 2020 Apple purchased the Dark Sky app, and along with it the Dark Sky API. The Dark Sky API is no longer accepting sign-ups and will go offline on March 31, 2023. If you’re a developer who needs a reliable weather API, Apple now provides WeatherKit. Typical “Kit” APIs are only available on Apple devices as libraries, but in this case WeatherKit does have a REST web service available. Let’s look at how to use it.

The WeatherKit API does require an Apple developer account and access to the Developer Console to configure. Though the developer account costs $99 a year, that includes 500,000 WeatherKit API calls per month. For me personally that is a much better deal than a service such as AccuWeather or OpenWeather.

Provisioning

To get started with WeatherKit we need to do some provisioning in the Developer Console. The first step will be to create a key.

Select Certificates, Identifiers & Profiles and then Keys. Click the blue circle with white cross to add a new key. We’ll call the key myweatherapp. Check the box next to WeatherKit.

Click Continue and then Register on the Register a New Key screen.

Make note on this screen that you’re about to download a new key, and once you’ve done so you won’t be able to download it again. This key is necessary to access the WeatherKit REST API (you’ll be signing tokens with it), so keep it in a safe place.

Download your key, and also make note of the Key ID. We’re going to use it later. Look in your Downloads folder for AuthKey_KEYID.p8. In our case the filename is AuthKey_9U5ZXJ4Y65.p8.

Once you’ve downloaded your key, click Done.

Now that we have our key, it’s time to provision our service identifier. Click on Identifiers and once again, the blue circle with white cross. Choose Services ID and Continue.

We’re going to use the reverse domain name notation for the service, i.e., it.iachieved.myweatherapp.

Click Continue and then Register.

Preparing the Keys

The .p8 file downloaded is a plain text file containing an elliptic curve private key in PKCS#8 format. It is not encrypted. We’re going to want the key in PEM format, so let’s convert it with openssl:

openssl pkcs8 -nocrypt -in AuthKey_9U5ZXJ4Y65.p8 -out AuthKey_9U5ZXJ4Y65.pem

NB: The option -nocrypt is required!

We also need the public key component (it will be used to verify the JWT signature), so obtain it with openssl:

openssl ec -in AuthKey_9U5ZXJ4Y65.pem -pubout > AuthKey_9U5ZXJ4Y65.pub

You should now have two files: the private key and the public key.

Creating and Signing a JWT

Let’s recall how accessing a REST API with a JSON Web Token works.

Apple runs the WeatherKit API service, and in the Developer console you created a key. Apple kept a copy of the public key which it will use to verify JWT signatures. Your application is going to construct a JWT and sign it with the private key. This signed JWT will be presented as a bearer token to the API. If Apple can validate your signature and that your token contents identify provisioned WeatherKit services, you’re golden.

We’ll use jwt.io to create a JWT by hand.

The JWT to access the WeatherKit API must contain the following header elements:

  • alg – ES256
  • kid – the Key ID obtained when creating your key
  • id – your Developer Account Team ID, then a period, and then your Services Identifier (the reverse domain name)
  • typ – JWT

The JWT payload must contain the following:

  • iss – your Developer Account Team ID
  • sub – the Services identifier (the reverse domain name)
  • iat – the standard “issued at” timestamp in Unix epoch time
  • exp – an expiration timestamp in Unix epoch time

Here is an example in jwt.io:

When I need a quick copy-paste of the iat and exp I use this Python one-liner:

python -c'import time; n=int(time.time()); print("\"iat\": %d," % n); print("\"exp\": %d," % (n+3600))'

Once you’ve constructed your token’s contents it’s time to sign it. In jwt.io this is accomplished by pasting the public and private key contents into the Verify Signature inputs.

If successful, you should see something like:

The encoded and signed token is your bearer token that is presented to the WeatherKit API for authentication and authorization. Note the contents of the private key and the bearer token were blurred, but the JWT contents were not. You can construct the same contents, but unless they’re signed with the private key of our provisioned service they’re unusable. Thus it is important to keep your private key private!

Calling the API

With a bearer token in hand you can make calls to the WeatherKit API!

The first call we’ll make is to determine what WeatherKit API services are available for the GPS location 32.779167/-96.808891, which happens to be Dallas, Texas. In these examples <TOKEN> should be replaced with the bearer token.

curl "https://weatherkit.apple.com/api/v1/availability/32.779167/-96.808891?country=US" -H 'Authorization: Bearer <TOKEN>'

This returns:

["currentWeather","forecastDaily","forecastHourly","forecastNextHour","weatherAlerts"]

In other words, for this location, we can obtain the current weather, daily forecast, hourly forecast, forecast for the next hour, and weather alerts. We’ll just check the current weather.

curl "https://weatherkit.apple.com/api/v1/weather/en_US/32.779167/-96.808891?dataSets=currentWeather" 
     -H 'Authorization: Bearer <TOKEN>'

A few things to note here:

  • the route changed to /api/v1/weather/
  • the inclusion of a locale code (e.g., en_US)
  • the query parameter dataSets

The result of the call:

{"currentWeather":{"name":"CurrentWeather","metadata":{"attributionURL":"https://weatherkit.apple.com/legal-attribution.html","expireTime":"2022-09-11T14:57:09Z","latitude":32.779,"longitude":-96.809,"readTime":"2022-09-11T14:52:09Z","reportedTime":"2022-09-11T14:52:09Z","units":"m","version":1},"asOf":"2022-09-11T14:52:09Z","cloudCover":0.41,"conditionCode":"PartlyCloudy","daylight":true,"humidity":0.70,"precipitationIntensity":0.00,"pressure":1021.51,"pressureTrend":"rising","temperature":22.17,"temperatureApparent":22.72,"temperatureDewPoint":16.52,"uvIndex":3,"visibility":26705.41,"windDirection":348,"windGust":30.99,"windSpeed":18.18}}

The units are returned in metric, and annoyingly the condition code will need to be mapped to make it user friendly (“Partly Cloudy” instead of “PartlyCloudy”).

A C++ JWT Signing Implementation

We used the jwt.io site to quickly cobble together a usable bearer token, but if you’re building an actual application to make requests to the WeatherKit API you’re going to want to implement JWT signing in code.

Here’s an example in C++ utilizing the jwt-cpp header-only library.
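
The original listing isn’t reproduced here, but a minimal sketch of the idea follows. The Team ID is a placeholder, the Key ID and service identifier are the ones provisioned above, and the keys are read from the PEM files we produced with openssl.

#include <jwt-cpp/jwt.h>

#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Slurp an entire PEM file into a string
static std::string readFile(const std::string& path) {
  std::ifstream in(path);
  std::stringstream ss;
  ss << in.rdbuf();
  return ss.str();
}

int main() {
  const std::string teamId    = "ABCDE12345";                 // placeholder Team ID
  const std::string keyId     = "9U5ZXJ4Y65";                 // Key ID from the Developer Console
  const std::string serviceId = "it.iachieved.myweatherapp";  // Services identifier

  const std::string privateKey = readFile("AuthKey_9U5ZXJ4Y65.pem");
  const std::string publicKey  = readFile("AuthKey_9U5ZXJ4Y65.pub");

  const auto now = std::chrono::system_clock::now();

  // Build and sign the JWT: the header gets kid, typ, and id; the payload gets iss, sub, iat, exp
  auto token = jwt::create()
    .set_type("JWT")
    .set_key_id(keyId)
    .set_header_claim("id", jwt::claim(teamId + "." + serviceId))
    .set_issuer(teamId)
    .set_subject(serviceId)
    .set_issued_at(now)
    .set_expires_at(now + std::chrono::hours{1})
    .sign(jwt::algorithm::es256(publicKey, privateKey, "", ""));

  std::cout << token << std::endl;  // this is your bearer token
  return 0;
}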

WeatherKit REST API Documentation

The complete REST API documentation for WeatherKit can be found at https://developer.apple.com/documentation/weatherkitrestapi.

Yes You Can Run Homebrew on an M1 Mac

One of the reasons I took the plunge and bought an M1-based Mac was to test out its performance and suitability for development work. An essential developer application on the Mac is Homebrew, the “missing package manager for macOS.” Although you cannot install Homebrew today to manage ARM-compiled packages, you can install Homebrew in the Rosetta environment and leverage the x86 packages.

I can’t take credit for coming up with the idea; that goes to OSX Daily. I do have a few improvements to share, though. I’m going to use iTerm2, and so should you.

Right click on your iTerm application icon and select Duplicate. Rename iTerm copy to something like iTerm x86 or iTerm Rosetta.

Now, right click on your new iTerm icon and click on Get Info and then check Open using Rosetta.

Open your iTerm Rosetta application and install Homebrew! Once installed you should be able to use brew install in the iTerm Rosetta application and use those installed packages seamlessly between the two environments. You won’t, however, be able to use brew install in your arm64 iTerm application (you’ll get Error: Cannot install in Homebrew on ARM processor in Intel default prefix).

Keeping Track

If you’re working in both x86 and ARM environments on your M1 Mac it is easy to lose track of which iTerm you are in. We can use a little zsh-foo to help us out. Add the following snippet to the end of your ~/.zshrc:
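
(The snippet itself didn’t make it into this archive; a sketch of the idea, using iTerm2’s proprietary SetColors escape code with Intel’s blue as the hex color, is below.)

if [[ "$(arch)" = "i386" ]]; then
  # Running under Rosetta: set the iTerm2 background to Intel blue
  printf '\e]1337;SetColors=bg=0071c5\a'
fi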

This little snippet takes advantage of iTerm2’s custom escape codes by setting the background to Intel blue if the arch command returns i386 (which it does if running in Rosetta).

We can do one better, however, by changing our iTerm Rosetta icon. Create your own icon, or, right-click the image below and select Copy Image. Then right-click your iTerm Rosetta application and select Get Info. In the upper-left click on the icon until you see a highlight around it and then paste the new icon image (Command-V).

Launch your iTerm Rosetta application and it’s much easier to distinguish between it and your “native” version.

I2C with the SiFive HiFive1 Rev B

Hey kids! Today we’re going to take a look at the SiFive HiFive1 Rev B and Freedom Metal I2C API.

I am going to be using a classic EEPROM from National Semiconductor, the NM24C17. The NM24C17 is a 16 kilobit (2K) EEPROM that can be written to and read from using I2C.

If you have one of these EEPROMs lying around (and who doesn’t?) and want to use it with your HiFive1 board, you’ll also need:

  • the datasheet
  • a breadboard
  • breadboard wires
  • two 4.7kΩ pull-up resistors

You might also want to have handy a digital logic analyzer such as the Logic 8 from Saleae.

The I2C circuit is a simple one, but it is important to note that the NM24C17 EEPROM does not come with I2C pull-up resistors, so we need to add them in our circuit.

Assemble everything on the breadboard, connecting the EEPROM to the HiFive1 I2C pins and providing power.

Reading the Datasheet

When working with I2C devices it is so important to read through the datasheet once or twice. Or ten times. Datasheets can be dense and intimidating, but I have rarely come across an issue I was troubleshooting that didn’t end up being caused by not reading the datasheet closely.

Writing

Okay, let’s start coding some I2C with Freedom Metal. We’ll start with a basic shell.

Working with I2C in Freedom Metal starts with including the <metal/i2c.h> header file and obtaining a pointer to the I2C device with metal_i2c_get_device. For the HiFive1 Rev B board there is only one device to get, and it’s at index 0. Once you have a pointer to the I2C device, initialize it with metal_i2c_init. We’ll configure our device for 100 kbits/sec (I2C “full speed”) and as the master.
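
The original listing isn’t reproduced here, but based on that description a minimal skeleton might look like the following (the init signature and METAL_I2C_MASTER constant are assumptions against the Freedom Metal I2C header):

#include <metal/i2c.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  // The HiFive1 Rev B has a single I2C controller, at index 0
  struct metal_i2c *i2c_device = metal_i2c_get_device(0);
  if (i2c_device == NULL) {
    printf("Unable to get I2C device\n");
    return 1;
  }

  // 100 kbit/sec ("full speed"), acting as the bus master
  metal_i2c_init(i2c_device, 100000, METAL_I2C_MASTER);

  // reads and writes to the EEPROM go here

  return 0;
}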

Now, let’s look at our first write function, which will be to write a sequence of bytes to the EEPROM at a given address. This code is very specific to the way the NM24C17 EEPROM functions. We will be using the metal_i2c_write function which takes as its arguments:

  • a pointer to the I2C controller device on the RISC-V chip
  • the address of the I2C bus device to talk to
  • the length of the message to send to the bus device
  • the message to send
  • a flag indicating whether or not to send the I2C stop bit

The first argument will be our struct metal_i2c* i2c_device variable, but the address of the EEPROM on the bus is interesting.

At first blush it appears the address would be 0xa0, since the first four bits to transmit are 1 0 1 0 and for the NM24C17 device the 3 page address bits appear to be all 0 (appear is the operative word). That leaves us with the R/W bit, which for a write would be 0. 1010 0000b, or 0xa0, right? Wrong. The R/W bit is not part of the device address here, which leaves us with 1010000b, or 0x50.

The message to send to the EEPROM consists of two bytes: the memory address in the EEPROM to write to and the value to write. For simplicity we’ll just write the value 0xab at the address 0x00.

The final argument to metal_i2c_write is to indicate whether or not to signal an I2C stop bit upon completion. Since the stop bit is required for us to write this data to the EEPROM we will use METAL_I2C_STOP_ENABLE.
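
Putting that together, and continuing from the skeleton above, a sketch of the write might be:

uint8_t writebuf[2];

writebuf[0] = 0x00;  // EEPROM memory address to write to
writebuf[1] = 0xab;  // the value to write there

// 0x50 is the EEPROM's bus address; send both bytes and finish with a stop bit
metal_i2c_write(i2c_device, 0x50, sizeof(writebuf), writebuf, METAL_I2C_STOP_ENABLE);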

One thing I’ve found to be true about I2C is that if it works, it works. If it doesn’t, you better have a digital logic analyzer on hand to look at things.

Reading

Now let’s read the data that we wrote back in. This requires two function calls: metal_i2c_write and metal_i2c_read.
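
A sketch of that sequence, again using the same i2c_device handle (assuming metal_i2c_read’s argument order mirrors metal_i2c_write’s):

uint8_t readbuf[1];

// First, write the EEPROM memory address we want to read from,
// but do NOT send a stop bit; the read that follows continues the transaction
readbuf[0] = 0x00;
metal_i2c_write(i2c_device, 0x50, 1, readbuf, METAL_I2C_STOP_DISABLE);

// Then read one byte back; the stop bit ends the transaction
metal_i2c_read(i2c_device, 0x50, 1, readbuf, METAL_I2C_STOP_ENABLE);

printf("Read 0x%02x from EEPROM address 0x00\n", readbuf[0]);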

Notice above that the first step to reading a byte from the EEPROM is to write out the address to be read from, followed by a read. There is only one stop bit in this sequence:

Notice the use of METAL_I2C_STOP_DISABLE; this instructs the I2C controller not to signal a stop bit at the conclusion of the write.

We make double use of the readbuf array by initializing it to the address in the EEPROM we want to read from, and then to hold the data read in.

Writing and Reading Multiple Bytes

Writing and reading one byte at a time to our EEPROM is a bit tedious, so let’s make use of the multiple-byte write.

In this example we send the device address (again, 0x50 for the EEPROM), followed by an address to write to in the EEPROM, and then up to 16 bytes of data.

Before we get to writing again, I’ve made mention that 0x50 is the I2C device address of the EEPROM. While it is, that isn’t the whole story. That is the address for page block 0 of the device, but it supports 8 page blocks. Selecting the page block is done with the lower nibble of the device address. For example, page block 1 can be addressed at 0x51, page block 2 at 0x52, and so on. Each page block is 2 kilobits, or 256 bytes.

At any rate, let’s stick with page block 0 for now and write 16 bytes:
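
Something along these lines (again a sketch, reusing the i2c_device handle):

uint8_t pagebuf[17];

pagebuf[0] = 0x00;  // EEPROM address to start writing at

// Fill the remaining 16 bytes with recognizable data
for (int i = 0; i < 16; i++) {
  pagebuf[i + 1] = i;
}

metal_i2c_write(i2c_device, 0x50, sizeof(pagebuf), pagebuf, METAL_I2C_STOP_ENABLE);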

Of course, we are actually writing 17 bytes out to the EEPROM, the first of which is the address we want to write to.

Reading the data back in can look like this:
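
For example (sketch):

uint8_t inbuf[16];

// Set the EEPROM address to read from, holding off on the stop bit...
inbuf[0] = 0x00;
metal_i2c_write(i2c_device, 0x50, 1, inbuf, METAL_I2C_STOP_DISABLE);

// ...then read 16 sequential bytes back into the same buffer
metal_i2c_read(i2c_device, 0x50, sizeof(inbuf), inbuf, METAL_I2C_STOP_ENABLE);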

A Smarter API

After understanding the basics of reading and writing to the NM24C17 it’s time to write an API to encapsulate the nuts and bolts. Our header file looks like this:
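
The original header isn’t reproduced here; a sketch of the idea, with illustrative names, might be:

#ifndef NM24C17_H
#define NM24C17_H

#include <metal/i2c.h>
#include <stddef.h>
#include <stdint.h>

#define NM24C17_BASE_ADDRESS 0x50
#define NM24C17_PAGE_SIZE    16

typedef struct {
  struct metal_i2c *i2c;            // I2C controller handle
  uint8_t page_block;               // page block 0-7, ORed into the device address
  uint8_t addr;                     // EEPROM address to read from or write to
  uint8_t buf[NM24C17_PAGE_SIZE];   // data, laid out immediately after addr
  size_t  len;                      // number of bytes of buf to use
} eeprom_request_t;

int eeprom_write(eeprom_request_t *request);
int eeprom_read(eeprom_request_t *request);

#endif // NM24C17_H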

Our structure is arranged such that the addr byte is positioned immediately prior to buf. This organization allows us to take advantage of the C memory layout of the structure when writing to the EEPROM. For example:
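
A sketch of what the write could look like underneath:

int eeprom_write(eeprom_request_t *request) {
  // Base address ORed with the page block selects which 256-byte block we address
  unsigned int device_address = NM24C17_BASE_ADDRESS | request->page_block;

  // addr sits immediately before buf, so a single contiguous write sends the
  // EEPROM address byte followed by the data
  return metal_i2c_write(request->i2c, device_address,
                         request->len + 1, &request->addr,
                         METAL_I2C_STOP_ENABLE);
}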

Our device address is the base address of the EEPROM (0x50) ORed with the page block number. The length of the message to write is the length of the data buffer the user wants to write plus the address byte. Writing starts at the address byte and continues into the buffer.

Remember the exhortation to read the datasheet, and read it several more times? Here is what happens if you start a write on an address not evenly divisible by 16. Notice that our write address starts at 0x22 in page block 1. Sixteen bytes are presumably written, but when trying to read them back in, the last two bytes read are 0xff. Hmm.

From the National Semiconductor datasheet definitions:

  • PAGE – 16 sequential addresses (one byte each) that may be programmed during a “Page Write” programming cycle.
  • PAGE BLOCK – 2,048 (2K) bits organized into 16 pages of addressable memory. (8 bits) x (16 bytes) x (16 pages) = 2,048 bits

But wait! The Fairchild-printed version of this EEPROM’s datasheet says a bit more:

To minimize write cycle time, NM24C16/17 offer Page Write feature, by which, up to a maximum of 16 contiguous bytes locations can be programmed all at once (instead of 16 individual byte writes). To facilitate this feature, the memory array is organized in terms of “Pages.” A Page consists of 16 contiguous byte locations starting at every 16-Byte address boundary (for example, starting at array address 0x00, 0x10, 0x20 etc.)

Like I said, always read the datasheet and sometimes you have to read two of them to get the whole story.

Recap

There are four basic functions needed to use I2C with Freedom Metal:

  • metal_i2c_get_device – obtain a pointer to the underlying I2C device on the microcontroller
  • metal_i2c_init – initialize the I2C device speed and mode
  • metal_i2c_write – address and send data on the bus
  • metal_i2c_read – address and read data from the bus

It really is that simple!

Get the Code

I’ve recently started using PlatformIO to develop on my HiFive1. If you haven’t checked it out I recommend doing so; it’s very easy to install and start making use of your board right away without having to manually download toolchains and JTAG interfaces. I’ve posted a PlatformIO-based project on GitHub for working with the NM24C17.

Exploring HiFive1 Rev B GPIOs with PlatformIO

In our last post we looked at the GPIO pins of the SiFive HiFive1 Rev B board. We’ll continue doing so in this one, but with PlatformIO along for the journey. PlatformIO bills itself as “A new generation ecosystem for embedded development” and aims to really simplify pulling together all of the tools needed for embedded development. In our introduction to the HiFive1 Rev B we outlined all of the packages you needed to install and configure to get started with the board: cross-compilers, JTAG daemons, remote debug tools, SDKs, and so on. What if we could skip all that, not just for one microcontroller platform, but for hundreds of them? Enter PlatformIO.

Installing PlatformIO

PlatformIO, while compatible with a number of popular editors (Sublime Text, Atom, etc.), really shines with Visual Studio Code. While VSCode has features I don’t find appealing (horizontal bars on my code), it is very configurable and I can turn them off and enable Emacs keybindings. After some time with VSCode and PlatformIO, I really began enjoying how straightforward it was to work with the HiFive1 Rev B.

Installing PlatformIO is simple with Visual Studio Code. Open the Extensions panel and search:

Click on Install. You’ll see a “landing page” of sorts for PlatformIO and a window open up on the bottom indicating it is being installed and configured.

Once PlatformIO is installed we’ll create a project for use with our HiFive1 Rev B board. Click on the PlatformIO Projects button and then Create New Project.

This is where PlatformIO really shines. There are over 800 boards to choose from, and depending on the board, multiple frameworks to work with. For example, with the HiFive1 Rev B you can use either the Freedom E SDK (which we use), or the Zephyr RTOS.

Creating a New File

Obviously we’ll need to write some code, so in your project right-click on src and select New File.

Visual Studio Code will prompt you to name the file in the project explorer; ours is named simply main.c. We’ll do the GPIO equivalent of a Hello World by blinking the onboard green LED:
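
The original main.c isn’t shown here; a sketch using the Freedom Metal GPIO API might look like this. The green LED is on GPIO 19, and because of how it’s wired to 3.3V, enabling the pin as an output is enough to light it.

#include <metal/gpio.h>

#define GREEN_LED_GPIO 19

int main(void) {
  struct metal_gpio *gpio = metal_gpio_get_device(0);

  while (1) {
    // Enabling the pin as an output turns the LED on...
    metal_gpio_enable_output(gpio, GREEN_LED_GPIO);
    for (volatile int i = 0; i < 1000000; i++) { }  // crude busy-wait delay

    // ...and disabling it turns the LED off
    metal_gpio_disable_output(gpio, GREEN_LED_GPIO);
    for (volatile int i = 0; i < 1000000; i++) { }
  }

  return 0;
}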

Building and Uploading

PlatformIO automates and streamlines installation of the correct toolchain, debugger, SDK and so on. To build your project there is a checkmark on the bottom of the editor window; if you hover over it you’ll see the label PlatformIO: Build. Click on it and scroll through the output to see what PlatformIO is doing; I’ve found it very helpful to read everything that is going on.

Uploading your project to the HiFive1 Rev B is as simple as clicking the right arrow that is labeled PlatformIO: Upload. Again, if this is your first time using PlatformIO on a given environment you can see it’s installing what’s necessary to upload to your board.

Here is a great example of what PlatformIO is doing in the background:

While it’s possible to do this by hand (and we have!), how nice it is to have it done automatically for you.

Serial Output

PlatformIO does have the ability to open a serial monitor to your HiFive1 Rev B board, though you may have to do a bit of configuration here. Open the project’s platformio.ini and ensure that your environment configuration has monitor_speed=115200 set. This will tell PlatformIO to use 115200 baud to communicate on the serial port. Then open the serial monitor with the “electric plug” icon and choose the first usbmodem port you see.
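
For reference, a minimal platformio.ini might look like the following; the environment name, platform, and board values are whatever PlatformIO generated when you created the project (yours may differ), with only monitor_speed added by hand:

[env:hifive1-revb-b]
platform = sifive
framework = freedom-e-sdk
board = hifive1-revb-b
monitor_speed = 115200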

I typically prefer to use screen on macOS for this – the console output in Visual Studio Code is nice, but I’m used to screen and like having it in a separate terminal window altogether. For example, typing screen /dev/cu.usbmodemIDENTIFIER 115200 (where IDENTIFIER will be specific to your machine) will bring up the serial console to the board.

Testing the GPIO Pins

Okay, PlatformIO is ready to go and we’re going to take a look at the GPIO pins on the HiFive1 Rev B again. I’ve learned a bit since writing about them in this post. For starters, digital pin 14 on the header is not connected and there is no mapping of a pin from the GPIO device.

In addition, pins 11, 12, and 13 on the header correspond to the MOSI, MISO, and SCK signals for the SPI device; since they are tied to SPI, writing a digital one to them has no effect. On the other hand, you can write a 1 to pin 10 (SPI SS) and get an output.

That said, you can use the I2C pins (digital pins 18 and 19) and write a digital one or zero to them as long as we’re willing to give them up as I2C pins.

Pins 15 and 16 are interesting. If you look at the HiFive1 Rev B schematic for J4 (the Arduino 6 pin block on the bottom left of the ESP32) you see GPIO 9 and GPIO10 map to DIG15 and DIG16. But then going over here we see that:

  • GPIO9 is tied to a line labeled SPI_CS2 which comes from the ESP-SOLO-1
  • GPIO10 is tied to a line labeled WF_INT which comes from the ESP-SOLO-1

The schematic notes “Solder across SJ1 to connect GPIO_9 to SPI_CA2” (I think that is a typo). On the physical board I cannot find SJ1 or SJ2, so I’m assuming these connections are present.

So, what does that give us in the end?

  • DIG2 – easy to use to drive a digital output
  • DIG3 – easy to use to drive a digital output, but doing so interferes with the onboard LED
  • DIG4 – easy to use to drive a digital output
  • DIG5 – easy to use to drive a digital output, but doing so interferes with the onboard LED
  • DIG6 – easy to use to drive a digital output, but doing so interferes with the onboard LED
  • DIG7 – easy to use to drive a digital output
  • DIG9 – easy to use to drive a digital output
  • DIG10 – easy to use to drive a digital output, but doing so interferes with using SPI
  • DIG17 – easy to use to drive a digital output
  • DIG18 – easy to use to drive a digital output, but interferes with I2C
  • DIG19 – easy to use to drive a digital output, but interferes with I2C

In the next post I’m going to be building an I2C circuit that contains 4 EEPROMs, each of which will have the same address. To do so I want to use two digital output pins to specify which I2C chip to work with, so to make life easy I want to choose pins that won’t interfere with anything else. DIG2, DIG4, DIG7, DIG9, and DIG17 are my best bets, and since DIG2 and DIG4 are on the same side of the board I’ll start there.

Conclusion

PlatformIO is pretty damn slick if you ask me and worth checking out if you are just starting with the HiFive1 Rev B. While I still recommend that you understand cross-compilers and the underlying tools that go into supporting such a platform, it really does take the hassle out of downloading and configuring everything!

If you have a HiFive1 Rev B board, try checking out our GPIO project. I have no doubt that you’ll be able to build and upload it with minimal fuss using PlatformIO.

HiFive1 Rev B GPIO Pins

Let’s make use of the HiFive1 Rev B schematics to map out the GPIO controller device pins. Of particular interest is sheet 3, and the following components of the schematic in section D1:

We’re going to use this as our starting point. Three GPIO lines labeled 19, 21, and 22 with LEDs on the lines.

Editor’s Note: This write-up on the SiFive HiFive1 Rev B assumes you have one and are familiar with powering it up, connecting to the serial port, and uploading applications. See our last post if you need to orient yourself with the environment.

Using Metal GPIO

If you haven’t done so already, familiarize yourself with Freedom Metal and the API documentation because we’re going to make use of the GPIO functions.

Recall in the schematic above that the onboard LEDs are tied to GPIO 19 (green LED), GPIO 21 (blue LED), and GPIO 22 (red LED). While the Metal LED API could be used here, we’re going to work directly with the GPIO functions to prove to ourselves that the above GPIO pins are correct, and also check out the 8 color combinations the LED can produce.

Our basic loop is going to cycle from 0 to 7 and turning on the red LED if bit 1 is set, the green LED if bit 2 is set, and the blue LED if bit 3 is set. This should cause our HiFive1 on-board LED to cycle through the following color chart:

Okay, let’s get the basics down. The GPIO can be accessed by obtaining a struct metal_gpio* using the metal_gpio_get_device function, like this:
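
(The snippets in this section are sketches against the Freedom Metal GPIO API, since the original listings aren’t reproduced here.)

struct metal_gpio *gpio = metal_gpio_get_device(0);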

Now that we have the GPIO device itself, let’s enable GPIO 19 as an output pin. In the schematic, note that the other side of the LED is tied to 3.3V. In other words, there is no need to set GPIO 19 high or low here; if the pin is enabled as an output, the LED turns on.

To set GPIO 19 as an output:
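
Presumably a single call:

metal_gpio_enable_output(gpio, 19);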

To disable the output of the pin:
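
And correspondingly:

metal_gpio_disable_output(gpio, 19);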

Now, I think this works for the onboard LED because of the way it is wired to the 3.3V supply. In fact, for Metal GPIO pin 19, if you try to write a 1 to it you actually turn off the onboard LED and turn on header pin 3. Speaking of the header pins…

Arduino Header and Pins

Going back to the board schematic, this time on sheet 3 in sections A5, A6, B5, B6, and C5, C6.

Now this is interesting! Our Arduino headers as well have notes on which GPIOs are digital-only (high or low), those that are PWM-capable, and which pins are shared by SPI, I2C, and UART. All we need to do is put this into a nice table, and the following is (I think) a complete GPIO mapping for the SiFive HiFive1 Rev B board.

For example, if you wanted to start your project with blinking an LED on a breadboard, you might start with one of the “standard” digital input/output pins on the header, say pin 7. The chart above (which is derived from the schematic) shows that pin 7 on the header is tied to pin 23 of the GPIO controller. So to use header pin 7 we might write:
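
Something like (sketch):

metal_gpio_enable_output(gpio, 23);  /* header pin 7 */
metal_gpio_set_pin(gpio, 23, 1);     /* drive it high */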

Or, we could set at the top of our file something along the lines of #define PIN7 23. Just a thought.

Notice once again that this mapping table shows that there are:

  • 20 digital pins
  • 6 PWM-capable pins
  • 1 SPI
  • 1 I2C
  • 1 UART

Unfortunately SiFive’s HiFive1 Rev B page indicates there are only 19 digital pins and there are 9 PWM-capable ones, so there’s something I’m missing. I think at least for PWM, header pins 17, 18, and 19 on the 6-pin header (to the left of the ESP32) are PWM-capable, though that isn’t clear from the schematic. I’ve asked about that here on the SiFive Forums and will reconcile this post once I learn more.

Getting Some Code

This code is a work in progress and is modeled around the Arduino digital IO library. In digitalio.c you’ll see my take on the functions pinMode and digitalWrite. There is also a delay function to write the ever-popular blinking lights demo. All of this code is MIT licensed with portions copied from the SiFive Freedom E SDK.

Let’s look at how we might use these functions to put the onboard LED through all of the possible colors.

main.c:
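
The main.c in the repository uses the digitalio.c helpers; since that listing isn’t reproduced here, this sketch drives the Freedom Metal GPIO API directly to the same effect (the color names are just the usual red/green/blue mixes):

#include <stdio.h>
#include <metal/gpio.h>

#define RED_GPIO   22
#define GREEN_GPIO 19
#define BLUE_GPIO  21

/* The onboard LEDs are wired to 3.3V, so enabling a pin as an output
   lights that color and disabling it turns it off. */
static void led(struct metal_gpio *gpio, int pin, int on) {
  if (on) {
    metal_gpio_enable_output(gpio, pin);
  } else {
    metal_gpio_disable_output(gpio, pin);
  }
}

int main(void) {
  static const char *colors[8] = {
    "Off", "Red", "Green", "Yellow", "Blue", "Magenta", "Cyan", "White"
  };

  struct metal_gpio *gpio = metal_gpio_get_device(0);

  while (1) {
    for (int i = 0; i < 8; i++) {
      printf("%s\n", colors[i]);
      led(gpio, RED_GPIO,   i & 0x1);
      led(gpio, GREEN_GPIO, i & 0x2);
      led(gpio, BLUE_GPIO,  i & 0x4);
      for (volatile int d = 0; d < 2000000; d++) { }  /* crude delay */
    }
  }

  return 0;
}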

Recall the GPIO pins that map to the on-board LEDs are:

  • Red – GPIO_22
  • Green – GPIO_19
  • Blue – GPIO_21

We iterate from 0 to 7 in a loop, test the bits, and set the pins as outputs (since we’re using the onboard LEDs) or disable them. The result is a nice rainbow light show from the HiFive1. If you connect to the serial port you’ll also see the colors being printed out as they are cycled through:

Lagniappe

The code that’s actually uploaded to GitHub includes stepping through the binary representation of our LED rainbow and displaying that if you hook up a few LEDs to your breadboard. Check out the code and look closely here at header pins 2, 4, and 7.

Closing Thoughts and What’s Next

Nothing is more fun than working with microcontrollers and embedded systems, and the HiFive1 is no exception. If you take a look at gpios.h in GitHub you can see that I’m looking at how best to add additional #defines for the various pins. We will see.

An Introduction to the HiFive1 Rev B and RISC-V

Today I’d like to introduce you to a new development board, the HiFive1 Rev B. Equipped with a RISC-V Freedom E310 microcontroller and designed with an Arduino Uno “form factor”, the HiFive1 Rev B is a neat little board that I hope to learn and develop for.

My HiFive1

There is a lot of material out there about RISC-V and how it is going to change the future of CPUs, but what attracted me to it was the notion of a completely open standard Instruction Set Architecture (ISA). That and I think working with new hardware and development environments is cool.

Getting Started

The Getting Started Guide is crucial to read. If you’re anything like me you want to dig in as quickly as possible with as little reading as possible, but trust me, reading the guide first is very useful.

If you’ve ordered it from Crowd Supply you don’t get anything but the HiFive1 Rev B board itself, so you’ll need a trusty USB-A male to USB micro-B male cable. This connection can be used for both serial communication and power. Of course, if you only have a system with USB-C you’ll need some set of adapters to get to USB micro-B.

For the host platform we will be using a MacBook Pro (Retina, 13-inch, Early 2015) running macOS 10.15 (Catalina). Hopefully, if you’re reading this with the intention of working on the HiFive1 with your MacBook Pro, you’ll already have the best terminal program ever installed, but if you don’t, the regular Terminal.app works.

Let’s see our boot screen first:

To see this boot screen you’ll need to use a serial terminal program. macOS is going to present the HiFive1 as two USB modem devices in the /dev directory.

The first cu.usbmodem device presented will be the HiFive1, and my suggestion is to open an iTerm and use screen to connect to it. 115200 bps is your speed and the default 8N1 settings should work, so in our case screen /dev/cu.usbmodem0009790151821 115200 is all we had to type in the terminal.

Time to Develop

There are several key pieces of software you’ll need to install on your Mac to begin developing on the HiFive1.

  • a toolchain (i.e., the compiler, assembler, linker, and associated libraries)
  • OpenOCD On-Chip Debugger
  • the Freedom E SDK
  • Segger J-Link OB debugger

We’ll take each in turn.

Installing the Toolchain and OpenOCD

The reference toolchain is a GNU Embedded Toolchain — v2019.08.0 and can be downloaded directly from SiFive. Go to the Boards page, scroll down until you see the section labeled Prebuilt RISC-V GCC Toolchain and Emulator.

Download both the GNU toolchain and OpenOCD packages and untar both into a suitable installation location. I prefer to keep things like this in the /opt directory, so we’ll do the same here in /opt/riscv.

Before going any further ensure that the compiler can run:

You may have received the error message "riscv64-unknown-elf-gcc" cannot be opened because the developer cannot be verified. Now, this may be controversial, but I don’t hold to my OS telling me what software I can and can’t run, so let’s get rid of that silliness with spctl:

Let’s try again:

Much better.

Get the SDK

We have a HiFive1 board which uses the Freedom E310 chip, so we’ll want the Freedom E SDK. For this I like to keep the SDK in my projects directory.

Note: You must use --recursive here to check out all of the required packages.

Now, let’s compile that first program!

In the top-level directory freedom-e-sdk is a set of Makefiles that make it easy to compile and generate an image suitable for uploading to the HiFive1 board. In this case, PROGRAM is the name of a subdirectory in freedom-e-sdk/software and TARGET is the board you have.

If this is the first time through you’re going to see a bunch of gibberish about Python, pip3, virtualenv, etc:

And, if you receive an error regarding the C compiler cannot create executables, you need to set your RISCV_PATH environment variable to point to the correct toolchain, e.g., export RISCV_PATH=/opt/riscv/riscv64-unknown-elf-gcc-8.3.0-2019.08.0-x86_64-apple-darwin.

It’s often a good idea to put the various environment variables in a file such as sifive.env and then source the script into your environment.

sifive.env:
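
(The original file isn’t reproduced here; at a minimum it would export the toolchain path, and optionally the OpenOCD path if you installed it. The OpenOCD directory name below is a placeholder.)

export RISCV_PATH=/opt/riscv/riscv64-unknown-elf-gcc-8.3.0-2019.08.0-x86_64-apple-darwin
export RISCV_OPENOCD_PATH=/opt/riscv/riscv-openocd-<version>-x86_64-apple-darwin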

If everything worked properly you’ll see something like this at the end of compiling.

Installing J-Link

Now, we’re going to upload this to our board, but we will need the Segger J-Link OB debugger. If it isn’t installed you’ll see something like this when trying to use the upload target.

To get J-Link head over to the download page and grab the J-Link Software and Documentation pack for macOS.

Install per the instructions, and once that is done you can go back to your SDK directory and type:

If everything is installed correctly and the board is attached you’ll see something like:

And that’s it! If you have a terminal window with a connection to the serial output of the board you’ll see Hello, World!.

Your Own C Program

In the directory freedom-e-sdk there is a software folder that has an example template. We’ll use that to create our own ASCII art banner.

Copy the template with something like cp -R software/empty software/leo-welcome and then edit the software/leo-welcome/Makefile to change the PROGRAM name to leo-welcome. Open main.c in the same directory and replace the contents with:
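
(Your banner will be your own creation; this placeholder sketch just shows the shape of the program.)

#include <stdio.h>

int main(void) {
  printf("\n");
  printf("**********************************\n");
  printf("*  Welcome to the HiFive1 Rev B  *\n");
  printf("**********************************\n");
  return 0;
}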

Compile and upload with make PROGRAM=leo-welcome TARGET=sifive-hifive1-revb upload and behold.

Some Assembly

I’ll be honest, I struggled with this part, but that’s primarily due to the school of hard knocks with piecing together the calling convention for RISC-V. Let’s review a few examples.

Hello World in RISC-V Assembly

Again, start off with the example folder inside freedom-e-sdk with something like cp -R software/empty software/helloasm.

In this case we’re going to write Hello World in assembly so delete software/helloasm/main.c and instead create a file called main.S with the following content.

main.S:
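
(The original listing didn’t survive the trip here; this is a sketch that matches the description below. Since main itself calls printf, the return address is saved and restored around the call.)

.section .text
.globl main

main:
    addi  sp, sp, -16      # prologue: make room on the stack
    sw    ra, 12(sp)       # save our return address

    la    a0, hellomsg     # a0 = address of the string, printf's first argument
    call  printf

    lw    ra, 12(sp)       # epilogue: restore the return address
    addi  sp, sp, 16
    li    a0, 0            # return 0 from main
    ret

.section .rodata
hellomsg:
    .string "Hello, World!\n"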

In RISC-V the a-registers will contain our procedure arguments, the procedure here being C printf. We use la (load address) to bring the address of the hellomsg label into a0 and then call the printf function. All of these, in reality, are pseudoinstructions.

The Makefile in the template will pick up files with a .S extension, so make PROGRAM=helloasm upload will assemble, link, and upload our file to the HiFive1.

Counting Down

Finally, let’s look at a countdown routine with a max and min. Here things are a bit more complicated as we are going to make use of a prologue and epilogue in our routine. The prologue saves the return address on the stack, and the epilogue loads the address back into the ra register such that the instruction ret will branch back to our caller.

countdown.S:
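
(Again a sketch rather than the original listing, and it assumes the routine prints each value via printf on the way down from a0 to a1. Callee-saved s-registers hold the counter and the floor across the printf calls.)

.section .text
.globl countdown

countdown:
    addi  sp, sp, -16      # prologue: save ra and the s-registers we use
    sw    ra, 12(sp)
    sw    s0, 8(sp)
    sw    s1, 4(sp)

    mv    s0, a0           # s0 = current value (the max)
    mv    s1, a1           # s1 = the min

1:
    blt   s0, s1, 2f       # stop once we've gone below the min
    la    a0, countfmt
    mv    a1, s0
    call  printf           # print the current value
    addi  s0, s0, -1
    j     1b

2:
    lw    ra, 12(sp)       # epilogue: restore and return to the caller
    lw    s0, 8(sp)
    lw    s1, 4(sp)
    addi  sp, sp, 16
    ret

.section .rodata
countfmt:
    .string "%d\n"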

We can call this function with C like this:

main.c:
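
(A sketch of the caller; the assumption here is that countdown takes the starting and ending values as its two arguments.)

#include <stdio.h>

// Implemented in countdown.S
extern void countdown(int max, int min);

int main(void) {
  printf("Counting down!\n");
  countdown(10, 0);
  return 0;
}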

Compile and upload!

Good References

There are a lot of great references on RISC-V, its instruction set architecture, and so on. I’ve compiled a few of my favorites here.

Closing Thoughts and What’s Next

That wraps it up for this first look at the SiFive HiFive1 Rev B board. Of course, we haven’t even talked about the ESP32-based wireless capabilities of the board, haven’t talked about the Freedom Metal library or any of the stuff that will accelerate our development. Perhaps that’s next!

And, if you’ve made it this far, you might want to check out our next post on exploring the HiFive1 GPIO pins.

TLS 1.3 with NGINX and Ubuntu 18.10

TLS 1.3 is on its way to a webserver near you, but it may be a while before major sites begin supporting it. It takes a bit of time for a new version of anything to take hold, and even longer if it’s the first new version of a protocol in nearly 10 years.

Fortunately you don’t have to wait to start experimenting with TLS 1.3; all you need is OpenSSL 1.1.1 and open source NGINX 1.15 (currently the mainline version), and you’re good to go.

OpenSSL

OpenSSL 1.1.1 is the first version to support TLS 1.3 and its ciphers:

  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
  • TLS_AES_128_GCM_SHA256
  • TLS_AES_128_CCM_8_SHA256
  • TLS_AES_128_CCM_SHA256

Since 1.1.1 is available out-of-the-box in Ubuntu 18.10 Cosmic Cuttlefish (as well as FreeBSD 12.0 and Alpine 3.9), we’ll be using it for this tutorial. Note that 18.10 is not an LTS release; the decision was made to backport OpenSSL 1.1.1 to 18.04 (Bionic Beaver), but it did not make it into 18.04.2. We like to make things easy on ourselves, so we launched a publicly available ubuntu-cosmic-18.10-amd64-server-20181018 AMI in AWS.

NGINX

NGINX hardly needs an introduction, so we’ll skip straight to its support for TLS 1.3, which came all the way back in version 1.13.0 (August 2017), well before the protocol was finalized. Combined with OpenSSL 1.1.1, the current open source version (1.15), NGINX is fully capable of supporting TLS 1.3, including 0-RTT.

Current Browser Support for TLS 1.3

TLS 1.3 will be a moving target for months to come, but as of this writing (February 23, 2019), here’s a view of browser support for it. As you can see, it’s pretty limited at this point, with only the Chrome, Brave, and Firefox browsers capable of establishing a connection with a TLS 1.3-only webserver.

OS                  Browser                    TLS 1.3 Support   Negotiated Cipher
macOS 10.14.3       Chrome 72.0.3626.109       Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3       Firefox 65.0.1             Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3       Brave 0.59.35              Yes               TLS_AES_256_GCM_SHA384
macOS 10.14.3       Safari 12.0.3 (14606.4.5)  No                NA
macOS 10.14.4       Safari 12.1                Yes               TLS_AES_256_GCM_SHA384
iOS 12.2 (Beta)     Safari                     Yes               TLS_AES_256_GCM_SHA384
Windows 10.0.17134  IE 11.345.17134.0          No                NA
Windows 10.0.17134  Edge 17.17134              No                NA
Ubuntu 18.10        curl/7.61.0                Yes               TLS_AES_256_GCM_SHA384
Ubuntu 18.04.2      curl/7.58.0                No                NA

Note: An astute reader might notice iOS 12.2 (currently in Beta) indeed supports TLS 1.3 and our webserver confirms it!

Testing It Out

To test things out, we’ll turn to our favorite automation tool, Ansible, and our tls13_nginx_cosmic repository of playbooks.

We happened to use an EC2 instance running Ubuntu 18.10, as well as Let’s Encrypt and Digital Ocean‘s Domain Records API. That’s a fair number of dependencies, but an enterprising DevOps professional should be able to take our example playbooks and scripts and modify them to suit their needs.

Rather than return HTML content (content-type: text/html), we return text/plain with interesting information from NGINX itself. This is facilitated by the Lua programming language and the Lua NGINX module. The magic is here in our nginx.conf:
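
The full configuration lives in the repository; a pared-down sketch of the relevant server block might look like this (the server name and certificate paths are placeholders, and NGINX must be built with the lua-nginx-module):

server {
  listen 443 ssl;
  server_name tls13.example.com;

  ssl_certificate     /etc/letsencrypt/live/tls13.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/tls13.example.com/privkey.pem;

  # TLS 1.3 only; you would not do this in production
  ssl_protocols TLSv1.3;

  location / {
    default_type text/plain;
    content_by_lua_block {
      ngx.say("TLS version: ", ngx.var.ssl_protocol)
      ngx.say("Cipher:      ", ngx.var.ssl_cipher)
    }
  }
}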

This results in output similar to:

In all of our tests thus far, TLS_AES_256_GCM_SHA384 was chosen as the ciphersuite.

Qualys SSL Assessment

Now let’s look at what Qualys SSL Server Test has to say about our site.

Not an A+, but notice in our nginx.conf we are not configuring HSTS or OCSP. Our standard Let’s Encrypt certificate is also hampering our score here.

Here’s what Qualys has to say about our server configuration:

The highlight here is that TLS 1.3 is supported by our server, whereas TLS 1.2 is not. This was done on purpose to not allow a connecting client to use anything but TLS 1.3. You definitely would not do this in practice as of February 2019, as the Qualys Handshake Simulation shows. Only Chrome 70 was able to connect to our server.

Closing Thoughts

As a DevOps practitioner, and someone who manages dozens of webservers professionally, I’m quite excited about the release and adoption of TLS 1.3. It will, no doubt, take quite some time before a majority of browsers and sites support it.

If you’re interested more about TLS 1.3 in general, there are a lot of great resources out there. Here are just a few:

Wikipedia has a good rundown of TLS 1.3 features and changes from TLS 1.2.

The folks at NGINX recently hosted a webinar on R17, the latest NGINX Plus version. TLS 1.3 and its benefits were covered in more detail.

Here’s a great tutorial on deploying modern TLS configurations (including 1.3) from Probely.

And, last but not least, Cloudflare has a number of in-depth TLS 1.3 articles.

GPIO Chip Selects with the BeagleBone

In my previous post I made mention that I could not use GPIO-based SPI chip selects on the BeagleBone Black with the default McSPI driver (which is what you’re using if you’re opening /dev/spidev). The exact quote I had run across (much to my chagrin at the time) was

Incidentally, the spi-omap2-mcspi.c driver does not support a GPIO as a chip select.

Well, fortunately, a little persistence paid off (I confess, I can be stubborn at times), and I found this patch from Michael Welling that adds GPIO chip select support to the spi-omap2-mcspi.c driver.

As I mentioned in the article on using the 4-slot mikroBUS cape I was using a workaround to lower the chip select by hand. With Linux 4.3 and one additional patch you no longer need it. I’ve included everything in the BBB-420mA repository on the linux-4.3 branch, but wanted to share some additional tips for those looking to work with the cs-gpios tag in DTS files.

Declare your Pins

The snippet below cannot be taken out of context from the larger DTS overlay, but illustrates that we put our GPIO chip select pin in the same group as the rest of the SPI1 pins.

    bb_spi1_pins: pinmux_bb_spi1_pins {
	pinctrl-single,pins = <
	  0x190 0x33	/* mcasp0_aclkx.spi1_sclk, INPUT_PULLUP | MODE3 */
	  0x194 0x33	/* mcasp0_fsx.spi1_d0, INPUT_PULLUP | MODE3 */
	  0x198 0x13	/* mcasp0_axr0.spi1_d1, OUTPUT_PULLUP | MODE3 */
	  0x19c 0x13	/* mcasp0_ahclkr.spi1_cs0, OUTPUT_PULLUP | MODE3 */
	  0x164 0x12	/* eCAP0_in_PWM0_out.spi1_cs1 OUTPUT_PULLUP | MODE2 */
	  0x098 0x17    /* P8 10 gpio2_4.spi1_cs2 OUTPUT_PULLUP | MODE7 */
	  >;
      };

In particular the “magic” 0x098 0x17 refers to P8.10 on the Black (the 0x098 refers to the pin and the 0x17 to the settings, which in this case we want to be OUTPUT_PULLUP and MODE7), which is gpio68 in /sys/class/gpio. Another name for it, which we’ll see below, is <&gpio2 4 0>. So many ways of referring to the same thing. No wonder it gets confusing.

Next, let’s revisit our DTS for SPI1 on the Black:

  fragment@1 {
    target = <&spi1>;	/* spi1 is numbered correctly */
    __overlay__ {
      status = "okay";
      pinctrl-names = "default";
      pinctrl-0 = <&bb_spi1_pins>;

#address-cells = <1>;
#size-cells = <0>;

      cs-gpios = <0>, <0>, <&gpio2 4 0>;

      spi1@0 {
#address-cells = <1>;
#size-cells = <0>;
	compatible = "spidev";
	reg = <0>;
	spi-max-frequency = <16000000>;
	spi-cpol;
	spi-cpha;          
      };

      spi1@1 {
#address-cells = <1>;
#size-cells = <0>;
	compatible = "spidev";
	reg = <1>;
	spi-max-frequency = <16000000>;
	spi-cpol;
	spi-cpha;
      };

      spi1@2 {
#address-cells = <1>;
#size-cells = <0>;
	compatible = "spidev";
	reg = <2>;
	spi-max-frequency = <16000000>;
	spi-cpol;
	spi-cpha;
      };
    };
  };

The SPI bus documentation is a little unclear, but the format for the cs-gpios tag is:

cs-gpios =  <GPIO>|<0>, <GPIO>|<0>, ..., <GPIO>|<0>;

where

  • <0> means “use the default pin” for this chip select
  • GPIO is expanded to something like &gpio 4 0 to identify the gpio pin

So in our example, the line

cs-gpios = <0>, <0>, <&gpio2 4 0>;

in the DTS overlay is telling the OMAP SPI driver that SPI1 chip select 0 (or CS0 you will see on the BeagleBone Black SPI pinouts) will use its default pin assignment, chip select 1 will also use its default pin assignment, and chip select 2 will use <&gpio2 4 0>, which as we said, is P8.10, aka GPIO 68. If you’re confused as to why I want to use P8.10, that’s because on the mikroBUS cape the clickboard in slot 4 has the chip select line running to P8.10 on the Black.

Host 4 Chip Select is P8.10
           CS0     CS1       CS2
         default default    P8.10
            |       |         |
            |       |         |
            v       v         v
cs-gpios = <0>,    <0>, <&gpio2 4 0>;

See the GPIO overlay structure for details on the naming structure for GPIO pins, keeping in mind that “Exact meaning of each specifier cell is controller specific, and must be documented in the device tree binding for the device.” This is another way of saying that the structure of the GPIO names aren’t necessarily the same between the BeagleBone Black and say, a Raspberry Pi.

Get onto Linux 4.3

I want to make sure it’s absolutely clear, you need to be on Linux 4.3 on your BeagleBone Black for anything below to work properly! The following are quick instructions that, as of the time of this writing (November 30, 2015), should get you a Black that’s good to go.

First, start off with a latest Debian Jessie image from the BeagleBone Black images page. In my case I’m going to use the 2015-11-29 Debian 8.2 Flasher image which will flash my BeagleBone. For details on how to use a flasher image with a BeagleBone, see these instructions.

Once you’ve flashed the Black, upgrade the kernel to latest 4.3 release candidate (for the BeagleBone), which, as of this writing, was 4.3.0-rc7-bone1. Upgrading is quite easy:

apt-get update
apt-get install -y linux-image-4.3.0-rc7-bone1

You’ll probably see something like:

Setting up linux-image-4.3.0-rc7-bone1 (1jessie) ...
Error! Your kernel headers for kernel 4.3.0-rc7-bone1 cannot be found.
Please install the linux-headers-4.3.0-rc7-bone1 package,
or use the --kernelsourcedir option to tell DKMS where it's located
update-initramfs: Generating /boot/initrd.img-4.3.0-rc7-bone1
zz-uenv_txt: Updating /boot/uEnv.txt [uname_r=4.3.0-rc7-bone1]

Ignore the warnings and reboot!

Getting the Kernel Source

Unfortunately, having the 4.3 kernel image on your BeagleBone isn’t enough; you’ll have to patch the spi-omap2 driver, and while I’d love to say it is a piece of cake, well, let’s just say it’s a bit of a doberge cake (pronounced “doebash”).

I highly recommend this be done on an ARM system such as a Wandboard Quad, or soon-to-be-released BeagleBoard X15. You can also accomplish this with a cross-compiling environment though it is a bit more cumbersome. We’re going to assume you’re compiling with a native ARM system, and then look at how to cross-compile in another installment.

To get started, on a sufficiently sized filesystem (>4G free space), let’s get Robert C. Nelson’s awesome bb-kernel repository:

root@beagleboard-x15.local:/mnt/sd# git clone https://github.com/RobertCNelson/bb-kernel

Since we are going to compile the module for a 4.3-rc7-bone1 kernel, let’s check out against that tag in the repository.

root@beagleboard-x15.local:/mnt/sd/bb-kernel# git checkout 4.3-rc7-bone1
Note: checking out '4.3-rc7-bone1'.
...
HEAD is now at 1f00cb4... 4.3-rc7-bone1 release

This repository provides a great out-of-the-box script to check out the Linux source tree and build everything to make a bootstrapped BeagleBoard kernel, but we want to skip a lot of steps and just build our SPI driver. So you will want to edit the build_kernel.sh script here and comment out everything starting with the AUTO_BUILD if statement:

If you read through the script you’ll see that it checks out the Linux source tree from the appropriate git repository and also patches it for building against the BeagleBone Black. If you didn’t comment anything out it would keep running and build the kernel, modules, create a deployment package, etc. We don’t need that here.

Run the modified build_kernel.sh script. This still may take some time (~20 minutes) as it checks out everything, applies patches, etc.

Before we build our module we want to apply this patch from Michael Welling which fixes an issue with trying to claim the GPIO-based chip select line twice:

Make sure you are in the KERNEL directory which was checked out by running build_kernel.sh:

root@beagleboard-x15.local:/mnt/sd/bb-kernel # cd KERNEL
root@beagleboard-x15.local:/mnt/sd/bb-kernel/KERNEL # wget http://dev.iachieved.it/downloads/spi-omap2-mcspi.c.patch
root@beagleboard-x15.local:/mnt/sd/bb-kernel/KERNEL # git apply spi-omap2-mcspi.c.patch

If you run git diff now you’ll see the diff generated by application of the patch.

Before we can recompile our module we need the Module.symvers file from the Linux headers package (pro tip: we wouldn’t need the Linux headers package for Module.symvers if we had built the kernel ourselves, but we’re skipping all of that and building a single module). In the KERNEL directory:

root@BeagleBoard-X15:/mnt/sd/bb-kernel/KERNEL# apt-get install -y linux-headers-4.3.0-rc7-bone1
root@BeagleBoard-X15:/mnt/sd/bb-kernel/KERNEL# cp /usr/src/linux-headers-4.3.0-rc7-bone1/Module.symvers .

Now! Let’s recompile our SPI modules! Three make steps are necessary: prepare, modules_prepare and modules M=drivers/spi:

root@BeagleBoard-X15:/mnt/sd/bb-kernel/KERNEL# make prepare
...
root@BeagleBoard-X15:/mnt/sd/bb-kernel/KERNEL# make modules_prepare
...
root@BeagleBoard-X15:/mnt/sd/bb-kernel/KERNEL# make modules M=drivers/spi
  CC [M]  drivers/spi/spi-dln2.o
  CC [M]  drivers/spi/spi-omap2-mcspi.o
  Building modules, stage 2.
  MODPOST 2 modules
  CC      drivers/spi/spi-dln2.mod.o
  LD [M]  drivers/spi/spi-dln2.ko
  CC      drivers/spi/spi-omap2-mcspi.mod.o
  LD [M]  drivers/spi/spi-omap2-mcspi.ko

Now let’s copy our newly built spi-omap2-mcspi.ko module over to the BeagleBone Black:


root@BeagleBoard-X15:/mnt/sd/bb-kernel/KERNEL# scp drivers/spi/spi-omap2-mcspi.ko root@192.168.1.106:/lib/modules/4.3.0-rc7-bone1/kernel/drivers/spi/
spi-omap2-mcspi.ko                            100%   21KB  21.4KB/s   00:00

Loading the Overlay

Recall from the previous tutorial you will need to compile and install the overlay into /lib/firmware. Assuming you’ve done this (see the previous post for details) and your /boot/uEnv.txt is in order, load the overlay and then try the transmitReceive420.js script in the linux-4.3 branch of the BBB-420mA repository.

Without a patched spi-omap2 module you would expect to see:

/root/BBB-420mA/node_modules/spi/spi.js:65
    return this._spi.open(this.device);
                     ^

TypeError: Unable to set SPI_IOC_WR_MODE
    at TypeError (native)
    at Spi.open (/root/BBB-420mA/node_modules/spi/spi.js:65:22)
...

With the updated SPI module:

root@beaglebone:~/BBB-420mA# node transmitReceive420.js 7 14
Milliamps:                7
Output to Transmitter:  1417.8125
Input from Receiver:    1405
Millamps:               6.937784262
Milliamps:                14
Output to Transmitter:  2859.375
Input from Receiver:    2870
Millamps:               14.051592792

This script was run with a fully populated mikroBUS cape and 4 clickboards! Two 4-20mA transmitters and two 4-20mA receivers. The transmitter in slot 2 is looped to the receiver in slot 1. Likewise, the transmitter in slot 3 is looped to the receiver in slot 4.

4 SPI Clickboards!

Getting the Code

Everything for this tutorial is on the linux-4.3 branch of the BBB-420mA repository. I highly suggest you look at the README and review the instructions there. There are admittedly a lot of moving parts involved here: compiling overlays, adjusting uEnv.txt, understanding SPI and chip select, 4-20mA clickboards, compiling kernel modules, and oh, NodeJS too! Hopefully though when you Google gpio chip select on beaglebone you’ll find your way here and enjoy the read.

Adventures in BeagleBone Black Device Overlays and SPI

Another post to file under Hacking. This one might appear to be all over the map, but the fact is I wanted to take a mikroBUS cape, BeagleBone Black, and four 4-20mA Transmitter Clickboards and build a little quad-port 4-20mA signal generator. Think of it as a hobbyists Fluke 705 Loop Calibrator.

This is without a doubt an ambitious post, and I realize it’s not for everyone (how many folks out there are building 4-20mA simulators with BeagleBones?). Hopefully though there are some interesting tidbits of techniques and code that anyone can lift and use in their own projects.

If you were to build everything here in this tutorial, you’re going to need:

Talking about SPI

This is not a tutorial about SPI, but suffice it to say you can come up to speed on using SPI devices with the BeagleBone Black by reading through these tutorials:

The upshot is that SPI, or Serial Peripheral Interface Bus, is a de facto standard for reading/writing data over a simple bus. There are hundreds of different devices that utilize a SPI interface, the 4-20mA T click that we’re interested in being one of them. Because the transmitter is a write only device, there are even fewer lines to concern ourselves with:

  • 3V3 power
  • Ground
  • SCLK, or “clock”
  • CS, aka “chip select” (also referred to as slave select)
  • MOSI, aka “master-out slave-in”

SPI has often been referred to as a protocol “a first-year engineer would develop when faced with designing a bus for the first time.” It’s straightforward. Every device on the bus gets a clock line to know when to latch its inputs; everyone has the same data in and data out lines, and when the master wants to address you it lowers your chip select line (logical 0). No other device on the bus will have its chip select line low, therefore you and the master are free to send on the data lines.

Since we are going to leverage the mikroBUS cape to allow us to populate 4 4-20mA T clickboards, that means in total we’ll have 4 different SPI devices. In reality, there are 6 SPI devices here: 2 masters and 4 slaves.

Our first master is referred to on the BeagleBone Black platform as SPI0. It has one chip select brought out to the expansion header, and most examples you see online will refer to this device tree overlay. Since we’re going to be writing our own overlay, it’s instructive to look at this one first.

A lot of tutorials out there gloss over some of the details here, and quite frankly at times I don’t blame them because it can be a little mind-numbing reading through all the hex and business about INPUT_PULL | MODE0. But these values do actually map to something and MODE0 means something particular on the BeagleBone Black. What these four lines of code are declaring are the actual lines that are carrying the SPI0 signals for SCLK, chip select, and what are referred to here as D0 and D1. Why are they called D0 and D1 and not MOSI and MISO? Who knows, but you can tell which is which because:

  • D0 is labeled as an INPUT, therefore data is coming from the slave into the master, so it is the MISO line
  • D1 is labeled as an OUTPUT, therefore data is going out from the master into the slave, so it is the MOSI line

So where do these lines come out on the BBB Expansion header? A quick way to determine this is to take a look at https://github.com/jadonk/bonescript/blob/master/src/bone.js and simply search for 0x150. You’ll find that it’s P9_22. For 0x154 you can quickly find out that it is P9_21. You can also look at this image illustrating the SPI pins:

BeagleBone Black SPI Ports

Take a look again at the overlay above and note how the four pins listed line up with what the image above shows. It’s instructive then to look at SPI1, which has another chip select line, cs1, brought out to the header.

mikroBUS Cape

The mikroBUS Cape opens up the possibility of having 4 SPI slaves, but one must be mindful that that isn’t the physical default. If you look carefully at the manual schematic, you’ll see that “host 3” by default does not have the SCLK and MISO lines connected; rather, they are jumpered as RX/TX lines (presumably for working with UART devices). If you flip the cape over on the back you can see this jumpering:

Host 3 Jumpered to UART

You can also see that Host 3 is jumpered to UART by default in the cape manual:

Host 3 Defaults to UART

For the remainder of this tutorial, until I get someone more savvy with SMD components than I am, we’ll focus on Host 1, 2, and 4, which fortuitously map directly to the SPI1 device on the BeagleBone. It also maps nicely to this diagram from the SPI Wikipedia entry:

One Master, Three Slaves

To get our BeagleBone ready, we’re going to have to write a Device Tree Overlay just for us. We’ll call it BBB-SPICAPE to reiterate the fact that we’re looking to have all SPI devices on our cape. To be clear, this overlay is not for the generic use of the mikroBUS cape! If you are plugging in clickboards that make use of the UART lines, it will look completely different!

Here’s what the file looks like in all its arcane glory:

/dts-v1/;
/plugin/;

/ {
  compatible = "ti,beaglebone", "ti,beaglebone-black";

  /* identification */
  part-number = "BB-SPICAPE-01";
  version = "00A0";

  /* state the resources this cape uses */
  exclusive-use =
    /* the pin header uses */
    "P9.17", /* spi0_cs0 */
    "P9.18", /* spi0_d1 */
    "P9.21", /* spi0_d0 */
    "P9.22", /* spi0_sclk */
    "P9.28", /* spi1_cs0 */
    "P9.29", /* spi1_d0 */
    "P9.30", /* spi1_d1 */
    "P9.31", /* spi1_sclk */
    "P9.42", /* spi1_cs1 */
    "P8.10", /* spi1_cs2, but not really */
    /* the hardware ip uses */
    "spi0",
    "spi1",
    "gpio2_4";

  fragment@0 {
    target = <&am33xx_pinmux>;

    __overlay__ {

      /* avoid stupid warning */
#address-cells = <1>;
#size-cells = <1>;

   my_gpio_pins: pinmux_my_gpio_pins {
    pinctrl-single,pins = <
    /* the gpio pin(s) */
      0x098 0x17    /* P8 10 gpio2_4.spi1_cs2 OUTPUT_PULLUP | MODE7 */
    >;
   };

    bb_spi0_pins: pinmux_bb_spi0_pins {
    pinctrl-single,pins = <
      0x150 0x30    /* spi0_sclk.spi0_sclk, INPUT_PULLUP | MODE0 */
      0x154 0x30    /* spi0_d0.spi0_d0, INPUT_PULLUP | MODE0 */
      0x158 0x10    /* spi0_d1.spi0_d1, OUTPUT_PULLUP | MODE0 */
      0x15c 0x10    /* spi0_cs0.spi0_cs0, OUTPUT_PULLUP | MODE0 */
      >;
      };

    bb_spi1_pins: pinmux_bb_spi1_pins {
    pinctrl-single,pins = <
      0x190 0x33    /* mcasp0_aclkx.spi1_sclk, INPUT_PULLUP | MODE3 */
      0x194 0x33    /* mcasp0_fsx.spi1_d0, INPUT_PULLUP | MODE3 */
      0x198 0x13    /* mcasp0_axr0.spi1_d1, OUTPUT_PULLUP | MODE3 */
      0x19c 0x13    /* mcasp0_ahclkr.spi1_cs0, OUTPUT_PULLUP | MODE3 */
      0x164 0x12    /* eCAP0_in_PWM0_out.spi1_cs1 OUTPUT_PULLUP | MODE2 */
      >;
      };

    };
  };

  fragment@1 {
    target = <&ocp>;
    __overlay__ {
      spi1_cs2 {
      compatible = "gpio-of-helper";
      status = "okay";
      pinctrl-names = "default";
      pinctrl-0 = <>;

      P8_10 {
        gpio-name = "spi1_cs2";
        gpio = <&gpio2 4 0>;
        output;
        init-high;
      };
    };
    };
  };

  fragment@2 {
    target = <&spi0>;   /* spi0 is numbered correctly */
    __overlay__ {
      status = "okay";
      pinctrl-names = "default";
      pinctrl-0 = <&bb_spi0_pins>;

#address-cells = <1>;
#size-cells = <0>;

      spi0@0{
#address-cells = <1>;
#size-cells = <0>;
    compatible = "spidev";
    reg = <0>;
    spi-max-frequency = <16000000>;
    spi-cpol;
    spi-cpha;
      };
    };
  };

  fragment@3 {
    target-path = "/ocp/interrupt-controller@48200000";
    __overlay__ {
#gpio-cells = <2>;
    };
  };

  fragment@4 {
    target = <&spi1>;   /* spi1 is numbered correctly */
    __overlay__ {
      status = "okay";
      pinctrl-names = "default";
      pinctrl-0 = <&bb_spi1_pins>;

#address-cells = <1>;
#size-cells = <0>;

      cs-gpios = <0>, <1>, <&gpio2 4 0>;

      spi1@0 {
#address-cells = <1>;
#size-cells = <0>;
    compatible = "spidev";
    reg = <0>;
    spi-max-frequency = <16000000>;
    spi-cpol;
    spi-cpha;
      };

      spi1@1 {
#address-cells = <1>;
#size-cells = <0>;
    compatible = "spidev";
    reg = <1>;
    spi-max-frequency = <16000000>;
    spi-cpol;
    spi-cpha;
      };

      spi1@2 {
#address-cells = <1>;
#size-cells = <0>;
    compatible = "spidev";
    reg = <2>;
    spi-max-frequency = <16000000>;
    spi-cpol;
    spi-cpha;
      };
    };
  };
 };

There are a few things to note here:

I could not get chip select 2 working “automatically” as a part of the Linux SPI driver. Supposedly I should have been able to specify cs-gpios and include the tag <&gpio2 4 0> and that would magically associate the underlying GPIO pin with the chipselect and it would “just work.” Unfortunately it didn’t, but there’s a workaround which I’ll outline in a bit. After quite a bit of searching online I came across this thread where someone writes:

Incidentally, the spi-omap2-mcspi.c driver does not support a GPIO as a chip select.

Now, whether or not that’s true, I’m not one to say. I read through the source code and considered putting in some additional debug statements to trace things down but in the end moved on.

If you know how to get a GPIO pin working as a SPI chip select on a BeagleBone, drop me a line!

To compile and prepare our device tree overlay for installation:

dtc -O dtb -o BB-SPICAPE-01-00A0.dtbo -b 0 -@ BB-SPICAPE-01-00A0.dts
cp BB-SPICAPE-01-00A0.dtbo /lib/firmware

Disabling HDMI

If you Google BeagleBone Black SPI1 you will undoubtedly run across the admonishment that you must disable HDMI to use SPI1! They aren’t joking, and the reason why is that the HDMI chip on the Black uses pins that SPI1 uses too. So you have to pick. You get HDMI, or you get SPI1. So in your /boot/uEnv.txt add this line:
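
(On the 4.x Debian images this typically means uncommenting the emmc-overlay dtb entry; the exact line in your uEnv.txt may differ.)

dtb=am335x-boneblack-emmc-overlay.dtb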

You might also have to track down a line in /boot/uEnv.txt and remove cape_universal=enable; that loads a “universal cape” overlay that conflicts with our use.

You might also have to add an additional entry to /boot/uEnv.txt on 4.x kernels.

Loading our DTBO

After you’ve copied the BB-SPICAPE-01-00A0.dtbo file to /lib/firmware, and you’ve disabled the HDMI overlays in /boot/uEnv.txt, reboot your BeagleBone Black. Review the contents of the Cape Manager slots file to make sure there are no overlays installed.

root@beaglebone:~# cat /sys/devices/platform/bone_capemgr/slots
 0: PF----  -1
 1: PF----  -1
 2: PF----  -1
 3: PF----  -1

Now let’s load our overlay!

root@beaglebone:~# echo BB-SPICAPE-01 > /sys/devices/platform/bone_capemgr/slots

If the load was successful you’ll see the following at the end of dmesg.

[   30.422918] bone_capemgr bone_capemgr: part_number 'BB-SPICAPE-01', version 'N/A'
[   30.422960] bone_capemgr bone_capemgr: slot #4: override
[   30.422978] bone_capemgr bone_capemgr: Using override eeprom data at slot 4
[   30.422996] bone_capemgr bone_capemgr: slot #4: 'Override Board Name,00A0,Override Manuf,BB-SPICAPE-01'
[   30.448839] gpio-of-helper ocp:spi1_cs2: Allocated GPIO id=0
[   30.448868] gpio-of-helper ocp:spi1_cs2: ready
[   30.451031] bone_capemgr bone_capemgr: slot #4: dtbo 'BB-SPICAPE-01-00A0.dtbo' loaded; overlay id #0
[   30.510242] spi spi2.2: not using DMA for McSPI

You should also see that 4 SPI devices were created in /dev and that gpio68 is present in /sys/class/gpio:

root@beaglebone:~# ls -l /dev/spi*
crw-rw---- 1 root spi 153, 0 Nov 28 17:10 /dev/spidev1.0
crw-rw---- 1 root spi 153, 3 Nov 28 17:10 /dev/spidev2.0
crw-rw---- 1 root spi 153, 2 Nov 28 17:10 /dev/spidev2.1
crw-rw---- 1 root spi 153, 1 Nov 28 17:10 /dev/spidev2.2
root@beaglebone:~# ls -l /sys/class/gpio/gpio68
lrwxrwxrwx 1 root root 0 Nov 28 17:18 /sys/class/gpio/gpio68 -> ../../devices/platform/ocp/481ac000.gpio/gpio/gpio68

Protocols, Protocols

Whereas SPI provides the physical means of communicating with a SPI device, one must still learn the specifics of how to write, read, and interpret data from a given device. For example, a 256 Kbit Serial SRAM SPI chip will likely have a set of commands to write to the device (and into what memory location), and certain commands to read from the device. Likewise, our 4-20mA devices have a specific protocol for communicating with them.

For some reason MikroElektronika buries its protocols in example code, so when you get a new little clickboard to play with you have to go digging through the source and reverse-engineer the protocol out of it.

For the 4-20mA transmitter the code says that the device “performs linear conversion of input number from range 800 – 4095 into current in range 4mA – 20mA”. So we know if we write 800 to the device it will output 4mA, and writing 4095 will cause it to output 20mA. That part is easy; the tricky part is knowing how to tell the device to write.

SPI is a byte-oriented protocol, i.e., you send a byte at a time. Reviewing the source code example for the 4-20mA T click, we see that its write protocol is to send two bytes formatted as follows:

  1 1 1 1 1 1
  5 4 3 2 1 0 9 8   7 6 5 4 3 2 1 0
+-----------------------------------+
| 0 0 1 1 x x x x | x x x x x x x x |
+-----------------------------------+

that is, bits 15-12 are 0x3 (think of it as the WRITE_COMMAND), and bits 11-0 are the output value, 800 through 4095.

I’m going to be using NodeJS for this application, so I can accomplish this through:
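
(A sketch only, not the exact listing: it assumes the spi npm module introduced later in this post, SPI mode 3 to match the spi-cpol/spi-cpha flags in the overlay, and the Host 2 device at /dev/spidev2.1.)

var SPI = require('spi');

// 12-bit output value in the 800-4095 range, OR'd with the 0x3 write command
var value   = 2447;
var command = 0x3000 | (value & 0x0fff);

var mbuf = new Buffer(2);
mbuf.writeUInt16BE(command, 0);   // 0011xxxx xxxxxxxx, high byte first

var transmitter = new SPI.Spi('/dev/spidev2.1',
  { 'mode': SPI.MODE['MODE_3'] },
  function (s) { s.open(); });

transmitter.transfer(mbuf, new Buffer(mbuf.length), function (device, rxbuf) {
  // rxbuf holds whatever the slave clocked back on MISO; the T click's
  // reply isn't interesting here
});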

Don’t copy/paste the above code anywhere, it’s just an illustration that we want to write a uint16 out to the SPI device. You might be asking yourself (because I certainly did), “What is that new Buffer(mbuf.length) business all about?” That is a result of SPI being a “clock-in, clock-out” protocol. From the SPI Wikipedia entry:

During each SPI clock cycle, a full duplex data transmission occurs. The master sends a bit on the MOSI line and the slave reads it, while the slave sends a bit on the MISO line and the master reads it. This sequence is maintained even when only one-directional data transfer is intended.

That is, sending a byte down the MOSI (master-out slave-in) line results in a byte coming in on the MISO (master-in slave-out) line.

That’s the 4-20mA T click; now let’s look at the R click. Believe it or not (and when you see the example code, you will), to read the 4-20mA R click you write 2 bytes and then interpret the 2 bytes received, as follows:

  1 1 1 1 1 1
  5 4 3 2 1 0 9 8   7 6 5 4 3 2 1 0 
+-----------------------------------+
| _ _ _ x x x x x | x x x x x x x _ | // ADC = bits[12:1]
+-----------------------------------+

The underscores just highlight that these are “don’t care” bits.

No kidding, the example source code from Mikro does the following:

The first byte is read and masked with 0x1F, thus getting the lower 5 bits, then the second byte is taken. The 5 bits from the first byte then get shifted to the upper 8 bits of a uint16 and bitwise-ored with the second byte. This is then shifted one bit to the right (notice the _ in bit 0 above? That’s what gets shoved off the edge with our shift-right).

A much simpler version in NodeJS:
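
(Again a sketch under the same assumptions, with the receiver on /dev/spidev2.0.)

var SPI = require('spi');

var receiver = new SPI.Spi('/dev/spidev2.0',
  { 'mode': SPI.MODE['MODE_3'] },
  function (s) { s.open(); });

// Clock out two dummy bytes; the ADC value comes back in bits 12:1 of the reply
var txbuf = new Buffer([0x00, 0x00]);
receiver.transfer(txbuf, new Buffer(txbuf.length), function (device, rxbuf) {
  var adc = (rxbuf.readUInt16BE(0) & 0x1ffe) >> 1;
  console.log('ADC value: ' + adc);
});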

Controlling with NodeJS

The ultimate goal of this project is to develop a NodeJS Express application which will expose a web interface for manipulating the 4-20mA transmitters. There is a handy SPI module on NPM that we’ll use, so let’s get started installing NodeJS and our SPI module:

curl -sL https://deb.nodesource.com/setup_4.x | bash -
apt-get install -y nodejs
npm install spi

The mikroBUS cape identifies each clickboard slot with a number, so we’ll use that number in our code. Slot (or “host”) 1 has a 4-20mA receiver in it, and slot 2 has a 4-20mA transmitter. The two have been connected creating a functioning 4-20mA loop.

4-20mA Receiver and Transmitter

It’s important to recognize that the above spidev devices were created when we added our device tree overlay, and that the notation is /dev/spidevMASTER.CHIPSELECT_REG. In this case MASTER is SPI1, which shows up in the filesystem as spidev2. The cs0 line for SPI1 goes to Slot 1, and the cs1 line goes to Slot 2.

Note: Working with SPI on the BeagleBone Black can be confusing with the “off-by-one” business. BeagleBone documentation refers to the two SPI devices as SPI0 and SPI1, but they are presented by the Linux SPI driver as /dev/spidev1.X and /dev/spidev2.X. What is even more confusing is that if you unload and reload the overlay, Linux will create /dev/spidev3.X and /dev/spidev4.X!

Continuing, we need two functions here to convert to and from the 12-bit ADC range of 800-4095 to 4-20mA:
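
(The function names here are mine; the math is just the two linear maps described below.)

// Map milliamps (4-20) onto the click's 800-4095 range...
function milliampsToMikro(milliamps) {
  return 800 + (milliamps - 4) * (4095 - 800) / (20 - 4);
}

// ...and back again
function mikroToMilliamps(value) {
  return 4 + (value - 800) * (20 - 4) / (4095 - 800);
}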

If you deal with 4-20mA much you’ll get used to doing these linear equations. In the first example we want to map 4-20mA to 800-4095. Doing the math we get the first equation:

4-20mA to Mikro

The second equation reverses the process.

Now, let’s use our knowledge of the T click and R click to write out what we want to transmit and then reread it on the receiver.
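
(A sketch of outputMilliamps.js under the assumptions above: transmitter on /dev/spidev2.1, receiver on /dev/spidev2.0, SPI mode 3, and the conversion helpers from the previous section.)

var SPI = require('spi');

function milliampsToMikro(mA) { return 800 + (mA - 4) * (4095 - 800) / (20 - 4); }
function mikroToMilliamps(v)  { return 4 + (v - 800) * (20 - 4) / (4095 - 800); }

var milliamps = parseFloat(process.argv[2]);
var output    = milliampsToMikro(milliamps);

var transmitter = new SPI.Spi('/dev/spidev2.1', { 'mode': SPI.MODE['MODE_3'] }, function (s) { s.open(); });
var receiver    = new SPI.Spi('/dev/spidev2.0', { 'mode': SPI.MODE['MODE_3'] }, function (s) { s.open(); });

console.log('Milliamps:              ' + milliamps);
console.log('Output to Transmitter:  ' + output);

// T click write: 0x3 command in the top nibble, 12-bit value in the rest
var txbuf = new Buffer(2);
txbuf.writeUInt16BE(0x3000 | (Math.round(output) & 0x0fff), 0);

transmitter.transfer(txbuf, new Buffer(txbuf.length), function () {
  // R click read: clock out two dummy bytes, ADC value is bits 12:1 of the reply
  var dummy = new Buffer([0x00, 0x00]);
  receiver.transfer(dummy, new Buffer(dummy.length), function (device, rxbuf) {
    var adc = (rxbuf.readUInt16BE(0) & 0x1ffe) >> 1;
    console.log('Input from Receiver:    ' + adc);
    console.log('Milliamps:              ' + mikroToMilliamps(adc));
  });
});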

Our first step is to convert the command-line argument (which we expect to be milliamps) to the 800-4095 range. We then perform the required operations to write to the device via SPI (bitwise-OR with 0x3000). After transmitting we read from our receiver to see what it detects as the loop current. Again, half of the battle with SPI is knowing the exact format of how to write and read values for the device in question. Running our app we get:

# node outputMilliamps.js 12
Milliamps:              12
Output to Transmitter:  2447.5
Input from Receiver:    2467
Millamps:               12.094688466

Of course, this is an opportunity to determine what policy you’re going to have on precision and how many times you want to sample the receiver. The mikro code for the R click samples a dozen or so times and tosses out the min and max value of what is read. We won’t worry about that for now but will move on to using our 3rd SPI device (SPI1 chipselect 2), which is a little trickier.

Manual Chip Select

If you recall from above, I said that I couldn’t get the chip select line for Host 4 (on the cape) working. Well, that’s partially true: I couldn’t get it to work automatically. When using code like receiver1.transfer(outbuf, outbuf) above, you’ll notice that’s all there is to it; the Linux SPI driver takes care of lowering the chip select line to talk to the device. This works for both chip select 0 and 1, but I could not get it working for 2. So we’ll just do it manually, like this:
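
(Sketch only: it assumes the gpio-of-helper export from the overlay, so /sys/class/gpio/gpio68 already exists, and the Host 4 device at /dev/spidev2.2.)

var fs  = require('fs');
var SPI = require('spi');

var CS = '/sys/class/gpio/gpio68/value';
var host4 = new SPI.Spi('/dev/spidev2.2', { 'mode': SPI.MODE['MODE_3'] }, function (s) { s.open(); });

function transferWithManualCS(txbuf, callback) {
  fs.writeFile(CS, '0', function (err) {             // drive gpio68 low: chip selected
    if (err) { return callback(err); }
    host4.transfer(txbuf, new Buffer(txbuf.length), function (device, rxbuf) {
      fs.writeFile(CS, '1', function (err) {         // raise it again: chip deselected
        callback(err, rxbuf);
      });
    });
  });
}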

Here we simply drive gpio68 low, engage our SPI device, and then raise the pin. We’ve done by hand what the Linux SPI driver handled for us in the previous example.

Why gpio68? That’s the GPIO line that maps to P8.10 on the BeagleBone header, which in turn is delivered to our mikroBUS cape. You can see that by reading the identifiers on the cape itself:

Host 4 Chip Select (P8.10)

This is also why our device tree overlay specifies the GPIO pin like this:
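
P8_10 {
  gpio-name = "spi1_cs2";
  gpio = <&gpio2 4 0>;
  output;
  init-high;
};

(That is the P8_10 node from fragment@1 of the overlay above.)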

If you look at the BeagleBone System Reference Manual, you’ll see too that references like <&gpio2 4 0> actually do map back to something:

Why is P8.10 referred to as <&gpio2 4 0>?

We want the pin to be specified as an output, and to initialize it high, or logical 1. If we didn’t do this the pin could float, or worse, be driven low, which would in effect mean it was always in a “selected mode.” The Linux SPI driver would have no knowledge of this and multiple SPI devices would be talking on the bus. Which, of course, is a no-no.

Our readReceiver function looks like this:
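
(Another sketch, building on the pieces above; the function names and the chaining are mine.)

// Read the R click on its own chip select; callers get milliamps back
function readReceiver(callback) {
  var dummy = new Buffer([0x00, 0x00]);
  receiver.transfer(dummy, new Buffer(dummy.length), function (device, rxbuf) {
    var adc = (rxbuf.readUInt16BE(0) & 0x1ffe) >> 1;
    callback(null, mikroToMilliamps(adc));
  });
}

// Only read the receiver after the manual chip select has been raised again
transferWithManualCS(txbuf, function (err) {
  if (err) { throw err; }
  readReceiver(function (err, milliamps) {
    console.log('Milliamps: ' + milliamps);
  });
});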

Note that we wait for the result of raising the chip select line before reading the receiver. Remember, we’re working with Node here, so we have to be explicit about when to read the result of writing to our SPI transmitter. Otherwise we could have charged ahead and read the receiver before the transmitter had been written to, thus resulting in stale data.

Measuring with a Fluke 705 Loop Calibrator

The 4-20mA transmitter clickboard from MikroElektronika does not supply the power for the current loop. The 4-20mA receiver, when paired with the transmitter, can supply this power (~15V), but if you are using the transmitter in other applications you’ll need to provide it yourself. For example, if you want to test the output of your transmitter with a Fluke 705 you’ll need a circuit like this:

Connecting a Fluke 705

Note: This is best done with a DC power supply. A fresh 9V battery will work, but you’d be surprised how quickly you can drain a battery pulling 20mA!

Once connected, turn the Fluke 705 to the mA Source/Simulate/Measure setting, and then press the Source/Sim/Measure button until the device reads MEASURE on the display. Use the NodeJS code to tell the transmitter to set the loop to, say, 15mA. The result:

Fluke 705 4-20mA Loop Calibrator

Getting the Code

This tutorial is by no means complete, and we still don’t have a working Express application. That will be added at a later date! In the meantime, you can download the device tree overlay and working NodeJS applications for reading/writing to the 4-20mA clickboards on GitHub.

Once you have the code, run make node to download and install NodeJS 4.x. If you don’t need to install NodeJS you can skip directly to make npm to install the SPI npm module.

To compile the overlay, run make (or make dtbo). This will compile and copy the overlay to /lib/firmware. Don’t forget to edit /boot/uEnv.txt and disable HDMI! Reboot after making the appropriate changes to /boot/uEnv.txt.

Note: I’ve had issues at times loading the device tree overlay. For some reason the kernel will occasionally lock up in systemd-udev and it will be lights out until the board is reset. I haven’t tracked down why this is occurring yet!

Gauging Wireless AP Stability with a BeagleBone Black

It’s easy to roll your own wireless router with hostapd or OpenWRT and a BeagleBone Black and USB Wifi Adapter Dongle. In this post we’re going to roll our own Wifi AP stability tester. To follow along at home you’ll need a:

  • BeagleBone Black
  • Wifi USB Adapter
  • a tri-color LED

You will also need a microSD card and adapter so you can flash the final image on your BeagleBone. Note: We’ll use the term BeagleBone with a cavalier attitude, but we mean BeagleBone Black.

To build our Wifi AP tester, we went with a TP-LINK Nano adapter:

TP-LINK Wireless N Nano USB Adapter

And for the tri-color LED we picked up a Linrose Super Brite LED, Linrose part number B4361H1/5.

Three Colors in One L.E.D.!

Preparing Your BeagleBone Black

We’re going to keep the OS simple and go with the latest console release of Debian 8.2 for the BeagleBone Black. The full release contains a desktop environment and we just don’t need that for something this simple.

Grab a console image release (the note below points to where to find the latest builds) and then write it out to your microSD card, where YOUR_DEVICE is the device displayed in dmesg after inserting your SD card. Remember: dd is an acronym for disk destroyer so treat it with respect and make sure you have the right device.
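
For example (the image filename here is just a placeholder for whichever build you downloaded):

unxz bone-debian-console-armhf.img.xz
dd if=bone-debian-console-armhf.img of=/dev/YOUR_DEVICE bs=1M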

Note: This page is the place to go to find the latest on Debian builds for BeagleBone Black.

Cable up your BeagleBone Black to your router via Ethernet. Pop the microSD card into your BeagleBone Black, press and hold the S2 button on the Black, and then apply power.


Once the device boots, ssh into it:
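
For example, with the address your router handed out:

ssh root@192.168.1.x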

We’ve logged in as root here to speed things up. Feel free to use the debian user and insert sudo where necessary.

Software Prerequisites

Now we need to get our BeagleBone in shape to do a little hacking. Because we chose a barebones distribution image there are a few dependencies and “fix-ups” to do. Go ahead and run:
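
Something along these lines (the package list is explained below):

apt-get update
apt-get install -y git build-essential python cpanminus wireless-tools firmware-realtek wpasupplicant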

Now, why we installed all this stuff:

  • git because we’re going to git clone a repo containing the code for leveraging Speedtest.net to stress test our Wireless Access Point
  • build-essential because it’s a meta-package which contains essential packages for building applications, modules, libraries, etc.
  • python because we’re going to be using Python (speedtest-cli)
  • cpanminus for using the cpan tooling to download, compile, and install Perl modules
  • wireless-tools firmware-realtek wpasupplicant to use our TP-LINK Wireless USB Adapter

If you’re using a Wireless adapter with a chipset other than RealTek, make sure and get the right firmware (apt-cache search wireless firmware helps).

It sounds like a lot, and in some ways it is, but consider the philosophy of building applications by “not reinventing the wheel.” Well, not reinventing the wheel doesn’t mean you don’t need wheels, and all of these packages are the wheels upon which we’ll take a trip.

One final note: the image we’re using doesn’t have locales installed, and so you’ll see annoying locale warnings when running various commands.

You can fix that by installing the locales package and configuring en_US.UTF-8.
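
That looks something like:

apt-get install -y locales
dpkg-reconfigure locales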

Follow the onscreen prompts and select the en_US.UTF-8 locale.

We’re going to hack in Perl and Python today. Yes, you heard me, Perl! We use a few Perl libraries from CPAN (remember, not reinventing wheels here).
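
For example, Net::Ifconfig::Wrapper (which comes up below) can be installed with:

cpan Net::Ifconfig::Wrapper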

cpan will likely ask Would you like to configure as much as possible automatically? [yes]. Hit Enter to accept the default yes.

When prompted Is Net::Ifconfig::Wrapper info output correct? Y/N: verify that the eth0 entry looks good and answer Y (it should be).

Wifi

Use iwconfig to make sure your Wifi adapter is visible.

To get the wireless interface connected to your AP, edit /etc/network/interfaces and add your SSID and WPA credentials (assuming you’re using WPA!). Make sure and uncomment the Wifi directives:
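
The uncommented stanza ends up looking something like this (SSID and passphrase are placeholders):

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "YourSSID"
    wpa-psk  "YourPassphrase"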

While you are in /etc/network/interfaces also change the eth0 entry from auto to allow-hotplug (more on this in a minute) like this:
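
allow-hotplug eth0
iface eth0 inet dhcp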

Run ifup wlan0 to bring up your interface.

You should of course see a DHCPDISCOVER message go out, followed by an OFFER, REQUEST, and ACK.

The SSID you put in /etc/network/interfaces will be the SSID of the AP you intend to test.

Installing bbwifiaptest

Now let’s install our BeagleBone Wifi AP Tester. Grab it from GitHub.

The Perl script bbwifiaptest.pl is where the magic happens, and is what will be launched when we install the app as a service.

By default the script is designed to daemonize itself, but you can bypass this with --no-daemon on the command line. To start, try:
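
For example (both options are described in the table at the end of this post):

perl bbwifiaptest.pl --no-daemon --log-console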

You should see the script start logging its activity.

The script uses speedtest-cli to test the bandwidth of the connection and then logs the statistics of the wlan0 interface. If we cannot reach Speedtest.net for whatever reason (presumably the AP is no longer routing us to the Internet) the script will retry 10 times and then give up.

Of course, our plan is to get a visual indication that the AP is available, so we’ll turn to our LED!

Lite-Brite!

Disclaimer: we’re not hardware heads over at iAchieved.it. If you feel there should be a current-limiting resistor for the LED, by all means, wire one up.

Now that we can run the bbwifiaptest script we need to plug in our tri-color LED. We’re using GPIO 66 and 67 which correspond to pins 7 and 8 on the P8 header of the BeagleBone Black. The cathode for the Linrose B4361H1/5 (the end that goes to ground) is the center pin and should be put in pin 1 or 2 of the P8 header, and the longer of the two anodes should be put in pin 7 of the P8 header. The shorter anode (red) should go in pin 8.


Our color codes are as follows:

  • Green – speedtest.net is reachable and we’re downloading from it
  • Orange – speedtest.net is not available but we haven’t given up trying to contact it
  • Red – something is wrong and we’ve given up

An orange light can persist for up to a little over 6 minutes as the script patiently waits for the AP to come back online. After that it’s lights out.

Installing the Service and Testing

Now let’s install the application and set the BeagleBone to start it upon boot. Run the supplied install.sh script:
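
From the cloned repository:

./install.sh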

This script puts the speedtest-cli and bbwifiaptest.pl applications in /usr/local/bin and then enables a systemd service to start the tester once the network is available.

To test this you will need to:

  • have the LED plugged in
  • have the microSD card plugged in

Log out of the BeagleBone, disconnect the Ethernet cable (we’re going to be running only on the Wifi dongle), and then press and hold the S2 button. While holding S2 down, press (but don’t hold) the Reset button (S1) on the BeagleBone. You can release the S2 button after the blue LEDs start flashing. Be patient as the system boots – the LED will remain off until the Wifi connection is established and systemd starts the application. It will take about 30 seconds. One could get clever and create a prerequisite service that flashes the LED orange while the boot sequence is running.

Up and Running!

Remember when we changed eth0 from auto to allow-hotplug? This drastically reduces our boot time as systemd won’t sit around and wait for the Ethernet interface to come up (which it won’t). We do want it to wait for wlan0 which is why that is left to auto.

Look Ma, No Hands!

Remember up above where we had to press the S2 button to ensure the BeagleBone Black boots off of the microSD card? Well, if you ever pulled power in the tutorial above and forgot to press it again, you’re booting off of the onboard eMMC (which you can think of as a microSD card that’s permanently soldered to the BeagleBone). When using our AP tester we don’t want to go around having to remember to press S2, so let’s flash our image to the eMMC. Warning: If you have something on the BeagleBone Black eMMC that you want to keep make sure and copy it off now! The next steps will erase whatever was there.

First, log in and go to /boot/uEnv.txt and go down to the bottom. You should see:
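
(On recent images it looks something like this; the exact flasher script name may vary.)

##enable Generic eMMC Flasher:
##make sure, these tools are installed: dosfstools rsync
#cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh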

Heed the warning here about dosfstools and rsync and make sure they are installed (they should be). Then, uncomment the cmdline entry and reboot while holding down S2. Wait until all four of the LEDs next to the Ethernet connector light up, and then you can release S2. You should then see the LEDs light up in sequence, back and forth.

Once the flashing is complete the LEDs will go solid again. Disconnect power, pop out the microSD card, and then reapply power. Within 30 seconds or so you should see the green LED!

Now What?

Now you have a Wifi AP tester! Take your BeagleBone Black and locate it wherever you like in relation to your AP and power it on. If the device connects to the AP and begins downloading traffic you’ll see the green light. If it doesn’t light up at all you know something failed horribly and should reconnect the BeagleBone via Ethernet and take a look at the logs.

If you see the LED turn red after some time it means traffic was no longer getting out to the Internet. That is, for some reason, the AP stopped passing traffic. Again, turn off the device, cable it up, and take a look at the logs. You will be able to see when “traffic runs” started by the ENTRY and EXIT markers. You will want to write some post-processing script to generate reports about the data.

Command Line Options

The bbwifiaptest.pl script has a number of command-line options.

  • --logfile LOGFILE: write logs to LOGFILE (default is /var/log/bbwifiaptest.log)
  • --speedtest SCRIPT: use SCRIPT for the speedtest-cli script
  • --wirelessif WIRELESS_IF: use WIRELESS_IF as the wireless interface to monitor (default is wlan0)
  • --no-daemon: don’t daemonize; by default the script will run as a daemon
  • --log-console: display logs on the console; by default logs are only written to the logfile

Next Steps

There are so many things that can be improved here to make a sturdy little Wifi AP tester. Think multiline character displays with more status information, writing logs to the SD card, or providing a web interface on port 80 to see what’s going on. You will also notice that the script isn’t designed to run while the Ethernet is plugged in; the green LED will come on because eth0 is passing traffic. We need to update the script to only use Wifi.

We might make these changes and blog about them, but don’t wait on us, get hacking today! Finally, if you enjoyed this article make sure and follow @iachievedit on Twitter!