iAchieved.it

Software Development Tips and Tricks


A Look at the BeagleBoard X15

First things first, don’t call it the BeagleBone X15! This is a BeagleBoard. Affectionately named “The Beast” by its creators, the BeagleBoard X15 is a powerful ARM-based desktop computing/multimedia device. With an MSRP of $239 it provides more CPU horsepower, more RAM, and a larger assortment of IO peripherals than boards like the BeagleBone Black.

The Beast

For example, the BeagleBone Black comes with a single USB 2.0 port; the X15 is equipped with 3 USB 3.0 host ports, 1 USB 2.0 host port, and 1 USB 2.0 client port. In addition, while devices such as the Raspberry Pi and the BeagleBone Black have a single 10/100 Ethernet connector, the X15 comes with 2 Gigabit RJ45 ports. When you’re moving data across the network (say, downloading the Linux kernel or copying release image files), that extra bandwidth comes in quite handy.

But wait, there’s more! The X15 comes packed with a lot of computing power thanks to its Texas Instruments Sitara AM5728 dual-core ARM Cortex-A15 clocked at 1.5GHz. Combine that with 2GB of RAM and you’ve got a beefy little machine.

But enough about the precise specifications, you can read all of those here. Let’s make sure we talk about what the BeagleBoard X15 is not. It isn’t a Raspberry Pi and it isn’t a BeagleBone Black, and it definitely isn’t an Arduino. What you won’t find on this board are GPIO headers to turn LEDs off and on, run lines out to play with SPI or I2C devices, or connect capes (also referred to as shields by the Arduino folks). There are expansion slots available on the underside of the board which can be used to add PCIe, LCD, and mSATA connections, but the design as it stands doesn’t appear to be for quickly prototyping circuits like with Arduino, Pi, and BeagleBone.

Serial Console

The X15 does have a standard serial port header for connecting something like a Serial FTDI cable. The ground (black) pin is towards the 12V barrel connector. The standard serial connection settings 115200,8,N,1 apply. Having this connection is handy if you are working with the X15 boot loader U-Boot.


USB2.0 Port

There’s also a USB 2.0 port hiding next to the Ethernet connector. It’s actually an eSATA/USB hybrid port, so if you need to connect another USB device and don’t need the eSATA port, connect it here.


Flashing the X15

Like the BeagleBone and other ARM development boards, the X15 also comes with a 4GB eMMC so you can flash a bootable image to it. To do so, head over to the X15 Weekly builds and download an image. The process for flashing the X15 is similar to that of the BeagleBone Black.

Although there are two images (flasher and non-flasher) posted, they are the same with one exception: the contents of /boot/uEnv.txt. The flasher image has the following in its uEnv.txt:
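
The images themselves aren’t reproduced here, but the line in question is the one referenced below; in the flasher image it is uncommented:

cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh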

As with the BeagleBone, you can turn a non-flasher load into a flasher by uncommenting the cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh line in /boot/uEnv.txt. I highly recommend cabling up to the serial port if you make a habit of flashing your board; it greatly aids in debugging and in learning how the flashing process works.

Benchmarks

Now let’s turn to everyone’s favorite method of determining the capabilities of a new computing device, benchmarks. I’m going to be using the sysbench application and running a CPU test for the BeagleBoard X15, BeagleBone Black, and a Wandboard Quad. The sysbench CPU test verifies prime numbers using the trial division method. By default it will verify primes up to 10,000.
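
The exact invocations aren’t shown here, but with the sysbench of the day the single- and multi-threaded runs look something like this:

sysbench --test=cpu --cpu-max-prime=10000 --num-threads=1 run
sysbench --test=cpu --cpu-max-prime=10000 --num-threads=2 run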

System              --num-threads   total time   total time taken by event execution
BeagleBone Black    1               286.4899s    286.4704
BeagleBoard X15     1               112.2594s    112.2518
Wandboard Quad      1               228.2226s    228.2110
BeagleBoard X15     2                56.5488s    113.0862
Wandboard Quad      4                57.1758s    228.6204

Now I will confess, I always have to take a moment and reread the definition of “total time taken by event execution”. When the number of threads equals 1, total time and total time taken by event execution should be roughly the same. For multicore systems one should utilize all of the cores by specifying the number of threads to be the same as the number of cores; so for the X15 we use 2 threads, and likewise for the Wandboard Quad we use 4. The results are interesting in that the X15 doesn’t really perform much better than the Wandboard Quad for this benchmark when all cores are utilized. Put one X15 core head-to-head with one Wandboard Quad core, however, and you can see that the Cortex-A15 performs much better than the Cortex-A9.

This is, of course, all academic and doesn’t represent a real-world test. Next up, let’s look at compiling with the X15!

Compiling with the X15

If you’ve ever tried cross-compiling for an ARM-based system, you know first hand it’s less than ideal: obtaining a cross-compiling toolchain, building it, and ensuring you have all of the right libraries on your host system to compile for the target. Though we’ve been cross-compiling for decades, it is still a pain. So along comes a device like the BeagleBone Black and you think, hey, I could compile my ARM binaries with this! Yes, you could, but you could also get from New York to LA on horseback. The fact is that for compiling, the Raspberry Pi, BeagleBone Black, and other “credit-card sized” computers are woefully inadequate for the task.

When originally faced with having to frequently compile ARM packages, whether they be the Linux kernel or something like OpenCV, I turned to the Wandboard QUAD. Make no mistake, this board is no slouch. With a quadcore Cortex-A9 processor the Wandboard Quad is certainly going to have an easier time compiling than a Raspberry Pi or a BeagleBone Black. In fact, it’s a good reference to benchmark against the X15.

Unfortunately it’s a little difficult to get a side-by-side comparison with the Wandboard Quad because of the differences in IO. For example, let’s say we compiled OpenCV on the Quad with the source on an NFS mount. The Quad has only 100Mb Ethernet, whereas the X15 has 1000Mb; this in and of itself can make a difference. If we compiled the code onto a USB flash drive, the Quad is limited to USB 2.0 vs. the X15’s USB 3.0.

Rather than wring our hands over getting an apples-to-apples comparison, we’re going to use the microSD filesystem on the Quad and write to a microSD on the X15; it should be noted that both cards are Class 10 “high speed” cards.

Our benchmark is going to be straightforward: taking the time of make on both systems. For the Quad we’ll use make -j4, and for the X15 we’ll use make -j2. Neither system is running anything taxing in the background, but we didn’t go and specifically turn off every background daemon; no one does that in the real world, so why contrive the benchmark?
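
The measurement itself is nothing fancy; something along these lines on each board:

time make -j2    # BeagleBoard X15
time make -j4    # Wandboard Quad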

The verdict? The dual-core X15 compiled OpenCV 3.0.0 in 40 minutes compared to around 49 minutes with the Wandboard Quad. Now, this was with compiling on the X15 on the microSD card. I switched the X15 over to using a USB3.0 mass storage drive and compiled OpenCV from scratch in about 36 minutes. As someone who has, in previous lives, been responsible for managing the builds of multiple environments, I can say that every minute you can shave off makes a big difference in your release cycle velocity. Invest in a lot of cores and the fastest disk you can afford, preferably SSD!

What About Other Uses?

Q. How’s the performance as a desktop environment? Can I run LibreOffice and Gimp?
A. I have been using the BeagleBoard X15 as a headless build server, and will likely continue to do so.

Q. What about running Android?
A. I know some folks use these higher-end ARM boards for Android development. I haven’t done enough Android development to write with any authority on the X15’s suitability. I have to believe, though, that given the horsepower the 1.5GHz Cortex-A15 brings, it should be a great platform for Android development.

Q. What about the PRUs and DSPs?
A. Confession time: I’ve never used the programmable realtime units (PRUs) on the BeagleBone Black, and I haven’t had any need for DSP processing in my applications. However if you do have a need, the X15 has you covered with:

  • 4 PRUs
  • 2 TI C66x DSPs

With these capabilities the X15 is sure to find its way into automation controls, signal acquisition, audiovisual compression and processing applications, and anything else that benefits from these subsystems.

It’s Getting Toasty in Here

Due to the laws of physics there’s a price to pay for the extra power packed into the X15: heat. This board gets hot. In fact, the processor itself sits underneath a heat sink, and the board was designed to accommodate a fan.

If you install the lm-sensors package you can run sensors and view several thermal management attributes:

  • the status of the fan
  • the temperature as reported by the OMAP I2C temperature sensor
  • the temperature as reported by the DSPEVE and IVA thermal zones
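
Installing and running it is just (package and command names as found on Debian and Ubuntu):

sudo apt-get install lm-sensors
sensors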

Without a fan of some sort the I2C temperature sensor hovers around 57-60 C while the CPU is, for the most part, idle. However, once you crank up the workload it can quickly soar! In this graph the CPU was idle and then I built OpenCV utilizing make -j2:

CPU temperature: idle, then building OpenCV with make -j2

Once the CPU reached 75 C I hit the switch on a little desk fan, which cooled it down to around 35 C even under 100% CPU utilization.

It remains to be seen what types of enclosures will be developed for the X15. If you’re using it on your desk as a build machine there’s probably no need for one. But when using this board let’s just say: don’t touch the heat sink if you aren’t cooling it off with something.

Final Thoughts

The BeagleBoard X15 is an impressive system that packs a punch in terms of CPU, RAM, and IO peripheral support. Combine that with its additional digital signal processing capabilities and programmable real-time units, and it is sure to find its way into a lot of applications. For myself, I’ve been using it for compiling software packages and couldn’t be happier. Plus, I’m itching to get a Swift on the BeagleBone! post written.


Introducing the Swift Package Manager

Linux Swift 2.2

Update: If you don’t have Swift installed yet, head on over to our Ubuntu Packages page and get the latest with apt-get install.

In today’s world any serious programming language is going to come with a package manager, an application designed to aid managing the distribution and installation of software “packages”. Ruby has a de jure package management system with the rubygems application, formally added to the language in Ruby 1.9. Python has various competing package management systems with pip, easy_install, and others. NodeJS applications and libraries are delivered via npm.

As a part of the open source release of Swift, Apple has released a Swift Package Manager designed for “managing distribution of source code, aimed at making it easy to share your code and reuse others’ code.” This is a bit of an understatement of what the Swift Package Manager can do, in that its real power comes in managing the compilation and link steps of a Swift application.

In our previous tutorial we explored how to build up a Swift application that relied on Glibc functions, linking against libcurl routines with a bridging header, and linking against a C function we compiled into an object file. You can see from the Makefile, there are a lot of steps and commands involved!

In this post we’re going to replace all that with a succinct and clean Swift Package Manager Package.swift manifest and simplify our code while we’re at it.

Package.swift

Package.swift is the equivalent of npm’s package.json. It is the blueprint and set of instructions from which the Swift Package Manager will build your application. As of this writing (December 8, 2015), Package.swift contains information such as:

  • the name of your application (package)
  • dependencies that your application relies on and where to retrieve them
  • targets to build

Unlike package.json however, Package.swift is Swift code, in the same way an SConstruct file is Python. As the Swift Package Manager evolves to meet the complex use cases of building large software packages it will undoubtedly grow to contain additional metadata and instructions on how to build your package (for example, package.json for NodeJS contains instructions on how to execute your unit tests).

There’s quite a bit of documentation available for the Swift Package Manager on Github, so I won’t rehash it here, but will rather dive right in to a working example of a Package.swift for our translator application.
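
The listing below is a sketch of that manifest using the dependency syntax of these early swiftpm snapshots; the package URLs are placeholders for wherever the CJSONC and CcURL modules are hosted:

import PackageDescription

let package = Package(
  name: "translator",
  dependencies: [
    .Package(url: "https://github.com/iachievedit/CJSONC", majorVersion: 1),
    .Package(url: "https://github.com/iachievedit/CcURL",  majorVersion: 1)
  ]
)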

Each Package.swift file starts off with import PackageDescription. PackageDescription is a new Swift module that comes with the binary distribution of Swift for the Ubuntu system. The class Package is provided, which we utilize on the next line.

Don’t let the “declarative” nature fool you, this is Swift code. package gets assigned a new Package object which we’ve created with the init(name: String? = nil, targets: [Target] = [], dependencies: [Dependency] = []) initializer.

The name attribute is self-explanatory, and dependencies is an array of package dependencies. Our application will rely on two packages, CJSONC (which is a wrapper around libjson-c), and CcURL (a wrapper around, you guessed it, libcurl).

The Swift Package Manager authors have devised an interesting mechanism by which to pull in package dependencies which relies on Git and git tags. We’ll get to that in a moment.

Directory Structure

The Swift Package Manager relies on the convention over configuration paradigm for how to organize your Swift source files. By this we simply mean, if you follow the convention the package manager expects, then you have to do very little with your Package.swift. Notice that we didn’t specify the name of our source files in it. We don’t have to because the package manager will figure it out by looking in expected locations.

In short, the Swift Package Manager is happiest when you organize things like this:

project/Package.swift
       /Sources/sourceFile.swift
       /Sources/...
       /Sources/main.swift

In our Sources directory we will place two files: Translator.swift and main.swift. Note: Our previous tutorial used lowercase filenames, such as translator.swift. This convention is used by NodeJS developers. It appears that the Swift community is going with capitalized filenames.

Translator.swift has changed a bit from our previous version. Here is the new version which leverages system modules rather than trying to link against C object files we created by hand.

Two new import statements have been added for CJSONC and CcURL, and a routine we previously had in C is now in pure Swift. To be sure, under the hood the compile and link system is relying on libraries that were compiled from C source code, but at the binary level it’s all the same.

Now, here is where it gets really simple to build! Type swift build and watch magic happen:

# swift build
Cloning Packages/CJSONC
Cloning Packages/CcURL
Compiling Swift Module 'translator' (2 sources)
Linking Executable:  .build/debug/translator

That’s it! Our binary is placed in .build/debug and takes its name from our Package.swift file. By default a debug build is created; if we want a release build, just add -c release to the command:

# swift build -c release
Compiling Swift Module 'translator' (2 sources)
Linking Executable:  .build/release/translator

Running our application:

# .build/debug/translator "Hello world\!" from en to es
Translation:  ¡Hola, mundo!

System Modules

Let’s talk about the two dependencies listed in our Package.swift manifest. If you go to the Github repository of either of these “packages” you will find very little. Two files, in fact:

  • module.modulemap
  • Package.swift

and the Package.swift file is actually empty!

The format of the module.modulemap file and its purpose is described in the System Modules section of the Swift Package Manager documentation. Let’s take a look at the CJSONC one:

module CJSONC [system] {
  header "/usr/include/json-c/json.h"
  link "json-c"
  export *
}

All this file does is map a native C library and headers to a Swift module. In short, if you create a modulemap file you can begin importing functions from all manner of libraries on your Linux system. We’ve created a modulemap for json-c which is installed via apt-get on an Ubuntu system.

The authors of the Swift Package Manager, in the System Modules documentation state:

The convention we hope the community will adopt is to prefix such modules with C and to camelcase the modules as per Swift module name conventions. Then the community is free to name another module simply JPEG which contains more “Swifty” function wrappers around the raw C interface.

Interpretation: if you’re providing a straight-up modulemap file and exposing C functions, name the module CPACKAGE. If at a later date you write a Swift API that uses CPACKAGE underneath, you can call that module PACKAGE. Thus when you see CJSONC and CcURL above you know that you’re dealing with direct C routines.

Creating a System Module

There are several examples of creating system modules in the documentation, but we’ll add one more. Creating a system module is broken down into 3 steps:

  • Naming the module
  • Creating the module.modulemap file
  • Versioning the module

In this directory (CDB) add module.modulemap with the following contents:
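
Assuming CDB wraps the Berkeley DB C library (the libdb-dev package on Ubuntu), the modulemap follows the same pattern as the CJSONC one above:

module CDB [system] {
  header "/usr/include/db.h"
  link "db"
  export *
}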

Package dependencies in Package.swift are specified with URLs and version numbers. Version.swift lays out the current versioning scheme of major.minor.patch. We need a mechanism by which to version our system module, and the Swift Package Manager authors have developed a scheme by which you can use git tags.

Now, I’m not sure if git tags will be the only way to specify the version of your package; it does have the downside of tying one to using git for source control of your Swift code.

In our CDB directory:

git init # Initialize a git repository
git add . # Add all of the files in our directory
git commit -m"Initial Version" # Commit
[master (root-commit) d756512] Initial Version
 2 files changed, 5 insertions(+)
 create mode 100644 Package.swift
 create mode 100644 module.modulemap

And the crucial step:

git tag 1.0.0 # Tag our first version

Now we want to use our new module. In a separate directory named use-CDB, adjacent to our CDB directory, create a Package.swift file:
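
A minimal manifest for this looks something like the following; the url points at the sibling CDB directory, and majorVersion matches the 1.0.0 tag we just created:

import PackageDescription

let package = Package(
  name: "use-CDB",
  dependencies: [
    .Package(url: "../CDB", majorVersion: 1)
  ]
)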

It’s important to note here your directory structure should look like this:

    CDB/module.modulemap
       /Package.swift
use-CDB/Package.swift

In use-CDB run swift build:

# swift build
Cloning Packages/CDB

What swift build has done here is read the package descriptor and “pulled in” the dependency on the CDB package. It so happens that this package is in your local filesystem vs. on a hosted Git repository like Github or BitBucket. The majorVersion is the first component of your git tag.

Now let’s say you made an error and needed to change up module.modulemap. You edit the file, commit it, and then run swift build again. Unless you retag you will not pick up these changes! Versioning in action. Either retag 1.0.0 with git tag -f 1.0.0 (-f is for force), or bump your version number with a patch level, like git tag 1.0.1.

To use our new system module we write a quick main.swift in use-CDB:
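
The original listing isn’t shown here, but a minimal sketch (assuming the Berkeley DB mapping above, and using the myDatabase pointer referenced below) is simply:

import CDB

// db_create comes straight from the header mapped by the CDB module
var myDatabase: UnsafeMutablePointer<DB> = nil
let status = db_create(&myDatabase, nil, 0)
print("db_create returned \(status)")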

Use swift build and it will pull in our CDB module for us to use. The next step is to figure out how to use the myDatabase pointer to open the database!

Closing Remarks

It has been less than a week since Apple put Swift and the new package manager out on Github. It’s under heavy churn right now and will undoubtedly rapidly gain new features as time goes on, but it is a great start to being able to quickly build Swift applications on a Linux system!

Getting the Code

You can get the new version of our translator application which uses the Swift Package Manager on Github.

# git clone https://github.com/iachievedit/moreswift
# cd translator_swiftpm
# swift build
Cloning Packages/CJSONC
Cloning Packages/CcURL
Compiling Swift Module 'translator' (2 sources)
Linking Executable:  .build/debug/translator


More Swift on Linux

Editor’s Note: This article was written on December 6, 2015, days after Apple open-sourced Swift and made an Ubuntu distribution of the Swift compiler available. All of the techniques used below should be forward compatible, however, there may be easier ways of doing things in the future as the Foundation classes are implemented. Apple has posted a status page that outlines what works and what doesn’t.

Using Glibc Routines with Swift

As I mentioned in Swift on Linux!, the Foundation classes that Objective-C and Swift developers have come to know and love are only partially implemented. And by partially implemented I really mean hardly implemented. Okay, NSError is there and a few others, but no NSURL, NSURLSession, etc.

What is there is the wealth of routines from the GNU C Library, also known as Glibc. You know, the library of routines you’d look up with a man page. Functions like popen and fgets, getcwd and qsort. Swift won’t be displacing Python any time soon if this is all we’re left to work with, but you can do something useful and begin exploring the possibilities of intermixing C with Swift. In this tutorial we’ll do exactly that and write up some Swift code that uses popen to spawn wget to make up for the lack of NSURLSession.

So let’s get stuck in and write some Swift.

Swift cat

Create a file named swiftcat.swift and add the following code:
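
The original file is in the Github repository linked at the end of this post; a reconstruction in the Swift 2.2 syntax of the day looks like this:

import Glibc

// Make sure we were given a file to display
guard Process.arguments.count == 2 else {
  print("Usage:  swiftcat FILENAME")
  exit(-1)
}

let filename = Process.arguments[1]

// Lean on /bin/cat to read the file and pipe it back to us
let stream = popen("/bin/cat \(filename)", "r")

var buffer = [CChar](count: 1024, repeatedValue: 0)
while fgets(&buffer, Int32(buffer.count), stream) != nil {
  if let line = String.fromCString(buffer) {
    print(line, terminator: "")
  }
}

pclose(stream)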

To get access to all of the Glibc routines we use import Glibc. Easy enough. Swift 2 brought us the guard construct, so we’ll use that to ensure that we have an argument to our script. Our first exposure to using a Glibc function is exit(-1). That’s right, nothing special about calling it, it is just the void exit(int status) function.

We’re going to cheat a bit and leverage the /bin/cat command to read the file and write to standard out. To call it though we’ll use popen which will pipe us a stream of bytes that we can read with fgets. There is one thing to notice here, and that is that Glibc routines which take const char* arguments can be given Swift Strings directly. Routines that take char*, as in the case of fgets require some finesse.

fgets does take a char*, so we cannot pass it a String, but rather will use a buffer allocated as a [CChar] (C char) array. The array has a fixed size of 1024 and is initialized with zeroes. Our while loop calls fgets with the stream pointer, and non-nil results contain a buffer from which we can create a Swift String.

Go ahead and save this to a file called swiftcat.swift and then run it!

# swift swiftcat.swift 
Usage:  swiftcat FILENAME

Pass it a file to get the equivalent of cat output!

Mixing in C

You aren’t limited to using Glibc routines with your Swift code. Let’s say we want to use libcurl to escape some strings and get them ready to be included in a URL. This is easy to do with libcurl.

In a file called escapetext.c put the following:
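
A reconstruction of that file, wrapping libcurl’s curl_easy_escape, looks like this (the escapeText function name is what we will call from Swift later):

#include <stdio.h>
#include <curl/curl.h>

/* Escape a string for use in a URL using libcurl */
char *escapeText(const char *text) {
  CURL *curl    = curl_easy_init();
  char *escaped = curl_easy_escape(curl, text, 0);
  curl_easy_cleanup(curl);
  return escaped;
}

#ifdef __TEST__
int main(int argc, char **argv) {
  if (argc != 2) {
    fprintf(stderr, "Usage:  escapetext TEXT\n");
    return -1;
  }
  printf("Escaped text:  %s\n", escapeText(argv[1]));
  return 0;
}
#endif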

Make sure you have libcurl installed with apt-get install -y libcurl4-gnutls-dev.

Now, compile the file with:

clang -D__TEST__ -o escapetext escapetext.c -lcurl

We include the -D__TEST__ here to pick up the main function. In a minute I’ll show you how to take this routine and include it in a Swift application. Run the C application:

# ./escapetext "hey there\!"
Escaped text:  hey%20there%21

Easy enough. Now, we want to write a Swift application that uses our C routine escapeText. The first thing to do is compile an escapetext.o object file without the -D__TEST__ flag set. This will get rid of main().

clang -c escapetext.c

Now, create a file called escapetext.h and put the function prototype in it.
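
escapetext.h needs nothing more than the prototype:

char *escapeText(const char *text);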

Write a new file called escapeswift.swift and add the following:
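
Again, the original is on Github; a sketch of it is short:

import Glibc

guard Process.arguments.count == 2 else {
  print("Usage:  escapeswift TEXT")
  exit(-1)
}

// escapeText comes from escapetext.o by way of the header we just wrote
let escaped = escapeText(Process.arguments[1])
if let escapedText = String.fromCString(escaped) {
  print("Escaped text:  \(escapedText)")
}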

Compile this Swift code with:

swiftc -c escapeswift.swift -import-objc-header escapetext.h 

Notice that we included -import-objc-header escapetext.h. Without this header the Swift compiler won’t be able to find the prototype for escapeText and will subsequently fail with use of unresolved identifier.

Bringing it all together, we link our escapeswift.o and escapetext.o objects and pass in the Curl library.

swiftc escapeswift.o escapetext.o -o escapeswift -lcurl

And run it!

# ./escapeswift "how now brown cow"
Escaped text:  how%20now%20brown%20cow

Translator Application

This is a more complex example, but the principles are the same as those outlined above. We’re going to mix C objects and Swift modules together to write a command line application that translates strings from one language to another.

The REST API we’ll be using to do the actual translation returns results in JSON. Since NSJSONSerialization isn’t yet available in Foundation on Linux, we’ll use the libjson-c-dev library, so install it with apt-get install libjson-c-dev.

jsonparse

Two files make up our JSON-parsing routine, jsonparse.c and its companion header jsonparse.h.

jsonparse.c:

jsonparse.h:

We can easily compile this file with clang -c jsonparse.c.

Translator module

The workhorse of the translator application will be a Swift module called translator. To create this module and prepare it for inclusion with the rest of our project, start with the class file translator.swift:

Take a moment to read through the code. We’re including direct calls to the Curl library here, as well as popen and fgets, and our translatedText routine that is compiled into an object file created by clang.

In addition, create a bridgingHeader.h with the contents:

There are two steps to getting this ready to use in our application:

  • Create a shared library with the translator routine
  • Create a swiftmodule that describes the interface

I will confess, I didn’t understand this until I read on Stackoverflow:

The .swiftmodule describes the Swift module’s interface but it does not contain the module’s implementation. A library or set of object files is still required to link your application against.

First, compile the code into a .o and create a shared library:

swiftc -emit-library translator.swift -module-name translator -import-objc-header bridgingHeader.h
clang -shared -o libtranslator.so translator.o

Now, create the module:

swiftc -emit-module -module-name translator translator.swift -import-objc-header bridgingHeader.h

This leaves us with three files: libtranslator.so, translator.swiftmodule, and translator.swiftdoc.

Main Routine

Our main file, main.swift looks like this:

Again, we’ve made use of Foundation and Glibc, but we’re also using import translator. You must have a translator.swiftmodule in your module search path, which we add with -I.:

swiftc -I. -c main.swift -import-objc-header bridgingHeader.h

Let’s link everything together:

swiftc -o translate.exe jsonparse.o main.o -L. -ltranslator -lcurl -ljson-c -lswiftGlibc -lFoundation

The resulting binary is translate.exe because we intend to wrap a helper script around it to set the LD_LIBRARY_PATH to find the libtranslator.so shared library. Without the helper script (or using ldconfig to update the search path), you need to invoke the executable like this:

LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./translate.exe "Hello world\!" from en to es
Translation:  ¡Hola, mundo!

Let’s try Irish:

LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./translate.exe "Hello world\!" from en to ga
Translation:  Dia duit

Makefile

It’s not clear how “interpreter” friendly Swift will become. Yes, one can create a single monolithic Swift script right now and run it with swift. In fact we did that above. Using bits of code from other Swift files though, without specifying everything on the command line, remains, well, impossible. Maybe I’m wrong and just haven’t figured out the magic incantation to have Swift greedily open up files searching for code to run.

At any rate, our translator application above needs a little help to build. There is a new Swift builder tool, but I found make could get the job done with some appropriate rules:
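
Every command in such a Makefile is one we have already run by hand above; a sketch of the rules (remember that make recipes must be indented with tabs) looks like this:

all: translate.exe

jsonparse.o: jsonparse.c
	clang -c jsonparse.c

libtranslator.so: translator.swift bridgingHeader.h
	swiftc -emit-library translator.swift -module-name translator -import-objc-header bridgingHeader.h
	swiftc -emit-module -module-name translator translator.swift -import-objc-header bridgingHeader.h
	clang -shared -o libtranslator.so translator.o

main.o: main.swift libtranslator.so
	swiftc -I. -c main.swift -import-objc-header bridgingHeader.h

translate.exe: main.o jsonparse.o libtranslator.so
	swiftc -o translate.exe jsonparse.o main.o -L. -ltranslator -lcurl -ljson-c -lswiftGlibc -lFoundation

clean:
	rm -f *.o *.so *.swiftmodule *.swiftdoc translate.exe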

Getting the Code

You can get all of the code above from Github.

git clone https://github.com/iachievedit/moreswift

The swiftcat code is meant to be run with the swift command, whereas escapetext has a simple build.sh script, and translate has a full-on Makefile.

If you’ve enjoyed this tutorial please follow us on Twitter at @iachievedit! There will be more to come as Swift on Linux matures.


Swift on Linux!

Update! If you have the Swift compiler on your Linux box already, head on over to our tutorial on building Swift applications to run on Linux. If you don’t have one, come and grab it with apt-get from here.

On December 3, 2015 Apple made good on its promise to open source Swift, their new language for development on Apple platforms. Swift has quickly risen in popularity and folks are developing Mac, iOS, WatchOS, and now tvOS apps with it.

While I enjoy developing in the Apple ecosystem and writing iOS apps, my first love will always be writing software for Unix systems (please don’t post a comment reminding me that OS X is a Unix). So while I’m excited about the fact that Swift is now open source, I’m more excited about the ramifications of that fact, primarily inroads Swift can make into the server room, operations environments, and IoT devices.

Swift on Ubuntu

Apple has made all of the Swift source code available on Github, and even included instructions for building it on Ubuntu 14.04 LTS. I tried it, and while I did get it to compile, it isn’t quite polished yet in terms of “easy installation.” Fortunately, if you search around you will eventually find this page for easier installation.

So, here we go, the iAchieved.it quick start guide for Swift on Linux, though it’s fairer to call it Swift on Ubuntu for now.

Step 1. Get the Package

Go to the download page and grab the version appropriate for your Ubuntu version, either 14.04 or 15.10. I’m going to get the 14.04 version and save it to my /usr/local/archive directory where I keep tarballs.

sudo mkdir /usr/local/archive
cd /usr/local/archive
sudo wget https://swift.org/builds/ubuntu1404/swift-2.2-SNAPSHOT-2015-12-01-b/swift-2.2-SNAPSHOT-2015-12-01-b-ubuntu14.04.tar.gz

December 13 Update: Copy/paste is great, but there are new builds coming out every other week from Apple. Double-check the latest development snapshots before your wget.

Step 2. Unpack

Just as /usr/local/archive is a convention I use, I am also picky about where vendor software is installed and typically use the /opt/provider convention, which I’ll do here.

cd /opt
sudo mkdir apple
cd apple
sudo tar -xzf /usr/local/archive/swift-2.2-SNAPSHOT-2015-12-01-b-ubuntu14.04.tar.gz

This will leave us with a directory named swift-2.2-SNAPSHOT-2015-12-01-b-ubuntu14.04 in /opt/apple.

Step 3. Add it to your PATH

The directory structure Apple chose within is a typical usr structure with the Swift compiler in usr/bin. Let’s update our path now to pick up the compiler. In your shell’s rc-file, update your PATH environment variable to include /opt/apple/swift-2.2-SNAPSHOT-2015-12-01-b-ubuntu14.04/usr/bin, like this:
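
# in ~/.bashrc, ~/.zshrc, or your shell's equivalent
export PATH=/opt/apple/swift-2.2-SNAPSHOT-2015-12-01-b-ubuntu14.04/usr/bin:$PATH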

Reload your rc file (I use . ~/.zshrc) and type which swift. You should see something like this:

# which swift
/opt/apple/swift-2.2-SNAPSHOT-2015-12-01-b-ubuntu14.04/usr/bin/swift

Step 4. Optional! Emacs Goodness

When developing on the Mac I like Xcode. On a Linux box, emacs is the only editor for me. Since there is swift-mode available out there I naturally wanted to use it, but it turns out it only supports Emacs 24.4 and higher, so we need to install a newer Emacs on Trusty (14.04). Luckily someone out there has a PPA available for it:

Edit your ~/.emacs.d/init.el Emacs configuration file and add something like
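
The standard MELPA incantation works here:

(require 'package)
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)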

Launch emacs and use M-x package-refresh-contents to pull in the packages from the MELPA repository, and then M-x package-install to install the swift-mode package.

Step 5. Try it out!

We’re not going to use the REPL function right now, but will compile everything together. First, you may need to install some dependencies if you don’t already have them:

sudo apt-get install clang libicu-dev

Warning! After an hour or two browsing around I discovered that the current version of Foundation released with Swift for Linux doesn’t have a lot of items implemented. For example, the status page currently states (as of December 5, 2015) that NSURLSession isn’t implemented and that NSURL is “mostly” implemented. Mostly doesn’t include using init(URL:) apparently!

fatal error: init(URL:) is not yet implemented: file Foundation/NSObjCRuntime.swift, line 64

If you take a look at the new Swift Package Manager being developed, you can see in the source code where low-level code for routines like popen is being developed to communicate with Github for downloading repositories. Even NSTask isn’t implemented!

Until these Foundation classes are implemented it might be worth waiting a bit before trying to really do much with Swift on Linux. Our sample code just shows a basic example of using main.swift and a class file to calculate the area of a rectangle. Boring stuff!

In a file called main.swift (this name is important as it defines to Swift the entry point of your application, like main() in C):
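
The original listing is on Github; it amounts to something like this, with the width and height chosen to match the output below:

let rectangle = Rectangle(width: 5, height: 6)
print("The area of the rectangle is \(rectangle.area()).")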

Now in a file called rectangle.swift:
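
And a sketch of the Rectangle class itself:

class Rectangle {

  let width:  Int
  let height: Int

  init(width: Int, height: Int) {
    self.width  = width
    self.height = height
  }

  func area() -> Int {
    return width * height
  }
}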


Compile it with the swiftc compiler, which should now be in your path.

swiftc main.swift rectangle.swift

And of course, run it!

# ./main
The area of the rectangle is 30.

Conclusion

There’s not a lot one can do right now with Swift on a Linux box. This will rapidly change as Foundation gets built out and others come in and begin developing packages. One should continue to monitor the download page to keep track of updates and keep an eye on the status of Foundation. As classes such as NSTask and NSURLSession are added we should see folks exploring using Swift as a “server language”. For me, I can’t wait!


Exploring NetworkManager, D-Bus, systemd, and Raspberry Pi

I had to create a new category named Hacking for this post. The end result is a Raspberry Pi outfitted with LEDs that will inform me of which network interfaces are activated. If you want to actually recreate what I’ve done you’re going to need a Raspberry Pi running Raspbian Jessie, a USB WiFi adapter, and a couple of LEDs (plus jumper wires) to hang off the GPIO pins.

A few days ago I was researching on Google and started off with this question: How do I get notified if a Linux network connection goes down? The top result on StackOverflow led me down the path of learning about technologies that, quite frankly, I hadn’t explored before: NetworkManager, D-Bus, and systemd.

Call me old-school, but I cut my teeth on MkLinux and RedHat before there was a distribution called Fedora. /etc/sysconfig/network-scripts and /etc/init.d/network were the only game in town, and I simply hadn’t kept up with the latest in Linux-land.

Getting Started

You’re going to need a Raspberry Pi with Debian Jessie (or rather, Raspbian, which is based upon Jessie). Head on over to the Raspbian downloads page and grab the release based upon Jessie. Hopefully this isn’t your first Raspberry rodeo, but if it is: once the .zip file is downloaded, go ahead and unzip 2015-09-24-raspbian-jessie.img.zip, insert an SD card into your computer, determine what device it was associated with (hint: dmesg), and then:
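
# replace /dev/sdX with the device dmesg reported for your SD card
sudo dd if=2015-09-24-raspbian-jessie.img of=/dev/sdX bs=4M
sync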

Once the dd is complete, insert the SD card into your Raspberry Pi SD slot, connect it to your home router via Ethernet, and power it on. It should be visible shortly via Avahi and you can log in with:
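
ssh pi@raspberrypi.local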

As usual, the default password for the pi user is raspberry. Verify that you are indeed running Raspbian Jessie:
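
One way is to check /etc/os-release:

cat /etc/os-release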

From here on I’m going to drop to the root user, so if a command fails due to lack of permissions, either sudo su or prepend the command with sudo. If you balk at running as root, see this post explaining why you’re a wimp.
Before proceeding, you might want to also run the following on your Pi:
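
# the standard update/upgrade pair (reconstructed)
apt-get update
apt-get upgrade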

to get the latest and greatest packages for Raspbian.

Installing NetworkManager

We want to manage our network interfaces with NetworkManager, so we need to install it. Now, the question is: why do we want to manage our network interface with NetworkManager? The answer is to take advantage of the messages NetworkManager places on the D-Bus when the status of “network” changes. After working with it a bit, I like to think of D-Bus as the Linux equivalent of the iOS and OS X NSNotificationCenter. When I want to listen for broadcasted events I subscribe to be notified when said events occur.
Let’s install NetworkManager then with:
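
apt-get install network-manager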

While NetworkManager is installing, watch the console output. You’ll see at some point:

Now, this information is key. We are going to go into /etc/network/interfaces and remove a lot of information. Before that, let’s read a bit about how NetworkManager determines whether or not it (rather than something else) is going to manage interfaces.
Okay, three things to do to enable NetworkManager for our interfaces:

  • remove interface entries from /etc/network/interfaces
  • set managed=true in /etc/NetworkManager/NetworkManager.conf
  • /etc/init.d/network-manager restart

After this our files look like:
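
# /etc/network/interfaces -- roughly, only the loopback entries remain
auto lo
iface lo inet loopback

# /etc/NetworkManager/NetworkManager.conf
[main]
plugins=ifupdown,keyfile

[ifupdown]
managed=true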

And then:
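
/etc/init.d/network-manager restart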

You will see in the systemd logs that NetworkManager is struggling with wlan0. We’ll fix that momentarily, but first, take a look at your interfaces with nmcli (NetworkManager CLI):
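
nmcli dev status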

So far so good. Our Ethernet device is up and connected (if it weren’t, we would have been knocked out of our ssh session).

Configuring WiFi

I will confess that this next part took a little trial and error, but hey, that’s what hacking is all about. Without ever having been exposed to NetworkManager one expects to fiddle with the parameters a bit to make things work. Here we go:
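
# the connection name "WiFi" is arbitrary; the syntax here follows the nmcli of the era
nmcli con add type wifi con-name WiFi ifname wlan0 ssid MYSSID
nmcli con modify WiFi wifi-sec.key-mgmt wpa-psk wifi-sec.psk MYPSK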

Of course, replace MYSSID and MYPSK with your own SSID and pre-shared key.

All of this was sorted out by reading the RedHat documentation.
Before enabling the connection, kill any wpa_supplicant process that may have been hanging out.
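
pkill wpa_supplicant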

and then start the connection with nmcli:
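
nmcli con up WiFi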

Using nmcli dev status once more:

Very handy indeed.

D-Bus Messages and python-networkmanager

Okay, now we’re getting somewhere! Another new command. On your Pi, run busctl monitor org.freedesktop.NetworkManager. You might see something like:

Now, this is neat! It looks like every time the signal strength of the WiFi connection changes there’s a message on the D-Bus. What happens if you pull the WiFi adapter out altogether? Two signals get emitted: AccessPointRemoved and DeviceRemoved, along with additional signals on other paths.

Now, this isn’t necessarily a D-Bus tutorial or complete NetworkManager tutorial, but, you might want to read up on the NetworkManager D-Bus API here.

Recall that the original question that started me down this hacking journey was how to get notified about changes in network availability. Well, reading D-Bus messages from NetworkManager is the answer, but before we tie everything together we’ll install a Python library that is going to make this much easier. That library is python-networkmanager.

python-networkmanager documentation is a little sparse, but much of the functionality can be gleaned from the examples.

Signals We’re Interested In

After a little trial and error I determined that we’re interested in two NetworkManager signals:

  • DeviceAdded
  • DeviceRemoved

and we’re interested in the StatusChanged signal for a given device.

This is important! The top-level StatusChanged signal for the NetworkManager is the overall status, and we are interested specifically in status changes for each device.

With python-networkmanager we code this as follows:
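
The full script is linked a little further down as watchnet.py; a sketch of the setup portion, with signal and property names as python-networkmanager exposes them, looks roughly like this:

import NetworkManager   # from the python-networkmanager package

Devices = {}

# Stubs here; the real callbacks in watchnet.py update Devices and drive the LEDs
def device_added(*args):   pass
def device_removed(*args): pass
def state_changed(*args):  pass
def ethernet(up):          pass   # raise/lower the green LED GPIO pin
def wifi(up):              pass   # raise/lower the yellow LED GPIO pin

# Subscribe to device add/remove notifications from NetworkManager
NetworkManager.NetworkManager.connect_to_signal('DeviceAdded',   device_added)
NetworkManager.NetworkManager.connect_to_signal('DeviceRemoved', device_removed)

# Walk the current devices, subscribing to per-device state changes
for device in NetworkManager.NetworkManager.GetDevices():
    device.connect_to_signal('StateChanged', state_changed)
    if device.State == NetworkManager.NM_DEVICE_STATE_ACTIVATED:
        if device.DeviceType == NetworkManager.NM_DEVICE_TYPE_ETHERNET:
            Devices['Ethernet'] = True
            ethernet(True)
        elif device.DeviceType == NetworkManager.NM_DEVICE_TYPE_WIFI:
            Devices['Wireless'] = True
            wifi(True)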

First, we “connect” (or subscribe) to the DeviceAdded and DeviceRemoved signals. The second argument to connect_to_signal is a callback, which we’ll define later on. Next, we use the GetDevices() method to give us all of the current devices.

For each device we connect to the StateChanged signal. This is how we’ll know whether there was a state change for that specific device. Then, using the python-networkmanager API we get the type of connection (Wired, Wireless, etc.), and determine whether NetworkManager reports the connection as activated (i.e., up and with an address). If all is well we stash this information in our Devices table and call something like ethernet(True) (more on this later).

Now, for a look at our add/remove and state change functions:

Work through this code on your own; hopefully it isn’t too obtuse, but to be fair, none of this will run without filling in the gaps. Like, what does ethernet do? Fear not, the entire code resides in a single file called watchnet.py. Here you will find the ethernet and wifi functions, which simply raise/lower GPIO pins. If you have the GPIO pins connected to LEDs you get a nice visual display of what interface is up/down at the moment. In the first pic both LEDs are lit, thus indicating that both the Ethernet and WiFi connections are up.


Both Network Interfaces Connected

In the second pic I’ve removed the Wireless USB Adapter, and the yellow LED goes out.

WiFi Disconnected

I am using GPIO pins 23 and 24 on the Raspberry Pi and carrying them out to a green and yellow LED. If you’ve never used the Pi to drive LEDs, have a look at this tutorial, but realize I am using the /sys/class/gpio “method” of setting the pins, and my circuit omits the resistors (I like to live dangerously).

Life cycle of the Wired and Wireless Devices

Now, I’m by no means an expert on NetworkManager and I’m sure there may be some additional states lurking in the machine that I haven’t seen, but what I can gather here is what you should expect for a “normal” sequence for both a Wired and Wireless device:

Wired device state sequence

In contrast, a Wireless device has some additional steps to obtain credentials for the connection:

Wireless device state sequence

If you don’t have any LEDs lying around (what kind of hacker are you?!) run the watchnet.py script in the foreground and take note of the logs:

In this example the Wireless USB Adapter was pulled (Device removed, which coincidentally you will not see for pulling an Ethernet cable).

But seriously, it’s more fun with LEDs.

Wrapping It Up

Now, to wrap everything up into a nice “no touch” environment, we turn to writing a systemd unit. In /etc/systemd/system/watchnet.service add the contents:
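
A minimal unit (with watchnet.py living in /home/pi, per the note below) looks something like:

[Unit]
Description=Watch network interfaces and drive status LEDs
After=NetworkManager.service

[Service]
ExecStart=/home/pi/watchnet.py
Restart=always

[Install]
WantedBy=multi-user.target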

Enable the service:
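
systemctl enable watchnet.service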

Of course, make sure watchnet.py is in /home/pi and the execute bit (chmod +x watchnet.py) is set!

Now, for some fun:

  • Pull the power to the Pi
  • Disconnect all of the network interfaces
  • Choose either the Ethernet cable or Wireless USB Adapter and plug it in

The appropriate LED should light up! Plug in the “other” device and its LED will light up as well. So, at a glance, you can see what network interface(s) are connected on your Pi.

Next Time

I’m itching to buy an Adafruit Character LCD for the Pi. Imagine displaying various status messages in text, or changing colors based upon the WiFi signal strength. Next time!


Using Monit with Node.js and PM2

As I’ve said before, I’m a bit of a whore when it comes to learning new languages and development frameworks. So it comes as no surprise to myself that at some point I’d start looking at Node.js and JavaScript.

I have another confession to make. I hate JavaScript, even more so than PHP. Nothing about the language is appealing to me, whether it’s the rules about scoping and the use of var or the bizarre mechanism by which classes are declared (not to mention there are several ways). Semicolons are optional, kind of, but not really. I know plenty of developers who enjoy programming in Objective-C, Python, Ruby, etc.; I have never met anyone who says “Me? I love JavaScript!” Well, perhaps a web UI developer whose only other “language” is CSS or HTML. In fact, a lot of people go out of their way to articulate why JavaScript sucks.

So along comes Node.js, which we can all agree is the new hotness. I’m not sure why it is so appealing. JavaScript on the server! Event-driven programming! Everything is asynchronous and nothing blocks! Okay, great. I didn’t really ask for JavaScript on the server, and event-driven programming is not new. When you develop iOS applications you’re developing in an event-driven environment. Python developers have had the Twisted framework for years. The venerable X system is built upon an event loop. Reading the Node.js hype online one would think event-driven callback execution was invented in the 21st century.

Of course, the Node.js community is also reinventing the wheel in other areas as well. What do the following have in common: brew, apt-get, rpm, gem, easy_install, pip. Every last one is a “package manager” of some sort, aimed at making your life easy, automagically downloading and installing software along with all of its various dependencies onto your system. A new framework is nothing without a package manager that it can call its own, thus the Node.js world gives us the Node Package Manager, or npm. That’s fine. I like to think of myself as a “full-stack developer”, so if I need to learn a new package manager and all of its quirks, so be it.

Unfortunately it didn’t stop there. Node.js has its own collection of application “management” utilities; you know, those helper utilities that aim to provide an “environment” in which to run your application. Apparently Forever was popular for some time until it was displaced by PM2, a “Production process manager for Node.js / io.js applications”

I’m not quite sure when it became en vogue to release version 0 software for production environments, but I suppose it’s all arbitrary (hell, Node.js is what, 0.12?) But true to a version 0 software release, PM2 has given me nothing but fits in creating a system that consistently brings up a Node.js application upon reboot. Particularly in machine-to-machine (M2M) applications this is important; there is frequently no opportunity to ssh into a device that’s on the cellular network and installed out in an oil field tank site. The system must be rock-solid and touch free once it’s installed in the field.

To date the most pernicious bug I’ve come across with PM2 is it completely eating the dump.pm2 file that it ostensibly uses to “resurrect” the environment that was operating. A number of people have reported this issue as well. If I can’t rely on PM2 to consistently restart my Node.js application, I need something to watch the watchers. So who watches the watchers? Monit of course.

Because PM2 refused to cooperate I decided to utilize monit to ensure my Node.js application process was up and running, even after a reboot. In this configuration file example I am checking my process pid (located in the /root/.pm2/pids directory) and then using pm2 start and pm2 stop as the start and stop actions.

NB: Monit executes its scripts with a bare bones environment. If you are ever stumped by why your actions “work on the command line but not with monit”, see this Stack Overflow post. In the case of PM2, it is critical that the PM2_HOME environment variable be set prior to calling pm2.

The first iteration of my monit configuration looked like this:
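
Something along these lines, with myapp standing in for the actual application name:

check process myapp with pidfile /root/.pm2/pids/myapp-0.pid
  start program = "/bin/bash -c 'PM2_HOME=/root/.pm2 pm2 start myapp'"
  stop program  = "/bin/bash -c 'PM2_HOME=/root/.pm2 pm2 stop myapp'"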

If only this were sufficient, but it isn’t.

For some reason PM2 insists on appending a process ID to the pidfile filename (perhaps for clustering where you need a bunch of processes of the same name), so a simple pidfile check won’t suffice. Other folks even went to the Monit lists looking for wildcard pidfile support and quoted PM2 as the reason why they felt they needed it.

So, now our monit configuration takes advantage of the matching directive and looks like this:
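
Again with myapp as a stand-in:

check process myapp matching "myapp"
  start program = "/bin/bash -c 'PM2_HOME=/root/.pm2 pm2 start myapp'"
  stop program  = "/bin/bash -c 'PM2_HOME=/root/.pm2 pm2 stop myapp'"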

Granted, we should not be running as root here. Future iterations will move the applications to a non-privileged user, but for now this gives us a system that successfully restarts our Node.js applications after a reboot. PM2 is a promising tool and definitely takes care of a lot of mundane tasks we need to accomplish to daemonize an application; unfortunately it is a little rough around the edges when it comes to consistent actions surrounding the ability to survive system restarts. Don’t take my word for it: read the Github issues.

Conclusion

A rolling stone gathers no moss. The more things stay the same, the more things change (or is it the other way around?). I have nothing against new frameworks, but there are times when being an early adopter requires one to pull out a tried-and-true application to get the job done. In this case our old friend Monit helps us fill in the gaps while Node.js and PM2 mature.


Exploring .img Files on Linux

Editor’s note: This tutorial is written using Ubuntu Linux. If you are on a different platform you may need to replace commands such as apt-get with your distribution’s equivalent.

What’s In Those .img Files?

Have you ever found yourself with an .img file blindly following the tutorial on “flashing” it onto an SD card? Did you know you can create your own disk image files for distributions on thumb drives, SD cards, etc. with minimal effort? This tutorial aims to show you how to:

  • Discover the hidden secrets of a monolithic .img file
  • Mount the partitions in an .img file using losetup, kpartx and mount
  • Create your own .img files and use them as virtual disks
  • Write out your virtual disk image to a thumb drive (or any drive for that matter) for use later

Let’s take a look at the BeagleBone Black Debian distribution file. You can download it with this link.

Once it’s downloaded you will want to uncompress it with xz --decompress:

If you don’t have xz installed you can obtain it through apt-get install xz-utils.

Once the image is decompressed many tutorials would have you stop there and go straight to using dd to do a byte-by-byte copy onto an SD card, insert into the BeagleBone Black, and boot. We want to examine it a bit further first.

First, we’re going to attach the image file to what is known as a loopback device. The Wikipedia entry introduction is succinct: “In Unix-like operating systems, a loop device is a pseudo-device that makes a file accessible as a block device.” In short, a loop device allows us to present the image file to the operating system as if it were an actual “disk”. To set up the loop device:
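
losetup /dev/loop0 bone-debian.img   # substitute the decompressed image filename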

We used /dev/loop0 in this example. If /dev/loop0 wasn’t available to us (that is, it was already in use), we could have chosen /dev/loop1, etc. To get a list of the loopback devices that are already in use, one can use losetup -a:

Now /dev/loop0 is attached. What can we do? How about look at the partition table with fdisk?
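
fdisk -l /dev/loop0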

Whoa! From the output of fdisk we can infer a few things, namely that:

  • There is a partition table present
  • There are two partitions: one with a FAT16 filesystem and another with a “Linux” filesystem

We want some more information about the filesystems on the partitions, so we’re going to utilize the utility kpartx, which according to the man page will Create device maps from partition tables. If you don’t have kpartx on your system, it can be installed with apt-get install kpartx.

To see what kpartx would map, run it with the -l option:
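
kpartx -l /dev/loop0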

Let’s go ahead and run it and add the maps:
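
kpartx -a /dev/loop0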

Don’t go looking for loop0p1 in /dev, but rather you should look in /dev/mapper. kpartx will assign the partitions it found and enumerate the partitions in /dev/mapper with the pattern loop0pX where X is the partition number assigned.

Now that the partitions are mapped, let’s examine the filesystems on each partition with file and the --special-files and --dereference options.
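
file --special-files --dereference /dev/mapper/loop0p1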

From the file manpage: Specifying the -s option causes file to also read argument files which are block or character special files. This is useful for determining the filesystem types of the data in raw disk partitions, which are block special files.

Let’s look at the second partition, which the only thing we know thus far is that it is a “Linux filesystem”.

The second partition is clearly formatted with an ext4 filesystem.

Now that we have our partitions mapped, we can mount them. Create two directories to serve as mountpoints:
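
mkdir -p /mnt/boot /mnt/rootfs   # mountpoint names here are just examples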

Once they are created, mount the filesystems.
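
mount /dev/mapper/loop0p1 /mnt/boot
mount /dev/mapper/loop0p2 /mnt/rootfs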

It should be quite clear why we chose these commands to mount the filesystems. Once they are mounted you can cd into them and look around, change things, etc.

Once you are done and want to “let go” of the .img file, reverse the process with:
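
umount /mnt/boot /mnt/rootfs
kpartx -d /dev/loop0
losetup -d /dev/loop0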

Creating Your Own .img Files

Creating your own .img files is quite easy. Let’s say we want to distribute a 2G USB stick that is partitioned with two filesystems: one an ext4 filesystem and the other a btrfs filesystem. On each we’ll store the latest (as of this writing) kernel source code: linux-4.2-rc5.

First, we’ll use the dd utility and the /dev/zero device to create a monolithic 2G file. We’ll use the SI definition for 2 gigabytes, which is 2000000000 bytes. The command below instructs dd to use /dev/zero as the input file and linuxsource.img as the output file; bs is the block size and count is the number of blocks to write. We could have used a block size of 1, but dd is much slower that way (an inordinate number of minutes compared to 10 seconds with a block size of 1000).
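
dd if=/dev/zero of=linuxsource.img bs=1000 count=2000000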

Warning: It is a time-honored tradition to remind you to treat dd with respect, akin to issuing commands as root. Misused or supplied with the wrong arguments, dd can seriously ruin your day.

Now we have a raw file, 2G in length, full of zeroes. What do we do with it?

The answer is to present it to the OS as a loop device, and then use fdisk to create a partition table. We effectively created a blank disk drive with nothing on it, so the first step is to put a partition table on the disk.
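
losetup /dev/loop0 linuxsource.img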

Now we use fdisk:
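
fdisk /dev/loop0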

Surprise! There’s no partition table on the “disk”. We have to create one and create our partitions. Type n at the Command prompt. n is for new partition.

Hit ENTER to accept the default of a primary partition. Hit ENTER again to accept the default of partition number 1. Hit ENTER again to accept the default of 2048 as the first sector.

Now we will enter +900M as our “last sector”, which is actually applying a size of 900MB for the partition. Here is what the full sequence looks like for creating our first partition:

Let’s create a second partition while we are at it, but if you are curious, you can press p here to print out what the partition will look like when it’s written.

Create the second partition by issuing n and following the same steps. Choose defaults for the partition type, partition number, first sector, and use +990M for the last sector. To write out the partition table type w.

Don’t ignore the warning here. You’ve changed the partition table of a disk but the kernel is still referencing the old (i.e., non-existent) partition table. Run partprobe and the kernel will reread all of its attached devices for the partition tables. If you run fdisk -l /dev/loop0 now, you won’t get a nasty comment about no partition table!

Okay, we have our partition table written, but there are no actual filesystems present. We will have to make them with mkfs (make filesystem) utilities. Using kpartx once more we need to map the partitions.
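
kpartx -a /dev/loop0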

We’ll start with our first partition and create an ext4 filesystem, followed by a btrfs filesystem on the second partition.
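
mkfs.ext4 /dev/mapper/loop0p1
mkfs.btrfs /dev/mapper/loop0p2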

Note: If mkfs.btrfs is not available you can install it with apt-get install btrfs-tools.

After these commands you should be able to use file -sL to examine the filesystems present.

Now let’s mount our two filesystems:
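
mkdir ext4 btrfs
mount /dev/mapper/loop0p1 ext4
mount /dev/mapper/loop0p2 btrfs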

If you look in the ext4 directory you should find a directory called lost+found. This is a special directory present on ext-based filesystems where fsck puts recovered files. You will not see this directory present in the btrfs directory.

We’re going to unpack the Linux source tree into both filesystems. Download the source from Kernel.org. We are using 4.2 Release Candidate 5.
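
tar -C ext4 -xf linux-4.2-rc5.tar.xz    # tarball name per the kernel.org download
tar -C btrfs -xf linux-4.2-rc5.tar.xz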

Using df in the respective directory you can see how much space was taken up.

Now that the source has been exploded into the two filesystems, unmount the partitions.
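
umount ext4 btrfs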

Go ahead and instruct kpartx to unmap the partitions as well and remind yourself that /dev/loop0 is still presenting our original image file. You can release it as well.
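
kpartx -d /dev/loop0
losetup -d /dev/loop0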

Writing Out the .img File

Now we come to writing the entire linuxsource.img out to a USB thumbdrive. Once more we will be using dd, and once more we’ll warn you: ensure you are writing out to the USB device. Writing to the wrong device could render your system inoperable and destroy data! I suggest watching /var/log/syslog with tail -f to see where the USB drive was attached. In this case it was attached to /dev/sdh.

Warning: Replace /dev/sdX below with the device your system mounted the USB drive.
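
dd if=linuxsource.img of=/dev/sdX bs=4M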

dd will happily write out every byte in linuxsource.img onto the device presented at /dev/sdh. Once it is done we’re going to have a USB stick with a partition table and two partitions. Once the command completes run fdisk -l /dev/sdX, where X is where your system has the USB drive available.

We can take a look at our partitions and see our filesystems are there!
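
file --special-files --dereference /dev/sdX1 /dev/sdX2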

Mount a partition and verify that our Linux kernel source is intact!
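
mount /dev/sdX2 ext4
ls ext4/linux-4.2-rc5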

Closing Remarks

Linux provides a variety of powerful tools for creating and manipulating disk images and disks. Although a little daunting at first the concepts of devices, partitions, and filesystems are quite simple to grasp. Hopefully this tutorial was helpful in exploring how to leverage these tools to create your own disk images!