RPi - exposing http and ssh to the internet

This is just a quick guide for my friend Gonza describing how SSH tunnelling can let you connect to a Raspberry Pi (or any other box running a Unix-like OS) from anywhere, even if it's behind an internet connection with a dynamic IP. This lets you avoid bothering to set up dyndns or similar. What you need is:

  1. RPi on home network (we'll call this rpi)
  2. remote box running linux, with a known IP address (we'll call this remote)

What we're going to do is establish an SSH connection between rpi and remote that will remain up at all times, and which will re-establish itself without any user intervention. We'll be authenticating using an SSH key, so if you don't already have one run the following (I think the defaults are ok):

    me@rpi~ $ ssh-keygen -t rsa

And upload it to your remote server, and test it works nicely:

    me@rpi~ $ ssh-copy-id me@remote-server.com
    me@rpi~ $ ssh remote-server.com
    me@remote~ $ 

Next we'll make sure the remote has sshd configured so that, as a client connecting via SSH, you can specify the ports involved - so open up /etc/ssh/sshd_config in your favourite editor:

    me@remote~ $ sudo emacs -nw /etc/ssh/sshd_config

And either uncomment or add the following line:

    GatewayPorts clientspecified

Now that we've got rpi able to connect to remote via SSH we'll set up SSH tunnelling between the two. What this means is that we'll nominate some ports on remote which will have any traffic forwarded directly to some given ports on rpi - in this case I'm using:

    type    remote port    rpi port
    ssh     40022          22
    http    40080          8080

This will mean that not only can I SSH to rpi, but there's another port I can use for running, say, some web site or service. To do this we'll use autossh, which is responsible for establishing the connection and keeping it up:

    me@rpi~ $ sudo apt-get install autossh
    me@rpi~ $ sudo autossh -Nf -M 40980 -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i /home/me/.ssh/id_rsa -R remote-server.com:40080:localhost:8080 me@remote-server.com
    me@rpi~ $ sudo autossh -Nf -M 40922 -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i /home/me/.ssh/id_rsa -R remote-server.com:40022:localhost:22 me@remote-server.com

To test this out, we can run a simple server on our rpi using nc:

    me@rpi~ $ while true; do { echo -e "HTTP/1.1 200 OK\r\n"; date ; uname -a ; echo; echo; } | nc -l 8080; done

And we can cURL the IP address of remote, which will forward the request/response between your laptop and rpi - I've used xxx.yyy.zzz.www in place of the actual IP address:

    me@laptop~ $ curl xxx.yyy.zzz.www:40080
    Sun May  1 13:36:14 UTC 2016
    Linux rpi 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux
We can also try connecting via SSH:

    me@laptop~ $ ssh me@xxx.yyy.zzz.www -p 40022
    me@rpi~ $

To get this command to run after we reboot we can muck around with systemd, or we could create a cron job - the latter is easier, so let's do that:

    $ crontab -e

And enter:

    @reboot autossh -Nf -M 40980 -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i /home/me/.ssh/id_rsa -R remote-server.com:40080:localhost:8080 me@remote-server.com
    @reboot autossh -Nf -M 40922 -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i /home/me/.ssh/id_rsa -R remote-server.com:40022:localhost:22 me@remote-server.com

And if you want things to be a little easier you could add the following to your ~/.ssh/config:

    Host rpi
        HostName xxx.yyy.zzz.www
        Port 40022

This lets you connect via SSH without having to remember the IP address and port, so you can connect like so:

    me@laptop ~$ ssh me@rpi
    me@rpi ~$

Update, 2016-09-08: There's actually an even simpler way to do this if you don't have a remote machine and a domain - you can expose a Tor hidden service. The caveat is that you're only able to access it from within the Tor network, which means you won't be able to access it from your iPhone.
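
The setup is roughly this - a sketch assuming tor is already installed on rpi, with the service directory name chosen arbitrarily. The directives go in /etc/tor/torrc:

    HiddenServiceDir /var/lib/tor/rpi-ssh/
    HiddenServicePort 22 127.0.0.1:22

After restarting tor, the hostname file inside that HiddenServiceDir contains the .onion address you can SSH to from a Tor-enabled client.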

Raspberry Pi - Hacking a Canon DSLR shutter release (part 1)

In a previous post I created a simple timed shutter release with an Arduino, and used it to create a couple of time lapse videos. The drawback was that changing the timing was a little tough - you had to update the source code and reflash the Arduino.

Since I had a Raspberry Pi sitting around with similarly simple programmable GPIO pins, I created a simple web frontend for it instead. Again, hooking up the shutter release involved a pretty simple circuit based around an NPN transistor - I selected a random GPIO pin (pin 12):

The program uses the RPi.GPIO package to take the pictures, and the web frontend is a simple Bootstrap interface served by CherryPy. Currently all you can do is start/stop the timer, change the timing, or take an ad-hoc picture.
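
The shutter-triggering part boils down to only a few lines - roughly like this sketch (not the actual RPiEOS code; the pin numbering mode and pulse length are assumptions):

    # pulse the GPIO pin driving the NPN transistor to "press" the shutter release
    import time
    import RPi.GPIO as GPIO

    SHUTTER_PIN = 12  # physical pin 12, as mentioned above

    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(SHUTTER_PIN, GPIO.OUT)

    def take_picture(hold=0.5):
        GPIO.output(SHUTTER_PIN, GPIO.HIGH)  # transistor conducts, closing the shutter-release circuit
        time.sleep(hold)
        GPIO.output(SHUTTER_PIN, GPIO.LOW)

    take_picture()
    GPIO.cleanup()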

I wrote this during winter and tried to capture a bit of snowmelt:

Code: https://github.com/smcl/RPiEOS

Debian - verifying a downloaded ISO image

Part of something I'm doing involved flashing a Debian live image to a USB thumb drive, which is a pretty straightforward process. However this time I realised that I always ignore the little checksum and signature files that sit alongside them and that if I wanted to use them to verify the integrity/security of the image I had no idea how. So here's a quick HOWTO describing the process I followed.

I needed to use a Debian non-free live image since the Thinkpad X250 has an intel wifi device which requires a closed source firmware blob:

    $ wget -c http://cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/8.3.0-live+nonfree/amd64/iso-hybrid/debian-live-8.3.0-amd64-xfce-desktop+nonfree.iso
    $ wget http://cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/8.3.0-live+nonfree/amd64/iso-hybrid/SHA512SUMS.sign
    $ wget http://cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/8.3.0-live+nonfree/amd64/iso-hybrid/SHA512SUMS

Firstly we need to check that the signature of the SHA512SUMS file was created by someone we trust:

    $ gpg2 --verify SHA512SUMS.sign SHA512SUMS
    gpg: Signature made Thu 28 Jan 2016 02:07:24 CET
    gpg:                using RSA key 0xDA87E80D6294BE9B
    gpg: Can't check signature: No public key

In this case gpg2 doesn't have any knowledge of the key used to sign this image - 0xDA87E80D6294BE9B. We'll need to retrieve it from the official Debian key server:

    $ gpg2 --keyserver keyring.debian.org --recv-keys 0xDA87E80D6294BE9B
    gpg: requesting key 0xDA87E80D6294BE9B from hkp server keyring.debian.org
    gpg: key 0xDA87E80D6294BE9B: public key "Debian CD signing key " imported
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: Total number processed: 1
    gpg:               imported: 1  (RSA: 1)

Now that we have the key we can check that the signature matches the checksums file:

    $ gpg2 --verify SHA512SUMS.sign SHA512SUMS
    gpg: Signature made Thu 28 Jan 2016 02:07:24 CET
    gpg:                using RSA key 0xDA87E80D6294BE9B
    gpg: Good signature from "Debian CD signing key " [unknown]
    gpg: WARNING: This key is not certified with a trusted signature!
    gpg:          There is no indication that the signature belongs to the owner.
    Primary key fingerprint: DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

So the signature is fine, but since we haven't explicitly trusted this key yet we're getting a warning. I'm about 99% sure this is sufficiently secure. Now that that's out of the way we can check the ISO using sha512sum:

    $ sha512sum -c <(grep "debian-live-8.3.0-amd64-xfce-desktop+nonfree.iso$" SHA512SUMS)
    debian-live-8.3.0-amd64-xfce-desktop+nonfree.iso: OK
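
If that "not certified" warning bothers you and you've decided you trust the key, you could sign it locally - something like this should do it (I didn't bother):

    $ gpg2 --lsign-key 0xDA87E80D6294BE9B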

So everything seems to check out and we can be confident the NSA haven't sneakily compromised the image somehow. Just to quickly summarise - here's what we've done:

  1. downloaded a Debian 8.3 live image, checksums and signature
  2. verified the signature was produced by the Debian organisation, and the signature represents the sha512sum 
  3. verified the ISO matches the sha512sum. 


ESP8266 - writing a Lua module for NodeMCU in C

Previously I wrote a little post about getting the NodeMCU firmware built and flashed to an ESP8266 board, with a couple of simple lua examples to get started. 

This is a follow-on exploring what it takes to add some new functionality to the Lua libraries, so you can write slightly more complex or low-level code in C/assembly and expose it through a Lua interface. It could be considered a nice practical introduction to the C API section of the Programming in Lua book.

To keep things simple in the previous post we used a prebuilt binary, but for the modifications we want to make we'll need to get the esp SDK and build NodeMCU from source.

Getting the ESP SDK

There's probably a package floating around with the ESP8266 cross-compiler and libraries (actually the processor is an xtensa lx106), but building it isn't too tough. You can either check the documentation on the project's README.md or (if you're on Debian like me) just run the following:

    $ sudo apt-get install make unrar autoconf automake libtool gcc g++ gperf flex bison texinfo gawk ncurses-dev libexpat-dev python python-serial sed git unzip bash
    $ cd /opt # assuming you have write access here
    $ git clone --recursive https://github.com/pfalcon/esp-open-sdk.git
    $ cd esp-open-sdk
    $ make

Then add /opt/esp-open-sdk/xtensa-lx106-elf/bin to your PATH and check you can use it:

    $ xtensa-lx106-elf-gcc -dumpmachine
    xtensa-lx106-elf

Building the NodeMCU firmware

Now we've got a working xtensa-lx106 cross-compiler, so we can get started on the NodeMCU firmware. Change to the directory you want to do development in and check out the code:

    $ cd ~/dev
    $ git clone https://github.com/nodemcu/nodemcu-firmware
    $ cd nodemcu-firmware

We then need to decide which modules to build by editing app/include/user_modules.h and commenting/uncommenting the various macro definitions (LUA_USE_MODULES_NET and friends). If you want to use the same config as the one built in the previous post then this gist can be copy-pasted in.

Once this is done we can build the firmware by calling make, and flash it to the board using make flash:

    $ make && sudo make flash

Creating our module and a simple function - test.identity()

To keep things as clean as possible we'll just create a nice new module. Take a look in the app/modules directory and you'll see .c files that roughly match up with the names of the NodeMCU Lua modules (take a look at their ReadTheDocs page).

I'm going to very imaginatively use "test" as my module name, so I'll create app/modules/test.c.
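
The exact includes and registration boilerplate vary a little between NodeMCU versions, but a minimal test.c looks roughly like this:

    // app/modules/test.c - a minimal module exposing test.identity()
    #include "module.h"
    #include "lauxlib.h"

    // test.identity(x) - return the argument unchanged
    static int test_identity(lua_State *L) {
      // the argument is already on the stack, so leave it there
      // and report that we're returning one value
      return 1;
    }

    static const LUA_REG_TYPE test_map[] = {
      { LSTRKEY( "identity" ),         LFUNCVAL( test_identity ) },
      { LSTRKEY( "__metatable" ),      LROVAL( test_map ) },
      { LNILKEY, LNILVAL }
    };

    NODEMCU_MODULE(TEST, "test", test_map, NULL);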

And modify my app/include/user_modules.h again so that we include the following definition

    #define LUA_USE_MODULES_TEST

Then rebuild, reflash, reconnect and play about a little to make sure things work as expected:

    > print(test.identity(1))
    1
    > print(test.identity("foo"))
    foo

Take a closer look at test.c - I'll summarise what's going on to the best of my ability. We've defined a function test_identity(lua_State *L) - this is the C function we want to expose to the Lua environment. Because C and Lua have slightly different programming models (different type sizes, multiple return value support in Lua, etc.) the way we pass values between a function call in Lua and our C function is via this lua_State parameter, which is a sort of stack:

"A major component in the communication between Lua and C is an omnipresent virtual stack. Almost all API calls operate on values on this stack. All data exchange from Lua to C and from C to Lua occurs through this stack."

When we want to return values from the C function back to Lua, we push them on the stack and then return an integer which says how many there are. 

We can actually see this in action if we start playing around a bit - when we call this function with more than one argument, you'll get the last one returned, as it'll be at the top of the stack:

    > print(test.identity("foo", "bar", "baz"))
    baz

The way we do this in identity() is by leaving the stack unchanged and saying we return a single value. This will be familiar to anyone who has worked with Forth or other stack machines. Most of the remainder of the file is boilerplate to define our "test" module and the functions available in it - NodeMCU has a handy set of helper macros that let us define these easily.

Handling parameters and return values - test.add()

OK, now we've got a super-simple example we can move on a little. Let's create a function which accepts two numbers as arguments, adds them together and returns the result. In terms of the Lua stack what we want to do is take the top two values, add them together and push the result back onto the stack. It's easier to see this in action - so add this function after test_identity() in test.c:

    // test.add(x,y) - take two values and add them
    static int test_add(lua_State *L) {
      double x = lua_tonumber(L, 1);
      double y = lua_tonumber(L, 2);
      lua_pushnumber(L, x+y);
      return 1;
    } 

It might be a little clearer to see the interaction with the stack here. We read a couple of values (x and y) using lua_tonumber() and then push the result to the stack to be returned using lua_pushnumber(). There are a handful of different functions to get differently typed values from the stack:

    int            lua_toboolean (lua_State *L, int index);
    double         lua_tonumber (lua_State *L, int index);
    const char    *lua_tostring (lua_State *L, int index);
    size_t         lua_strlen (lua_State *L, int index);

And to push different types to the stack to return them there are some more which should be self-explanatory:

    void lua_pushnil (lua_State *L);
    void lua_pushboolean (lua_State *L, int bool);
    void lua_pushnumber (lua_State *L, double n);
    void lua_pushlstring (lua_State *L, const char *s, size_t length);
    void lua_pushstring (lua_State *L, const char *s);
Since we've added a new C function we need to add a new row to the test_map initialiser so that we can call it from Lua. To make sure test_add() can be called as test.add() we'll update it like so:

    static const LUA_REG_TYPE test_map[] = {
      { LSTRKEY( "identity" ),         LFUNCVAL( test_identity ) },
      { LSTRKEY( "add" ),              LFUNCVAL( test_add ) }, // this is our new line
      { LSTRKEY( "__metatable" ),      LROVAL( test_map ) },
      { LNILKEY, LNILVAL }
    };
If we re-build, reflash and reconnect to the ESP8266 we should now be able to call test.add() (wrapping it in print(), since the prompt won't echo the result of a plain expression):

    > print(test.add(1, 2))
    3
    > print(test.add(1, test.add(2, 3)))
    6

Testing the type of values on the stack - test.add2()

If we're feeling a bit mischievous we could extend this example into a sort of overloaded function - we can check the parameter types and perform different actions depending on them. We can test the types of the arguments using the lua_is* functions (we need to include c_stdlib.h, since we're using c_zalloc):

    // test.add2(x,y) - take two values and add them, or take two strings and concat them
    static int test_add2(lua_State *L) {
      if (lua_isnumber(L, 1) && lua_isnumber(L, 2)) {     
        double x = lua_tonumber(L, 1);
        double y = lua_tonumber(L, 2);
        lua_pushnumber(L, x+y);
      } else if (lua_isstring(L, 1) && lua_isstring(L, 2)) {
        const char *s1 = lua_tostring(L, 1);
        const char *s2 = lua_tostring(L, 2);
        char *s = c_zalloc(strlen(s1) + strlen(s2) + 1); // +1 for the terminating NUL
        c_sprintf(s, "%s%s", s1, s2);
        lua_pushstring(L, s); // Lua makes its own copy of the string ...
        c_free(s);            // ... so we can free ours
      } else {
        lua_pushnil(L);
      }
      return 1;
    }
Be careful, since lua_isstring is a wee bit cheeky - numbers also count as strings in this context, as lua_isstring returns true for anything convertible to a string. And also note that this would be a pretty nasty way to design a real function, but as an illustrative example it's fine. The full complement of these functions is below:

    int lua_isnumber (lua_State *L, int index);
    int lua_isstring (lua_State *L, int index);
    int lua_isboolean (lua_State *L, int index);
    int lua_isnil (lua_State *L, int index);
    int lua_isfunction (lua_State *L, int index);

Handling Lua function callbacks - test.ping()

So now that we're comfortable with the Lua functions in C we can get into a slightly tougher example. What we want to do is pass a Lua function in and use it as a callback later on. This means interacting with the Lua registry, which is a key/value store for Lua objects (a much better description can be found here):

As ever a little demonstration is the best way to explain. If we want to test whether element 3 on the stack is a function and, if it is, store it in the registry:

    int my_ref;
    if (lua_isfunction(L, 3))
        my_ref = luaL_ref(L, LUA_REGISTRYINDEX); // pops the value from the top of the stack and returns a reference to it

When we want to retrieve the value, we use lua_rawgeti() which will retrieve it and push it to the stack:

    lua_rawgeti(L, LUA_REGISTRYINDEX, my_ref);

So if our lua function is at the top of the stack, we can call it using lua_call():

    int num_args = 1;
    int num_retvals = 0;
    lua_call(gL, num_args, num_retvals);

The second argument to lua_call() indicates how many arguments we've pushed onto the stack for the function, and the third specifies how many return values we expect.
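
Putting those pieces together, invoking a stored callback from C looks roughly like this - a sketch with illustrative names rather than code lifted from the NodeMCU sources (gL being the lua_State we stashed away earlier, as in the lua_call() snippet above):

    #include "lauxlib.h"

    static lua_State *gL;  // saved when the callback was registered
    static int cb_ref;     // the reference returned by luaL_ref()

    static void fire_callback(int bytes) {
      lua_rawgeti(gL, LUA_REGISTRYINDEX, cb_ref); // push the Lua function onto the stack
      lua_pushinteger(gL, bytes);                 // push its single argument
      lua_call(gL, 1, 0);                         // 1 argument, 0 expected return values
    }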

To tie all this together we can wrap the ping example which comes with lwIP (lwIP is the IP stack used by NodeMCU). It's hard to take each part piece-by-piece so I'm just going to dump the final ping functionality in-place in a GitHub fork which you can access like so:

    $ git clone https://github.com/smcl/nodemcu-firmware
    $ cd nodemcu-firmware
    $ git checkout -b dev origin/dev
    $ make && sudo make flash

The commit in question which demonstrates the changes (there were a handful of lwip headers needing fixed) is here if you want it highlighted.

If we wanted to ping google.com, and for each response print the ping details and flash the ESP8266 board's LED we could write something like this:

    > function ping_recv(bytes, ipaddr, seqno, ttl, ping)
    >> output = string.format("%d bytes from %s: icmp_seq = %d, ttl=%d, time=%d ms", bytes, ipaddr, seqno, ttl, ping)
    >> print(output)
    >> gpio.mode(4, gpio.OUTPUT)
    >> gpio.write(4, gpio.LOW) 
    >> tmr.alarm(0, 50, tmr.ALARM_SINGLE, function() gpio.write(4, gpio.HIGH) end)
    >> end
    > test.ping("google.com", 3, ping_recv)
    > 32 bytes from 216.58.214.206: icmp_seq = 1, ttl=53, time=14 ms
    32 bytes from 216.58.214.206: icmp_seq = 2, ttl=53, time=14 ms
    32 bytes from 216.58.214.206: icmp_seq = 3, ttl=53, time=14 ms

Conclusion

There's a reason that Lua is widely deployed as an extension language - it's very easy to write, and it only takes a medium-sized blog post to describe how to create new Lua modules in C. And since most languages and libraries can talk to a C FFI, we can quickly expose a powerful yet lightweight programming interface for almost any application.

However, even though there are a number of modules (check out the ESP8266 page on ReadTheDocs) for different bits of hardware, I'm actually surprised there aren't more given how simple it was to hook all this up.

If you'd like some more in-depth Lua documentation you really cannot go wrong with Programming in Lua. In addition there's a nice quick-reference over at Luai. The homepage might seem a little overwhelming but once you drill into a single section, like Section 4.1 Functions and Types, you see that it's pretty well laid out.

If you'd like to see more complex examples then take a look through the sources in the app/modules directory. They're not so hard to wrap your head around since lwIP and the board's built-in wifi functions do most of the heavy lifting. A lot of what I learned when writing this post was just from scanning through these files.

I'm still keen to spend more time on the ESP8266 - though in some ways for me it's building up a set of skills I'm not sure I'll ever use - but I'm enjoying myself, and if even a single person reads through this and finds it useful it'll have been worth it.

BeagleBone Black - iPhone as keyboard/mouse

Note: I wrote this a while ago specifically for my BeagleBone Black so some of the instructions refer to "opkg" - the package manager used by Ångström Linux. It looks like they've moved on to use Debian instead of Angstrom nowadays, but realistically you can adapt this guide and use the software on any linux distro on any device.

I thought it would be an interesting exercise to use my iPhone as an input device instead of a physical mouse and keyboard. It turns out it's pretty simple, thanks to Tuomas Räsänen's cool python-uinput module together with pybonjour for simple device discovery via Bonjour.

The "server" running on the Beagle and the iOS "client" (bad metaphors, sorry) can be retrieved via git:

$ git clone https://github.com/smcl/iBeagle
$ cd iBeagle

Preparing your Beaglebone Black

First copy the iBeagle.py file to your beagle:

$ cd iBeagleServer
$ scp iBeagle.py beaglebone.local: # or whatever the username/IP is

Next, a little bit of setup is required on the BeagleBone Black, as a kernel module and a couple of libraries are needed:

$ ssh beaglebone.local

Load the "uinput" module. this requires libudev

$ sudo modprobe uinput

The python module for uinput we're using requires udev-systemd, so install via opkg

$ sudo opkg install udev-systemd
$ git clone https://github.com/tuomasjjrasanen/python-uinput.git
$ cd python-uinput && sudo python setup.py install

Ensure avahi-daemon process is up and running

$ ps ax | grep avahi | grep running

Next, set up pybonjour - the Bonjour library I used for this. If you're having trouble with libdns_sd.so, see the section "Appendix - libudev trouble" at the end.

$ pushd /usr/src
$ wget https://pybonjour.googlecode.com/files/pybonjour-1.1.1.tar.gz
$ tar -xzf pybonjour-1.1.1.tar.gz
$ pushd pybonjour-1.1.1
$ python setup.py install

Finally, return to the directory containing iBeagle.py and run the server:

$ popd && popd
$ python iBeagle.py

Preparing the iPhone Client

Currently you need to load the application using Xcode, so you'll need an iOS dev account and an iOS device authorised to your account. Open up iBeagleClient/iBeagleClient.xcodeproj, and build + run on your hardware.

When the server is running you should see your BeagleBone Black listed on the table when you launch the app. If not then try restarting the server, or waiting a few seconds.

Appendix - libudev trouble

I actually had a bit of trouble with this step initially - libdns_sd.so was missing on my version of Ångström Linux, so I retrieved the avahi sources and built and installed the library separately (which is slightly hacky; there's probably a more sensible way to achieve this):

$ wget http://avahi.org/download/avahi-0.6.31.tar.gz
$ tar -xzf avahi-0.6.31.tar.gz && cd avahi-0.6.31

We need to grab some pre-requisites + set PTHREAD_CFLAGS

$ export PTHREAD_CFLAGS='-lpthread'
$ sudo opkg install libssp-dev
$ sudo opkg install libintltool

And run configure so we build as little as possible - just the library we want:

$ ./configure --disable-static --disable-mono --disable-monodoc --disable-gtk3 --disable-gtk --disable-qt3 --disable-python --disable-qt4 --disable-core-docs --enable-compat-libdns_sd --disable-tests --with-distro=none
$ make

Then (without running make install) we copy libdns_sd.so and dns_sd.h into /usr/lib and /usr/include respectively:

$ cp ./avahi-compat-libdns_sd/.libs/libdns_sd.so /usr/lib
$ cp ./avahi-compat-libdns_sd/dns_sd.h /usr/include

In my case pybonjour was looking for libdns_sd.so.1, so I also created a symlink with that name pointing at the library we just copied.
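
Something along these lines should do it (adjust the path if your libdir differs):

$ ln -s /usr/lib/libdns_sd.so /usr/lib/libdns_sd.so.1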

Thinkpad X250 - Debian, XFCE and X.org config

My trusty 2011 Macbook Air started to die recently, and since I'm getting more and more frustrated with OS X (neutering root in El Capitan, for example) I bit the bullet, picked up a Thinkpad X250 on a nice deal and switched to Linux full-time. My distro of choice is Debian 8.3 ("Jessie"), which is very nice under the hood, but the out-of-the-box UI is a little bit unpolished. Over the last week or so I spent some time bashing it into shape, and since it wasn't so simple I thought I'd share the main steps with anyone else out there.

Display/DPI and Fonts

The first thing is that the X250's screen is 1920x1080 but only 12.5 inches, and when you first boot up the text is scaled at 96dpi - so it's pretty tiny. Annoyingly the screen is not high-res enough for the standard HiDPI/Retina approach of scaling everything 2x - that makes everything look a little too big. Getting something halfway involves fiddling around with a lot of settings, and this is compounded by the fact that various apps and desktops have different ways of handling it.

Note that this is mostly for XFCE - some of the other environments look and behave semi-OK (for example Cinnamon is reasonably usable but it eats my battery for breakfast) but I prefer the look and feel of XFCE even if I have to jump through a couple of hoops to get it to play nice.

First things first - my display's DPI was being incorrectly detected and I didn't have an xorg.conf to modify, so I needed to generate one. Log out, switch to a terminal prompt, stop the lightdm process and use Xorg to generate the file:

    $ sudo service lightdm stop
    $ sudo Xorg -configure
    $ sudo cp /root/xorg.conf.new /etc/X11/xorg.conf # or from wherever Xorg created it

Then open up /etc/X11/xorg.conf and edit the Monitor section's DisplaySize, which is the physical screen size in millimetres - the X250's screen is 276mm x 155mm, so we can set it to the following:

    Section "Monitor"
        Identifier "Monitor0"
        VendorName "Monitor Vendor"
        ModelName "Monitor Model"
        DisplaySize 276 155
        Option "DPMS"
    EndSection

After you reboot X11 will have loaded with the correct DPI setting. Things will still look a little wonky - but we're on the right track so sit tight and keep going.

Fixing the XFCE UI element fonts is pretty simple: go to Settings -> Appearance, select the Fonts tab, check the "Custom DPI Setting" box and input 150.

If you have Chrome or Chromium installed you'll notice that the UI elements and fonts are pretty huge - this is because it's detected that you have a high DPI and has defaulted to scaling things by 2x. You'll need to edit the launcher so that it passes --force-device-scale-factor=1.3 (a number I reached by pure trial and error).

Change the following line in /usr/share/applications/google-chrome.desktop:
    Exec=/usr/bin/google-chrome-stable  %U

to 

    Exec=/usr/bin/google-chrome-stable  --force-device-scale-factor=1.3 %U

If you use Chromium you can make a similar change to /usr/share/applications/chromium.desktop. So now Chrome will look a little saner - I also copied over the Times and Helvetica Neue fonts from my Mac and downloaded  Source Code Pro to make things look even better.

Next, the login window - we fixed XFCE's DPI but the greeter is still using the auto-detected value of roughly 177 (1920 px across 276 mm works out at about 177 dpi). To alter it we need to fix up /etc/lightdm/lightdm-gtk-greeter.conf:

    [greeter]
    font-name=Helvetica Neue 8
    xft-antialias=true
    xft-dpi=150
    xft-hintstyle=hintslight
    xft-rgba=rgb

After all this, here's what Chrome looks like with the updated fonts:

Qt fonts

Qt uses some slightly different settings - you need to ensure that ~/.fonts.conf looks like the following:

$ cat ~/.fonts.conf
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
    <match target="font" >
        <edit mode="assign" name="hinting" >
            <bool>true</bool>
        </edit>
    </match>
    <match target="font" >
      <edit mode="assign" name="hintstyle" >
	<const>hintslight</const>
      </edit>
    </match>
    <match target="font" >
        <edit mode="assign" name="antialias" >
            <bool>true</bool>
        </edit>
    </match>
</fontconfig>

To keep the layout consistent I opened up Settings -> Qt 4 Settings, went to the fonts tab and set it as follows:

Video Playback

I found that video playback was pretty choppy - to resolve it I just had to uncomment (or add) a single option in xorg.conf:

    Section "Device"
        # ... lots of commented out options
        Option     "TearFree"           "true" # <-- uncomment or add this one
        Identifier  "Card0"
        Driver      "intel"
        BusID       "PCI:0:2:0"
    EndSection

Trackpad

Now that things look a little better we can fix the next couple of things that bothered me. By default there's a little area in the corner which counts as a "right" click that I personally don't use. Disabling it was as simple as commenting out the SoftButtonAreas and SecondarySoftButtonAreas options in /etc/X11/xorg.conf.d/50-synaptics.conf:

    Section "InputClass"
        Identifier "Default clickpad buttons"
        MatchDriver "synaptics"
        #Option "SoftButtonAreas" "50% 0 82% 0 0 0 0 0"
        #Option "SecondarySoftButtonAreas" "58% 0 0 15% 42% 58% 0 15%"
    EndSection

There are another couple of problems - the trackpad is extremely fast and sensitive by default, and scrolling is too fast and has a sort of inertia that irritates me (I found I'd end up accidentally zooming in/out a lot). Scrolling can be tamed by adding the VertScrollDelta, CoastingSpeed and CoastingFriction options to the same 50-synaptics.conf file as above, inside the section that sets Driver "synaptics":

    Section "InputClass"
            Identifier "touchpad catchall"
            Driver "synaptics"
            MatchIsTouchpad "on"
    # This option is recommend on all Linux systems using evdev, but cannot be
    # enabled by default. See the following link for details:
    # http://who-t.blogspot.com/2010/11/how-to-ignore-configuration-errors.html
    #       MatchDevicePath "/dev/input/event*"
    	Option "VertScrollDelta" "200"
    	Option "CoastingSpeed" "1"
    	Option "CoastingFriction" "200"
    EndSection

For me adding the MaxSpeed option (which controls pointer sensitivity/speed) to 50-synaptics.conf didn't work, so I added the following to my ~/.xinitrc:

    synclient MaxSpeed=1.0 # it is 2.5 by default

The final touchpad tweak is to prevent it from registering touches as clicks while you're typing. XFCE has a setting that looks like it should work, but it leaves an agonising 2-second delay after you type before it enables the touchpad again. There's a handy utility called syndaemon that handles this better:

    syndaemon -i .2 -K -t -R -d

This can also go into .xinitrc.

Power Management

By default the power management settings are a bit of a farce - my battery was exhausted after a handful of hours' use even though it's pretty hefty. Strangely, power management on Linux laptops still seems to be a bit of a ridiculous scene - I'd last run Linux on a laptop back in 2007 (a Thinkpad T40) and sadly the out-of-the-box behaviour is still pretty dire. There are a handful of useful applications which can help with individually tweaking things - like powertop - but thankfully there's a utility called tlp which appears to improve things significantly with only a little config required.

First you need to add the correct repository to your /etc/apt/sources.list - add the following line to the end, replacing DIST with your release's codename (jessie in my case):

    deb http://repo.linrunner.de/debian DIST main

Then import the signing key and install tlp along with the ThinkPad-specific packages:

    $ sudo apt-key adv --keyserver pool.sks-keyservers.net --recv-keys CD4E8809
    $ sudo apt-get update
    $ sudo apt-get install tlp tlp-rdw acpi-call-dkms

I still find power management a little unreliable - when closing the lid the system doesn't always sleep/suspend, and the battery charge is sometimes reported incorrectly (it was stuck at "52%" for a while, which I didn't notice before it died on me). In addition, XScreenSaver appears to mess things up too - when I enabled its power management settings, performance was killed after resuming from sleep (with the Xorg process consuming >100% of the CPU). Disabling this altogether was fine.

Sound

Sound through the built-in speakers worked out of the box for me. However I have a little Bose Soundlink II bluetooth speaker which I like to use, and that was a little bit trickier. After pairing it in blueman (which weirdly took a bit of trial and error) I wasn't sure how to get sound to play out of it. The solution was to install PulseAudio Volume Control, a utility that provides a nice graphical interface for controlling PulseAudio:

    $ sudo apt-get install pavucontrol

Then start the process that's playing the sound - for me it's Chrome, as I was watching Netflix. Open up pavucontrol, switch to the Playback tab, find your application and select the bluetooth speaker from the dropdown.

This being manual is a little bit frustrating but it's actually a minor step-up from OS X - where sometimes some apps wouldn't output sound to the speaker.

Conclusion

After the above tweaks my experience on Debian as a user is roughly on par with OS X (as a developer, it's far better of course). However there are still a number of things I am missing which would make my life a lot better.

  • Exposé-style window switcher. I know that there's skippy-xd but it is pretty buggy. There's an XFCE bounty for this, so if anyone fancies picking up a whopping $70 then go nuts: https://www.bountysource.com/issues/3327306-expose-like-task-switcher-for-xfce
  • Trackpad gestures. I was quite fond of the multi-touch gestures in OS X for switching workspaces; I haven't got this working in Linux/X.org just yet and I'm not sure if it's available.
  • AirDrop-style file sharing from iPhone

Hopefully I can pick away at those.



Python3.4 virtualenv setup

I always seem to forget how to set up a virtualenv for Python 3 - with my move to a new laptop running Linux I had to do this again, so I've kept the setup here for future reference. It's simple but easy to forget if you only ever do it, say, once a year or so.

The following will result in a virtualenv in the directory ~/venv/py-3.4 using the executable /usr/bin/python3.4:

$ sudo pip install virtualenv
[sudo] password for sean: 
Downloading/unpacking virtualenv
  Downloading virtualenv-14.0.6-py2.py3-none-any.whl (1.8MB): 1.8MB downloaded
Installing collected packages: virtualenv
Successfully installed virtualenv
Cleaning up...
$ mkdir ~/venv
$ virtualenv -p /usr/bin/python3.4 ~/venv/py-3.4
Running virtualenv with interpreter /usr/bin/python3.4
Using base prefix '/usr'
New python executable in /home/sean/venv/py-3.4/bin/python3.4
Also creating executable in /home/sean/venv/py-3.4/bin/python
Installing setuptools, pip, wheel...done.
$ python --version
Python 2.7.9
$ source ~/venv/py-3.4/bin/activate
(py-3.4) [sean@seoul ~]$ python --version
Python 3.4.2

I'm trying to stick to one post per week but the last month has been pretty busy for me, so I've exhausted the backlog of scheduled posts I'd built up. I have about 3-4 interesting things in the pipeline which will pop up over the next month, so I think I can get away with publishing something simple like this for one week :)

ESP8266 - getting started

I picked up a couple of ESP8266 modules - an inexpensive little SoC with WiFi - from AliExpress, but ran into a couple of weird issues when trying to get them up and running. A lot of the documentation is a little confusing, since it usually begins with flashing an image even though, when you power the board up, it most likely starts an open WiFi access point you can connect to straight away.

Anyway here are the issues I ran into, in order - coincidentally they capture a pretty simple workflow for getting started on the ESP8266.

Issue 0 - uhh ... what do I connect to where?

Update: I didn't know this at the time, but you can simply install the USB to UART bridge drivers from Silicon Labs and connect to the board via USB - on my laptop it appears as /dev/SLAB_USBtoUART. Using this method you can skip this whole section. 

There are dozens of different ESP8266 boards, each with their own little configurations, layouts and labels. Since I bought unbranded boards from AliExpress I didn't really know the search terms to take me to a nice simple diagram of what needs to be connected to each pin of my TTL cable. Anyway, my boards look like this:

They connect to my TTL cable a little like this

Just in case it isn't clear from the picture - since I stupidly didn't match up all the colours - here's exactly what is connected to what:

<create table showing connections>

Even though there's a micro-USB connector this set of connections is sufficient to bring up the board and send some simple commands to it.

Issue 1 - connecting via serial (screen/OS X)

There's a key bit of info for OS X users that's usually skipped over. When you attempt to connect to the ESP8266 using a USB TTL cable and screen, you could be forgiven for thinking you've messed something up or got a dud chip, as it fails to respond to any input:

Sending an "AT" should result in the response "OK", but in this case the cursor just jumped to the left of the screen and nothing else happened. What's happened here is that GNU Screen on OS X is handling the Return key weirdly, sending only a CR character instead of CR+LF. I played around with the man page for a while but couldn't get it to play nice - eventually I downloaded CoolTerm which did the job nicely.

Issue 2 - firmware

After you've confirmed the chip is up and running by successfully issuing an "AT" you'll probably want to do something a little more useful with it. Since you likely don't speak AT (though a reference is here if you want to try) something simpler like Lua might be suitable. This will involve downloading a binary (or building it yourself, if you feel adventurous), and setting up a couple of things in python.

To save the bother of setting up dev tools for now, we'll use a service which will build a nice little NodeMCU image for you - http://nodemcu-build.com. Here's the build config I used:

Once it's complete you'll receive an email detailing the download link - I received two and opted to use the Integer one, which worked out nicely

To let us flash the ESP8266 we need to pull the GPIO0 pin (which is labelled "D3" on my board) low - i.e. connect it to GND - and reboot it so that it starts in "flash" mode; here I've done this with the white cable.

Next we'll need to open up a terminal and set up pyserial and esptool, then download our image and flash it. Both pyserial and esptool can be retrieved via git - I'm just gonna be brief with this part, as I've already spent a lot of time on screenshots and pictures.

$ git clone https://github.com/pyserial/pyserial
Cloning into 'pyserial'...
remote: Counting objects: 4305, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4305 (delta 0), reused 0 (delta 0), pack-reused 4301
Receiving objects: 100% (4305/4305), 1.06 MiB | 1.01 MiB/s, done.
Resolving deltas: 100% (3207/3207), done.
Checking connectivity... done.
$ cd pyserial
$ sudo python setup.py install
Password:
running install
... lots of lines later ...
Installed /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyserial-3.0.1-py2.7.egg
Processing dependencies for pyserial==3.0.1
Finished processing dependencies for pyserial==3.0.1
$ git clone https://github.com/themadinventor/esptool
Cloning into 'esptool'...
remote: Counting objects: 293, done.
remote: Total 293 (delta 0), reused 0 (delta 0), pack-reused 293
Receiving objects: 100% (293/293), 106.48 KiB | 0 bytes/s, done.
Resolving deltas: 100% (157/157), done.
Checking connectivity... done.
$ cd esptool

Now we can download our NodeMCU binary, double-check the path of our USB TTL device (since I always forget) and start flashing

$ curl -LO http://nodemcu-build.com/builds/nodemcu-master-7-modules-2016-01-18-22-47-15-integer.bin
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  388k  100  388k    0     0   247k      0  0:00:01  0:00:01 --:--:--  633k
$ ls /dev/tty.usbserial* 
/dev/tty.usbserial-A402O00H
$ python esptool.py --port /dev/tty.usbserial-A402O00H write_flash 0x00000 ./nodemcu-master-7-modules-2016-01-18-22-47-15-integer.bin 
Connecting...
Erasing flash...
Took 1.90s to erase flash block
Wrote 398336 bytes at 0x00000000 in 43.5 seconds (73.2 kbit/s)...

Leaving...
$ 

Now the board should be ready to reboot - if we go ahead and reboot it then connect using CoolTerm (changing to use 9600 baud instead of the 115200 we previously used) there should be a nice little Lua prompt waiting.

Issue 3 - what now?

So what cool things can we do now with our tiny little WiFi chip? Well, firstly let's understand and tackle that pesky "cannot find init.lua" error message. After the NodeMCU firmware boots it looks in its onboard flash filesystem for a file called "init.lua", whose purpose is to let you run some Lua code after boot - so you can set up a simple program or define some useful functions without having to get your hands dirty rebuilding and reflashing the NodeMCU firmware.

So if we want to create a simple init.lua which prints out a little message on boot, here's what we can do.

$ git clone https://github.com/4refr0nt/luatool
$ cd luatool/luatool
$ echo 'print("check out my cool new ESP8266!")' > ./init.lua
$ ./luatool.py --port /dev/tty.usbserial-A402O00H --src init.lua --dest init.lua --verbose
Upload starting
Stage 1. Deleting old file from flash memory
->file.open("init.lua", "w") -> ok
->file.close() -> ok

Stage 2. Creating file in flash memory and write first line->file.remove("init.lua") -> ok

Stage 3. Start writing data to flash memory...->file.open("init.lua", "w+") -> ok

Stage 4. Flush data and closing file->file.writeline([==[print("check out my cool new ESP8266!")]==]) -> ok
->file.flush() -> ok
->file.close() -> ok
--->>> All done <<<---
$

Now any time you reboot your ESP8266 here's what you'll see instead of that error message:

There are a few nice examples at http://nodemcu.com, which you can either key in manually or upload as part of an init.lua file so that they always run on startup - just be careful you don't leave your ESP8266 in an infinite loop or else you'll have to reflash the firmware and start again. To save you the trouble, here are a couple of simple examples, both of which involve connecting to a wifi network and serving some data.

The first is a simple hit counter, which records the number of requests it has served and displays this on a simple page. Send the file using luatool as in the previous example (changing the --src parameter but ensuring --dest is still "init.lua"). Here's what you'll see after visiting the page - because Chrome attempts to fetch both the page and favicon.ico, each hit is registered twice :)

Code: hitCount.lua
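
The gist of a NodeMCU hit counter is only a few lines - something like the sketch below (the SSID/password are placeholders; it's an illustration of the idea rather than the contents of hitCount.lua itself):

-- connect to wifi, then serve a page showing how many requests we've handled
wifi.setmode(wifi.STATION)
wifi.sta.config("my-ssid", "my-password")

hits = 0
srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
    conn:on("receive", function(client, request)
        hits = hits + 1
        client:send("HTTP/1.1 200 OK\r\n\r\n" ..
                    "<html><body>This page has been visited " .. hits .. " times</body></html>")
    end)
    conn:on("sent", function(client) client:close() end)
end)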

The second example allows you to switch the onboard LED on or off using a button on the page (which sends a POST request via AJAX). Because we're just dealing with raw TCP requests, I use a very inefficient/wrong way to decide whether or not we're handling a POST request - but since it's just a demonstration it's ok.

After loading the page and toggling the LED here's what we see:

Code (program): ledToggle.lua

Code (webpage): led.html

So there we have it - a simple introduction to connecting to an ESP8266 board, flashing it with the NodeMCU firmware, and a couple of little pointers on how to use luatool to create a Lua script which runs at startup. To find out more about the libraries available on NodeMCU check out their page on Read the Docs, or for a little more info about the Lua programming language check out Programming in Lua (1st edition).

I seem to have written a few too many "Getting Started" guides; however, I'll be spending a fair bit of time in the future on the ESP8266 as it offers plenty of opportunities for interesting work.


Quickstart Python webapp skeleton

Sometimes I've found myself with an hour or so spare and an idea best visualised through a little webapp, but hooking up a server, a presentation layer and static files like jQuery/Bootstrap can be a tiny bit annoying if you do it more than a couple of times. To remove this little barrier I hooked all my preferred tech together into a simple bare-bones template - it's pretty ugly but will suffice for 99% of the little projects I have.

Code: CherryPySkeleton

Demo: http://skeleton.mclemon.io
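
For reference, the general shape of a bare CherryPy app is only a few lines - this is an illustrative sketch rather than the skeleton itself, which adds templating and static file handling on top:

    import cherrypy

    class Root(object):
        @cherrypy.expose
        def index(self):
            return "Hello from the skeleton"

    if __name__ == "__main__":
        cherrypy.quickstart(Root())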

Arduino - 8 Bit Graphics with SSD1306

After my initial foray into graphics I wanted to see what was possible with the Atmega328p and the SSD1306 screen. Lots of applications which use this little microcontroller involve pretty mundane, uninspiring things like home automation, and I like the idea of giving it a wee chance to shine and do something fun. The constraints of a relatively slow processor, extremely little memory and a tiny monochrome display were pretty enticing to me, especially after reading some of the early stories on folklore.org about the hacks Bill Atkinson, Andy Hertzfeld et al. employed to get the most out of the Macintosh's 68000 processor.

FPS counter

Firstly we need to know the limits of the hardware. I wasn't sure whether the processor, the SPI bus or something inside the display itself would limit the frame rate to something unacceptable and put the kibosh on any sort of fast/responsive visualisations. So I whipped up a quick test which sent a bunch of empty frames to the display and every so often output the average frame rate.

Mercifully it seems that we're able to push as much as 207fps to the SSD1306, which means that we've got a fair bit of room to play with and that if anything the Atmega328p is the bottleneck.

Code: ssd1306_spi_maxfps.ino

Breakout

My first attempt was a little clone of breakout. I am apparently hopeless at it - but it's actually the first "game" I'd ever written, so that's fun.

Code: https://github.com/smcl/breakoutduino.git

3D cube

Since the Breakout clone worked smoothly, I wanted to see if it was possible to produce simple 3D graphics - eventually settling on a spinning cube bouncing around the screen.

Initially I got a little too ambitious and attempted to roll my own general-purpose library for 3D graphics, with a set of functions to manipulate a stack of transformation matrices. However I kept blowing through my stack space and clobbering a load of program state. This makes sense: with 4x4 matrices of floats, one each for the camera, a rotation and a translation, that's already three matrices of 64 bytes each. Add the temporaries needed to multiply them together and apply the result to 8 vertices, and you can easily see how a naive implementation could go a bit haywire and chew through more stack than it should.

Instead of taking the time to trim this down using the limited debug tools available to me (no stepping, no watches, no gdb over my simple USB cable) I just manually calculated the transformations I needed for my rotating/bouncing cube ahead of time and implemented them so they could be parameterised - with Tx, Ty & Tz representing the centre position of the cube, θ being the angle of rotation, and x, y and z being the coordinates of each vertex.

Each point is then mapped to a new position based on those parameters.
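
Assuming the rotation is about the cube's vertical axis, the transformed point looks something like this:

    \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} =
    \begin{pmatrix} T_x + x\cos\theta + z\sin\theta \\ T_y + y \\ T_z - x\sin\theta + z\cos\theta \end{pmatrix}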

Since I just wanted a rotating cube with wireframe lines I didn't need to do much other than use this to calculate the expected x/y position of each point on screen and then use the existing drawLine() function to connect them - which resulted in a slightly clumsy function (the full version is in 3D_cubes.ino, linked below).
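
In outline it does something like this - a sketch rather than the actual code, with the screen size, focal length and the drawLine(x0, y0, x1, y1) signature all assumed:

    // project the 8 corners of a cube rotated by theta about the vertical axis,
    // centred at (tx, ty, tz), then join them using the drawLine() helper
    void drawCube(float tx, float ty, float tz, float theta, float halfSide) {
      const float f = 64.0;  // assumed focal length / scale factor
      float c = cos(theta), s = sin(theta);
      int sx[8], sy[8];

      for (int i = 0; i < 8; i++) {
        // corners of a cube centred on the origin
        float x = (i & 1) ? halfSide : -halfSide;
        float y = (i & 2) ? halfSide : -halfSide;
        float z = (i & 4) ? halfSide : -halfSide;

        // rotate about the vertical axis, then translate
        float rx = tx + x * c + z * s;
        float ry = ty + y;
        float rz = tz - x * s + z * c;

        // perspective projection onto the 128x64 screen (assumes rz > 0)
        sx[i] = 64 + (int)(f * rx / rz);
        sy[i] = 32 + (int)(f * ry / rz);
      }

      // connect corners that differ in exactly one coordinate - the 12 edges
      for (int i = 0; i < 8; i++)
        for (int j = i + 1; j < 8; j++) {
          int d = i ^ j;
          if (d == 1 || d == 2 || d == 4)
            drawLine(sx[i], sy[i], sx[j], sy[j]);
        }
    }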

The final video involved two rotating cubes, one of which is flying around the screen.

It's quite a satisfying result, as it's the first time I'd produced a from-scratch 3D projection since university, and it was actually surprisingly fast - the video looks a little shaky but in person it's as smooth as butter.

Code: 3D_cubes.ino

Seed Cathedral


Known as the Seed Cathedral, the UK pavilion at Expo 2010 in Shanghai caused quite a stir and rightly earned itself an award from the organising committee for its outstanding design. It's a simple cube, with thousands of transparent plastic rods sticking out of it - each one containing a different type of seed. When you view it from a dozen steps back a subtle Union Flag pattern appears, which I must admit is very neat even if I'm not a proud/patriotic Brit myself.

When I was trying to think of weird or interesting things that could be modelled quickly and easily in 3D which would have an interesting effect this jumped into my mind. The implementation is pretty simple - use similar code to create the projection, and shuffle the camera around a little to create a shimmering monochrome Union Flag. 

... later that day

And after all that hard work, I saw some stuff that left me pretty deflated - two excellent implementations of graphics libraries for AVR microcontrollers. The first one is u8g - a slightly more heavy duty library than the one Adafruit provided. It also provides some nicer font handling ... including a tiny font similar to the one I did. Here's an example of someone using it to draw a rotating cube at a reasonable clip, ~40 fps no less:

The second one is even more impressive and needs to be seen to be believed. Someone managed to create a library to render a 16 bit colour scene in 3D with texturing and lighting.

Be sure to check out the video at the bottom of this link, it's a little humbling after you've just thrown together a couple of simple monochrome visualisations - http://hackaday.com/2016/01/02/better-3d-graphics-on-the-arduino