Amir Chaudhry

thoughts, comments & general ramblings

IoT - Brave New World or Simply evolved M2M?

I was at an event run by Cambridge Wireless today on the Internet of Things. These are my notes. Although I've named people, I'm paraphrasing a lot so best not to treat anything you read here as a direct quote.

There were a few audience polls conducted using little keypads they handed out. There were only 30-ish keypads so it's not necessarily indicative of how the whole room felt. I took some hasty pics of a few of the slides (mainly the poll results) so apologies for the poor quality.

"The Enterprise and the Internet of Things - Evolution or Revolution?"

John Hicklin, Principal Consultant | Commercial Enterprise Markets, CGI UK

Nowadays, you can walk into the C-suite and start a discussion about the IoT. They may not know what they're talking about but someone's told them they need to be thinking about it. But what is IoT?

One of the problems in industry: you go to your Director and you explain this new tech and all the awesome things you can do with it, and he asks the dreaded question, "So what?" Take the example of the connected toothbrush. With a connected toothbrush you can automatically have a remote relationship with your dentist. But he already drives a BMW and it's not clear how he benefits from that. He wants you in the office where, every time you open your mouth, he sees new business development opportunities. For the moment, he neither needs nor wants a remote relationship with you. This leads to the question of who's going to sign up for the business case.

Number of key areas that need to be addressed (paraphrased):

  • Crack open the Data: We are awash with data but don't know how to use it. Problem with clients is that they really buy into the vision but they have a lot of legacy systems (green screens!) which already run things.

  • Identifying value(?): Volume is not a sign of success. Need to know where the value is, which means understanding it. E.g. Enron: thousands of documents, accountants all over the place with tons of data. It took a small team of a few researchers doing a case study to discover that no tax had ever been paid. They got to the right data and asked the right questions.

  • Connectivity(?): M2M has grown in a silo'd manner where a device takes data and sends it to one place. This needs to open up more and give access to other areas of business.

  • Business operations: People matter too. Process changes, systems that allow real-time decisions, empowering individuals.

Overall, IoT is an enabler for a number of wider trends. IoT can improve operational efficiency and enhance customer experience. Normally these are in conflict with each other. However, if we just get into a tech-fest, then we might just end up with a large number of failures, as we've seen with 'Big Data'.

Poll - Which categories of business-related infrastructure are most urgently needed for building the Internet of Things?

Poll results

"Overcoming the Internet of Things Connectivity Challenge"

David Dunn, Software Engineer, Electric Imp

Electric Imp aims to help people doing product development in the IoT space, namely with the connectivity problem. The name 'IMP' came from 'Internet Messaging Processor'. The name is also a nod to Terry Pratchett, whose stories had devices that contained an imp which did the work :)

Building connected devices is difficult. Have to be secure, easy to set up. There are many choices for connectivity, problems with managing the code on the devices. Electric Imp picked wifi and decided to do it well. This satisfies a lot of markets but not all.

Side story of the hacked fridge sending spam emails.

Things run through Electric Imp system and they manage the security. It's security as a service. Electric Imp manages deployment of your code too so developers can just push code and it will end up on all their devices.

Simplicity comes from one-step setup. It's optical: a smartphone flashes the screen, which sends the wifi SSID and credentials to the device (via a sensor). Customer code runs within a VM on top of a custom OS (impOS), which takes care of I/O (a TLS-encrypted wifi connection to the service).

Difficult to debug devices in the field, so a cloud-based service helps with that. They do embedded stuff on the device and cloud stuff in the cloud. There's another VM in the cloud that's paired with the VM on the device. Heavy lifting can happen elsewhere, so no need to parse JSON on the device itself.

Q - How do you deal with identity of units?
A - Imps have unique serials and the devices they're plugged into are also unique (missed how).

"Real-life experiences from rolling out M2M networks and how we can learn from these in the shift to IoT"

Jon Lewis, Chief Innovation Officer, Plextek Consulting

Plextek was working in this field when it was called 'telemetry'. One of the earliest things they worked on was 'Lojack' (car tracking). Things then shifted to 'smart cities' and now people talk about 'IoT' and want consumer devices with consumer volumes.

2014 has got to be the year of IoT. However, we're still some way from viable business cases as it's too expensive to simply track your cat (approx £100 set-up fee and then around £30 pcm).

Deployment often costs more than the sensor devices, cf. street lighting systems. Choice of radio system is critical. Projects in UK/Europe are about incremental cost savings with pay-back in less than 5 yrs. In industrialising countries there is new build, which is easier than retrofit. Urbanisation creates strain and demand. Local issues relate to spectrum and local regulations (e.g. a mandate to use govt encryption streams). Data privacy is also a concern as we don't know who holds the monkey. That's made even more difficult when you go abroad. Local politics also matters.


  • Big opportunity in B2C: but need consolidation in B2B markets to get necessary economies of scale
  • Big B2B opportunity in developing countries: but is tough and needs local partners
  • Application developers need common platforms: to lower R&D costs and enable access to global markets
  • Whole value chain needs to work: building an ecosystem that connects application developers to markets is key to achieving scale

"The role of LTE in M2M"

Georg Steimel, Head of M2M Solutions, Huawei

About Huawei - three businesses: carrier network, consumer business, enterprise business. Revenues of $39Bn, No. 2 telecom solution provider, 150k+ employees worldwide.

Georg is in the Consumer branch for Europe. Huawei wants to be the top M2M module vendor in the coming years. They're operator- and system-integrator-agnostic.

Tech evolution brings new opportunities: mobile data traffic needs, network upgrading etc. The average user is expected to generate 1GB of data traffic per month (currently 63MB) and there are predicted to be 50Bn+ devices. Examples of users are highly varied, including monitoring of pregnant cows.

Challenges for mass rollout are a scattered market requiring complex solutions. Alignment of all parties in one adequate business model is quite tricky, esp when it comes to customer ownership. Should consider the value chain.

Complex business model:

  • From sales to service
  • Many stakeholders
    • Who pays, how much, to whom, for what service, for what benefits?
  • Individual benefit must exceed cost
    • Realistic pricing
    • Appropriate effort
    • Total benefit, direct vs indirect TCO
    • Sustainability

Amount of data generated is also going to be huge. Smart meters in German households could send over 4TB per day, which is a huge strain for mobile networks.

LTE advantage for M2M is obviously bandwidth but that's only relevant to a few verticals (signage, routers, and in-car entertainment). Low latency is more important for industrial alarms and medical devices.

Poll - What business models? Poll results

Breakdown of poll results.
Poll results

"Using networked visualisation for better decision making"

Fredrik Sjostedt, Vice President | Corporate Marketing, Barco Ltd

Barco does a lot of visualising information. Displays for radiographers, stuff in cinema screens, screens in flight simulators and a lot more.

Need to take in data from sensors, display relevant information and allow operators to make SMART decisions. e.g number of CCTV cameras in London - who's looking at them? Real 'Big Data' step is at the operator level.

Important to use standards, rather than proprietary systems.

Q - In transport visualisation do you have low enough latency to get real-time info?
A - Yes, e.g. a route in Brazil near Sao Paulo that's closed due to fog. Real-time info is important for understanding the whole route.

"Regulatory backdrop on IP and standards issues for the M2M eco-system"

Justin Hill, Co-Head of the Patent Prosecution Group and Purvi Parekh, Co-Head of International Telecoms, Olswang

Purvi on the Regulatory environment
No tailor-made regulation vs high regulatory focus. There's regulatory focus from leading bodies, and many of them have said that existing regulation can be amended to facilitate M2M/IoT. It's made more complicated by the lack of a globally agreed definition of IoT (which makes it difficult for lawyers). There's general promotion of Open Standards; the EU believes this is very important. People should read the RAND report (Nov 2013), commissioned by the EU to examine whether there should be standards.

Justin on IP issues
With any standardisation process, you're hoping to increase adoption, with the trade-off being the rights of IPR owners. FRAND (fair, reasonable, and non-discriminatory) terms are also important, though that's debatable too. When considering how to protect IPR, it's worth comparing whole-system claims with protecting at different layers of the tech/value stack and building a portfolio.

"The impact of autonomous systems on surface transport"

Paul Copping, Corporate Development Director, TRL

TRL is the Transport Research Laboratory. Did some work to consider the impact of autonomous transport on the overall value chain. Some of the things we're trying now with autonomous driving have been successfully done in the past by other means (e.g. a car tracking a lane with magnets).

Transport Value Chain

"Connected Products - Designing for Scale"

Pilgrim Beart, Founder Director, 1248-io

What is IoT? Humans don't scale. Population is predicted to tail off at around 9Bn. Connected devices are growing and the rate of change is exponential (and increasing). In the 90s we used to be our own sysadmin with maybe one device. Now we have several, plus some cloud services. In the future there will be many more and they'll have to manage themselves. There are different problems you face as you go from 1 to 10 to 100 to 10,000 to 1MM+ deployed devices, with issues ranging from device updates all the way to regulation and externalities that affect your business.

In 10 years time, when all is said and done, we won't be calling it the Internet of Things. It'll just be called the Internet.

Q - From an application developer's perspective, what would you do differently if you were to go back and start again?
A - AlertMe started a bit too early so a lot of things were not off-the-shelf. Had to build. Right now you can see a lot of people reinventing the wheel, but they shouldn't need to do this.

"A new standard to connect the Internet of Everything"

Antony Rix, Senior Consultant, TTP

One of the things TTP works on are wireless standards.

The size of the grey blobs is an indication of market size.
IoT Market Segments

Users will expect things to cost pretty much the same as they did before. Cost will be critical and as we buy things we generally still expect them to last for years.

TTP Matrix is a proprietary standard specifically for use by low-cost wireless connected devices. It's optimised for efficient use of radio spectrum, long range, low cost and power efficiency. A use-case example (DisplayData) is the pricing labels on supermarket shelves: 50k products running through only 3 gateways (other standards would find this difficult). Low power consumption means a battery can last 10 years.

Poll - Which categories of technical / architecture building blocks are most urgently needed for building the Internet of Things? Poll results


Interesting discussion of humans' role in IoT. For home consumer devices, people may not want to have to make any decisions (completely automated), but corporates may want all the information in order to integrate and decide.

How quickly will this happen? It's difficult to say when you're in the middle of it; we'll know it's happened after the event. It takes about a decade for things to mature in the market. On the other hand, it's happening.

Poll - Big and small data - Which statement do you agree with most/least
I only got a pic of the latter but it's the inverse of the former (unsurprisingly).
Poll results

Share / Comment

From Jekyll site to Unikernel in fifty lines of code.

Mirage has reached a point where it's possible to easily set up end-to-end toolchains to build unikernels! My first use-case is to be able to generate a unikernel which can serve my personal static site but to do it with as much automation as possible. It turns out this is possible with less than 50 lines of code.

I use Jekyll and GitHub Pages at the moment so I wanted a workflow that's as easy to use, though I'm happy to spend some time up front to set up and configure things. The tools for achieving what I want are in good shape so this post takes the example of a Jekyll site (i.e this one) and goes through the steps to produce a unikernel on Travis CI (a continuous integration service) which can later be deployed. Many of these instructions already exist in various forms but they're collated here to aid this use-case.

I will take you, dear reader, through the process and when we're finished, the workflow will be as follows:

  1. You'll write your posts on your local machine as normal
  2. A push to GitHub will trigger a unikernel build for each commit
  3. The Xen unikernel will be pushed to a repo for deployment

To achieve this, we'll first check that we can build a unikernel VM locally, then we'll set up a continuous integration service to automatically build them for us and finally we'll adapt the CI service to also deploy the built VM. Although the amount of code required is small, each of these steps is covered below in some detail. For simplicity, I'll assume you already have OCaml and Opam installed -- if not, you can find out how via the Real World OCaml install instructions.

Building locally

To ensure that the build actually works, you should run things locally at least once before pushing to Travis. It's worth noting that the mirage-skeleton repo contains a lot of useful, public domain examples and helpfully, the specific code we need is in mirage-skeleton/static_website. Copy both the and files from that folder into a new _mirage folder in your Jekyll repository.

Edit so that the two mentions of ./htdocs are replaced with ../_site. This is the only change you'll need to make and you should now be able to build the unikernel with the Unix backend. Make sure you have the mirage package installed by running $ opam install mirage and then run:

$ cd _mirage
$ mirage configure --unix
$ mirage build
$ cd ..

That's all it takes! In a few minutes there will be a unikernel built on your system (symlinked as _mirage/mir-www). If there are any errors, make sure that Opam is up to date and that you have the latest version of the static_website files from mirage-skeleton.

Serving the site locally

If you'd like to see the site locally, you can do so from within the _mirage folder by running the unikernel you just built. There's more information about the details of this on the Mirage docs site but the quick instructions are:

$ cd _mirage
$ sudo mirage run

# in another terminal window
$ sudo ifconfig tap0 netmask

You can now point your browser at and see your site! Once you're finished browsing, $ mirage clean will clear up all the generated files.

Since the build is working locally, we can set up a continuous integration system to perform the builds for us.

Setting up Travis CI

We'll be using the Travis CI service, which is free for open-source projects (so this assumes you're using a public repo). The benefit of using Travis is that you can build a unikernel without needing a local OCaml environment, but it's always quicker to debug things locally.

Log in to Travis using your GitHub ID which will then trigger a scan of your repositories. When this is complete, go to your Travis accounts page and find the repo you'll be building the unikernel from. Switch it 'on' and Travis will automatically set your GitHub post-commit hook and token for you. That's all you need to do on the website.

When you next make a push to your repository, GitHub will inform Travis, which will then look for a YAML file in the root of the repo called .travis.yml. That file describes what Travis should do and what the build matrix is. Since OCaml is not one of the supported languages, we'll be writing our build script manually (this is actually easier than it sounds). First, let's set up the YAML file and then we'll examine the build script.

The Travis YAML file - .travis.yml

The Travis CI environment is based on Ubuntu 12.04, with a number of things pre-installed (e.g Git, networking tools etc). Travis doesn't support OCaml (yet) so we'll use the c environment to get the packages we need, specifically, the OCaml compiler, Opam and Mirage. Once those are set up, our build should run pretty much the same as it did locally.

For now, let's keep things simple and only focus on the latest releases (OCaml 4.01.0 and Opam 1.1.1), which means our build matrix is very simple. The build instructions will be in the file _mirage/, which we will trigger from the .travis.yml file. This means our YAML file should look like:

language: c
env:
  matrix:
  - MIRAGE_BACKEND=xen DEPLOY=1
  - MIRAGE_BACKEND=unix
before_script: cd _mirage
script: bash -ex

The matrix enables us to have parallel builds for different environments and this one is very simple as it's only building two unikernels: one worker will build for the Xen backend and another will build for the Unix backend. The _mirage/ script will clarify what each of these environments translates to. We'll come back to the DEPLOY flag later on (it's not necessary yet). Now that this file is set up, we can work on the build script itself.

The build script - _mirage/

To save time, we'll be using an Ubuntu PPA to quickly get pre-packaged versions of the OCaml compiler and Opam, so the first thing to do is define which PPAs each line of the build matrix corresponds to. Since we're keeping things simple, we only need one PPA that has the most recent releases of OCaml and Opam.

#!/usr/bin/env bash
ppa=avsm/ocaml41+opam11   # OCaml 4.01.0 + Opam 1.1.1
echo "yes" | sudo add-apt-repository ppa:$ppa
sudo apt-get update -qq
sudo apt-get install -qq ocaml ocaml-native-compilers camlp4-extra opam

[NB: There are many other PPAs for different combinations of OCaml/Opam which are useful for testing]. Once the appropriate PPAs have been set up it's time to initialise Opam and install Mirage.

export OPAMYES=1
opam init
opam install mirage
eval `opam config env`

We set OPAMYES=1 to get non-interactive use of Opam (it defaults to 'yes' for any user input) and if we want full build logs, we could also set OPAMVERBOSE=1 (I haven't in this example). The rest should be straight-forward and you'll end up with an Ubuntu machine with OCaml, Opam and the Mirage package installed. It's now trivial to do the next step of actually building the unikernel!

mirage configure --$MIRAGE_BACKEND
mirage build

You can see how we've used the environment variable from the Travis file and this is where our two parallel builds begin to diverge. When you've saved this file, you'll need to change its permissions to make it executable by running $ chmod +x _mirage/

That's all you need to build the unikernel on Travis! You should now commit both the YAML file and the build script to the repo and push the changes to GitHub. Travis should automatically start your first build and you can watch the console output online to check that both the Xen and Unix backends complete properly. If you notice any errors, you should go back over your build script and fix it before the next step.

Deploying your unikernel

When Travis has finished its builds it will simply destroy the worker and all its contents, including the unikernels we just built. This is perfectly fine for testing but if we want to also deploy a unikernel, we need to get it out of the Travis worker after it's built. In this case, we want to extract the Xen-based unikernel so that we can later start it on a Xen-based machine (e.g Amazon, Rackspace or - in our case - a machine on Bytemark).

Since the unikernel VMs are small (only tens of MB), our method for exporting will be to commit the Xen unikernel into a repository on GitHub. It can be retrieved and started later on and keeping the VMs in version control gives us very effective snapshots (we can roll back the site without having to rebuild). This is something that would be much more challenging if we were using the 'standard' web toolstack.
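As an aside, rolling the site back under this scheme is just a small commit to the deploy repo. The sketch below assumes a hypothetical layout with one directory per built commit, a 'latest' file naming the current one, and the VM compressed with bzip2:

```shell
# Point 'latest' back at an older build and decompress its VM
# (0abc123 is a hypothetical commit hash).
OLD_COMMIT=0abc123
echo $OLD_COMMIT > latest
git commit -am "roll back to $OLD_COMMIT"
bunzip2 -k $OLD_COMMIT/mir-www.xen.bz2   # -k keeps the .bz2 around
```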

The deployment step is a little more complex as we have to send the Travis worker a private SSH key, which will give it push access to a GitHub repository. Of course, we don't want to expose that key by simply adding it to the Travis file so we have to encrypt it somehow.

Sending Travis a private SSH key

Travis supports encrypted environment variables. Each repository has its own public key and the Travis gem uses this public key to encrypt data, which you then add to your .travis.yml file for decryption by the worker. This is meant for sending things like private API tokens and other small amounts of data. Trying to encrypt an SSH key isn't going to work as it's too large. Instead we'll use travis-senv, which encodes, encrypts and chunks up the key into smaller pieces and then reassembles those pieces on the Travis worker. We still use the Travis gem to encrypt the pieces to add them to the .travis.yml file.
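The chunk-and-reassemble idea is simple enough to sketch. This toy round trip is not what travis-senv actually does internally (in particular it skips the encryption step), but it shows why a large key survives being split across many small variables:

```shell
# Toy sketch: encode a key as text, split it into pieces small enough
# for one environment variable each, then reassemble and decode.
base64 < ~/.ssh/travis-deploy_dsa > key.b64
split -b 100 key.b64 chunk_        # chunk_aa, chunk_ab, ...
# (each chunk would be encrypted and added to .travis.yml here)
cat chunk_* > reassembled.b64      # worker side: concatenate in order
base64 -d reassembled.b64 > id_dsa # identical to the original key
```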

While you could give Travis a key that accesses your whole GitHub account, my preference is to create a new deploy key, which will only be used for deployment to one repository.

# make a key pair on your local machine
$ cd ~/.ssh/
$ ssh-keygen -t dsa -C "travis.deploy" -f travis-deploy_dsa
$ cd -

Note that this is a 1024 bit key so if you decide to use a 2048 bit key, then be aware that Travis sometimes has issues. Now that we have a key, we can encrypt it and add it to the Travis file.

# on your local machine

# install the necessary components
$ gem install travis
$ opam install travis-senv

# chunk the key, add to yml file and rm the intermediate
$ travis-senv encrypt ~/.ssh/travis-deploy_dsa _travis_env
$ cat _travis_env | travis encrypt -ps --add
$ rm _travis_env

travis-senv encrypts and chunks the key locally on your machine, placing its output in a file you decide (_travis_env). We then take that output file and pipe it to the travis ruby gem, asking it to encrypt the input, treating each line as separate and to be appended (-ps) and then actually adding that to the Travis file (--add). You can run $ travis encrypt -h to understand these options. Once you've run the above commands, .travis.yml will look as follows.

language: c
before_script: cd _mirage
script: bash -ex
env:
  matrix:
  - MIRAGE_BACKEND=xen DEPLOY=1
  - MIRAGE_BACKEND=unix
  global:
  - secure: ".... encrypted data ...."
  - secure: ".... encrypted data ...."
  - secure: ".... encrypted data ...."

The number of secure variables added depends on the type and size of the key you had to chunk, so it could vary from 8 up to 29. We'll commit these additions later on, alongside additions to the build script.

At this point, we also need to make a repository on GitHub and add the public deploy key so that Travis can push to it. Once you've created your repo and added a README, follow GitHub's instructions on adding deploy keys and paste in the public key (i.e. the contents of travis-deploy_dsa.pub).

Now that we can securely pass a private SSH key to the worker and have a repo that the worker can push to, we need to make additions to the build script.

Committing the unikernel to a repository

Since we can set DEPLOY=1 in the YAML file, we only need to make additions to the build script. Specifically, we want to ensure that: only the Xen backend is deployed; and only pushes to the repo result in deployments, not pull requests (we do still want builds for pull requests).

In the build script (_mirage/, which is run by the worker, we'll have to reconstruct the SSH key and configure Git. In addition, Travis gives us a set of useful environment variables, so we'll use the latest commit hash ($TRAVIS_COMMIT) to name the VM (which also helps us trace which commit it was built from).

It's easier to consider this section of code at once so I've explained the details in the comments. This section is what you need to add at the end of your existing build script (i.e straight after mirage build).

# Only deploy if the following conditions are met.
if [ "$MIRAGE_BACKEND" = "xen" \
            -a "$DEPLOY" = "1" \
            -a "$TRAVIS_PULL_REQUEST" = "false" ]; then

    # The Travis worker will already have access to the chunks
    # passed in via the yaml file. Now we need to reconstruct 
    # the GitHub SSH key from those and set up the config file.
    opam install travis-senv
    mkdir -p ~/.ssh
    travis-senv decrypt > ~/.ssh/id_dsa # This doesn't expose it
    chmod 600 ~/.ssh/id_dsa             # Owner can read and write
    echo "Host some_user"                 >> ~/.ssh/config
    echo "  Hostname"          >> ~/.ssh/config
    echo "  StrictHostKeyChecking no"     >> ~/.ssh/config
    echo "  CheckHostIP no"               >> ~/.ssh/config
    echo "  UserKnownHostsFile=/dev/null" >> ~/.ssh/config

    # Configure the worker's git details
    # otherwise git actions will fail.
    git config --global "travis@example.com"  # placeholder address
    git config --global "Travis Build Bot"

    # Do the actual work for deployment.
    # Clone the deployment repo. Notice the user,
    # which is the same as in the ~/.ssh/config file.
    git clone git@some_user:amirmc/www-test-deploy
    cd www-test-deploy

    # Make a folder named after the commit. 
    # If we're rebuilding a VM from a previous
    # commit, then we need to clear the old one.
    # Then copy in both the config file and VM.
    rm -rf $TRAVIS_COMMIT
    mkdir -p $TRAVIS_COMMIT
    cp ../mir-www.xen ../ $TRAVIS_COMMIT

    # Compress the VM and add a text file to note
    # the commit of the most recently built VM.
    bzip2 -9 $TRAVIS_COMMIT/mir-www.xen
    git pull --rebase
    echo $TRAVIS_COMMIT > latest    # update ref to most recent

    # Add, commit and push the changes!
    git add $TRAVIS_COMMIT latest
    git commit -m "adding $TRAVIS_COMMIT built for $MIRAGE_BACKEND"
    git push origin master
    # Go out and enjoy the Sun!
fi

At this point you should commit the changes to .travis.yml (don't forget the deploy flag) and _mirage/ and push the changes to GitHub. Everything else will take place automatically and in a few minutes you will have a unikernel ready to deploy on top of Xen!

You can see both the complete YAML file and build script in use on my test repo, as well as the build logs for that repo and the deploy repo with a VM.

[Pro-tip: If you add [skip ci] anywhere in your commit message, Travis will skip the build for that commit. This is very useful if you're making minor changes, like updating a README.]
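For example, a docs-only commit could look like this:

```shell
# Travis skips any commit whose message contains "[skip ci]"
git commit -m "Update README [skip ci]"
```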

Finishing up

Since I'm still using Jekyll for my website, I made a short script in my Jekyll repository that builds the site, commits the contents of _site and pushes to GitHub. I simply run this after I've committed a new blog post and the rest takes care of itself.

#!/usr/bin/env bash
jekyll build
git add _site
git commit -m 'update _site'
git push origin master

Congratulations! You now have an end-to-end workflow that will produce a unikernel VM from your Jekyll-based site and push it to a repo. If you strip out all the comments, you'll see that we've written less than 50 lines of code! Admittedly, I'm not counting the 80 or so lines that came for free in the *.ml files but that's still pretty impressive.

Of course, we still need a machine to take that VM and run it but that's a topic for another post. For the time-being, I'm still using GitHub Pages but once the VM is hosted somewhere, I will:

  1. Turn off GitHub Pages and serve from the VM -- but still using Jekyll in the workflow.
  2. Replace Jekyll with OCaml-based static-site generation.

Although all the tools already exist to switch now, I'm taking my time so that I can easily maintain the code I end up using.

Expanding the script for testing

You may have noticed that the examples here are not very flexible or extensible but that was a deliberate choice to keep them readable. It's possible to do much more with the build matrix and script, as you can see from the Travis files on my website repo, which were based on those of the Mirage site and Mort's site. Specifically, you can note the use of more environment variables and case statements to decide which PPAs to grab. Once you've got your builds working, it's worth improving your scripts to make them more maintainable and cover the test cases you feel are important.

Not just for static sites (surprise!)

You might have noticed that in very few places in the toolchain above have I mentioned anything specific to static sites per se. The workflow is simply (1) do some stuff locally, (2) push to a continuous integration service which then (3) builds and deploys a Xen-based unikernel. Apart from the convenient folder structure, the specific work to treat this as a static site lives in the *.ml files, which I've skipped over for this post.

As such, the GitHub+Travis workflow we've developed here is quite general and will apply to almost any unikernels that we may want to construct. I encourage you to explore the examples in the mirage-skeleton repo and keep your build script maintainable. We'll be using it again the next time we build unikernel devices.

Acknowledgements: There were lots of things I read over while writing this post but there were a few particularly useful things that you should look up. Anil's posts on Testing with Travis and Travis for secure deployments are quite succinct (and were themselves prompted by Mike Lin's Travis post several months earlier). Looking over Mort's build script and that of mirage-www helped me figure out the deployment steps as well as improve my own script. Special thanks also to Daniel, Leo and Anil for commenting on an earlier draft of this post.

Share / Comment

Scientific evidence for health supplements

This is an update to an earlier version of this infographic, which I came across two years ago. It was interesting at the time since it did a good job at displaying the myriad supplements in a way that indicated both their likely efficacy, according to research papers, and how popular the search term was. For example, the previous image showed 'strong' evidence for licorice root as an aid for coughs, but it had a much smaller search volume than the equivalently effective fish-oil for blood pressure. I also really liked the concept of the 'Worth it Line'.

The graphic has been updated a couple of times now, with the most recent change a few weeks ago. There are quite a few differences as garlic didn't feature so prominently on the previous image and it looks like a number of things may have been removed (e.g licorice root).

Vitamin D, as beneficial to general health, is still way up there. I decided to try Vitamin D3 supplements over the winter, under the theory that I'm not getting enough sun so a supplement may be useful. I was quite surprised at how much better I felt after a day or so of taking it, to the point where I tried looking up work on Vitamin D and mood*. Scanning the research seemed to indicate that there may be links between Vitamin D, serotonin levels and depression but the extent isn't clear and conclusions were mixed. Of course, that mostly supports what the infographic is saying and it puts it above the 'Worth it Line' (for depression specifically but not general 'mood' per se).

I encourage you to go visit the site as there's an interactive version, where you can highlight a bubble to get more info and be transported to the main study it references. If you like, you can also dig into the spreadsheet itself.

Scientific evidence for health supplements


* I'm well aware that I may be experiencing nothing more than a placebo effect :)

Share / Comment

Switching from Bootstrap to Zurb Foundation

I've just updated my site's HTML/CSS and moved from Twitter Bootstrap to Zurb Foundation. This post captures my subjective notes on the migration.

My use of Bootstrap

When I originally set this site up, I didn't know what frameworks existed or anything more than the basics of dealing with HTML (and barely any CSS). I came across Twitter Bootstrap and immediately decided it would Solve All My Problems. It really did. Since then, I've gone through one 'upgrade' with Bootstrap (from 1.x to 2.x), after which I dutifully ignored all the fixes and improvements (note that Bootstrap was up to v2.3.2 while I was still using v2.0.2).

Responsive Design

For the most part, this was fine with me but for a while now, I've been meaning to make this site 'responsive' (read: not look like crap from a mobile). Bootstrap v3 purports to be mobile-first so upgrading would likely give me what I'm after but v3 is not backwards compatible, meaning I'd have to rewrite parts of the HTML. Since this step was unavoidable, it led me to have another look at front-end frameworks, just to see if I was missing anything. This was especially relevant since we'd just released the new website, itself built with Bootstrap v2.3.1 (we'd done the design/templating work long before v3 was released). It would be useful to know what else is out there for any future work.

Around this time I discovered Zurb Foundation and also the numerous comparisons between them (note: Foundation seems to come out ahead in most of those). A few days ago, the folks at Zurb released version 5, so I decided that now is the time to kick the tires. For the last few days, I've been playing with the framework and in the end I decided to migrate my site over completely.

Foundation's Yeti

Swapping out one framework for another

Over time, I've become moderately experienced with HTML/CSS and I can usually wrangle things to look the way I want, but my solutions aren't necessarily elegant. I was initially concerned that I'd already munged things so much that changing anything would be a pain. When I first put the styles for this site together, I had to spend quite a bit of time overwriting Bootstrap's defaults, so I was prepared for the same when using Foundation. Turns out that I was fine. I currently use Jekyll (and Jekyll Bootstrap), so I only had three template files and a couple of HTML pages to edit. Because I'd kept most of my custom CSS in a separate file, it was literally a case of swapping out one framework for another and bug-fixing from there onwards. There's definitely a lesson here in using automation as much as possible.
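For anyone curious what the swap looked like in practice, the change was mostly confined to the asset references in the Jekyll layout. This is a minimal sketch rather than my actual template, and the file and asset paths are hypothetical:

```html
<!-- _layouts/default.html (sketch; paths and filenames are hypothetical) -->
<html>
<head>
  <!-- Before: Bootstrap v2.x stylesheet -->
  <!-- <link rel="stylesheet" href="/assets/css/bootstrap.min.css"> -->

  <!-- After: Foundation, with custom rules kept in a separate file -->
  <link rel="stylesheet" href="/assets/css/foundation.min.css">
  <link rel="stylesheet" href="/assets/css/custom.css">
</head>
<body>
  {{ content }}
  <!-- Foundation 5's JavaScript components depend on jQuery -->
  <script src="/assets/js/jquery.min.js"></script>
  <script src="/assets/js/foundation.min.js"></script>
  <script>$(document).foundation();</script>
</body>
</html>
```

Keeping the custom CSS in its own file is what made the swap so painless: the framework reference changes, the customisations stay put.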

Customising the styles was another area of concern but I was pleasantly surprised to find I needed less customisation than with Bootstrap. This is likely because I didn't have to override as many defaults (and probably because I've learned more about CSS since then). The one thing I seemed to be missing was a way to deal with code sections, so I just took what Bootstrap had and copied it in. At some point I should revisit this.

It did take me a while to get my head around Foundation's grid but it was worth it in the end. The idea is that you should design for small screens first and then adjust things for larger screens as necessary. There are several default size classes, each of which inherits its properties from the size below unless you explicitly override it. I initially screwed this up by defining the grid using the small-# classes, which obviously looks ridiculous on small screens. I fixed it by swapping out small-# for medium-# everywhere in the HTML, after which everything looked reasonable. Items flowed sensibly into a single default column on small screens, looked acceptable on larger screens and perfectly fine on desktops. I could do more styling of the mobile view but I'd already achieved most of what I was after.
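To make that mistake concrete, here's a sketch of the mobile-first grid (the class names are Foundation 5's; the content is made up). Columns stack full-width by default on small screens, and the medium-# classes only kick in at larger sizes:

```html
<div class="row">
  <!-- No small-# classes needed: columns are full-width by default on phones -->
  <div class="medium-8 columns">Main content</div>
  <div class="medium-4 columns">Sidebar</div>
</div>

<!-- My original mistake, roughly: small-8/small-4 forces the two-column
     split everywhere, including phones, where the columns end up squashed -->
```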

Fixing image galleries and embedded content

The only additional thing I used from Bootstrap was the Carousel. I'd written some custom helper scripts that would take some images and thumbnails from a specified folder and produce clickable thumbnails with a slider underneath. Foundation provides Orbit, so I had to spend time rewriting my script to produce the necessary HTML. This actually resulted in cleaner HTML and one of the features I wanted (the ability to link to a specific image) was available by default in Orbit. At this point I also tried to make the output look better for the case where JavaScript is disabled (in essence, each image is just displayed as a list). Below is an example of an image gallery, taken from a previous post, when I joined the computer lab.
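For reference, Orbit's markup is pleasantly minimal; this is a hedged sketch (the image paths are made up), and with JavaScript disabled it degrades to exactly the plain list of images mentioned above:

```html
<ul data-orbit>
  <li><img src="/images/gallery/photo-1.jpg" alt="First image"></li>
  <li>
    <img src="/images/gallery/photo-2.jpg" alt="Second image">
    <div class="orbit-caption">An optional caption for this slide</div>
  </li>
</ul>
```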

Foundation also provides a component called Flex Video, which allows the browser to scale videos to the appropriate size. This fix was as simple as going back through old posts and wrapping anything that was <iframe> in a <div class="flex-video">. It really was that simple and all the Vimeo and YouTube items scaled perfectly. Here's an example of a video from an earlier post, where I gave a walkthrough of the site. Try changing the width of your browser window to see it scale.
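In case it's useful, the before/after looks roughly like this (the video URL is a placeholder):

```html
<!-- Before: a fixed-size embed that overflows narrow screens -->
<!-- <iframe src="//player.vimeo.com/video/12345678" width="500" height="281"></iframe> -->

<!-- After: the wrapper lets the embed scale with the browser width -->
<div class="flex-video">
  <iframe src="//player.vimeo.com/video/12345678" width="500" height="281"></iframe>
</div>
```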

Framework differences

Another of the main differences between the two frameworks is that Bootstrap uses LESS to manage its CSS whereas Foundation uses SASS. Frankly, I've no experience with either of them so it makes little difference to me, but it's worth bearing in mind for anyone whose workflow does involve pre-processing. Also, Bootstrap is available under the Apache 2 License, while Foundation is released under the MIT license.
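As I said, I don't use a pre-processor myself, but for those who do, the idea (as I understand it) is that you customise Foundation by setting SASS variables before importing the framework, rather than overriding the compiled CSS afterwards. A sketch, with invented values:

```scss
// Override Foundation's defaults before the import (the variable names are
// Foundation 5's; the values here are made up for illustration)
$primary-color: #2ba6cb;
$row-width: 62.5rem;

@import "foundation";
```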


Overall, the transition was pretty painless and most of the time was spent getting familiar with the grid, hunting for docs/examples and trying to make the image gallery work the way I wanted. I do think Bootstrap's docs are better but Foundation's aren't bad.

Although this isn't meant to be a comparison, I much prefer Foundation to Bootstrap. If you're not sure which to use then I think the secret is in the names of the frameworks.

  • Bootstrap (for me) was a great way to 'bootstrap' a site quickly with lots of acceptable defaults -- it was quick to get started but took some work to alter.
  • Foundation seems to provide a great 'foundation' on which to create more customised sites -- it's more flexible but needs more upfront thought.

That's pretty much how I'd recommend them to people now.

Share / Comment

Announcing the new OCaml.org

As some of you may have noticed, the new site is now live!

The DNS may still be propagating, so if the site hasn't updated for you yet, try again shortly. This post is in two parts: the first is the announcement and the second is a call for content.

New website design!

The new site represents a major milestone in the continuing growth of the OCaml ecosystem. It's the culmination of a lot of volunteer work over the last several months and I'd specifically like to thank Christophe, Ashish and Philippe for their dedication (the commit logs speak volumes).

Wireframes

We began this journey just over 8 months ago with paper, pencils and a lot of ideas. This led to a comprehensive set of wireframes and walk-throughs of the site, which then developed into a collection of Photoshop mockups. In turn, these formed the basis for the html templates and style sheets, which we've adapted to fit our needs across the site.

Alongside the design process, we also considered the kind of structure and workflow we aspired to, both as maintainers and contributors. This led us to develop completely new tools for Markdown and templating in OCaml, which are now available in OPAM for the benefit of all.

Working on all these things in parallel definitely had its challenges (which I'll write about separately) but the result has been worth the effort.

The journey is ongoing and we still have many more improvements we hope to make. The site you see today primarily improves upon the design, structure and workflows but in time, we also intend to incorporate more information on packages and documentation. With the new tooling, moving the website forward will become much easier and I hope that more members of the community become involved in the generation and curation of content. This brings me to the second part of this post.

Call for content

We have lots of great content on the website but there are parts that could do with a refresh and gaps that could be filled. As a community driven site, we need ongoing contributions to ensure that the site best reflects its members.

For example, if you do commercial work on OCaml then maybe you'd like to add yourself to the support page? Perhaps there are tutorials you can help to complete, like 99 problems? If you're not sure where to begin, there are already a number of content issues you could contribute to.

Although we've gone through a bug-hunt already, feedback on the site is still very welcome. You can either create an issue on the tracker (preferred), or email the infrastructure list.

It's fantastic how far we've come and I look forward to the next phase!

Share / Comment