<p><em>Amir Chaudhry: thoughts, comments &amp; general ramblings. Posts tagged <a href="http://amirchaudhry.com/tags/#ocamllabs">‘ocamllabs’</a>. Feed updated 2018-02-19T15:42:42+00:00.</em></p>
<h1 id="codemesh-2015">CodeMesh 2015</h1>
<p><em>Amir Chaudhry, 2015-11-03, <a href="http://amirchaudhry.com/codemesh2015">http://amirchaudhry.com/codemesh2015</a></em></p>
<script async="" class="speakerdeck-embed" data-id="3035d63437234495ad1cddc117321ff0" data-ratio="1.33333333333333" src="//speakerdeck.com/assets/embed.js"></script>
<p>These are the slides from my talk today at CodeMesh. This time around I was
earlier in the schedule so I get to enjoy the rest of the conference! If
you’re reading this at the conference now, please do follow the link in my
talk to rate it and give me feedback!</p>
<p>The specific items I reference in the talk are below with links to more
information.</p>
<h4 id="security-and-the-bitcoin-piñata">Security and the Bitcoin Piñata</h4>
<p>This is a bounty where we have locked away some bitcoin in a unikernel that is
running our new TLS stack. This was a new model of running a bounty and has
proven a great way to stress test the code in the wild.</p>
<ul>
<li><em>Some background to the Bitcoin Piñata</em>, <a href="http://amirchaudhry.com/bitcoin-pinata/">“The Bitcoin Piñata!”</a></li>
<li><em>The Piñata itself</em>, <a href="http://ownme.ipredator.se">“You have reached the BTC Piñata”</a></li>
<li><em>Looking over the results of the attempts</em>, <a href="https://mirage.io/blog/bitcoin-pinata-results">“Reviewing the Bitcoin Piñata”</a></li>
</ul>
<p>You can follow up with more of the background work on the TLS stack by looking
at the paper,
<a href="https://nqsb.io/nqsbtls-usenix-security15.pdf">“Not-quite-so-broken TLS: lessons in re-engineering a security protocol specification and implementation”</a>
and find other users of the libraries via <a href="https://nqsb.io">https://nqsb.io</a>.</p>
<h4 id="automated-deployment">Automated deployment</h4>
<p>I’ve previously written about how we do unikernel deployments for MirageOS.
Although the scripts themselves have evolved and become more sophisticated,
these are still a good introduction.</p>
<ul>
<li><em>Initial post on building a unikernel</em> <a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines/">“From Jekyll to Unikernel”</a></li>
<li><em>The deployment steps for one of our repos</em> <a href="http://amirchaudhry.com/heroku-for-unikernels-pt1">“Heroku for Unikernels: Part 1 - Automated deployment”</a></li>
<li><em>Some thoughts on what the future might look like</em> <a href="http://amirchaudhry.com/heroku-for-unikernels-pt2">“Heroku for Unikernels: Part 2 - Self Scaling Systems”</a></li>
</ul>
<h4 id="summoning-on-demand">Summoning on demand</h4>
<p>The work on summoning unikernels was presented at Usenix this year and you can
read the paper, <a href="http://anil.recoil.org/papers/2015-nsdi-jitsu.pdf">“Jitsu: Just-In-Time Summoning of Unikernels”</a>.
The example I showed in the talk can be found at <a href="http://www.jitsu.v0.no">http://www.jitsu.v0.no</a>.</p>
<h4 id="other-resources">Other resources</h4>
<ul>
<li><em>The MirageOS website</em>, <a href="https://mirage.io">https://mirage.io</a>
<ul>
<li><em>The <a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton repo</a>, which has a number of examples</em></li>
</ul>
</li>
<li><em>The Rump Kernels site</em>, <a href="http://rumpkernel.org">http://rumpkernel.org</a></li>
<li><em>The Nymote site</em>, <a href="http://nymote.org">http://nymote.org</a>
<ul>
<li><em>The <a href="http://nymote.org/blog/2013/introducing-nymote/">Introductory post</a> is a useful place to start.</em></li>
</ul>
</li>
</ul>
<p>To get involved in the development work, please do join the
<a href="http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel">MirageOS devel list</a> and try out some of the examples for
yourselves!</p>
<h1 id="governance-of-ocaml-org">Governance of OCaml.org</h1>
<p><em>Amir Chaudhry, 2015-09-18, <a href="http://amirchaudhry.com/governance-ocaml-org">http://amirchaudhry.com/governance-ocaml-org</a></em></p>
<p><a href="http://ocaml.org/governance.html"><img src="http://amirchaudhry.com/images/web/governance-page.png" alt="Governance Screenshot" /></a></p>
<p>For several months, I’ve been working with the maintainers of OCaml.org
projects to define and document the governance structure around the domain
name. I wrote about this <a href="http://amirchaudhry.com/towards-governance-framework-for-ocamlorg/">previously</a> and I’m pleased to say that
the work for this phase has concluded, with the document now <a href="http://ocaml.org/governance.html">live</a>.</p>
<h2 id="recurring-themes">Recurring themes</h2>
<p>There were some recurring themes that cropped up during my email discussions
with people and I thought it would be useful to present a summary of them,
along with my thoughts. Broadly, the discussions revolved around the
philosophy of the document, the extent of its scope, and the depth of coverage.
This discourse was very important for refining and improving the document.</p>
<h3 id="ideals-and-reality">Ideals and Reality</h3>
<p>Some of the comments I received were essentially that the document did not
represent how we <em>should</em> be organising ourselves. There was occasionally the
sense (to me at least) that the only appropriate form of governance is a fully
democratic and representational one.</p>
<p>That would entail things like official committees, ensuring that various
communities/organisations were represented, and perhaps establishing some
form of electoral processes. Overall, something relatively formal and quite
carefully structured. Of course, instituting such an arrangement would
necessarily require somewhat involved procedures, documentation, and
systems — as well as the volunteer time to manage those processes.</p>
<p>These may be noble aims — and I expect one day we’ll be closer to such ideals —
but one of the critical factors for the current approach was that we record
how things are <em>right now</em>. In my experience, anything else is purely
aspirational and therefore would have little bearing with how things currently
function.</p>
<p>To put it another way, the current document must not describe the structure we
<em>desire</em> to have, but the organisation we <em>actually</em> have — warts and all.
Yes, right now we have a <a href="https://en.wikipedia.org/wiki/Benevolent_dictator_for_life">BDFL</a>*, who personally owns the domain and
therefore can do as he pleases with it. Irrespective of this, the community
has been able to come together, coordinate themselves, and build very useful
things around the domain name. This has happened independently of any formal
community processes and, in my view, has largely been driven by people
supporting each other’s works and generally trying to ‘do the right thing’.</p>
<p>Another aspect to point out is that such documents and procedures are
not <em>necessary</em> for success. This is obvious when you consider how far the
OCaml community has come in such a relatively short space of time. Given this,
one might ask why we need any kind of written governance at all.</p>
<p>To answer that, I would say that once things grow beyond a certain scale, I
believe it helps to gather the implicit behaviours and document them clearly.
This allows us to be more systematic in our approach and also enables
newcomers to understand how things work and become involved more quickly. In
addition, having a clear record of how things operate in the present is an
invaluable tool in helping to clarify what exactly we should work on changing
for the future.</p>
<h3 id="extent-of-scope">Extent of scope</h3>
<p>It’s a little confusing to consider that ‘OCaml.org’ is simultaneously a
collection of websites, infrastructural components, and projects.
Disambiguating these from the wider OCaml community was important, and
relatively straightforward, but there were a few questions about the
relationship between the domain name and the projects that use it.</p>
<p>Although the governance covers the OCaml.org <em>domain name</em>, it necessarily has
an impact on the projects which make use of it. This matters because anything
under the OCaml.org domain will, understandably, be taken as authoritative by
users at large. In a way, OCaml.org becomes the sum of the projects under it,
hence it’s necessary to have some lightweight stipulations about what is
expected of those projects.</p>
<p>Projects themselves are free to organise as they wish (BDFL/Democracy/etc) but
there are certain guiding principles for OCaml.org that those projects are
expected to be compatible with (e.g. openness, community-related, comms, etc).
These stipulations are already met by the current projects, so codifying them
is intended to clarify expectations for new projects.</p>
<h3 id="depth-of-coverage">Depth of coverage</h3>
<p>Another of the recurring points was how the current document didn’t capture
every eventuality. Although I could have attempted this, the end result would
have been a lengthy document, full of legalese, that I expect very few people
would ever read. The document would also have needed to cover eventualities
that have not occurred (yet) and/or may be very unlikely to occur.</p>
<p>Of course, this is <em>not</em> a legal document. No-one can be compelled to comply
with it and there are very few sanctions for anyone who chooses not to comply.
However, for those who’ve agreed to it, acceptance signals a clear intent to
take part in a <a href="https://en.wikipedia.org/wiki/Social_contract">social contract</a> with the others involved in work
around the domain name.</p>
<p>Overall, I opted for a lightweight approach that would cover how we typically
deal with issues and result in a more readable document. Areas that are
‘uncharted’ for us should be dealt with as they have been so far — through
discussion and action — and can subsequently be incorporated when we have a
better understanding of the issues and solutions.</p>
<h2 id="a-solid-starting-position">A solid starting position</h2>
<p>The current version of the governance document is now live and it is very much
intended to be a living document, representing where we are now. As the
community continues to grow and evolve, we should revisit this to ensure it is
accurate and is meeting our needs.</p>
<p>I look forward to seeing where the community takes it!</p>
<p><em>In case you’re interested, the set of links below covers the journey from
beginning to end of this process.</em></p>
<ul>
<li><em>Background — <a href="http://amirchaudhry.com/towards-governance-framework-for-ocamlorg/">“Towards a governance framework for OCaml.org”</a></em></li>
<li><em>Discussion phase — <a href="http://lists.ocaml.org/pipermail/infrastructure/2015-August/000518.html">“Adopting a Governance framework…”</a></em></li>
<li><em>Tracking issue — <a href="https://github.com/ocaml/ocaml.org/issues/700">ocaml/ocaml.org#700</a></em></li>
<li><em>Ratification — <a href="http://lists.ocaml.org/pipermail/infrastructure/2015-September/000540.html">“Governance document is now ratified…”</a></em></li>
<li><em>Governance doc — <a href="http://ocaml.org/governance.html">“Governance of the OCaml.org domain”</a></em></li>
</ul>
<p class="footnote">
* Yeah, I made sure to add Xavier to the BDFL list before publishing
this. :)
</p>
<p class="footnote">
Thanks to Ashish, Philippe and Anil for comments on an earlier draft.
</p>
<h1 id="unikernels-at-polyconf">Unikernels at PolyConf!</h1>
<p><em>Amir Chaudhry, 2015-07-04, <a href="http://amirchaudhry.com/unikernels-polyconf-2015">http://amirchaudhry.com/unikernels-polyconf-2015</a></em></p>
<p><strong><em>Updated: 14 July (see below)</em></strong></p>
<script async="" class="speakerdeck-embed" data-id="1076a457408d42d7bb9da27dd88b68c8" data-ratio="1.77777777777778" src="//speakerdeck.com/assets/embed.js"></script>
<p>Above are my slides from a talk at PolyConf this year. I was originally going
to talk about the <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/">MISO</a> tool stack and personal clouds (i.e. how we’ll
build <a href="http://nymote.org/blog/2013/introducing-nymote/">towards Nymote</a>) but after some informal conversations with
other speakers and attendees, I thought it would be <em>way</em> more useful to focus
the talk on unikernels themselves — specifically, the ‘M’ in MISO. As a
result, I ended up completely rewriting all my slides! Since I pushed this
post just before my talk, I hope that I’m able to stick to the 30min time slot
(I’ll find out very soon).</p>
<p>In the slides I mention a number of things we’ve done with MirageOS so I
thought it would be useful to list them here. If you’re reading this at the
conference now, please do give me feedback at the end of my talk!</p>
<ul>
<li><em>Thomas’ Hello world and REST service</em>, <a href="http://roscidus.com/blog/blog/2014/07/28/my-first-unikernel/">“My First Unikernel”</a></li>
<li><em>Magnus on</em> <a href="http://www.skjegstad.com/blog/2015/03/25/mirageos-vm-per-url-experiment/">“A unikernel experiment: A VM for every URL”</a></li>
<li><em>Mindy on <a href="http://www.somerandomidiot.com/blog/2014/08/19/i-am-unikernel/">“I Am Unikernel (and So Can You!)”</a></em></li>
<li>
<p><em>The <a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton repo</a>, which has a number of examples</em></p>
</li>
<li><em>My previous posts (referred to in the talk)</em>
<ul>
<li><a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines/">“From Jekyll site to Unikernel in fifty lines of code.”</a></li>
<li><a href="http://amirchaudhry.com/heroku-for-unikernels-pt1">“Towards Heroku for Unikernels”</a></li>
<li><a href="http://amirchaudhry.com/bitcoin-pinata/">“The Bitcoin Piñata!”</a></li>
<li><a href="http://nymote.org/blog/2013/introducing-nymote/">“Introducing Nymote”</a></li>
</ul>
</li>
</ul>
<p>To get involved in the development work, please do join the
<a href="http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel">MirageOS devel list</a> and try out some of the examples for
yourselves!</p>
<h3 id="update--14-july">Update — 14 July</h3>
<p>The video of the talk is now available and it’s embedded below. Overall, the
talk seemed to go well and there was enough time for questions.</p>
<p>At the end of the talk, I asked people to give me feedback and shared a URL,
where I had a very short form. I had 21 responses with a rating of
<strong>4.52/5.00</strong>. I’m quite pleased with this and the feedback was also useful.
In a nutshell, the audience seemed to really appreciate the walkthrough (which
encourages me to make some screencasts). One comment was that I didn’t
do enough justice to the security benefits. Specifically, I could have drawn
more attention to the OCaml TLS work, which prevents bugs like Heartbleed.
Security is definitely one of the key benefits of MirageOS unikernels (see
<a href="https://mirage.io/blog/why-ocaml-tls">here</a>), so I’ll do more to emphasise that next time.</p>
<p>Here’s the video and I should mention that the slides seem to be a few
seconds ahead. You’ll notice that I’ve left the feedback link live, too. If
you’d like to tell me what you think of the talk, please do so! There are some
additional comments at the end of this post.</p>
<div class="flex-video">
<iframe width="540" height="304" src="https://www.youtube.com/embed/nZLy19eRWLk" frameborder="0" allowfullscreen=""></iframe>
</div>
<!-- I find it a little awkward watching myself give a talk, especially when I
recognise things I should have said (or obvious mistakes).
-->
<p>Finally, here are few things I should clarify:</p>
<ul>
<li>Security is one of the critical benefits, which is why we need new systems
for personal clouds (rather than legacy stacks).</li>
<li>We still get to use all the existing tools for storage (e.g. EBS), it
doesn’t have to be Irmin.</li>
<li>The <a href="https://mirage.io/blog/introducing-irmin">Introducing Irmin</a> post is the one I was trying to point
an audience member at.</li>
<li>When I mention the DNS server, I said it was 200MB when I actually meant
200<strong>KB</strong>. More info in the <a href="http://nymote.org/docs/2013-asplos-mirage.pdf">MirageOS ASPLOS paper</a>.</li>
<li>I referred to the <a href="http://hubofallthings.com">“HAT Project”</a> and you should also check out the
<a href="http://mor1.github.io/publications/pdf/aarhus15-databox.pdf">“Databox paper”</a>.</li>
<li>A summary of other unikernel approaches is also <a href="http://www.linux.com/news/enterprise/cloud-computing/819993-7-unikernel-projects-to-take-on-docker-in-2015/">available</a>.</li>
</ul>
<h1 id="heroku-for-unikernels-pt2">Towards Heroku for Unikernels: Part 2 - Self Scaling Systems</h1>
<p><em>Amir Chaudhry, 2015-04-03, <a href="http://amirchaudhry.com/heroku-for-unikernels-pt2">http://amirchaudhry.com/heroku-for-unikernels-pt2</a></em></p>
<p>In the <a href="http://amirchaudhry.com/heroku-for-unikernels-pt1/">previous post</a> I described the continuous end-to-end system
that we’ve set up for some of the MirageOS projects — automatically going from
a <code class="highlighter-rouge">git push</code> all the way to live deployment, with everything under
version-control.</p>
<p>Everything I described previously already exists and you can set up the
workflow for yourself, the same way many others have done with the Travis CI
scripts for testing/build. However, there are a range of exciting
possibilities to consider if we’re willing to extrapolate <em>just a little</em> from
the tools we have right now. The rest of this post explores these ideas and
considers how we might extend our system.</p>
<p>Previously, we had finished the backbone of the workflow and I discussed a few
ideas about how we should flesh it out — namely more testing and some form of
logging/reporting. There’s substantially more we could do when we consider
how lean and nimble unikernels are, especially if we speculate about the
systems we could create as our <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/">toolstack</a> matures. A couple of
things immediately come to mind.</p>
<p>The first is the ability to boot a unikernel only when it is required, which
opens up the possibility of highly-elastic infrastructure. The second is the
ease with which we can push, pull or otherwise distribute unikernels
throughout a system, allowing new forms of deployment to both cloud and
embedded systems. We’ll consider these in turn and see where they take us,
comparing with the current ‘mirage-decks’ deployment I described in
<a href="http://amirchaudhry.com/heroku-for-unikernels-pt1/">Part 1</a>.</p>
<h2 id="demand-driven-clouds">Demand-driven clouds</h2>
<p>The way cloud services are currently provisioned means that you may have
services operating and consuming resources (CPU, memory, etc), even when there
is no demand for them. It would be significantly more efficient if we could
just <em>activate</em> a service when required and then shut it down again when the
demand has passed. In our case, this would mean that when a unikernel is
‘deployed to production’, it doesn’t actually have to be <em>live</em> — it merely
needs to be ready to boot when demand arises. With tools like
<a href="https://github.com/MagnusS/jitsu">Jitsu</a> (Just-In-Time Summoning of Unikernels), we can work
towards this kind of architecture.</p>
<h3 id="summon-when-required">Summon when required</h3>
<p>Jitsu allows us to have unikernels sitting in storage then ‘summon’ them into
existence. This can occur in response to an incoming request and with <em>no
discernible latency</em> for the requester. While unikernels are inactive, they
consume only the actual physical storage required and thus do not take up any
CPU cycles, nor RAM, etc. This means that more can be achieved with fewer
resources and it would significantly improve things like utilization rates of
hardware and power efficiency.</p>
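<p>As a rough sketch of this activation logic (plain OCaml, not the real Jitsu API; the boot and halt actions are hypothetical stand-ins for what the summoning daemon would do):</p>

```ocaml
(* Demand-driven activation, modelled as a tiny state machine. A dormant
   unikernel is booted on the first request; a periodic sweep halts it
   again once it has been idle past a timeout. *)

type state =
  | Dormant
  | Live of float        (* time of the most recent request *)

let idle_timeout = 60.0  (* illustrative: seconds of idleness before halting *)

(* On an incoming request: boot if dormant, otherwise refresh the timestamp. *)
let on_request state now =
  match state with
  | Dormant -> print_endline "boot unikernel"; Live now
  | Live _ -> Live now

(* Periodic sweep: halt the unikernel once it has been idle long enough. *)
let sweep state now =
  match state with
  | Live last when now -. last > idle_timeout ->
      print_endline "halt unikernel"; Dormant
  | s -> s

let () =
  let s = on_request Dormant 0.0 in   (* first hit boots it *)
  let s = sweep s 30.0 in             (* within the timeout: stays live *)
  assert (s = Live 0.0);
  assert (sweep s 120.0 = Dormant)    (* idle too long: halted *)
```

While dormant, the only cost is the storage holding the unikernel image, which is what makes the utilization gains possible.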
<p>In the case of the <a href="http://decks.openmirage.org">decks.openmirage.org</a> unikernel that I
discussed last time, it would mean that the site would only come online if
someone had requested it and would shut down again afterwards.</p>
<p>In fact, we’ve already been working on this kind of system and
<a href="https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/madhavapeddy">Jitsu will be presented at NSDI</a> in Oakland, California this May.
In the spirit of looking ahead, there’s more we could do.
<!-- ([PDF][jitsu-paper]) --></p>
<h3 id="hyper-elastic-scaling">Hyper-elastic scaling</h3>
<p>At the moment, Jitsu lets you set up a system where unikernels will boot in
response to incoming requests. This is already pretty cool but we could take
this a step further. If we can boot unikernels on demand, then we could use
that to build a system which can automate the <em>scale-out</em> of those services to
match demand. We could even have that system work across multiple machines,
not just one host. So how would all this look in practice for ‘mirage-decks’?</p>
<h4 id="auto-scaling-and-dispersing-our-slide-decks">Auto-scaling and dispersing our slide decks</h4>
<p>Our previous toolchain automatically boots the new unikernel as soon as it is
pulled from the git repo. Using Jitsu, our deployment machine would pull the
unikernel but leave it in the repo — it would only be activated when someone
requests access to it. Most of the time, it may receive no traffic and
therefore would remain ‘turned off’ (let’s ignore webcrawlers for now). When
someone requests to see a slide deck, the unikernel would be booted and
respond to the request. In time it can be turned off again, thus freeing
resources. So far, so good.</p>
<p>Now let’s say that a certain slide deck becomes <em>really</em> popular (e.g. posted
to HackerNews or Reddit). Suddenly, there are <em>many</em> incoming requests and we
want to be able to serve them all. We can use the one unikernel, on one
machine, until it is unable to handle the load efficiently. At this point,
the system can create new copies of that unikernel and automatically balance
across them. These unikernels don’t need to be on the same host and we should
be able to spin them up on different machines.</p>
<p>To stretch this further, we can imagine coordinating the creation of those new
unikernels nearer the <em>source</em> of that demand, for example starting off on a
European cloud, then spinning up on the East coast US and finally over to the
West coast of the US. All this could happen seamlessly and the process can
continue until the demand passes or we reach a predefined limit — after all,
given that we pay for the machines, we don’t really want to turn a Denial of
<em>Service</em> into a Denial of <em>Credit</em>.</p>
<p>After the peak, the system can automatically scale back down to being largely
dormant — ready to react when the next wave of interest occurs.</p>
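<p>The scale-out/scale-in decision above can be sketched as a pure function from observed request rate to replica count, with the predefined limit acting as the spending cap. The capacity and cap figures below are invented for illustration:</p>

```ocaml
(* Pick a replica count from the observed request rate, clamped so a
   Denial of Service can't become a Denial of Credit. *)

let per_replica_capacity = 500   (* illustrative: requests/sec one unikernel absorbs *)
let max_replicas = 8             (* illustrative: our predefined billing ceiling *)

let desired_replicas rate =
  if rate <= 0 then 0            (* no demand: fully dormant *)
  else
    (* round up: any residual demand still needs a replica *)
    let needed = (rate + per_replica_capacity - 1) / per_replica_capacity in
    min needed max_replicas

let () =
  assert (desired_replicas 0 = 0);         (* dormant when idle *)
  assert (desired_replicas 300 = 1);       (* one unikernel suffices *)
  assert (desired_replicas 2600 = 6);      (* a HackerNews spike: scale out *)
  assert (desired_replicas 1_000_000 = 8)  (* capped at the limit *)
```

After the peak, the same function drives the contraction: as the rate falls, the desired count falls with it, back to zero.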
<h4 id="can-we-actually-do-this">Can we actually do this?</h4>
<p>If you think this is somewhat fanciful, that’s perfectly understandable — as I
mentioned previously, this post is very much about <em>extrapolating</em> from where
the tools are right now. However, unikernels actually make it very easy to
run quick experiments which indicate that we could iterate towards what I’ve
described.</p>
<p>A recent and somewhat extreme experiment ran a
<a href="http://www.skjegstad.com/blog/2015/03/25/mirageos-vm-per-url-experiment/">unikernel VM for <em>each URL</em></a>. Every URL on a small static
site was served from its own, self-contained unikernel, complete with its own
web server (even the ‘rss.png’ icon was served separately). You can read the
post to see how this was done and it also led to an interesting
<a href="http://lists.xenproject.org/archives/html/mirageos-devel/2015-03/msg00110.html">discussion</a> on the mailing list (e.g. if you’re only serving a
single item, why use a web server at all?). Of course, this was just an
<em>experiment</em> but it demonstrates what is possible now and how we can iterate,
uncover new problems, and move forward. One such question is how to
automatically handle networking during a scale-out, and this is an area where
tools like <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/#signpost">Signpost</a> can be of use.</p>
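<p>The core of that experiment, one self-contained unikernel per URL, amounts to a dispatch table from paths to unikernel names. A minimal sketch (the names are made up, and the real experiment did this with Jitsu and DNS rather than a lookup function):</p>

```ocaml
(* Map each URL path to the name of the unikernel that serves it. *)
let routes =
  [ ("/",        "index-unikernel");
    ("/rss.png", "rss-icon-unikernel");
    ("/about",   "about-unikernel") ]

(* Resolve a request to a summoning action, or a 404 if no VM owns the path. *)
let summon path =
  match List.assoc_opt path routes with
  | Some vm -> Printf.sprintf "summon %s" vm
  | None -> "404: no unikernel for this path"

let () =
  print_endline (summon "/rss.png")  (* even the icon gets its own VM *)
```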
<p>Overall, the model I’ve described is quite different to the way we currently
use the cloud, where the overhead of a classic OS is constantly consuming
resources. Although it’s tempting to stick with the same frame of reference
we have today, we should recognise that the current model is inextricably
intertwined with the traditional software stacks themselves. Unikernels allow
completely new ways of creating, distributing and managing software and it
takes some thought in order to fully exploit their benefits.</p>
<p>For example, having a demand-driven system means we can deliver more services
from just the one set of physical hardware — because not all those services
would be consuming resources at the same time. There would also be a dramatic
impact on the economics, as billing cycles are currently measured in hours,
whereas unikernels may only be active for seconds at a time. In addition to
these benefits, there are interesting possibilities in how such scale-outs can
be coordinated across <em>different</em> devices.</p>
<h2 id="hybrid-deployments">Hybrid deployments</h2>
<p>As we move to a world with more connected devices, the software and services
we create will have to operate across both the cloud and embedded systems.
There have been many names for this kind of distributed system, ranging from
ubiquitous computing to dust clouds and the ‘Internet of Things’ but they all
share the same idea of running software at the edges of the network (rather
than just cloud deployments).</p>
<p>When we consider the toolchain we already have, it’s not much of a stretch to
imagine that we could also build and store a unikernel for ARM-based
deployments. Those unikernels can be deployed onto embedded devices and
currently we target the <a href="http://openmirage.org/wiki/xen-on-cubieboard2">Cubieboard2</a>.</p>
<!-- For the example of our static websites, it would be straightforward to serve them from cubieboards that reside from our homes, thus further minimising the costs to run such infrastructure. However, they could be configured such that if demands begins to peak, then an automated scale-out can occur from the Cubieboard onto the public cloud instead. -->
<!-- You could even set up such a system to push the well-tested unikernels out onto embedded devices elsewhere (think IoT). In this way you only need a Minimal cloud infrastructure for your IoT service, in order to push new code out to end points, where the work is actually done (within a user's home). Think of the Goodnight Lamp, This can drastically reduce cost and any loss of the central service means end devices can keep working. (requires Signpost?). Have a central location where devices can pick up updates from. Doesn't need to do any more than coordinating stuff and devices can work P2P. V cheap to run and make money from selling devices. -->
<p>We could make such a system smarter. Instead of having the edge devices
constantly polling for updates, our deployment process could directly <em>push</em>
the new unikernels out to them. Since these devices are likely to be behind
NATs and firewalls, tools like <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/#signpost">Signpost</a> could deal with the issue
of secure connectivity. In this way, the centralized deployment process
remains as a coordination point, whereas most of the workload is dealt with by
the devices the unikernels are running on. If a central machine happens to be
unavailable for any reason, the edge-devices would continue to function as
normal. This kind of arrangement would be ideal for Internet-of-Things style
deployments, where it could reduce the burden on centralised infrastructure
while still enabling continuous deployment.</p>
<p>In this scenario, we could serve the traffic for ‘mirage-decks’ from a
unikernel on a Cubieboard2, which could further minimise the cost of running
such infrastructure. It could be configured such that if demand begins to
peak, then an automated scale-out can occur from the Cubieboard2 directly out
onto the public cloud and/or <em>other Cubieboards</em>. Thus, we can still make use
of third-party resources but only when needed and of the kind we desire. Of
course, running a highly distributed system leads to other needs.</p>
<h2 id="remember-all-the-things">Remember all the things</h2>
<p>When running services at scale it becomes important to track the activity and
understand what is taking place in the system. In practice, this means logging
the activity of the unikernels, such as when and where they were created and
how they perform. This becomes even more complex for a distributed system.</p>
<p>If we also consider the logging needs of a highly-elastic system, then another
problem emerges. Although scaling up a system is straightforward to
conceptualise, scaling it back <em>down</em> again presents new challenges. Consider
all the additional logs and data that have been created during a scale-out —
all of that history needs to be merged back together as the system contracts.
To do that properly, we need tools designed to manage distributed data
structures, with a consistent notion of merges.</p>
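<p>A toy version of such a merge, in plain OCaml rather than the Irmin API: treat each replica’s log as a list of timestamped entries and define merging as a deduplicated, time-ordered union, so histories combine the same way whatever order the nodes contract in:</p>

```ocaml
(* An activity-log entry: (timestamp, node, message). *)
type entry = float * string * string

(* Merge two replica logs: a deduplicated union, ordered by timestamp
   (then node and message, via the polymorphic compare on tuples). *)
let merge (a : entry list) (b : entry list) : entry list =
  List.sort_uniq compare (a @ b)

let () =
  let us_east = [ (1.0, "us-east", "boot"); (4.0, "us-east", "halt") ] in
  let eu      = [ (2.0, "eu", "boot"); (3.0, "eu", "serve /deck") ] in
  (* merging is commutative: the contraction order doesn't matter *)
  assert (merge us_east eu = merge eu us_east);
  assert (List.length (merge us_east eu) = 4)
```

Irmin generalises this well beyond append-only logs, with Git-style forks and programmable merge functions per data structure.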
<p><a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/#irmin">Irmin</a> addresses these kinds of needs and it enables a style of
programming very similar to the Git workflow, where distributed nodes fork,
fetch, merge and push data between each other. Building an end-to-end logging
system with Irmin would enable data to be managed and merged across different
nodes and keep track of activity, especially in the case of a scale down. The
ability to capture such information also means the opportunity to provide
analytics to the creators of those unikernels around performance and usage
characteristics.</p>
<p>The use of Irmin wouldn’t be limited to logging as the unikernels themselves
could use it for managing data in lieu of other file systems. I’ll refrain
from extrapolating too far about this particular tool as it’s still under
rapid development and we’ll write more as it matures.</p>
<!-- With something like [Irmin][irmin-post], you may even be able to receive notifications about the type of incoming traffic and raise the limit if you so wish. May be able to configure your embedded devices to scale up to the hosted provider if there's sufficient demand. -->
<h2 id="on-immutable-infrastructure">On immutable infrastructure</h2>
<p>You may have noticed that one of the benefits of the unikernel approach arises
because the artefacts themselves are not altered once they’re created.
This is in line with the recent resurgence of ideas around ‘immutable
infrastructure’. Although there isn’t a precise definition of this, the
approach is that machines are treated as replaceable and can be regularly
re-provisioned with a known state. Various tools help the existing systems to
achieve this but in the case of unikernels, everything is already under
version control, which makes managing a deployment much easier.</p>
<p>As our approach is already compatible with such ideas, we can take it a step
further. Immutable infrastructure essentially means the artefact produced
<em>doesn’t matter</em>. It’s disposable because we have the means to easily recreate
it. In our current example, we still ship the unikernel around. In order to
make this ‘fully immutable’, we’d have to know the state of all the packages
and code used when <em>building</em> the unikernel. That would give us a complete
manifest of which package versions were pulled in and from which sources.
Complete information like this would allow us to recreate any given unikernel
in a highly systematic way. If we can achieve this, then it’s the manifest
which generates everything else that follows.</p>
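<p>A minimal sketch of such a manifest (the package names and versions are illustrative, not a real opam switch): render the pins in a canonical order, so that equal sets of pins always describe identical build inputs:</p>

```ocaml
(* A manifest is a list of (package, pinned version) pairs. Rendering it
   in sorted order gives a canonical form: two builds from equal
   manifests see exactly the same inputs, making the built unikernel
   itself disposable. *)
let render (pins : (string * string) list) : string =
  pins
  |> List.sort compare
  |> List.map (fun (pkg, ver) -> pkg ^ "." ^ ver)
  |> String.concat "\n"

let () =
  let a = [ ("mirage", "2.2.0"); ("tls", "0.4.0"); ("irmin", "0.9.0") ] in
  let b = [ ("tls", "0.4.0"); ("irmin", "0.9.0"); ("mirage", "2.2.0") ] in
  (* order of discovery doesn't matter: same pins, same manifest *)
  assert (render a = render b);
  assert (render a = "irmin.0.9.0\nmirage.2.2.0\ntls.0.4.0")
```

Hashing this canonical string would give a stable identifier for the build, which is the “manifest generates everything else” idea in miniature.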
<p>In this world-view, the unikernel itself becomes something akin to caching.
You use it because you don’t want to rebuild it from source — even though
unikernels are quicker to build than a whole OS/App stack. For more security
critical applications, you may want to be assured of the code that is pulled
in, so you examine the manifest file before rebuilding for yourself. This also
allows you to pin to specific versions of libraries so that you can explicitly
adjust the dependencies as you wish. So how do we encode the manifest? This
is another area where Irmin can help as it can keep track of the state of
package history and can recreate the environment that existed for any given
build run. That build run can then be recreated elsewhere without having to
manually specify package versions.</p>
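<p>As a rough sketch of what capturing such a manifest could look like today, the
commands below record the package set and source revision at build time (this is
illustrative, not part of any existing tooling; it assumes <code class="highlighter-rouge">opam</code> and <code class="highlighter-rouge">git</code> are
present on the build machine and the file name is hypothetical):</p>

```shell
# Hypothetical manifest capture, run just after a successful build.
# Records the exact opam package set and the source revision so the
# same unikernel could be rebuilt elsewhere.
opam list --installed > manifest.txt   # package names and versions pulled in
opam config env >> manifest.txt        # compiler switch and environment
git rev-parse HEAD >> manifest.txt     # revision of the unikernel source itself
```

Something like Irmin could then track these manifests over time, rather than leaving them as loose files.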
<p>There’s a lot more to consider here as this kind of approach opens up new
avenues to explore. For the time being, we can recognise that the unikernel
approach lends itself to achieving immutable infrastructure.</p>
<h2 id="what-happens-next">What happens next?</h2>
<p>As I mentioned at the beginning of this post, most of what I’ve described is
speculative. I’ve deliberately extrapolated from where the tools are now so as
to provoke more thoughts and discussion about how this new model can be used
in the wild. Some of these things we’re already working towards, but there are
many other uses that may surprise us — we won’t know until we get there and
experimenting is half the fun.</p>
<p>We’ll keep marching on with more libraries, better tooling and improving
quality. What happens with unikernels in the rest of 2015 is largely up to
the wider ecosystem.</p>
<p>That means you.</p>
<p><em>Edit: discuss this post on <a href="http://devel.unikernel.org/t/towards-heroku-for-unikernels/27/1">devel.unikernel.org</a></em></p>
<hr />
<p class="footnote">
Thanks to Thomas Gazagnaire and Richard Mortier for comments on an earlier draft.
</p>
<!-- TODO- xref with Nymote somehow. The above infra is needed for those apps to provide a resilient service. etc -->
Towards Heroku for Unikernels: Part 1 - Automated deployment · Amir Chaudhry · 2015-03-31T14:30:00+00:00 · http://amirchaudhry.com/heroku-for-unikernels-pt1
<p>In my <a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines/">Jekyll to Unikernel post</a>, I described an automated
workflow that would take your static website, turn it into a MirageOS
unikernel, and then store that unikernel in a git repo for later deployment.
Although it was written from the perspective of a static website, the process
was applicable to any MirageOS project.
This post covers how things have progressed since then and the kind of
automated, end-to-end deployments that we can achieve with unikernels.</p>
<p>If you’re already familiar with the above-linked post then it should be clear
that this will involve writing a few more scripts and ensuring
they’re in the right place. The rest of this post will go through a real
world example of such an automated system, which we’ve set up for building and
deploying the unikernel that serves our slide decks — <a href="https://github.com/mirage/mirage-decks">mirage-decks</a>. Once
you’ve gone through this post, you should be able to recreate such a workflow
for your own needs. In Part 2 of this series I’ll build on this post and
consider what the possibilities could be if we extended the system using
some of our <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/">other tools</a> — thus arriving at something very much
like our own Heroku for Unikernels.</p>
<h3 id="standardised-build-scripts">Standardised build scripts</h3>
<p>Almost all of our OCaml projects now use Travis CI for build and testing (and
deployment). In fact, there are so many libraries now that we recently put
together an <a href="https://github.com/ocaml/ocaml-travisci-skeleton">OCaml Travis Skeleton</a>, which means we don’t
have to manually keep the scripts in sync across all our repos — and fewer
copy/paste/edits means fewer mistakes.</p>
<p>If you’re familiar with the build scripts from <a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines#setting-up-travis-ci">last time</a>, then
you can browse the new scripts and you’ll see that they’re broadly similar.
In many cases you may well be able to depend on one or other of the scripts
directly and for a handful of scenarios, you can fork and patch them to
suit you (i.e. for MirageOS unikernels). We can do this because we’ve made it
quick to set up an OCaml environment using an <a href="https://launchpad.net/~avsm">Ubuntu PPA</a>. The rest
of the work is done by the <code class="highlighter-rouge">mirage</code> tool itself so once that’s installed, the
build process becomes fairly straightforward. The complexity around secure
keys was also <a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines/#sending-travis-a-private-ssh-key">covered last time</a>, which allowed us to commit the
final unikernel to a deployment repo. That means the remaining step is
to automate the deployment itself.</p>
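<p>For reference, the environment bootstrap performed by those build scripts
amounts to something like the following (a sketch only; the PPA is the one
linked above, but exact package names varied over time and opam flags are from
the 1.2-era CLI):</p>

```shell
# Sketch of the OCaml/MirageOS build environment setup used in CI
# (package names as used around 2015; treat as illustrative).
sudo add-apt-repository -y ppa:avsm/ppa
sudo apt-get update -qq
sudo apt-get install -y ocaml ocaml-native-compilers opam
opam init -y --auto-setup
eval $(opam config env)
opam install -y mirage   # installs the mirage tool and its dependencies
```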
<h3 id="automated-deployment-of-unikernels">Automated deployment of unikernels</h3>
<p>Committing the unikernel to a deployment repo is where the previous post ended
and a <a href="http://amirchaudhry.com/unikernels-for-everyone/">number of people</a> forged ahead and wrote about their
experiences deploying onto AWS and Linode. Many of these deployments
(understandably) involve a number of quite manual steps. It would be
particularly useful to construct a set of scripts that can be fully automated,
such that a <code class="highlighter-rouge">git push</code> to a repo will automatically run through the cycle of
building, testing, storing and <em>activating</em> a new unikernel. We’ve done
exactly this with some of our repos and this post will talk through those
scripts.</p>
<h4 id="the-deployment-options--xen-or-nix">The deployment options — Xen or *nix</h4>
<p>MirageOS unikernels can currently be built for Xen and Unix backends. This is
a straightforward step and typically the build matrix is already set up to
test that both of them build as expected. For this post, I’ve only considered
the Xen backend as that’s our chosen deployment method but it would be equally
feasible to deploy the unix-based unikernels onto a *nix machine in much the
same way.
In this sense, you get to choose whether you want to deploy the unikernels
onto a <a href="http://en.wikipedia.org/wiki/Hypervisor#Classification">Hypervisor</a> (for isolation and security) or whether running
them as unix processes better suits your needs.
<!-- If you step back and think about what this means, it's *almost*
like considering the
[difference between a Type-1 and Type-2 hypervisor][hyp-class] and selecting
between them. -->
The unikernel approach means that <em>both</em> options are open to
you, with little more than a command-line flag between them.</p>
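<p>That command-line flag is the backend passed to the <code class="highlighter-rouge">mirage</code> tool at
configuration time; a sketch of the two paths (flags as in the MirageOS 2.x
tooling of that era):</p>

```shell
# Build the project as an ordinary Unix process...
mirage configure --unix
make
# ...or, from the same source tree, as a standalone Xen kernel.
mirage configure --xen
make
```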
<p>In terms of the deployment machines there are several options to consider. The
most obvious is to set up a dedicated host, where you have full access to the
machine and can <a href="http://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide">install Xen</a>. Another is to have a machine
running on EC2 and <a href="http://somerandomidiot.com/blog/2014/08/19/i-am-unikernel/">create scripts</a> to deal with unikernels. You
could also build and deploy onto <a href="http://openmirage.org/wiki/xen-on-cubieboard2">Xen on the Cubieboard2</a>. If you’d
rather test out the complete system first, you could set up an appropriate
<a href="http://www.skjegstad.com/blog/2015/01/19/mirageos-xen-virtualbox/">machine in Virtualbox</a> to work with.</p>
<p>For our workflow, we use Xen unikernels which we deploy to a dedicated host.
For the sake of brevity, I won’t go into the details of how to set up
the machine but you can follow the instructions linked above.</p>
<h4 id="the-scripts-for-decksopenmirageorg">The scripts for decks.openmirage.org</h4>
<p><a href="https://github.com/mirage/mirage-decks">Decks</a> is the source repo that holds many of our slides, which
we’ve presented at conferences and events over the years (I admit that I have
yet to <a href="https://github.com/mirage/mirage-decks/issues/49">add mine</a>). The repo compiles to a unikernel that can
then serve those slides, as you see at <a href="http://decks.openmirage.org">decks.openmirage.org</a>. For
maximum fun-factor, we usually run that unikernel from a Cubieboard2 when
giving talks.</p>
<p><img src="http://amirchaudhry.com/images/singles/mirage-cubieboard.jpg" alt="mirage-decks-on-cubieboard" /></p>
<p>The toolchain for this unikernel includes build, store and deploy. We’ll
recap the first two steps before going through the final one.</p>
<p><strong>Build</strong> — In the root of the decks source repo, you’ll notice the
<code class="highlighter-rouge">.travis.yml</code> file, which fetches the standard build script mentioned earlier.
Building the unikernel proceeds according to the options in the build matrix.</p>
<figure class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="na">language</span><span class="pi">:</span> <span class="s">c</span>
<span class="na">install</span><span class="pi">:</span> <span class="s">wget https://raw.githubusercontent.com/ocaml/ocaml-travisci-skeleton/master/.travis-mirage.sh</span>
<span class="na">script</span><span class="pi">:</span> <span class="s">bash -ex .travis-mirage.sh</span>
<span class="na">env</span><span class="pi">:</span>
<span class="na">matrix</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">OCAML_VERSION=4.02 MIRAGE_BACKEND=unix MIRAGE_NET=socket</span>
<span class="pi">-</span> <span class="s">OCAML_VERSION=4.02 MIRAGE_BACKEND=unix MIRAGE_NET=direct</span>
<span class="pi">-</span> <span class="s">OCAML_VERSION=4.02 MIRAGE_BACKEND=xen</span>
<span class="s">MIRAGE_ADDR="46.43.42.134" MIRAGE_MASK="255.255.255.128" MIRAGE_GWS="46.43.42.129"</span>
<span class="s">DEPLOY=1</span>
<span class="na">global</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">secure</span><span class="pi">:</span> <span class="s2">"</span><span class="s">....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">...."</span>
<span class="pi">-</span> <span class="na">secure</span><span class="pi">:</span> <span class="s2">"</span><span class="s">....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">...."</span>
<span class="pi">-</span> <span class="na">secure</span><span class="pi">:</span> <span class="s2">"</span><span class="s">....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">...."</span>
<span class="s">...</span></code></pre></figure>
<p>In this case, two builds occur for Unix and one for Xen with different
parameters being used for each. If you look at the
<a href="https://github.com/mirage/mirage-decks/blob/master/.travis.yml">actual travis file</a>, you’ll notice there are 26 lines of
encrypted data. This is how we pass the deployment key to Travis CI, so that
it has push access to the <em>separate</em> <a href="https://github.com/mirage/mirage-decks-deployment">mirage-decks-deployment</a>
repo. You can read the section in the previous post to see how we
<a href="https://github.com/mirage/mirage-decks-deployment">send Travis a private key</a>.</p>
<p><strong>Store</strong> — One of the combinations in the build matrix (configured for Xen),
is intended for deployment. When that unikernel is completed, an additional
part of the script is triggered that pushes it into the deployment repo.</p>
<h4 id="deployment-scripts">Deployment scripts</h4>
<p>After the ‘build’ and ‘store’ steps above, we have a
<a href="https://github.com/mirage/mirage-decks-deployment">deployment repository</a> with a collection of Xen unikernels. For
this stage, we have a new set of scripts that live in this repo alongside those
unikernels. Specifically, you’ll notice a folder called <code class="highlighter-rouge">scripts</code> that
contains four files.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">.</span>
├── Makefile
├── README.md
├── scripts
│ ├── crontab
│ ├── deploy.sh
│ ├── install-hooks.sh
│ └── post-merge.hook
...</code></pre></figure>
<p>A quick summary of the setup is that we clone the repo onto our deployment
machine and install some hooks there. Then a simple cronjob will perform
<code class="highlighter-rouge">git pull</code> at regular intervals. If a merge event occurs, then it means the
repo has been updated and another script is triggered. That script removes the
currently running unikernel and boots the latest version from the repo. It’s
fairly straightforward and I’ll explain what each of the files does below.</p>
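<p>Putting those pieces together, the one-time setup on the deployment host is
roughly the following (the repo URL is the deployment repo linked above; the
clone location under <code class="highlighter-rouge">$HOME</code> matches the path the crontab expects):</p>

```shell
# One-time bootstrap of the deployment machine
# (assumes git and cron are already installed).
cd $HOME
git clone https://github.com/mirage/mirage-decks-deployment.git
cd mirage-decks-deployment
make install   # symlinks the git hook and installs the crontab
```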
<p><strong>Makefile</strong> - After cloning the repo, run <code class="highlighter-rouge">make install</code>. This will trigger
<code class="highlighter-rouge">install-hooks.sh</code> to set things up appropriately. It’s worth remembering that
from this point on, the git repo on the deployment machine will not be
identical to the deployment repo on GitHub.</p>
<p><strong>install-hooks.sh</strong> — The first two lines ensure that the commands
will be run from the root of the git repo. The third line symlinks the
<code class="highlighter-rouge">post-merge.hook</code> file into the appropriate place within the <code class="highlighter-rouge">.git</code> directory.
This is the folder where customized <a href="http://www.git-scm.com/book/en/v2/Customizing-Git-Git-Hooks">git hooks</a> need to be placed in
order to work. The final line adds the file <code class="highlighter-rouge">scripts/crontab</code> to the
deployment machine’s list of cron jobs.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">ROOT</span><span class="o">=</span><span class="k">$(</span>git rev-parse <span class="nt">--show-toplevel</span><span class="k">)</span> <span class="c"># obtain path to root of repo</span>
<span class="nb">cd</span> <span class="nv">$ROOT</span>
<span class="c"># symlink the post-merge.sh file into the .git/hooks folder</span>
ln <span class="nt">-sf</span> <span class="nv">$ROOT</span>/scripts/post-merge.hook <span class="nv">$ROOT</span>/.git/hooks/post-merge
crontab scripts/crontab <span class="c"># add to list of cron jobs</span></code></pre></figure>
<p><strong>crontab</strong> — This file is a cronjob that sets up the deployment machine to
perform a <code class="highlighter-rouge">git pull</code> on the deployment repo at regular intervals. Changing the
file in the repo will ultimately cause it to be updated on the deployment
machine (cf. <code class="highlighter-rouge">deploy.sh</code>). At the moment, it’s set to run every 11 minutes.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="k">*</span>/11 <span class="k">*</span> <span class="k">*</span> <span class="k">*</span> <span class="k">*</span> <span class="nb">cd</span> <span class="nv">$HOME</span>/mirage-decks-deployment <span class="o">&&</span> git pull</code></pre></figure>
<p><strong>post-merge.hook</strong> — Since we’ve already run the Makefile, this file is
symlinked from the appropriate place on the deployment machine’s copy of the
repo. When a <code class="highlighter-rouge">git pull</code> results in new commits being downloaded and merged,
then this script is triggered immediately afterwards. In this case, it just
executes the <code class="highlighter-rouge">deploy.sh</code> script.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">ROOT</span><span class="o">=</span><span class="k">$(</span>git rev-parse <span class="nt">--show-toplevel</span><span class="k">)</span> <span class="c"># obtain path to root of repo</span>
<span class="nb">exec</span> <span class="nv">$ROOT</span>/scripts/deploy.sh <span class="c"># execute the deploy script</span></code></pre></figure>
<p><strong>deploy.sh</strong> — This is where the work actually happens and you’ll notice that
there really isn’t much to do! I’ve commented in the code below to explain
what’s going on.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">VM</span><span class="o">=</span>mir-decks
<span class="nv">XM</span><span class="o">=</span>xm
<span class="nv">ROOT</span><span class="o">=</span><span class="k">$(</span>git rev-parse <span class="nt">--show-toplevel</span><span class="k">)</span>
<span class="nb">cd</span> <span class="nv">$ROOT</span>
crontab scripts/crontab <span class="c"># Update cron scripts</span>
<span class="c"># Identify the latest build in the repo and then use</span>
<span class="c"># the generic Xen config script to construct a</span>
<span class="c"># specific file for this unikernel. Essentially,</span>
<span class="c"># 'sed' just does a find/replace on two elements and</span>
<span class="c"># the result is written to a new file.</span>
<span class="c">#</span>
<span class="nv">KERNEL</span><span class="o">=</span><span class="nv">$ROOT</span>/xen/<span class="sb">`</span><span class="nb">cat </span>xen/latest<span class="sb">`</span>
sed <span class="nt">-e</span> <span class="s2">"s,@VM@,</span><span class="nv">$VM</span><span class="s2">,g; s,@KERNEL@,</span><span class="nv">$KERNEL</span><span class="s2">/</span><span class="nv">$VM</span><span class="s2">.xen,g"</span> <span class="se">\</span>
< <span class="nv">$XM</span>.conf.in <span class="se">\</span>
<span class="o">></span>| <span class="nv">$KERNEL</span>/<span class="nv">$XM</span>.conf
<span class="c"># Move into the folder with the latest unikernel.</span>
<span class="c"># Remove any uncompressed Xen images found there</span>
<span class="c"># (since we may be starting a rebuilt unikernel).</span>
<span class="c"># Unzip the compressed unikernel.</span>
<span class="c">#</span>
<span class="nb">cd</span> <span class="nv">$KERNEL</span>
rm <span class="nt">-f</span> <span class="nv">$VM</span>.xen
bunzip2 <span class="nt">-k</span> <span class="nv">$VM</span>.xen.bz2
<span class="c"># Instruct Xen to remove the currently running</span>
<span class="c"># unikernel and then start up the new one we</span>
<span class="c"># just unzipped.</span>
<span class="c">#</span>
<span class="nb">sudo</span> <span class="nv">$XM</span> destroy <span class="nv">$VM</span> <span class="o">||</span> <span class="nb">true
sudo</span> <span class="nv">$XM</span> create <span class="nv">$XM</span>.conf</code></pre></figure>
<p>At this point, we now have a complete system!
Of course, this arrangement isn’t perfect and
there are a number of things we could improve. For example, it depends on a
cron job, which means it may take a while before a new unikernel is live.
Replacing this with something triggered on a webhook could be an improvement,
but it does mean exposing an end-point to the internet. The scripts will also
redeploy the <em>current</em> unikernel, even if the only change is to the crontab
schedule. Some extra work in the deploy script, using some git tools, might
work around this.</p>
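<p>As a sketch of that last improvement, the deploy script could bail out when
the incoming merge touched nothing under the directory holding the unikernels
(this is a hypothetical guard, assuming builds land under <code class="highlighter-rouge">xen/</code>;
<code class="highlighter-rouge">ORIG_HEAD</code> is set by git after a merge, so it is available in the post-merge hook):</p>

```shell
# Hypothetical guard near the top of deploy.sh: skip the redeploy when
# the merge changed nothing under xen/ (e.g. only the crontab schedule).
if git diff --quiet ORIG_HEAD HEAD -- xen/; then
  echo "No new unikernel in this update; skipping redeploy"
  exit 0
fi
```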
<p>Despite these minor issues, we do have a completely end-to-end workflow that
takes us all the way from pushing some new changes to deploying a new
unikernel! An additional feature is that <em>everything</em> is checked into version
control, right from the scripts to the completed artefacts (including a method
of transmitting secure keys/data over public systems).</p>
<p>There is minimal work done outside the code you’ve already seen, though there
is obviously some effort involved in setting up the deployment machine.
However, as mentioned earlier, you could either use the unix-based unikernels
or experiment with a <a href="http://www.skjegstad.com/blog/2015/01/19/mirageos-xen-virtualbox/">Virtualbox VM with Xen</a> just to test out this
entire toolchain.</p>
<p>Overall, we’ve only added around 20 lines of code to the initial 50 or so that
we use for the Travis CI build. So for <em>less than 100 lines of code</em>, we have
a <em>complete</em> end-to-end system that can take a MirageOS project from a
<code class="highlighter-rouge">git push</code>, all the way through to a live deployment.</p>
<h3 id="fleshing-out-the-backbone">Fleshing out the backbone</h3>
<p>In our current system, if the unikernel <em>builds</em> appropriately then we just
assume it’s ok to deploy to production. Fire and forget! What could
possibly go wrong! Of course, this is a somewhat naive approach and for any
critical system it would be better to hook in some additional things.</p>
<h4 id="testing-frameworks">Testing frameworks</h4>
<p>One obvious improvement would be to introduce a more thorough testing regimen,
which would include running unit tests as part of the build. Indeed, various
libraries in the MirageOS project are already moving towards this model
(e.g see the <a href="http://openmirage.org/wiki/weekly-2015-03-11#Qualityandtest">notes</a> for links).</p>
<p>It’s even possible to go beyond unit tests and introduce more
functional/systems/stress testing on the complete unikernel before permitting
deployment. This would help to surface any wider issues as services interact
and we could even simulate network conditions — achieving something like
‘staging on steroids’.</p>
<h4 id="logging-and-notifications">Logging and notifications</h4>
<p>The scenario we have above also assumes that things work smoothly and nobody
needs to know anything. It would be useful to hook in some form of logging
and reporting, such that when a new unikernel is deployed a notification can
be sent/stored somewhere. In the short term, there are likely existing tools
and ways of doing this so it would be a matter of putting them together.</p>
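<p>In its simplest form, such a notification could be a single extra line at the
end of <code class="highlighter-rouge">deploy.sh</code> (a sketch: the webhook URL is a placeholder, and
<code class="highlighter-rouge">$VM</code> and <code class="highlighter-rouge">$KERNEL</code> are the variables from the script above):</p>

```shell
# Hypothetical notification hook at the end of deploy.sh: report which
# unikernel just went live. The endpoint below is a placeholder.
curl -s -X POST \
  --data "deployed $VM from $KERNEL at $(date -u)" \
  https://example.com/deploy-webhook || true
```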
<h4 id="looking-ahead">Looking ahead</h4>
<p>Overall, with the above model, we can easily set up a system where we go from
writing code, to testing it via CI, to deploying it to a staging server for
functional tests, and finally pushing it out into live deployment. All of
this can be done with a few additional scripts and minimal interaction from
the developer. We can achieve this because we don’t have to concern ourselves
with large blobs of code, multiple different systems and keeping environments
in sync. Once we’ve built the unikernel, the rest almost becomes trivial.</p>
<p>This is close enough for me to declare it as a ‘Heroku for unikernels’ but
obviously, there’s much more we can (and should) do with such a system. If we
extrapolate <em>just a little</em> from where we are now, there are a range of
exciting possibilities to consider in terms of automation, scalability and
distributed systems. Especially if we incorporate other aspects of the
<a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/">toolstack we’re working towards</a>.</p>
<p><a href="http://amirchaudhry.com/heroku-for-unikernels-pt2/">Part 2</a> of this series is where I’ll consider these possibilities, which will
be more speculative and less constrained. It will cover the kinds of systems
we can create once the tools are more mature and will touch on ideas around
hyper-elastic clouds, embedded systems and what this means for the concept of
immutable infrastructure.</p>
<p>Since we already have the ‘backbone’ of the toolchain in place, it’s easier to
see where it can be extended and how.</p>
<p><em>Edit: The second part of this series is now up -
“<a href="http://amirchaudhry.com/heroku-for-unikernels-pt2/">Self Scaling Systems</a>”</em></p>
<p><em>Edit2: discuss this post on <a href="http://devel.unikernel.org/t/towards-heroku-for-unikernels/27/1">devel.unikernel.org</a></em></p>
<hr />
<p class="footnote">
Thanks to Anil Madhavapeddy and Thomas Leonard for comments on an earlier
draft and Richard Mortier for his work on the deployment toolchain.
</p>
<!--
[jitsu-repo]: https://github.com/MagnusS/jitsu
[jitsu-x]: http://www.skjegstad.com/blog/2015/03/25/mirageos-vm-per-url-experiment
[sp-post]: http://amirchaudhry.com/brewing-miso-to-serve-nymote/#signpost
[cron-conf]: http://en.wikipedia.org/wiki/Cron#Configuration_file
-->
Towards a governance framework for OCaml.org · Amir Chaudhry · 2015-01-08T18:15:00+00:00 · http://amirchaudhry.com/towards-governance-framework-for-ocamlorg
<p>The projects around the OCaml.org domain name are becoming more established
and it’s time to think about how they’re organised. 2014 saw a <em>lot</em> of
activity, which built on the <a href="http://www.cl.cam.ac.uk/projects/ocamllabs/news/index.html#OnlineatOCamlorg">successes from 2013</a>.
Some of the main things that stand out to me are:</p>
<ul>
<li>More <a href="http://ocaml.org/contributors.html">volunteers</a> contributing to the public website with
translations, bug fixes and content updates, as well as many new visitors —
for example, the new page on <a href="http://ocaml.org/learn/teaching-ocaml.html">teaching OCaml</a> received over 5k
visits alone. The increasing contributions are a result of the earlier work on
<a href="http://amirchaudhry.com/announcing-new-ocamlorg/">re-engineering the site</a> and there are many ways to get involved
so please do <a href="https://github.com/ocaml/ocaml.org/labels/contribute%21">contribute</a>!</li>
</ul>
<p><a href="http://opam.ocaml.org/"><img style="float: right; margin-left: 10px" src="http://amirchaudhry.com/images/web/opampkg-2015-01-08.png" /></a></p>
<ul>
<li>The relentless improvements and growth of OPAM, both in terms of the
repository — with over 1000 additional packages and several
<a href="http://lists.ocaml.org/pipermail/opam-devel/2014-October/000781.html">new repo maintainers</a> — and also improved workflows (e.g the new
<a href="http://opam.ocaml.org/blog/opam-1-2-pin/">pin functionality</a>).
The OPAM site and package list also moved to the ocaml.org domain, becoming
the substrate for the OCaml Platform efforts. This began with the work towards
<a href="http://opam.ocaml.org/blog/opam-1-2-0-beta4/">OPAM 1.2</a> and there is much more to come (including closer
integration in terms of styling). Join the <a href="http://lists.ocaml.org/listinfo/platform">Platform list</a> to
keep up to date.</li>
</ul>
<ul>
<li>Much more activity on the <a href="http://lists.ocaml.org">mailing lists</a> in general and user groups
requesting to have lists made (e.g the <a href="http://lists.ocaml.org/listinfo/teaching">teaching list</a>). If anyone
has a need for a new list, just ask on the
<a href="http://lists.ocaml.org/listinfo/infrastructure">infrastructure list</a>!</li>
</ul>
<p>There is other work besides those I’ve mentioned and I think by any measure,
all the projects have been quite successful. As the community continues to
develop, it’s important to clarify how things currently work to improve the
level of transparency and make it easier for newcomers to get involved.</p>
<h3 id="factors-for-a-governance-framework">Factors for a governance framework</h3>
<p>For the last couple of months, I’ve been looking over how larger projects
manage themselves and the governance documents that are available. My aim has
been to put such a document together for the OCaml.org domain without
introducing burdensome processes. There are a number of things that stood out
to me during this process, which have guided the approach I’m taking.</p>
<p>My considerations for an OCaml.org governance document:</p>
<ul>
<li>
<p>A governance document is not <em>necessary</em> for success but it’s valuable to
demonstrate a commitment to a <strong>stable decision-making process</strong>. There are
many projects that progress perfectly well without any documented processes
and indeed the work around OCaml.org so far is a good example of this (as well
as OCaml itself). However, for projects to achieve a scale greater than the
initial teams, it’s a significant benefit to encode in writing how things work
(NB: please note that I didn’t define the <em>type</em> of decision-making process -
merely that it’s a stable one).</p>
</li>
<li>
<p>It must <strong>clarify its scope</strong> so that there is no confusion about what the
document covers. In the case of OCaml.org, it needs to be clear that the
governance covers the domain itself, rather than referring to the website.</p>
</li>
<li>
<p>It should <strong>document the reality</strong>, rather than represent an aspirational
goal or what people <em>believe</em> a governance structure should look like. It’s
very tempting to think of an idealised structure without recognising that
behaviours and norms have <em>already</em> been established. Sometimes this will be
vague and poorly defined but that might simply indicate areas that the
community hasn’t encountered yet (e.g it’s uncommon for any new project to
seriously think about dispute resolution processes until they have to). In
this sense, the initial version of a governance document should simply be a
written description of how things currently stand, rather than a means to
adjust that behaviour.</p>
</li>
<li>
<p>It should be <strong>simple and self-contained</strong>, so that anyone can understand
the intent quickly without recourse to other documents. It may be tempting to
consider every edge-case or try to resolve every likely ambiguity but this
just leads to large, legal documents. This approach may well be necessary
once projects have reached a certain scale but to implement it sooner would be
a case of premature optimisation — not to mention that very few people would
read and remember such a document.</p>
</li>
<li>
<p>It’s a <strong>living document</strong>. If the community decides that it would prefer a
new arrangement, then the document conveniently provides a stable starting
point from which to iterate. Indeed, it <em>should</em> adapt along with the project
that it governs.</p>
</li>
</ul>
<p>With the above points in mind, I’ve been putting together a draft governance
framework to cover how the OCaml.org domain name is managed. It’s been a
quiet work-in-progress for some time and I’ll be getting in touch with
maintainers of specific projects soon. Once I’ve had a round of reviews, I’ll
be sharing it more widely and posting it here!</p>
<!-- [![FIGURE 06.1 Governance versus anarchy on Flickr](http://amirchaudhry.com/images/web/governance-alpha.png)](https://www.flickr.com/photos/jurgenappelo/5201270923/) -->
Writing Planet in pure OCaml · Amir Chaudhry · 2014-04-29T09:30:00+00:00 · http://amirchaudhry.com/writing-planet-in-pure-ocaml
<p>I’ve been learning OCaml for some time now but not really had a problem that
I wanted to solve. As such, my progress has been rather slow and sporadic
and I only make time for exercises when I’m travelling. In order to focus my
learning, I have to identify and tackle something specific. That’s usually
the best way to advance and I recently found something I can work on.</p>
<p>As I’ve been trying to write more blog posts, I want to be able to keep as
much content on my own site as possible and syndicate my posts out to other
sites I run. Put simply, I want to be able to take multiple feeds from
different sources and merge them into one feed, which will be served from
some other site. In addition, I also want to render that feed as HTML on a
webpage. All of this has to remain within the OCaml toolchain so it can be
used as part of <a href="http://openmirage.org/">Mirage</a> (i.e. I can use it when
<a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines">building unikernels</a>).</p>
<p>What I’m describing might sound familiar and there’s a well-known tool that
does this called <a href="http://en.wikipedia.org/wiki/Planet_(software)">Planet</a>. It’s a ‘river of news’ feed reader, which
aggregates feeds and can display posts on webpages and you can find the
<a href="http://www.planetplanet.org">original Planet</a> and its successor <a href="http://intertwingly.net/code/venus/docs/index.html">Venus</a>, both written in Python.
However, Venus seems to be unmaintained as there are a number of
<a href="https://github.com/rubys/venus/issues">unresolved issues and pull requests</a>, which have been
languishing for quite some time with no discussion. There does appear to be
a more active Ruby implementation called <a href="http://feedreader.github.io/">Pluto</a>, with recent commits and
no reported issues.</p>
<!--
\[Rant: Frankly, the naming of these versions leaves a lot to be desired.
When you know exactly what you're supposed to Google for you're fine, but
until then you're just on a random-walk through space websites. I'm
lucky I managed to get to the Wikipedia page.\]
-->
<h3 id="benefits-of-a-planet-in-pure-ocaml">Benefits of a Planet in pure OCaml</h3>
<p>Although I could use one of the above options, it would be much more
useful to keep everything within the OCaml ecosystem. This way I can make
the best use of the <a href="https://queue.acm.org/detail.cfm?id=2566628">unikernel approach</a> with Mirage (i.e lean,
single-purpose appliances). Obviously, the existing options don’t lend
themselves to this approach and there are <a href="https://forge.ocamlcore.org/tracker/index.php?func=detail&aid=1349&group_id=1&atid=101">known bugs</a> as a lot has
changed on the web since Planet Venus (e.g the adoption of HTML5).
Having said that, I can learn a lot from the existing implementations and
I’m glad I’m not embarking into completely uncharted territory.</p>
<p>In addition, the OCaml version doesn’t need to (and <em>shouldn’t</em>) be written
as one monolithic library. Instead, pulling together a collection of
smaller, reusable libraries that present clear interfaces to each other
would make things much more maintainable. This would bring substantially
greater benefits to everyone and <a href="https://opam.ocaml.org/">OPAM</a> can manage the dependencies.</p>
<!--
OPAM makes managing dependencies easy so having a number of single-
purpose libraries is A Good Thing and costs almost nothing. This
approach has already worked well with examples like an [IP address
library][ipaddr] and the [OCaml markdown library][OMD], which can be
used by multiple projects.
-->
<h3 id="breaking-down-the-problem">Breaking down the problem</h3>
<p>The first cut is somewhat straightforward as we have a piece that deals with
the consumption and manipulation of feeds and another that takes the result
and emits HTML. This is also how the original Planet is put together, with a
library called <a href="https://pypi.python.org/pypi/feedparser/">feedparser</a> and another for templating pages.</p>
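<p>One way to picture that split is as a pair of OCaml interfaces. These
signatures are entirely hypothetical (nothing with these names exists yet);
they’re just a sketch of how the two halves could face each other:</p>

```ocaml
(* Hypothetical interfaces for the two halves of the pipeline. *)
module type FEED = sig
  type t
  val of_string : string -> t   (* consume an Atom document *)
  val to_string : t -> string   (* emit it again *)
  val merge : t list -> t       (* combine several feeds into one *)
end

module type TEMPLATE = sig
  type feed
  val to_html : feed -> string  (* render a merged feed as an HTML page *)
end
```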
<p>For the feed-parsing aspect, I can break it down further by considering Atom
and RSS feeds separately and then even further by thinking about how to (1)
consume such feeds and (2) output them. Then there is the HTML component,
where it may be necessary to consider existing representations of HTML. These
are not new ideas and, since I’m claiming that the individual pieces might be
useful, it’s worth finding out which ones are already available.</p>
<h4 id="existing-components">Existing components</h4>
<p>The easiest way to find existing libraries is via the
<a href="http://opam.ocaml.org/packages">OPAM package list</a>. Some quick searches for <code class="highlighter-rouge">RSS</code>, <code class="highlighter-rouge">XML</code>, <code class="highlighter-rouge">HTML</code>
and <code class="highlighter-rouge">net</code> bring up a lot of packages. The most relevant of these seem to be
<a href="https://opam.ocaml.org/packages/xmlm/xmlm.1.2.0/">xmlm</a>, <a href="https://opam.ocaml.org/packages/ocamlrss/ocamlrss.2.2.2/">ocamlrss</a>, <a href="https://opam.ocaml.org/packages/cow/cow.0.9.1/">cow</a> and maybe <a href="http://opam.ocaml.org/packages/xmldiff/xmldiff.0.1/">xmldiff</a>. I noticed that
nothing appears when searching for <code class="highlighter-rouge">Atom</code>, but I do know that <code class="highlighter-rouge">cow</code> has an
Atom module for creating feeds. In terms of turning feeds into pages and
HTML, I’m aware of <a href="https://github.com/ocaml/ocaml.org/blob/master/script/rss2html.ml">rss2html</a> used on the <a href="http://ocaml.org">OCaml</a> website and parts of
<a href="http://opam.ocaml.org/packages/ocamlnet/ocamlnet.3.7.3/">ocamlnet</a> that may be relevant (e.g <code class="highlighter-rouge">nethtml</code> and <code class="highlighter-rouge">netstring</code>) as well as
<code class="highlighter-rouge">cow</code>. There is likely to be other code I’m missing but this is useful as a
first pass.</p>
<p>Overall, a number of components are already out there but it’s not obvious
if they’re compatible (e.g HTML) and there are still gaps (e.g Atom). Since
I also want to minimise dependencies, I’ll try and use whatever works but
may ultimately have to roll my own. Either way, I can learn from what
already exists. Perhaps I’m being overconfident but if I can break things
down sensibly and keep the scope constrained then this should be an
achievable project.</p>
<h3 id="the-first-baby-steps---an-atom-parser">The first (baby) steps - an Atom parser</h3>
<p>As this is an exercise for me to learn OCaml by solving a problem, I need to
break it down into bite-size pieces and take each one at a time. Practically
speaking, this means limiting the scope to be as narrow as possible while
still producing a useful result <em>for me</em>. That last part is important as I
have specific needs and it’s likely that the first thing I make won’t be
particularly interesting for many others.</p>
<p>For my specific use-case, I’m only interested in dealing with Atom feeds as
that’s what I use on my site and others I’m involved with. Initial feedback
is that creating an Atom parser will be the bulk of the work and I should
start by defining the types. To keep this manageable, I’m only going to deal
with my own feeds instead of attempting a fully compliant parser (in other
words, I’ll only consider the subset of <a href="https://tools.ietf.org/html/rfc4287">RFC4287</a> that’s relevant to me).
Once I can parse, merge and write such feeds I should be able to iterate
from there.</p>
<p>To make my requirements more concrete:</p>
<ul>
<li>Only consider <em>my own</em> Atom feeds for now</li>
<li>Initially, be able to parse and emit just one Atom feed</li>
<li>Then be able to merge 2+ feeds, specifically:
<ul>
<li>Use tag-based feeds from my personal site as starting points</li>
<li>Be able to de-dupe content</li>
</ul>
</li>
<li>No database or storage (construct it afresh every time)</li>
<li>Minimise library dependencies</li>
</ul>
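<p>To make the “define the types first” advice concrete, here’s a rough sketch
of what the Atom subset and the merge step might look like. All the names
below are my own invention for illustration, not an existing library’s API:</p>

```ocaml
(* Illustrative types for a small subset of RFC 4287. *)
type author = {
  name : string;
  uri  : string option;
}

type entry = {
  id      : string;        (* atom:id -- the key we de-dupe on *)
  title   : string;
  updated : float;         (* seconds since epoch, for sorting *)
  author  : author option;
  content : string;        (* raw (X)HTML payload *)
}

type feed = {
  feed_title   : string;
  feed_updated : float;
  entries      : entry list;
}

(* Merging then reduces to concatenating entries, de-duping on [id]
   and sorting newest-first on [updated]. *)
let dedupe entries =
  let seen = Hashtbl.create 17 in
  List.filter
    (fun e ->
       if Hashtbl.mem seen e.id then false
       else (Hashtbl.add seen e.id (); true))
    entries

let merge feeds =
  List.concat (List.map (fun f -> f.entries) feeds)
  |> dedupe
  |> List.sort (fun a b -> compare b.updated a.updated)
```

<p>Parsing real feeds into types like these (via something like <code class="highlighter-rouge">xmlm</code>) is
where the actual work lies, but pinning the types down first keeps that work
contained.</p>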
<!--
Perhaps these requirements are already too much and I may decide to dial
it down even further (e.g just figure out how to consume *one* feed),
but I won't really know until I get started. For example, I can imagine
that I'll need one bunch of code to deal with Atom feeds and then
perhaps I can make another (feedparser), that depends on it and others
to deal with general feeds.
-->
<h4 id="timeframes-and-workflow">Timeframes and workflow</h4>
<p>I’ve honestly no idea how long this might take and I’m treating it as a
side-project. I know there are many people out there who could produce a
working version of everything in a week or two but I’m not one of them (yet).
There are also <em>a lot</em> of ancillary things I need to learn on the way, like
packaging, improving my knowledge of Git and dealing with build systems. If
I had to put a vague time frame on this, I’d be thinking in months rather
than weeks. It might even be the case that others start work on parts of
this and ship things sooner but that’s great as I’ll probably be able to use
whatever they create and move further along the chain.</p>
<p>In terms of workflow, everything will be done in the open, warts and all, and
I expect to make embarrassing mistakes as I go. You can follow along on my
freshly created <a href="https://github.com/amirmc/ocamlatom">OCaml Atom</a> repo, and I’ll be using the issue tracker as
the main way of dealing with bugs and features. Let the fun begin.</p>
<!-- acknowledgements -->
<hr />
<p><em>Acknowledgements:</em> Thanks to <a href="http://erratique.ch">Daniel</a>, <a href="http://ashishagarwal.org">Ashish</a>, <a href="https://github.com/Chris00">Christophe</a>,
<a href="http://philippewang.info/">Philippe</a> and <a href="http://gazagnaire.org">Thomas</a> for discussions on an earlier draft of this post
and providing feedback on my approach.</p>
<!-- links -->
From Jekyll site to Unikernel in fifty lines of code.Amir Chaudhry2014-03-10T18:30:00+00:00http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines
<p><a href="http://openmirage.org">Mirage</a> has reached a point where it’s possible to easily set up
end-to-end toolchains to build <a href="http://queue.acm.org/detail.cfm?id=2566628">unikernels</a>! <!--\[If you're not sure what that is, read the post [What is a unikernel?][amc-unikernel]\]-->
My first use-case is to be able to generate a unikernel which can serve my
personal static site but to do it with as much automation as possible. It
turns out this is possible with less than 50 lines of code.</p>
<p>I use Jekyll and GitHub Pages at the moment so I wanted a workflow that’s as
easy to use, though I’m happy to spend some time up front to set up and
configure things.
The tools for achieving what I want are in good shape so
this post takes the example of a Jekyll site (i.e this one) and goes through
the steps to produce a unikernel on
<a href="https://travis-ci.org">Travis CI</a> (a continuous integration service) which can later be
deployed. Many of these instructions already exist in various forms but
they’re collated here to aid this use-case.</p>
<p>I will take you, dear reader, through the process and when we’re finished,
the workflow will be as follows:</p>
<ol>
<li>You’ll write your posts on your local machine as normal</li>
<li>A push to GitHub will trigger a unikernel build for each commit</li>
<li>The Xen unikernel will be pushed to a repo for deployment</li>
</ol>
<p>To achieve this, we’ll first check that we can build a unikernel VM locally,
then we’ll set up a continuous integration service to automatically build
them for us and finally we’ll adapt the CI service to also deploy the built
VM. Although the amount of code required is small, each of these steps is
covered below in some detail.
For simplicity, I’ll assume you already have OCaml and Opam
installed – if not, you can find out how via the
<a href="http://realworldocaml.org/install">Real World OCaml install instructions</a>.</p>
<h2 id="building-locally">Building locally</h2>
<p>To ensure that the build actually works, you should run things locally at
least once before pushing to Travis. It’s worth noting that the
<a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton</a> repo contains a lot of useful, public domain examples
and helpfully, the specific code we need is in
<a href="https://github.com/mirage/mirage-skeleton/tree/master/static_website">mirage-skeleton/static_website</a>. Copy both the <code class="highlighter-rouge">config.ml</code>
and <code class="highlighter-rouge">dispatch.ml</code> files from that folder into a new <code class="highlighter-rouge">_mirage</code> folder in your
jekyll repository.</p>
<p>Edit <code class="highlighter-rouge">config.ml</code> so that the two mentions of <code class="highlighter-rouge">./htdocs</code> are replaced with
<code class="highlighter-rouge">../_site</code>. This is the only change you’ll need to make and you should now
be able to build the unikernel with the unix backend. Make sure you have
the mirage package installed by running <code class="highlighter-rouge">$ opam install mirage</code> and then run:</p>
<p><em>(edit: If you already have <code class="highlighter-rouge">mirage</code>, remember to <code class="highlighter-rouge">opam update</code> to make sure you’ve got the latest packages.)</em></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span><span class="nb">cd </span>_mirage
<span class="nv">$ </span>mirage configure <span class="nt">--unix</span>
<span class="nv">$ </span>make depend <span class="c"># needed as of mirage 1.2 onward</span>
<span class="nv">$ </span>mirage build
<span class="nv">$ </span><span class="nb">cd</span> ..</code></pre></figure>
<p>That’s all it takes! In a few minutes there will be a unikernel built on
your system (symlinked as <code class="highlighter-rouge">_mirage/mir-www</code>). If there are any errors, make
sure that Opam is up to date and that you have the latest version of the
static_website files from <a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton</a>.</p>
<h3 id="serving-the-site-locally">Serving the site locally</h3>
<p>If you’d like to see this site locally, you can do so from within the
<code class="highlighter-rouge">_mirage</code> folder by running the unikernel you just built. There’s more
information about the details of this on the <a href="http://openmirage.org/wiki/mirage-www">Mirage docs site</a>
but the quick instructions are:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span><span class="nb">cd </span>_mirage
<span class="nv">$ </span><span class="nb">sudo </span>mirage run
<span class="c"># in another terminal window</span>
<span class="nv">$ </span><span class="nb">sudo </span>ifconfig tap0 10.0.0.1 255.255.255.0</code></pre></figure>
<p>You can now point your browser at http://10.0.0.2/ and see your site!
Once you’re finished browsing, <code class="highlighter-rouge">$ mirage clean</code> will clear up all the
generated files.</p>
<p>Since the build is working locally, we can set up a continuous integration
system to perform the builds for us.</p>
<h2 id="setting-up-travis-ci">Setting up Travis CI</h2>
<p><img style="float: right; margin-left: 10px" src="http://amirchaudhry.com/images/jekyll-unikernel/travis.png" /></p>
<p>We’ll be using the <a href="https://travis-ci.org">Travis CI</a> service, which is free for open-source
projects (so this assumes you’re using a public repo). The benefit of using
Travis is that you can build a unikernel <em>without</em> needing a local OCaml
environment, but it’s always quicker to debug things locally.</p>
<p>Log in to Travis using your GitHub ID which will then trigger a scan of your
repositories. When this is complete, go to your Travis accounts page and
find the repo you’ll be building the unikernel from. Switch it ‘on’ and
Travis will automatically set your GitHub post-commit hook and token for you.
That’s all you need to do on the website.</p>
<p>When you next make a push to your repository, GitHub will inform Travis,
which will then look for a YAML file in the root of the repo called
<code class="highlighter-rouge">.travis.yml</code>. That file describes what Travis should do and what the build
matrix is. Since OCaml is not one of the supported languages, we’ll be
writing our build script manually (this is actually easier than it sounds).
First, let’s set up the YAML file and then we’ll examine the build script.</p>
<h3 id="the-travis-yaml-file---travisyml">The Travis YAML file - .travis.yml</h3>
<p>The <a href="http://docs.travis-ci.com/user/ci-environment/#CI-environment-OS">Travis CI environment</a> is based on Ubuntu 12.04, with a
number of things pre-installed (e.g Git, networking tools etc). Travis
doesn’t support OCaml (yet) so we’ll use the <code class="highlighter-rouge">c</code> environment to get the
packages we need, specifically, the OCaml compiler, Opam and Mirage. Once
those are set up, our build should run pretty much the same as it did locally.</p>
<p>For now, let’s keep things simple and only focus on the latest releases
(OCaml 4.01.0 and Opam 1.1.1), which means our build matrix is very simple.
The build instructions will be in the file <code class="highlighter-rouge">_mirage/travis.sh</code>, which we
will move to and trigger from the <code class="highlighter-rouge">.travis.yml</code> file. This means our YAML
file should look like:</p>
<figure class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="na">language</span><span class="pi">:</span> <span class="s">c</span>
<span class="na">before_script</span><span class="pi">:</span> <span class="s">cd _mirage</span>
<span class="na">script</span><span class="pi">:</span> <span class="s">bash -ex travis.sh</span>
<span class="na">env</span><span class="pi">:</span>
<span class="na">matrix</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">MIRAGE_BACKEND=xen DEPLOY=0</span>
<span class="pi">-</span> <span class="s">MIRAGE_BACKEND=unix</span></code></pre></figure>
<p>The matrix enables us to have parallel builds for different environments and
this one is very simple as it’s only building two unikernels. One worker
will build for the Xen backend and another worker will build for the Unix
backend. The <code class="highlighter-rouge">_mirage/travis.sh</code> script will clarify what each of these
environments translates to. We’ll come back to the <code class="highlighter-rouge">DEPLOY</code> flag later on
(it’s not necessary yet). Now that this file is set up, we can work on the
build script itself.</p>
<h3 id="the-build-script---travissh">The build script - travis.sh</h3>
<p>To save time, we’ll be using an Ubuntu PPA to quickly get
<a href="https://launchpad.net/~avsm">pre-packaged versions of the OCaml compiler and Opam</a>, so the
first thing to do is define which PPAs each line of the build matrix
corresponds to. Since we’re keeping things simple, we only need one PPA
that has the most recent releases of OCaml and Opam.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c">#!/usr/bin/env bash</span>
<span class="nv">ppa</span><span class="o">=</span>avsm/ocaml41+opam11
<span class="nb">echo</span> <span class="s2">"yes"</span> | <span class="nb">sudo </span>add-apt-repository ppa:<span class="nv">$ppa</span>
<span class="nb">sudo </span>apt-get update <span class="nt">-qq</span>
<span class="nb">sudo </span>apt-get install <span class="nt">-qq</span> ocaml ocaml-native-compilers camlp4-extra opam</code></pre></figure>
<p>[NB: There are many <a href="https://launchpad.net/~avsm">other PPAs</a> for different combinations of
OCaml/Opam which are useful for testing]. Once the appropriate PPAs have
been set up it’s time to initialise Opam and install Mirage.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">export </span><span class="nv">OPAMYES</span><span class="o">=</span>1
opam init
opam install mirage
<span class="nb">eval</span> <span class="sb">`</span>opam config env<span class="sb">`</span></code></pre></figure>
<p>We set <code class="highlighter-rouge">OPAMYES=1</code> to get non-interactive use of Opam (it defaults to ‘yes’
for any user input) and if we want full build logs, we could also set
<code class="highlighter-rouge">OPAMVERBOSE=1</code> (I haven’t in this example).
The rest should be straightforward and you’ll end up with an
Ubuntu machine with OCaml, Opam and the Mirage package installed. It’s now
trivial to do the next step of actually building the unikernel!</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">mirage configure <span class="nt">--</span><span class="nv">$MIRAGE_BACKEND</span>
mirage build</code></pre></figure>
<p>You can see how we’ve used the environment variable from the Travis file and
this is where our two parallel builds begin to diverge. When you’ve saved
this file, you’ll need to change permissions to make it executable by doing
<code class="highlighter-rouge">$ chmod +x _mirage/travis.sh</code>.</p>
<p>That’s all you need to build the unikernel on Travis! You should now commit
both the YAML file and the build script to the repo and push the changes to
GitHub. Travis should automatically start your first build and you can
watch the console output online to check that both the Xen and Unix backends
complete properly. If you notice any errors, you should go back over your
build script and fix it before the next step.</p>
<h2 id="deploying-your-unikernel">Deploying your unikernel</h2>
<p><img style="float: right; margin-left: 10px" src="http://amirchaudhry.com/images/jekyll-unikernel/octocat.png" /></p>
<p>When Travis has finished its builds it will simply destroy the worker and
all its contents, including the unikernels we just built. This is perfectly
fine for testing but if we want to also <em>deploy</em> a unikernel, we need to get
it out of the Travis worker after it’s built. In this case, we want to
extract the Xen-based unikernel so that we can later start it on a Xen-based
machine (e.g Amazon, Rackspace or - in our case - a machine on <a href="http://www.bytemark.co.uk">Bytemark</a>).</p>
<p>Since the unikernel VMs are small (only tens of MB), our method for
exporting will be to commit the Xen unikernel into a repository on GitHub.
It can be retrieved and started later on and keeping the VMs in version
control gives us very effective snapshots (we can roll back the site without
having to rebuild). This is something that would be much more challenging
if we were using the ‘standard’ web toolstack.</p>
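<p>To sketch the retrieval side: the deploy-repo layout is simply one
directory per commit plus a <code class="highlighter-rouge">latest</code> file naming the newest build. The
commands below fake a freshly cloned repo with illustrative names (and skip
the bzip2 decompression a real deploy would need), then show how a host
could pick out the VM to boot:</p>

```shell
# Fake a freshly cloned deploy repo (stands in for
# 'git clone git@github.com:some_user/www-test-deploy').
mkdir -p www-test-deploy/abc1234
echo "xen image bytes" > www-test-deploy/abc1234/mir-www.xen
echo abc1234 > www-test-deploy/latest

# Pick the most recently built VM via the 'latest' ref.
commit=$(cat www-test-deploy/latest)
vm="www-test-deploy/$commit/mir-www.xen"
echo "would boot: $vm"
```

<p>Rolling the site back is then just a matter of pointing the Xen
configuration at an earlier commit’s directory instead.</p>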
<p>The deployment step is a little more complex as we have to send the
Travis worker a private SSH key, which will give it push access to a GitHub
repository. Of course, we don’t want to expose that key by simply adding it
to the Travis file so we have to encrypt it somehow.</p>
<h3 id="sending-travis-a-private-ssh-key">Sending Travis a private SSH key</h3>
<p>Travis supports <a href="http://docs.travis-ci.com/user/encryption-keys/">encrypted environment variables</a>. Each
repository has its own public key and the <a href="http://rubygems.org/gems/travis">Travis gem</a> uses
this public key to encrypt data, which you then add to your <code class="highlighter-rouge">.travis.yml</code>
file for decryption by the worker. This is meant for sending things like
private API tokens and other small amounts of data. Trying to encrypt an SSH
key isn’t going to work as it’s too large. Instead we’ll use
<a href="https://github.com/avsm/travis-senv">travis-senv</a>, which encodes, encrypts and chunks up the key into smaller
pieces and then reassembles those pieces on the Travis worker. We still use
the Travis gem to encrypt the pieces to add them to the <code class="highlighter-rouge">.travis.yml</code> file.</p>
<p>While you could give Travis a key that accesses your whole GitHub account, my
preference is to create a <em>new</em> deploy key, which will only be used for
<a href="https://help.github.com/articles/managing-deploy-keys#deploy-keys">deployment to one repository</a>.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># make a key pair on your local machine</span>
<span class="nv">$ </span><span class="nb">cd</span> ~/.ssh/
<span class="nv">$ </span>ssh-keygen <span class="nt">-t</span> dsa <span class="nt">-C</span> <span class="s2">"travis.deploy"</span> <span class="nt">-f</span> travis-deploy_dsa
<span class="nv">$ </span><span class="nb">cd</span> -</code></pre></figure>
<p>Note that this is a 1024 bit key so if you decide to use a 2048 bit key,
then be aware that Travis <a href="https://github.com/avsm/travis-senv/issues/1">sometimes has issues</a>. Now that we have
a key, we can encrypt it and add it to the Travis file.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># on your local machine</span>
<span class="c"># install the necessary components</span>
<span class="nv">$ </span>gem install travis
<span class="nv">$ </span>opam install travis-senv
<span class="c"># chunk the key, add to yml file and rm the intermediate</span>
<span class="nv">$ </span>travis-senv encrypt ~/.ssh/travis-deploy_dsa _travis_env
<span class="nv">$ </span><span class="nb">cat </span>_travis_env | travis encrypt <span class="nt">-ps</span> <span class="nt">--add</span>
<span class="nv">$ </span>rm _travis_env</code></pre></figure>
<p><code class="highlighter-rouge">travis-senv</code> encrypts and chunks the key locally on your machine, placing
its output in a file you decide (<code class="highlighter-rouge">_travis_env</code>). We then take that output
file and pipe it to the <code class="highlighter-rouge">travis</code> ruby gem, asking it to encrypt the input,
treating each line as separate and to be appended (<code class="highlighter-rouge">-ps</code>) and then actually
adding that to the Travis file (<code class="highlighter-rouge">--add</code>). You can run <code class="highlighter-rouge">$ travis encrypt -h</code>
to understand these options. Once you’ve run the above commands,
<code class="highlighter-rouge">.travis.yml</code> will look as follows.</p>
<figure class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="na">language</span><span class="pi">:</span> <span class="s">c</span>
<span class="na">before_script</span><span class="pi">:</span> <span class="s">cd _mirage</span>
<span class="na">script</span><span class="pi">:</span> <span class="s">bash -ex travis.sh</span>
<span class="na">env</span><span class="pi">:</span>
<span class="na">matrix</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">MIRAGE_BACKEND=xen DEPLOY=0</span>
<span class="pi">-</span> <span class="s">MIRAGE_BACKEND=unix</span>
<span class="na">global</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">secure</span><span class="pi">:</span> <span class="s2">"</span><span class="s">....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">...."</span>
<span class="pi">-</span> <span class="na">secure</span><span class="pi">:</span> <span class="s2">"</span><span class="s">....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">...."</span>
<span class="pi">-</span> <span class="na">secure</span><span class="pi">:</span> <span class="s2">"</span><span class="s">....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">...."</span>
<span class="s">...</span></code></pre></figure>
<p>The number of secure variables added depends on the type and size of the key
you had to chunk, so it could vary from 8 up to 29. We’ll commit
these additions later on, alongside additions to the build script.</p>
<p>At this point, we also need to make a repository on GitHub
and add the public deploy key so
that Travis can push to it. Once you’ve created your repo and added a
README, follow GitHub’s instructions on <a href="https://help.github.com/articles/managing-deploy-keys#deploy-keys">adding deploy keys</a>
and paste in the public key (i.e. the content of <code class="highlighter-rouge">travis-deploy_dsa.pub</code>).</p>
<p>Now that we can securely pass a private SSH key to the worker
and have a repo that the worker can push to, we need to
make additions to the build script.</p>
<h3 id="committing-the-unikernel-to-a-repository">Committing the unikernel to a repository</h3>
<p>Since we can set <code class="highlighter-rouge">DEPLOY=1</code> in the YAML file we only need to make
additions to the build script. Specifically, we want to ensure that: only
the Xen backend is deployed; only <em>pushes</em> to the repo result in
deployments, not pull requests (we do still want <em>builds</em> for pull requests).</p>
<p>In the build script (<code class="highlighter-rouge">_mirage/travis.sh</code>), which is being run by the worker,
we’ll have to reconstruct the SSH key and configure Git. In addition,
Travis gives us a set of useful <a href="http://docs.travis-ci.com/user/ci-environment/#Environment-variables">environment variables</a> so we’ll
use the latest commit hash (<code class="highlighter-rouge">$TRAVIS_COMMIT</code>) to name the VM (which also
helps us trace which commit it was built from).</p>
<p>It’s easier to consider this section of code at once so I’ve explained the
details in the comments. This section is what you need to add at the end of
your existing build script (i.e straight after <code class="highlighter-rouge">mirage build</code>).</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># Only deploy if the following conditions are met.</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">"</span><span class="nv">$MIRAGE_BACKEND</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"xen"</span> <span class="se">\</span>
<span class="nt">-a</span> <span class="s2">"</span><span class="nv">$DEPLOY</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"1"</span> <span class="se">\</span>
<span class="nt">-a</span> <span class="s2">"</span><span class="nv">$TRAVIS_PULL_REQUEST</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"false"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
<span class="c"># The Travis worker will already have access to the chunks</span>
<span class="c"># passed in via the yaml file. Now we need to reconstruct </span>
<span class="c"># the GitHub SSH key from those and set up the config file.</span>
opam install travis-senv
mkdir <span class="nt">-p</span> ~/.ssh
travis-senv decrypt <span class="o">></span> ~/.ssh/id_dsa <span class="c"># This doesn't expose it</span>
chmod 600 ~/.ssh/id_dsa <span class="c"># Owner can read and write</span>
<span class="nb">echo</span> <span class="s2">"Host some_user github.com"</span> <span class="o">>></span> ~/.ssh/config
<span class="nb">echo</span> <span class="s2">" Hostname github.com"</span> <span class="o">>></span> ~/.ssh/config
<span class="nb">echo</span> <span class="s2">" StrictHostKeyChecking no"</span> <span class="o">>></span> ~/.ssh/config
<span class="nb">echo</span> <span class="s2">" CheckHostIP no"</span> <span class="o">>></span> ~/.ssh/config
<span class="nb">echo</span> <span class="s2">" UserKnownHostsFile=/dev/null"</span> <span class="o">>></span> ~/.ssh/config
<span class="c"># Configure the worker's git details</span>
<span class="c"># otherwise git actions will fail.</span>
git config <span class="nt">--global</span> user.email <span class="s2">"user@example.com"</span>
git config <span class="nt">--global</span> user.name <span class="s2">"Travis Build Bot"</span>
<span class="c"># Do the actual work for deployment.</span>
<span class="c"># Clone the deployment repo. Notice the user,</span>
<span class="c"># which is the same as in the ~/.ssh/config file.</span>
git clone git@some_user:amirmc/www-test-deploy
<span class="nb">cd </span>www-test-deploy
<span class="c"># Make a folder named for the commit. </span>
<span class="c"># If we're rebuiling a VM from a previous</span>
<span class="c"># commit, then we need to clear the old one.</span>
<span class="c"># Then copy in both the config file and VM.</span>
rm <span class="nt">-rf</span> <span class="nv">$TRAVIS_COMMIT</span>
mkdir <span class="nt">-p</span> <span class="nv">$TRAVIS_COMMIT</span>
cp ../mir-www.xen ../config.ml <span class="nv">$TRAVIS_COMMIT</span>
<span class="c"># Compress the VM and add a text file to note</span>
<span class="c"># the commit of the most recently built VM.</span>
bzip2 <span class="nt">-9</span> <span class="nv">$TRAVIS_COMMIT</span>/mir-www.xen
git pull <span class="nt">--rebase</span>
<span class="nb">echo</span> <span class="nv">$TRAVIS_COMMIT</span> <span class="o">></span> latest <span class="c"># update ref to most recent</span>
<span class="c"># Add, commit and push the changes!</span>
git add <span class="nv">$TRAVIS_COMMIT</span> latest
git commit <span class="nt">-m</span> <span class="s2">"adding </span><span class="nv">$TRAVIS_COMMIT</span><span class="s2"> built for </span><span class="nv">$MIRAGE_BACKEND</span><span class="s2">"</span>
git push origin master
<span class="c"># Go out and enjoy the Sun!</span>
<span class="k">fi</span></code></pre></figure>
<p>At this point you should commit the changes to <code class="highlighter-rouge">.travis.yml</code> (don’t forget
the deploy flag) and <code class="highlighter-rouge">_mirage/travis.sh</code> and push the changes to GitHub.
Everything else will take place automatically and in a few minutes you will
have a unikernel ready to deploy on top of Xen!</p>
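<p>As a minimal sketch of that commit step (the commit message is
illustrative, and a throwaway repository is used here so the snippet is
self-contained; in the real workflow you would run the <code class="highlighter-rouge">git</code> commands in
your site repository and push to GitHub):</p>

```shell
#!/usr/bin/env bash
# Sketch of the commit step, run against a throwaway repository so it
# can be executed anywhere; the file names are from the post and the
# commit message is illustrative.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "user@example.com"
git config user.name "Demo User"
mkdir -p _mirage
touch .travis.yml _mirage/travis.sh
git add .travis.yml _mirage/travis.sh
git commit -q -m "Enable Travis build and deploy"
# With a real remote configured, the final step would be:
# git push origin master
git log -1 --format=%s    # show the commit subject
```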
<p>You can see both the complete YAML file and build script in use on my
<a href="https://github.com/amirmc/www-test">test repo</a>, as well as the <a href="https://travis-ci.org/amirmc/www-test">build logs</a> for that repo
and the <a href="https://github.com/amirmc/www-test-deploy">deploy repo</a> with a VM.</p>
<p><em>[Pro-tip: If you add <code class="highlighter-rouge">[skip ci]</code> anywhere in your
commit message, Travis will skip the build for that commit.
This is very useful if you’re making minor changes, like updating a
README.]</em></p>
<h2 id="finishing-up">Finishing up</h2>
<p>Since I’m still using Jekyll for my website, I made a short script in my
jekyll repository (<code class="highlighter-rouge">_deploy-unikernel.sh</code>) that builds the site, commits the
contents of <code class="highlighter-rouge">_site</code> and pushes to GitHub. I simply run this after I’ve
committed a new blog post and the rest takes care of itself.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c">#!/usr/bin/env bash</span>
jekyll build
git add _site
git commit <span class="nt">-m</span> <span class="s1">'update _site'</span>
git push origin master</code></pre></figure>
<p>Congratulations! You now have an end-to-end workflow that will produce a
unikernel VM from your Jekyll-based site and push it to a repo. If you
strip out all the comments, you’ll see that we’ve written less than 50 lines
of code! Admittedly, I’m not counting the 80 or so lines that came for free
in the <code class="highlighter-rouge">*.ml</code> files but that’s still pretty impressive.</p>
<p>Of course, we still need a machine to take that VM and run it but that’s a
topic for another post. For the time-being, I’m still using GitHub Pages
but once the VM is hosted somewhere, I will:</p>
<ol>
<li>Turn off GitHub Pages and serve from the VM – but still using Jekyll in
the workflow.</li>
<li>Replace Jekyll with OCaml-based static-site generation.</li>
</ol>
<p>Although all the tools already exist to switch now, I’m taking my time so
that I can easily maintain the code I end up using.</p>
<h2 id="expanding-the-script-for-testing">Expanding the script for testing</h2>
<p>You may have noticed that the examples here are not very flexible or
extensible but that was a deliberate choice to keep them readable. It’s
possible to do much more with the build matrix and script, as you can see
from the Travis files on my <a href="https://github.com/amirmc/amirmc.github.com/tree/master/_mirage">website repo</a>, which were based on
those of the <a href="https://github.com/mirage/mirage-www">Mirage site</a> and <a href="https://github.com/mor1/mort-www">Mort’s site</a>.
Specifically, you can note the use of more environment variables and case
statements to decide which PPAs to grab. Once you’ve got your builds
working, it’s worth improving your scripts to make them more maintainable
and cover the test cases you feel are important.</p>
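<p>For instance, a build matrix can select the right PPA with a case
statement over an environment variable. The sketch below is only
indicative: the variable and PPA names are examples, loosely modelled
on the scripts linked above rather than copied from them.</p>

```shell
#!/usr/bin/env bash
# Illustrative sketch: pick a PPA from a build-matrix variable.
# The variable and PPA names are examples, not taken verbatim from
# the linked Travis scripts.
OCAML_VERSION=${OCAML_VERSION:-4.01.0}
case "$OCAML_VERSION" in
  4.01.0) ppa=avsm/ocaml41+opam11 ;;
  4.02.0) ppa=avsm/ocaml42+opam11 ;;
  *) echo "Unknown OCAML_VERSION: $OCAML_VERSION" >&2; exit 1 ;;
esac
echo "Would add: ppa:$ppa"
# In the Travis script this would be followed by something like:
# sudo add-apt-repository -y "ppa:$ppa" && sudo apt-get update -qq
```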
<h3 id="not-just-for-static-sites-surprise">Not just for static sites (surprise!)</h3>
<p>You might have noticed that very little in the toolchain above is
specific to static sites per se. The workflow is simply
(1) do some stuff locally, (2) push to a continuous integration service
which then (3) builds and deploys a Xen-based unikernel. Apart from the
convenient folder structure, the specific work to treat this as a static
site lives in the <code class="highlighter-rouge">*.ml</code> files, which I’ve skipped over for this post.</p>
<p>As such, the GitHub+Travis workflow we’ve developed here is quite general
and will apply to almost <em>any</em> unikernels that we may want to construct.
I encourage you to explore the examples in the <a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton</a> repo and
keep your build script maintainable. We’ll be using it again the next time
we build unikernel devices.</p>
<hr />
<p><em>Acknowledgements:</em> There were lots of things I read over while writing this
post but there were a few particularly useful things that you should look up.
Anil’s posts on <a href="http://anil.recoil.org/2013/09/30/travis-and-ocaml.html">Testing with Travis</a> and
<a href="http://anil.recoil.org/2013/10/06/travis-secure-ssh-integration.html">Travis for secure deployments</a> are quite succinct (and
were themselves prompted by <a href="http://blog.mlin.net/2013/02/testing-ocaml-projects-on-travis-ci.html">Mike Lin’s Travis post</a> several
months earlier). Looking over Mort’s <a href="https://github.com/mor1/mort-www/blob/master/.travis-build.sh">build script</a> and that of
<a href="https://github.com/mirage/mirage-www/blob/master/.travis-ci.sh">mirage-www</a> helped me figure out the deployment steps as well as improve
my own script. Special thanks also to <a href="http://erratique.ch">Daniel</a>, <a href="http://www.lpw25.net">Leo</a> and <a href="http://anil.recoil.org">Anil</a> for
commenting on an earlier draft of this post.</p>
Switching from Bootstrap to Zurb FoundationAmir Chaudhry2013-11-26T21:05:00+00:00http://amirchaudhry.com/switching-from-bootstrap-to-zurb-foundation
<p>I’ve just updated my site’s HTML/CSS and moved from Twitter Bootstrap to
<a href="http://foundation.zurb.com/learn/features.html">Zurb Foundation</a>. This post captures my subjective notes on the
migration.</p>
<h4 id="my-use-of-bootstrap">My use of Bootstrap</h4>
<p>When I originally set this site up, I didn’t know what frameworks existed or
anything more than the basics of dealing with HTML (and barely any CSS). I
came across Twitter Bootstrap and immediately decided it would Solve All My
Problems. It really did. Since then, I’ve gone through one ‘upgrade’ with
Bootstrap (from 1.x to 2.x), after which I dutifully ignored all the fixes
and improvements (note that Bootstrap was up to v2.3.2 while I was still
using v2.0.2).</p>
<p><img src="http://amirchaudhry.com/images/switch-to-foundation/responsive-design.png" alt="Responsive Design" /></p>
<p>For the most part, this was fine with me but for a while now, I’ve been
meaning to make this site ‘responsive’ (read: not look like crap from a
mobile). Bootstrap v3 purports to be mobile-first so upgrading would likely
give me what I’m after but v3 is <a href="http://getbootstrap.com/getting-started/">not backwards compatible</a>,
meaning I’d have to rewrite parts of the HTML. Since this step was
unavoidable, it led me to have another look at front-end frameworks, just to
see if I was missing anything. This was especially relevant since we’d
<a href="http://amirchaudhry.com/announcing-new-ocamlorg/">just released</a> the new <a href="http://ocaml.org">OCaml.org</a>
website, itself built with Bootstrap v2.3.1 (we’d done the design/templating
work long before v3 was released). It would be useful to know what else is
out there for any future work.</p>
<p>Around this time I discovered Zurb Foundation and also the numerous
comparisons between them (note: Foundation seems to come out ahead in most
of those). A few days ago, the folks at Zurb released
<a href="http://zurb.com/article/1280/foundation-5-blasts-off--2">version 5</a>, so I decided that now is the time to kick the
tires. For the last few days, I’ve been playing with the framework and in
the end I decided to migrate my site over completely.</p>
<p><a href="http://foundation.zurb.com/learn/features.html"><img src="http://amirchaudhry.com/images/switch-to-foundation/zurb-yeti.png" alt="Foundation's Yeti" /></a></p>
<h4 id="swapping-out-one-framework-for-another">Swapping out one framework for another</h4>
<p>Over time, I’ve become moderately experienced with HTML/CSS and I can
usually wrangle things to look the way I want, but my solutions aren’t
necessarily elegant. I was initially concerned that I’d already munged
things so much that changing anything would be a pain. When I first put the
styles for this site together, I had to spend quite a bit of time
overwriting Bootstrap’s defaults so I was prepared for the same when using
Foundation. Turns out that I was fine. I currently use <a href="http://jekyllrb.com">Jekyll</a> (and
<a href="http://jekyllbootstrap.com">Jekyll Bootstrap</a>) so I only had three template files and a couple of
HTML pages to edit and because I’d kept most of my custom CSS in a separate
file, it was literally a case of swapping out one framework for another and
bug-fixing from there onwards. There’s definitely a lesson here in using
automation as much as possible.</p>
<p>Customising the styles was another area of concern but I was pleasantly
surprised to find I needed <em>less</em> customisation than with Bootstrap. This
is likely because I didn’t have to override as many defaults (and probably
because I’ve learned more about CSS since then). The one thing I seemed to
be missing was a way to deal with code sections, so I just took what
Bootstrap had and copied it in. At some point I should revisit this.</p>
<p>It did take me a while to get my head around Foundation’s grid but it was
worth it in the end. The idea is that you should design for small screens
first and then adjust things for larger screens as necessary. There are
several different default sizes which inherit their properties from the size
below, unless you explicitly override them. I initially screwed this up by
explicitly defining the grid using the <code class="highlighter-rouge">small-#</code> classes, which obviously
looks ridiculous on small screens. I fixed it by swapping out <code class="highlighter-rouge">small-#</code> for
<code class="highlighter-rouge">medium-#</code> everywhere in the HTML, after which everything looked reasonable.
Items flowed sensibly into a default column for the small screens and looked
acceptable for larger screens and perfectly fine on desktops. I could do
more styling of the mobile view but I’d already achieved most of what I was
after.</p>
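<p>The swap itself is mechanical; below is a hedged sketch of the kind of
one-liner I mean (the function name is mine, and you’d point it at your
own template files):</p>

```shell
#!/usr/bin/env bash
# Hedged sketch: rewrite Foundation grid classes from small-# to
# medium-#. Reads the given files (or stdin) and writes the result to
# stdout; the function name is illustrative.
swap_grid_classes () {
  sed -e 's/small-\([0-9][0-9]*\)/medium-\1/g' "$@"
}

# Example: swap_grid_classes _layouts/*.html > /tmp/preview.html
```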
<h4 id="fixing-image-galleries-and-embedded-content">Fixing image galleries and embedded content</h4>
<p>The only additional thing I used from Bootstrap was the <a href="http://getbootstrap.com/javascript/#carousel">Carousel</a>. I’d
written some custom helper scripts that would take some images and
thumbnails from a specified folder and produce clickable thumbnails with a
slider underneath. Foundation provides <a href="http://foundation.zurb.com/docs/components/orbit.html">Orbit</a>, so I had to spend time
rewriting my script to produce the necessary HTML. This actually resulted
in cleaner HTML and one of the features I wanted (the ability to link to a
specific image) was available by default in Orbit. At this point I also
tried to make the output look better for the case where JavaScript is
disabled (in essence, each image is just displayed as a list). Below is an
example of an image gallery, taken from a previous post, when I
<a href="http://amirchaudhry.com/joined-the-computer-lab/">joined the computer lab</a>.</p>
<div class="gallery">
<noscript><small><em>Note: The gallery needs JavaScript but I've tried to make it degrade gracefully. -Amir</em></small></noscript>
<ul class="inline-list">
<li><a data-orbit-link="join-comp-lab-1"><img src="/images/join-comp-lab/join-comp-lab-thumb-1.png" alt="join-comp-lab-thumb-1" /></a></li>
<li><a data-orbit-link="join-comp-lab-2"><img src="/images/join-comp-lab/join-comp-lab-thumb-2.png" alt="join-comp-lab-thumb-2" /></a></li>
<li><a data-orbit-link="join-comp-lab-3"><img src="/images/join-comp-lab/join-comp-lab-thumb-3.png" alt="join-comp-lab-thumb-3" /></a></li>
</ul>
<ul data-orbit="" data-options="next_on_click:true; timer_speed:3000; pause_on_hover:false; bullets:false;">
<li class="gallery-image" data-orbit-slide="join-comp-lab-1"><img src="/images/join-comp-lab/join-comp-lab-1.jpg" alt="join-comp-lab-1" /></li>
<li class="gallery-image" data-orbit-slide="join-comp-lab-2"><img src="/images/join-comp-lab/join-comp-lab-2.jpg" alt="join-comp-lab-2" /></li>
<li class="gallery-image" data-orbit-slide="join-comp-lab-3"><img src="/images/join-comp-lab/join-comp-lab-3.jpg" alt="join-comp-lab-3" /></li>
</ul>
</div>
<p>Foundation also provides a component called <a href="http://foundation.zurb.com/docs/components/flex_video.html">Flex Video</a>, which allows the
browser to scale videos to the appropriate size. This fix was as simple as
going back through old posts and wrapping anything that was <code class="highlighter-rouge"><iframe></code> in a
<code class="highlighter-rouge"><div class="flex-video"></code>. It really was that simple and all the Vimeo and
YouTube items scaled perfectly. Here’s an example of a video from an
earlier post, where I gave a <a href="http://amirchaudhry.com/wireframe-demos-for-ocamlorg/">walkthrough of the ocaml.org site</a>.
Try changing the width of your browser window to see it scale.</p>
<div class="flex-video widescreen vimeo">
<iframe src="http://player.vimeo.com/video/61768157?byline=0&amp;portrait=0&amp;color=de9e6a" width="540" height="303" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true">Video demo</iframe>
</div>
<h4 id="framework-differences">Framework differences</h4>
<p>Another of the main differences between the two frameworks is that Bootstrap
uses <a href="http://lesscss.org">LESS</a> to manage its CSS whereas Foundation uses <a href="http://sass-lang.com">SASS</a>. Frankly,
I’ve no experience with either of them so it makes little difference to me.
It’s worth bearing in mind for anyone whose workflow involves
pre-processing. Also, Bootstrap is available under the
<a href="http://getbootstrap.com/getting-started/#license-faqs">Apache 2 License</a>, while Foundation is released under
the <a href="http://foundation.zurb.com/learn/faq.html#question-3">MIT license</a>.</p>
<h4 id="summary">Summary</h4>
<p>Overall, the transition was pretty painless and most of the time was spent
getting familiar with the grid, hunting for docs/examples and trying to make
the image gallery work the way I wanted. I do think Bootstrap’s docs are
better but Foundation’s aren’t bad.</p>
<p>Although this isn’t meant to be a comparison, I much prefer Foundation to
Bootstrap. If you’re not sure which to use then I think the secret is in
the names of the frameworks.</p>
<ul>
<li>Bootstrap (for me) was a <em>great</em> way to ‘<em>bootstrap</em>’ a site quickly with
lots of acceptable defaults – it was quick to get started but took some
work to alter.</li>
<li>Foundation seems to provide a great ‘<em>foundation</em>’ on which to create more
customised sites – it’s more flexible but needs more upfront thought.</li>
</ul>
<p>That’s pretty much how I’d recommend them to people now.</p>
Announcing the new OCaml.orgAmir Chaudhry2013-11-20T23:00:00+00:00http://amirchaudhry.com/announcing-new-ocamlorg
<p>As some of you may have noticed, the new OCaml.org site is now live!</p>
<p>The DNS may still be propagating so if <a href="http://ocaml.org">http://ocaml.org</a> hasn’t updated for you then try http://166.78.252.20. This post is in two parts: the first is the announcement and the second is a call for content.</p>
<h3 id="new-ocamlorg-website-design">New OCaml.org website design!</h3>
<p>The new site represents a major milestone in the continuing growth of the OCaml ecosystem. It’s the culmination of a lot of volunteer work over the last several months and I’d specifically like to thank <a href="https://github.com/Chris00">Christophe</a>, <a href="http://ashishagarwal.org">Ashish</a> and <a href="http://philippewang.info/CL/">Philippe</a> for their dedication (the <a href="https://github.com/ocaml/ocaml.org/commits/master">commit logs</a> speak volumes).</p>
<p><a href="http://amirchaudhry.com/wireframe-demos-for-ocamlorg/"><img src="http://amirchaudhry.com/images/ann-new-ocamlorg/ocaml-home-wire.png" alt="OCaml.org Wireframes" /></a></p>
<p>We began this journey just over 8 months ago with paper, pencils and a lot of ideas. This led to a comprehensive set of <a href="http://amirchaudhry.com/wireframe-demos-for-ocamlorg/">wireframes and walk-throughs</a> of the site, which then developed into a collection of <a href="https://github.com/ocaml/ocaml.org/wiki/Site-Redesign">Photoshop mockups</a>. In turn, these formed the basis for the html templates and style sheets, which we’ve adapted to fit our needs across the site.</p>
<p>Alongside the design process, we also considered the kind of structure and <a href="http://lists.ocaml.org/pipermail/infrastructure/2013-July/000211.html">workflow we aspired to</a>, both as maintainers and contributors. This led us to develop completely new tools for <a href="http://pw374.github.io/posts/2013-09-05-22-31-26-about-omd.html">Markdown</a> and <a href="http://pw374.github.io/posts/2013-10-03-20-35-12-using-mpp-two-different-ways.html">templating</a> in OCaml, which are now available in OPAM for the benefit of all.</p>
<p>Working on all these things in parallel definitely had its challenges (which I’ll write about separately) but the result has been worth the effort.</p>
<p><a href="http://ocaml.org"><img src="http://amirchaudhry.com/images/ann-new-ocamlorg/ocaml-home-mockup.png" alt="OCaml.org" /></a></p>
<p>The journey is ongoing and we still have many more improvements we hope to make. The site you see today primarily improves upon the design, structure and workflows but in time, we also intend to incorporate more information on packages and documentation. With the new tooling, moving the website forward will become much easier and I hope that more members of the community become involved in the generation and curation of content. This brings me to the second part of this post.</p>
<h3 id="call-for-content">Call for content</h3>
<p>We have lots of great content on the website but there are parts that could do with a refresh and gaps that could be filled. As a community driven site, we need ongoing contributions to ensure that the site best reflects its members.</p>
<p>For example, if you do commercial work on OCaml then maybe you’d like to add yourself to the <a href="http://ocaml.org/community/support.html">support page</a>? Perhaps there are tutorials you can help to complete, like <a href="http://ocaml.org/learn/tutorials/99problems.html">99 problems</a>? If you’re not sure where to begin, there are already a number of <a href="https://github.com/ocaml/ocaml.org/issues?labels=content">content issues</a> you could contribute to.</p>
<p>Although we’ve gone through a bug-hunt already, feedback on the site is still very welcome. You can either <a href="https://github.com/ocaml/ocaml.org/issues">create an issue</a> on the tracker (preferred), or email the infrastructure list.</p>
<p>It’s fantastic how far we’ve come and I look forward to the next phase!</p>
Migration plan for the OCaml.org redesignAmir Chaudhry2013-11-06T11:00:00+00:00http://amirchaudhry.com/migration-plan-ocaml-org
<p>We’re close to releasing the new design of ocaml.org but need help from the
OCaml community to identify and fix bugs before we switch next week.</p>
<p>Ashish, Christophe, Philippe and I have been discussing how we should go
about this and below is the plan for migration. If anyone would like to
discuss any of this, then the <a href="http://lists.ocaml.org/listinfo/infrastructure">infrastructure list</a> is the best
place to do so.</p>
<ol>
<li>
<p>We’ve made a <strong><a href="https://github.com/ocaml/ocaml.org/tree/redesign">new branch</a></strong> on the main ocaml.org repository with
the redesign. This branch is a fork of the master and we’ve simply cleaned
up and replayed our git commits there.</p>
</li>
<li>
<p>We’ve built a live version of the new site, which is visible at
<strong><a href="http://preview.ocaml.org">http://preview.ocaml.org</a></strong> - this is rebuilt every few minutes
from the branch mentioned above.</p>
</li>
<li>
<p>Over the course of one week, we ask the community to review the new site
and <strong><a href="https://github.com/ocaml/ocaml.org/issues">report any bugs or problems</a></strong> on the issue tracker. We <em>triage</em>
those bugs to identify any blockers and work on those first. This is the
phase we’ll be in from <em>today</em>.</p>
</li>
<li>
<p>After one week (7 days), and after blocking bugs have been fixed, we
<strong>merge the redesign branch</strong> into the master branch. This would
effectively present the new site to the world.</p>
</li>
</ol>
<p>During the above, we would not be able to accept any new pull requests on
the master branch but would be happy to accept them on the new redesign
branch; hence the restriction to a one-week time frame.</p>
<p>Please note that the above is only intended to merge the <em>design</em> and
<em>toolchain</em> for the new site. Specifically, we’ve created new landing
pages, have new style sheets and have restructured the site’s contents as
well as made some new libraries (<a href="http://pw374.github.io/posts/2013-09-05-22-31-26-about-omd.html">OMD</a> and <a href="http://pw374.github.io/posts/2013-10-03-20-39-07-OPAMaging-MPP.html">MPP</a>). The new toolchain
means people can write files in markdown, which makes contributing content a
lot easier.</p>
<p>Since the files are on GitHub, people don’t even need to clone the site
locally to make simple edits (or even add new pages). Just click the ‘Edit
this page’ link in the footer to be taken to the right file in the
repository and GitHub’s editing and pull request features will allow you to
make changes and submit updates, all from within your browser (see the
<a href="https://help.github.com/articles/creating-and-editing-files-in-your-repository">GitHub Article</a> for details).</p>
<p>There is still work to be done on adding new features but the above changes
are already a great improvement to the site and are ready to be reviewed by
the OCaml community and merged.</p>
Review of the OCaml FPDays tutorialAmir Chaudhry2013-10-28T12:30:00+00:00http://amirchaudhry.com/fpdays-review
<p><a href="http://fpdays.net/2013/sessions/index.php?session=24"><img style="float: right; margin-top: 10px; margin-left: 10px" src="/images/web/fpdays-logo.png" /></a>
Last Thursday a bunch of us from the OCaml Labs team gave an OCaml tutorial
at the <a href="http://fpdays.net/2013/sessions/index.php?session=24">FPDays</a> conference (an event for people interested in Functional
Programming). <a href="https://github.com/yallop">Jeremy</a> and I led the session with <a href="http://www.lpw25.net">Leo</a>, <a href="https://github.com/dsheets">David</a> and
<a href="http://philippewang.info/CL/">Philippe</a> helping everyone progress and dealing with questions.</p>
<p><img style="float: left; margin-right: 10px" src="/images/fpdays2013/fpdays2013-01.jpg" />
It turned out to be by far the <em>most popular session</em> at the conference with
over 20 people all wanting to get to grips with OCaml! An excellent turnout
and a great indicator of the interest that’s out there, especially when you
offer a hands-on session to people. This shouldn’t be a surprise as we’ve
had good attendance for the general <a href="http://www.meetup.com/Cambridge-NonDysFunctional-Programmers/">OCaml meetups</a> I’ve run
and also the <a href="http://ocamllabs.github.io/compiler-hacking/2013/09/17/compiler-hacking-july-2013.html">compiler hacking sessions</a>, which Jeremy and
Leo have been building up (do sign up if you’re interested in either of
those!). We had a nice surprise for attendees, which were
<a href="http://en.wikipedia.org/wiki/Galley_proof">uncorrected proof</a> copies of Real World OCaml and luckily, we had just
enough to go around.</p>
<p>For the tutorial itself, Jeremy put together a nice sequence of exercises
and a <a href="https://github.com/ocamllabs/fpdays-skeleton">skeleton repo</a> (with helpful comments in the code) so that people
could dive in quickly. The event was set up to be really informal and the
rough plan was as following:</p>
<ol>
<li>
<p><em>Installation/Intro</em> - We checked that people had been able to follow the
<a href="http://amirchaudhry.com/fpdays-ocaml-session/">installation instructions</a>, which we’d sent them in advance.
We also handed out copies of the book and made sure folks were comfortable
with <a href="http://opam.ocaml.org">OPAM</a>.</p>
</li>
<li>
<p><em>Hello world</em> - A light intro to get people familiar with the OCaml
syntax and installing packages with OPAM. This would also help people to get
familiar with the toolchain, workflow and compilation.</p>
</li>
<li>
<p><em>Monty Hall browser game</em> - Using <a href="http://ocsigen.org/js_of_ocaml/"><code class="highlighter-rouge">js_of_ocaml</code></a>, we wanted
people to create and run the <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem">Monty Hall problem</a> in their
browser. This would give people a taste of some real world interaction by
having to deal with the DOM and interfaces. If folks did well, they could
add code to keep logs of the game results.</p>
</li>
<li>
<p><em>Client-server game</em> - The previous game ran entirely in the browser (so
it could be examined by players); here the task was to split it into a client
and server, ensuring the two stay in sync. This would demonstrate the
re-usability of the OCaml code already written and give people a feel for
client server interactions. If people wanted to do more, they could use
<a href="http://opam.ocaml.org/pkg/ctypes/0.1.1/">ctypes</a> and get better random numbers.</p>
</li>
</ol>
<p>We did manage to stick to the overall scheme as above and we think this is a
great base from which to improve future tutorials. It has the really nice
benefit of having visual, interactive elements and the ability to run things
both in the browser as well as on the server is a great way to show the
versatility of OCaml. <code class="highlighter-rouge">js_of_ocaml</code> is quite a mature tool and so it’s
no surprise that it’s also used by companies such as Facebook (see the recent
<a href="http://www.youtube.com/watch?v=gKWNjFagR9k">CUFP talk by Julien Verlaguet</a> - skip to <a href="http://www.youtube.com/watch?feature=player_detailpage&v=gKWNjFagR9k#t=1149">19:00</a>).</p>
<p>We learned a lot from running this session so we’ve captured the good, the
bad and the ugly below. This is useful for anyone who’d like to run an
OCaml tutorial in the future and also for us to be aware of the next
time we do this. I’ve incorporated the feedback from the attendees as well
as our own thoughts.</p>
<p><img src="/images/fpdays2013/fpdays2013-03.jpg" alt="Heads down and hands on" /></p>
<h3 id="things-we-learnt">Things we learnt</h3>
<h4 id="the-good">The Good</h4>
<ul>
<li>
<p>Most people really did follow the install instructions beforehand. This
made things so much easier on the day as we didn’t have to worry about
compile times and people getting bored. A few people had even got in touch
with me the night before to sort out installation problems.</p>
</li>
<li>
<p>Many folks from OCaml Labs also came over to help people, which meant
no-one was waiting longer than around 10 seconds before getting help.</p>
</li>
<li>
<p>We had a good plan of the things we wanted to cover but we were happy to
be flexible and made it clear the aim was to get right into it. Several
folks told us that they really appreciated this loose (as opposed to rigid)
structure.</p>
</li>
<li>
<p>We didn’t spend any time lecturing the room but instead got people right
into the code. Having enough of a skeleton to get something interesting
working was a big plus in this regard. People did progress from the early
examples to the later ones fairly well.</p>
</li>
<li>
<p>We had a VM with the correct set up that we could log people into if they
were having trouble locally. Two people made use of this.</p>
</li>
<li>
<p>Of course, it was great to have early proofs of the book and these were
well-received.</p>
</li>
</ul>
<p><img src="/images/fpdays2013/fpdays2013-02.jpg" alt="RWO books galore!" /></p>
<h4 id="the-bad">The Bad</h4>
<ul>
<li>
<p>In our excitement to get right into the exercises, we didn’t really give
an overview of OCaml and its benefits. A few minutes at the beginning would
be enough and it’s important so that people can leave with a few sound-bites.</p>
</li>
<li>
<p>Not everyone received my email about installation, and certainly not the
late arrivals. This meant some pain getting things downloaded and running,
especially due to the wifi (see ‘Ugly’ below).</p>
</li>
<li>
<p>A few of the people who <em>had</em> installed, didn’t complete the instructions
fully but didn’t realise this until the morning of the session. There was a good
suggestion about having some kind of test to run that would check
everything, so you’d know if there was something missing.</p>
</li>
<li>
<p>We really should have had a cut-off where we told people to use VMs
instead of fixing installation issues; 10-15 minutes would have been
enough. This would have been especially useful for the late-comers.</p>
</li>
<li>
<p>We didn’t really keep a record of the problems folks were having so we
can’t now go back and fix underlying issues. To be fair, this would have
been a little awkward to do ad-hoc but in hindsight, it’s a good thing to
plan for.</p>
</li>
</ul>
<h4 id="the-ugly">The Ugly</h4>
<ul>
<li>The only ugly part was the wifi. It turned out that the room itself was a
bit of a dead-spot and that wasn’t helped by 30ish devices trying to connect
to one access point! Having everyone grab packages at the same time in the
morning probably didn’t help. It was especially tricky as all our
mitigation plans seemed to revolve around at least having local connectivity.
In any case, this problem only lasted for the morning session and was a
little better by the afternoon. I’d definitely recommend a backup plan in
the case of complete wifi failure next time! One such plan that Leo got
started on was to put the repository and other information onto a flash
drive that could be shared with people. We didn’t need this in the end but
it’ll be useful to have something like this prepared for next time. If
anyone fancies donating a bunch of flash drives, I’ll happily receive them!</li>
</ul>
<p>Overall, it was a great session and everyone left happy, having completed
most of the tutorial (and with a book!). A few even continued at home
afterwards and <a href="https://twitter.com/richardclegg/status/393458073052139520">got in touch</a> to let us know that they got
everything working.
Thanks to <a href="https://twitter.com/MarkDalgarno">Mark</a>, <a href="https://twitter.com/JacquiDDavidson">Jacqui</a> and the rest of
the FPDays crew for a great conference!</p>
<p><img src="/images/fpdays2013/fpdays2013-04.jpg" alt="RWO Book giveaway" /></p>
<p>(Thanks to Jeremy, Leo, David and Philippe for contributions to this post)</p>
FP Days OCaml SessionAmir Chaudhry2013-10-22T21:00:00+00:00http://amirchaudhry.com/fpdays-ocaml-session
<p>On Thursday, along with <a href="https://github.com/yallop">Jeremy</a> and
<a href="http://www.lpw25.net">Leo</a>, I’ll be running an OCaml Hands-on Session at
the <a href="http://fpdays.net/2013/">FPDays conference</a>. Below are some prep
instructions for attendees.</p>
<h3 id="preparation-for-the-session">Preparation for the session</h3>
<p>If you’re starting from scratch, installation can take some time so it’s
best to get as much done in advance as possible. You’ll need OPAM (the
package manager), OCaml 4.01 (available through OPAM) and a few libraries
before Thursday. If you have any issues, please contact Amir.</p>
<ul>
<li>
<p><strong>OPAM</strong>: Follow the instructions for your platform at <a href="http://opam.ocaml.org/doc/Quick_Install.html">http://opam.ocaml.org/doc/Quick_Install.html</a>.
OPAM depends on OCaml, so installing it should pull in OCaml as well (most
likely version 3.12). You can get a cup of
coffee while you wait. After installation, run <code class="highlighter-rouge">opam init</code> to initialise OPAM.</p>
</li>
<li>
<p><strong>OCaml 4.01</strong>: We actually need the latest version of OCaml but OPAM
makes this easy. Just run the following (and get more coffee):</p>
</li>
</ul>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span>opam update
<span class="nv">$ </span>opam switch 4.01.0
<span class="nv">$ </span><span class="nb">eval</span> <span class="sb">`</span>opam config env<span class="sb">`</span></code></pre></figure>
<ul>
<li><strong>Libraries</strong>: For the workshop you will need to check that you have the
following installed: <code class="highlighter-rouge">libffi</code>, <code class="highlighter-rouge">pcre</code> and <code class="highlighter-rouge">pkg-config</code>. This will depend on
your platform: on a Mac with homebrew I would do
<code class="highlighter-rouge">brew install libffi pcre pkg-config</code>, while on Debian or Ubuntu the equivalent is
<code class="highlighter-rouge">apt-get install libffi-dev libpcre3-dev pkg-config</code>. After this, two OCaml packages worth
installing in advance are <code class="highlighter-rouge">core</code> and <code class="highlighter-rouge">js_of_ocaml</code>, so simply run:</li>
</ul>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span>opam install core js_of_ocaml</code></pre></figure>
<p>OPAM will take care of the dependencies, and we can get the rest on the day!</p>
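<p>If you’d like to double-check the C libraries beforehand, <code class="highlighter-rouge">pkg-config</code> can
confirm that their development files are visible. This is an optional
sketch; the module names used below (<code class="highlighter-rouge">libffi</code>, <code class="highlighter-rouge">libpcre</code>) are the ones the
<code class="highlighter-rouge">.pc</code> files usually use, which may differ slightly across platforms:</p>

```shell
# Optional check: ask pkg-config whether each C library's development
# files are installed. Prints one status line per library either way.
for lib in libffi libpcre; do
  if pkg-config --exists "$lib" 2>/dev/null; then
    echo "$lib: version $(pkg-config --modversion "$lib")"
  else
    echo "$lib: not found by pkg-config"
  fi
done
```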
Feedback requested on the OCaml.org redesign (Amir Chaudhry, 2013-09-24T14:00:00+00:00) http://amirchaudhry.com/ocamlorg-request-for-feedback
<p>There is a work-in-progress site at
<a href="http://ocaml-redesign.github.io">ocaml-redesign.github.io</a>, where we’ve
been developing both the tools and design for the new ocaml.org pages. This
allows us to test our tools and fix issues before we consider merging
changes upstream.</p>
<p>There is a more detailed post coming about all the design work to date and
the workflow we’re using, but in the meantime, feedback on the following
areas would be most welcome. Please leave feedback in the form of issues on
the <a href="https://github.com/ocamllabs/sandbox-ocaml.org/issues">ocaml.org sandbox repo</a>. You can also raise points on the
<a href="http://lists.ocaml.org/listinfo/infrastructure">infrastructure mailing list</a>.</p>
<ol>
<li>
<p><strong>OCaml Logo</strong> - There was some feedback on the last iteration of the
logo, especially regarding the font, so there are now several options to
consider. Please look at the images on the
<a href="https://github.com/ocaml/ocaml.org/wiki/Draft-OCaml-Logos">ocaml.org GitHub wiki</a> and then leave your feedback on
<a href="https://github.com/ocamllabs/sandbox-ocaml.org/issues/16">issue #16 on the sandbox repo</a>.</p>
</li>
<li>
<p><strong>Site design</strong> - Please do give feedback on the design and any glitches
you notice. Text on each of the new landing pages is still an initial draft
so comments and improvements there are also welcome (specifically: Home
Page, Learn, Documentation, Platform, Community). There are already a few
<a href="https://github.com/ocamllabs/sandbox-ocaml.org/issues">known issues</a>, so do
add your comments to those threads first.</p>
</li>
</ol>
Wireframe demos for OCaml.org (Amir Chaudhry, 2013-03-14T00:00:00+00:00) http://amirchaudhry.com/wireframe-demos-for-ocamlorg
<h3 id="making-mockups">Making mockups</h3>
<p>Over the last few months, I’ve been working on various aspects of the <a href="http://ocaml.org">OCaml.org</a> design project. This covers things like the design, information architecture and how to incorporate new functionality. One of the methods for thinking through these was to put together a bunch of wireframes using <a href="http://www.balsamiq.com">Balsamiq</a> and use these to express (and generate) ideas as well as get feedback quickly.</p>
<p>If you haven’t used wireframes before, think of them as a slightly more advanced form of sketching things out on a whiteboard. The best aspect is that it’s far quicker, easier and <em>cheaper</em> to iterate using wireframes than on an actual website. As you’ll see below, you can also convey a lot of information about how a site might work by showing people a clickable demo.</p>
<p>I want to make this work public and I thought the best way would be to show you some screencasts of how the upcoming <a href="http://ocaml.org">OCaml.org</a> website might work and also make the demo available to all of you. The three videos below cover three aspects of the site and I’d encourage you to go through them in order (about 16 minutes in total). Apologies if my screen isn’t particularly clear in the videos but you can visit the demo for yourself and see things in more detail (link and info on feedback at the end of this post).</p>
<h3 id="video-walkthroughs">Video walkthroughs</h3>
<p>For those who’d like to watch the videos back-to-back and scaled to fit your browser window, you can <a href="http://vimeo.com/couchmode/album/2301640">view the Vimeo album in ‘couchmode’</a>. Otherwise, individual videos are embedded below (total time 16m17s).</p>
<div class="flex-video widescreen vimeo">
<iframe src="http://player.vimeo.com/video/61768157?byline=0&portrait=0&color=de9e6a" width="540" height="303" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true">Video Part 1 - Overview - http://player.vimeo.com/video/61768157</iframe>
</div>
<div class="flex-video widescreen vimeo">
<iframe src="http://player.vimeo.com/video/61768235?byline=0&portrait=0&color=de9e6a" width="540" height="304" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true">Video Part 2 - Documentation - http://player.vimeo.com/video/61768235</iframe>
</div>
<div class="flex-video widescreen vimeo">
<iframe src="http://player.vimeo.com/video/61768273?byline=0&portrait=0&color=de9e6a" width="540" height="304" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true">Video Part 3 - Continuous Integration - http://player.vimeo.com/video/61768273</iframe>
</div>
<h3 id="public-wireframe-demo">Public wireframe demo</h3>
<p>A demo you can interact with can be found at <a href="https://ocaml.mybalsamiq.com/projects/public-demo/naked/0_home?key=b897ea86d8a8199c6e46b3295ddf630dfa33e5e1">OCaml.org wireframe demo</a> and image files for each page are available on the <a href="https://github.com/ocaml/ocaml.org/wiki/Wireframes">github ocaml.org wiki</a>. Please bear in mind the following:</p>
<ul>
<li>
<p>Not everything that looks like it might be clickable actually is (and vice versa). There’ll be a toggle on the bottom right of the browser window that will highlight what can be clicked.</p>
</li>
<li>
<p>There are parts of the site which are ‘work in progress’ and are marked as such.</p>
</li>
<li>
<p>The designs you see aren’t necessarily final. Your feedback will help shape our decisions and the best way to provide it is via the <a href="http://lists.ocaml.org/listinfo/infrastructure">infrastructure mailing list</a>.</p>
</li>
</ul>