Amir Chaudhry

thoughts, comments & general ramblings

Brewing MISO to serve Nymote

The mission of Nymote is to enable the creation of resilient decentralised systems that incorporate privacy from the ground up, so that users retain control of their networks and data. To achieve this, we reconsider all the old assumptions about how software is created in light of the problems of the modern, networked environment. Problems that will become even more pronounced as more devices and sensors find their way into our lives.

We want to make it simple for anyone to run a piece of the cloud for their own purposes. The first three applications Nymote targets are Mail, Contacts and Calendars, but to get there, we first have to create solid foundations.

Defining the bedrock

In order to create applications that work for the user, we first have to create a robust and reliable software stack that takes care of fundamental problems for us. In other words, to be able to assemble the applications we desire, we must first construct the correct building blocks.

We’ve taken a clean-slate approach so that we can build long-lasting solutions with all the benefits of hindsight but none of the baggage. As mentioned in earlier posts, there are three main components of the stack, which are: Mirage (OS for the Cloud/IoT), Irmin (distributed datastore) and Signpost (identity and connectivity) - all built using the OCaml programming language.

Using the MISO stack to build Nymote

As you’ve already noticed, there’s a useful acronym for the above tools — MISO. Each of the projects mentioned is a serious undertaking in its own right and each is likely to be impactful as a stand-alone concept. However, when used together we have the opportunity to create applications and services with high levels of security, scalability and stability, which are not easy to achieve using other means.

In other words, MISO is the toolstack that we’re using to build Nymote — Nymote is the decentralised system that works for its users.

Each of the projects is at a different phase but they have all made great strides over the last year.


Mirage — a library operating system that constructs unikernels — is the most mature part of the stack. I previously wrote about the Mirage 1.0 release and only six months later we had an impressive 2.0 release, with continuing advances throughout the year. We achieved major milestones such as the ability to deploy unikernels to ARM-based devices, as well as a clean-slate implementation of the transport layer security (TLS) protocol.

In addition to the development efforts, there have also been many presentations to audiences, ranging from small groups of startups all the way to prestigious keynotes with 1000+ attendees. Ever since we’ve had ARM support, the talks themselves have been delivered from unikernels running on Cubieboards, and there is a growing collection of slides online.

All of these activities have led to a tremendous increase in public awareness of unikernels and the value they can bring to developing robust, modern software as well as the promise of immutable infrastructure. As more people look to get involved and contribute to the codebase, we’ve also begun curating a set of Pioneer Projects, which are suitable for a range of skill-levels.

You can find much more information on all the activities of 2014 in the comprehensive Mirage review post. As it’s the most mature component of the MISO stack, anyone interested in the development of code towards Nymote should join the Mirage mailing list.


Irmin — a library to persist and synchronize distributed data structures — made significant progress last year. It’s based on the principles of Git, the distributed version control system, and allows developers to choose the appropriate combination of consistency, availability and partition tolerance for their needs.

Early last year Irmin was released as an alpha with the ability to speak ‘fluent Git’ and by the summer, it was supporting user-defined merge operations and fast in-memory views. A couple of summer projects improved the merge strategies and synchronisation strategies, while an external project — Xenstore — used Irmin to add fault-tolerance.
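The idea of a user-defined merge is easiest to see with the classic example of a mergeable counter. The sketch below is in Python purely for illustration (Irmin itself exposes merges through its OCaml API, and none of these names are Irmin’s): a three-way merge combines the two diverged values with their common ancestor, so that neither side’s change is lost.

```python
def merge_counter(ancestor, ours, theirs):
    """Three-way merge for a counter: apply both sides' deltas to the ancestor."""
    return ancestor + (ours - ancestor) + (theirs - ancestor)

# Two replicas diverge from a shared value of 5:
base = 5
replica_a = 8   # this side incremented by 3
replica_b = 4   # that side decremented by 1

merged = merge_counter(base, replica_a, replica_b)
print(merged)  # 7: both changes are preserved
```

The same pattern generalises: for any data structure, the developer supplies a function of (ancestor, ours, theirs), and the store takes care of finding the common ancestor, just as Git does for files.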

More recent work has involved a big clean-up in the user-facing API (with nice developer documentation) and a cleaner high-level REST API. Upcoming work includes proper documentation of the REST API, which means Irmin can more easily be used in non-OCaml projects, and full integration with Mirage projects.

Irmin is already being used to create a version controlled IMAP server and a version controlled distributed log system. It’s no surprise that the first major release is coming very soon!


Signpost will be a collection of libraries that aims to provide identity and connectivity between devices. Forming efficient connections between end-points is becoming ever more important as the number of devices we own increases. These devices need to be able to recognise and reach each other, regardless of their location on the network or the obstacles in between.

This is very much a nascent project and it involves a lot of work on underlying libraries to ensure that security aspects are properly considered. As such, we must take great care in how we implement things and be clear about any trade-offs we make. Our thoughts are beginning to converge on a design we think will work and that we would entrust with our own data, but we’re treating this as a case of ‘Here Be Dragons’. This is a critical piece of the stack and we’ll share what we learn as we chart our way towards it.

Even though we’re at the design stage of Signpost, we did substantial work last year to create the libraries we might use for implementation. A particularly exciting one is Jitsu — which stands for Just In Time Summoning of Unikernels. This is a DNS server that spawns unikernels in response to DNS requests and boots them in real-time with no perceptible lag to the end user. In other words, it makes much more efficient use of resources and significantly reduces latency of services for end-users — services are only run when they need to be, in the places they need to be.
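The control flow behind just-in-time summoning can be sketched in a few lines. This is a conceptual Python illustration only; the service names, addresses and boot function are invented stand-ins, not Jitsu’s actual OCaml implementation.

```python
# Conceptual sketch of just-in-time summoning: a unikernel is booted
# only when a DNS query for its name arrives. All names here are
# hypothetical placeholders.
running = {}  # service name -> IP address of its live unikernel

def boot_unikernel(name):
    """Pretend to boot a unikernel and return the address it serves on."""
    return "10.0.0.%d" % (len(running) + 2)

def handle_dns_query(name):
    if name not in running:
        # Boot on demand; real unikernels start fast enough that the
        # client sees no perceptible lag.
        running[name] = boot_unikernel(name)
    return running[name]  # answer the query with the (now live) address

print(handle_dns_query("blog.example.org"))  # boots on first request
print(handle_dns_query("blog.example.org"))  # already running; same address
```

The point of the design is resource efficiency: services consume nothing while dormant and exist only when and where they are needed.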

There has also been lots of effort on other libraries that will help us iterate towards a complete solution. Initially, we will use pre-existing implementations but in time we can take what we’ve learned and create more robust alternatives. Some of the libraries are listed below (but note the friendly disclaimers!).


OCaml is a mature, powerful and highly pragmatic language. It’s proven ideal for creating robust systems applications and many others also recognise this. We’re using it to create all the tools you’ve read about so far and we’re also helping to improve the ecosystem around it.

One of the major things we’ve been involved with is the coordination of the OCaml Platform, which combines the OCaml compiler with a coherent set of tools and workflows to be more productive in the language and speed up development time. We presented the first major release of these efforts at OCaml 2014 and you can read the abstract or watch the video.

There’s more to come, as we continue to improve the tooling and also support the community in other ways.

Early steps towards applications

Building blocks are important but we also need to push towards working applications. There are different approaches we’ve taken to this, which include building prototypes, wireframing use-cases and implementing features with other toolstacks. Some of this work is also part of a larger EU funded project* and below are brief summaries of the things we’ve done so far. We’ll expand on them as we do more over time.

Mail - As mentioned above, a prototype IMAP server exists (IMAPlet) which uses Irmin to store data. This is already able to connect to a client to serve mail. The important feature is that it’s an IMAP server which is version controlled in the backend and can expose a REST API from the mailstore quite easily.

Contacts - We first made wireframe mockups of the features we might like in a contacts app (to follow in later post) and then built a draft implementation. To get here, code was first written in OCaml and then put through the js_of_ocaml compiler. This is valuable as it takes us closer to a point where we can build networks using our address books and have the system take care of sharing details in a privacy-conscious manner and with minimal maintenance. The summary post has more detail.

Calendar - This use-case was approached in a completely different way as part of a hackathon last year. A rough but functional prototype was built over one weekend, with a team formed at the event. It was centralised but it tested the idea that a service which integrates intimately with your life (to the point of being very invasive) can provide disproportionate benefits. The experience report describes the weekend and our app — Clarity — won first place. This was great validation that the features are desirable so we need to work towards a decentralised, privacy-conscious version.

Time to get involved!

The coming year represents the best time to be working on the MISO stack and using it to make Nymote a reality. All source code is publicly available and the projects are varied enough that there is something for everyone. Browse through issues, consider the projects or simply write online and share with us the things you’d like to see. This promises to be an exciting year!

Sign up to the Nymote mailing list to keep up to date!

* The research leading to these results has received funding from the European Union's Seventh Framework Programme FP7/2007-2013 under the UCN project, grant agreement no 611001.


Unikernels for everyone!

Many people have now set up unikernels for blogs, documenting their experiences for others to follow. Even more important is that people are going beyond static sites to build unikernels that provide more complicated services and solve real-world problems.

To help newcomers get started, there are now even more posts that use different tools and target different deployment methods. Below are summaries of some of the posts I found interesting and that will make it easier for you to try out different ways of creating and deploying your unikernels.

Unikernel blogs with MirageOS

Mindy picked up where the first set of instructions finished and described her work to get an Octopress blog running on Amazon EC2. As one of the first people outside the core team to work on this, she had a lot of interesting experiences — which included getting into the Mirage networking stack to debug an issue and submit a bugfix! More recently, she also wrote a couple of excellent posts on why she uses a unikernel for her blog. These posts cover the security concerns (and responsibility) of running networked services on today’s Internet and the importance of owning your content — both ideas are at the heart of the work behind Nymote and are well worth reading.

Ian took a different path to AWS deployment by using Vagrant and Test Kitchen to get his static site together and build his unikernel, and then Packer to create the images for deployment to EC2. All succinctly explained with code available on GitHub for others to try out!

Toby wanted to put together a blog that was a little more complicated than a traditional static site, with specific features like subdomains based on tags and the ability to set future dates for posts. He also pulled in some other libraries so he can use Mustache for server-side rendering, where his blog posts and metadata are stored as JSON and rendered on request.

Chris saw others working to get unikernel blogs on EC2 and decided he’d try getting his up and running on Linode instead. He is the first person to deploy his unikernel to Linode and he provided a great walkthrough with helpful screenshots, as well as brief notes about the handful of differences compared with EC2. Chris also wrote about the issue he had with clean urls (i.e. serving /about/index.html when a user visits /about/) — he describes the things he tried out until he was finally able to fix it.
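The clean-URL problem comes down to a small rewrite rule: map a directory-style request onto the index file stored for it. A minimal sketch of that rule (in Python for illustration; the real fix lives in the unikernel’s OCaml dispatch code, and this function is a hypothetical stand-in):

```python
def resolve_path(path):
    """Rewrite 'clean' URLs to the stored file they refer to."""
    if path.endswith("/"):
        return path + "index.html"       # /about/  -> /about/index.html
    last_segment = path.rsplit("/", 1)[-1]
    if "." not in last_segment:
        return path + "/index.html"      # /about   -> /about/index.html
    return path                          # /main.css stays as-is

print(resolve_path("/about/"))    # /about/index.html
print(resolve_path("/about"))     # /about/index.html
print(resolve_path("/main.css"))  # /main.css
```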

Phil focused on getting unikernels running on Cubieboards, which are ARM-based development boards similar to the Raspberry Pi. He starts by taking Mirage’s pre-built Cubieboard images — which make it easy to get Xen and an OCaml environment set up — and installing them on the board. He also noted the issues he came across, along with the simple tweaks he made to fix them, and finally serves a Mirage hello world page.

More than just static sites

Static sites have become the new ‘hello world’ app. They’re simple to manage, low-risk and provide lots of opportunities to experience something new. These aspects make them ideal for discovering the benefits (and trade-offs) of the unikernel approach and I look forward to seeing what variations people come up with — for instance, there aren’t any public instructions for deploying to Rackspace so it would be great to read about someone’s experiences there. However, there are many other applications that also fit the above criteria of simplicity, low risk and plentiful learning opportunities.

Thomas Leonard decided to create a unikernel for a simple REST service for queuing package uploads for 0install. His post takes you from the very beginning, with a simple hello world program running on Xen, all the way through to creating his REST service. Along the way there are lots of code snippets and explanations of the libraries being used and what they’re doing. This is a great use-case for unikernels and there are a lot of interesting things to take from this post, for example the ease with which Thomas was able to find and fix bugs using regular tools. There’s also lots of information on performance testing and optimisation of the unikernel, which he covers in a follow-up post, and he even built tools to visualise the traces.

Of course, there’s much more activity out there than described in this post as people continually propose ideas on the Mirage mailing list — both for things they would like to try out and issues they came up against. In my last post, I pointed out that the workflow is applicable to any type of unikernel and as Thomas showed, with a bit of effort it’s already possible to create useful, real-world services using the many libraries that already exist. There’s also a lot of scaffolding in the mirage-skeleton repo that you can build on which makes it even easier to get involved. If you want to dive deeper into the libraries and perhaps learn OCaml, there are lots of resources online and projects to get involved with too.

Now is a great time to try building a unikernel for yourself and as you can see from the posts above, shared experiences help other people progress further and branch out into new areas. When you’ve had a chance to try something out please do share your experiences online!

This post also appears on the Nymote blog.


Towards a governance framework for the domain

The projects around the domain name are becoming more established and it’s time to think about how they’re organised. 2014 saw a lot of activity, which built on the successes from 2013. Some of the main things that stand out to me are:

  • More volunteers contributing to the public website with translations, bug fixes and content updates, as well as many new visitors — for example, the new page on teaching OCaml received over 5k visits alone. The increasing contributions are a result of the earlier work on re-engineering the site and there are many ways to get involved so please do contribute!

  • The relentless improvements and growth of OPAM, both in terms of the repository — with over 1000 additional packages and several new repo maintainers — and also improved workflows (e.g. the new pin functionality). The OPAM site and package list also moved to the domain, becoming the substrate for the OCaml Platform efforts. This began with the work towards OPAM 1.2 and there is much more to come (including closer integration in terms of styling). Join the Platform list to keep up to date.

There is other work besides what I’ve mentioned and I think, by any measure, all the projects have been quite successful. As the community continues to develop, it’s important to clarify how things currently work, both to improve transparency and to make it easier for newcomers to get involved.

Factors for a governance framework

For the last couple of months, I’ve been looking at how larger projects manage themselves and the governance documents that are available. My aim has been to put such a document together for the domain without introducing burdensome processes. There are a number of things that stood out to me during this process, which have guided the approach I’m taking.

My considerations for a governance document:

  • A governance document is not necessary for success but it’s valuable to demonstrate a commitment to a stable decision-making process. There are many projects that progress perfectly well without any documented processes and indeed the work around the domain so far is a good example of this (as is OCaml itself). However, for projects to achieve a scale greater than the initial teams, it’s a significant benefit to encode in writing how things work (note that I didn’t define the type of decision-making process - merely that it’s a stable one).

  • It must clarify its scope so that there is no confusion about what the document covers. In our case, it needs to be clear that the governance covers the domain itself, rather than the website.

  • It should document the reality, rather than represent an aspirational goal or what people believe a governance structure should look like. It’s very tempting to think of an idealised structure without recognising that behaviours and norms have already been established. Sometimes this will be vague and poorly defined but that might simply indicate areas that the community hasn’t encountered yet (e.g. it’s uncommon for any new project to seriously think about dispute resolution processes until they have to). In this sense, the initial version of a governance document should simply be a written description of how things currently stand, rather than a means to adjust that behaviour.

  • It should be simple and self-contained, so that anyone can understand the intent quickly without recourse to other documents. It may be tempting to consider every edge-case or try to resolve every likely ambiguity but this just leads to large, legal documents. This approach may well be necessary once projects have reached a certain scale but to implement it sooner would be a case of premature optimisation — not to mention that very few people would read and remember such a document.

  • It’s a living document. If the community decides that it would prefer a new arrangement, then the document conveniently provides a stable starting point from which to iterate. Indeed, it should adapt along with the project that it governs.

With the above points in mind, I’ve been putting together a draft governance framework to cover how the domain name is managed. It’s been a quiet work-in-progress for some time and I’ll be getting in touch with maintainers of specific projects soon. Once I’ve had a round of reviews, I’ll be sharing it more widely and posting it here!


Describing the MISO stack at Entrepreneur First

I’m speaking to the Entrepreneur First cohort this morning about the future of resilient, distributed systems and what I’m working on to get us there. Firstly, I’m describing the kinds of solutions we have today, the great things they offer developers as well as the issues they create. This leads into the new toolstack we’re creating, called the MISO stack, and the benefits and trade-offs.

I’m spending more time talking about Mirage OS – the ‘M’ in the MISO stack – because the workflow we’ve developed here underpins how we build, deploy and maintain such applications at scale. As an example of how things can work, I point at my earlier post on how to go from jekyll to unikernel. This uses TravisCI to do all the hard work and all the relevant artefacts, including the final VM, can be version-controlled through Git. I actually deployed this post while the audience was watching, so that I could point at the build logs.

One of the use cases for our toolstack is to make it possible for individuals to create and maintain their own piece of the cloud, a project called Nymote, which will also make it possible to run the Internet of my Things. This in turn is related to other things I’m working on, such as the Hub of All Things (HAT) and the User Centric Networking projects.

This is an exciting summer for all the tools we’re putting together: we’ve recently announced Mirage OS v2.0, which now works on ARM, we’re going full steam ahead with Irmin, and we’re working hard on improvements to the OCaml ecosystem. It’s a great time to explore these projects, learn a new language and build awesome stuff.

Mirage on ARM


Winning Seedhack 5.0

[This story was also a guest post on the Seedcamp blog.]

A couple of weeks ago, I took part in Seedhack 5.0, on the theme of life-logging. My team were overall winners with Clarity, our calendar assistant app. This post captures my experiences of what happened over the weekend, the process of how we built the app and the main things I learned. You’ll find out what Clarity is at the end – just like we did.

Meeting people and pitches on Friday

The weekend began with some information on the APIs available to us, which was followed by pizza and mingling with everyone. I spoke to a few people about what they were working on and the technologies they were used to. It was good to find a mixture of experience and I was specifically looking for folks with an interest in functional programming – that’s how I first met Vlad over Twitter.

Tweets with Vlad

After pizza, those people with ideas, even if not fully formed, were invited to share them with the room. I came in to Seedhack with specific thoughts on the kind of things I wanted to work on so I spoke about one of those.

Pitching Personal Clouds

I described the problem of silos, poor interoperability and how all the life-logging data should really be owned by the user. That would allow third parties to request access and provide way more value to users, while maintaining privacy and security. Building a centralised service makes a lot of sense in the first instance but what’s more disruptive than eschewing the current model of yet-another-silo and putting the user in control? If that sounds familiar, it’s because I’m trying to solve these problems already.

I’m working on an open source toolstack for building distributed systems that I call the MISO stack, which is analogous to the LAMP stack of old but is based on Mirage OS. With this stack, I’m putting together a system to help people create and run their own little piece of the cloud – Nymote. The introductory post and my post on ‘The Internet of my Things’ have more detail on why I’m working on this.

For systems like this to be viable, we must be willing to trust them with the core applications of Email, Contacts and Calendar. Without advanced and robust options for running these, it’s unlikely that anyone (including me) would want to switch away from the current providers. Of these three applications, I decided to talk about the contact management solution, since I happen to have wireframes and thought it might be simpler to implement something over the weekend.

There was quite a bit of interest in the overall concept but what really piqued my curiosity was that someone else presented some thoughts around Calendars and analytics. After a brief chat, we decided to join forces and tackle the problems of Calendar management. The team had a great mix of experience from product to design and several of them had worked together before.

The Clarity Team

Amir - Product - Worked in several startups, product & programme management experience, currently a Post Doc at Cambridge University Computer Science dept.

River - Product - Programme Manager at Dotforge Accelerator and lead organiser of StartupBus UK 2014.

Mani - Developer - Freelance web dev (CMS and APIs), winner of multiple hackathons, currently studying at Sheffield University.

Vlad - Developer - Started programming long ago and attended many competitions and hackathons. Currently studying Computer Science at the University of Southampton.

João - Developer - PhD in Theoretical Physics, Python enthusiast and moving into data science, currently doing data analysis at Potential.

Jeremy - Designer - Freelance UI/UX Designer, hackathon enthusiast, currently studying medicine at Sheffield University.

Thick fog and ambiguity on Friday evening

We had all decided to work together and we knew it would be on the problem of calendar management and analytics. We were fired up but it quickly became obvious that was all we knew.

The next four to five hours were spent discussing the rough shape of what we were going to build, what specific problem we thought we were solving and whether there were enough people with such a problem to care.

We had a look at each other’s calendars and talked about how we each use them and the things we like and dislike about them. For example, I have around nine calendars and I curate them carefully, adding contextual information and sometimes even correcting old events to reflect what happened. We even bounced around the idea of the contact management app several times as well as a few other ideas that came up during the discussions.

These conversations took a while and it seemed like we were going around in circles. Despite this, it didn’t feel particularly frustrating. I realised that the same sticking point was coming up repeatedly because we were forcing ourselves to imagine a prototypical customer and the problems they might have. This was never going to work since we wouldn’t have time to go and find such people and do basic customer development. Far better to constrain the problem to something we experience so that we can look to ourselves for initial customer feedback. Once we did this, things seemed to go a little faster and taking breaks for food helped us keep our energy up.

Mani grabbing falafel for dinner

There were a few occasions where I looked around the room and saw other teams with their heads down, headphones in, and bashing away at keyboards – we hadn’t even figured out what we were doing yet. Despite this, it was a great exercise because it allowed all of us to get a feel for what aspects each of us cared about most and it helped us form some kind of shared language for the product.

The only outcome from this first evening was an outline but it was an important one. It distilled what we were going to work on and its components. We did this so we’d have a clear starting point the next morning and could get going quickly. Here’s a paraphrased version of what we sent ourselves.

```text
Smart Calendar App.

We are collecting data from mobile and from desktop.
- Mobile includes:
  - Call logs
  - Location (if we can)
  - Messages
  - What application is being used and when
- Desktop includes:
  - What application is active and timestamps of it
  - Location?
  - Git Logs … ?
- Taking logs of their existing calendars.

Working out what people are doing.
- Extrapolating info from active application (e.g. browser page)

Present info back via calendar UI
- Need to turn all this information into webcal events

Useful info we want
- Time spent travelling (how much?)
- Time in meetings
- Time on phone
- Who the meetings/calls were with
- Relevant docs/emails these are linked with
- Use labels
```
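The ‘useful info’ part of that outline boils down to grouping logged events by category and summing their durations. A toy sketch of that aggregation, with invented event data (this is not the code we actually wrote over the weekend):

```python
from collections import defaultdict

# Hypothetical logged events: (category, counterpart, minutes)
events = [
    ("meeting", "Lee", 90),
    ("phone", "Lee", 12),
    ("phone", "Lee", 8),
    ("travel", None, 45),
    ("meeting", "Sam", 60),
]

def summarise(events):
    """Total minutes spent per category across all logged events."""
    totals = defaultdict(int)
    for category, _who, minutes in events:
        totals[category] += minutes
    return dict(totals)

print(summarise(events))  # {'meeting': 150, 'phone': 20, 'travel': 45}
```

The same grouping applied to the counterpart field instead of the category gives the ‘who the meetings/calls were with’ view.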

Then it was time for some late night snacks.

Midnight snacks

With one challenge out of the way, the next one was finding somewhere to sleep for a few hours (Campus was closing from 01:30). I had nothing planned but luckily for me, a couple of team members had booked a hotel room for the night. The minor complication was that we had to first find the hotel and then somehow get six people into a room meant for two – without the night manager kicking us out. That’s a whole other story, but it suffices to say that James Bond has nothing to worry about.

Rays of light and clearing haze on Saturday

After some card games and a few hours of sleep, we headed back to Campus and during this walk, we came up with ‘Clarity’ as the name of the application.

Once we arrived, development began. River volunteered his digital assets to the cause (i.e. his whole Google life). Mani worked on the Android app, Vlad on a Chrome extension, with João pulling in the Google Apps data and combining it with data from the various platform apps. Jeremy worked on the front-end of the site, while River and I began wireframing the UI and user flow through the site.

Once the development was well underway, I realised how superfluous ‘the business guys’ can feel. It would have been easy to simply sit there and let everyone get on with it, but there were other things River and I did while the developers were writing code.

Wireframing - We spent time thinking about what a user would actually see and engage with once they visited the Clarity site. We made a lot of sketches on paper and this was helpful because communication with the team was smoother with something to guide the discussion. It also helped to inform the design work and gave River and me something to show to potential users.

Talk to people - aka early customer development. We already knew that we were our own customers but it was useful to talk to other people for two reasons: firstly, to get an idea of how they use their calendars and whether they have similar problems to us, and secondly, to see what thoughts we prompt when we describe our solution (or show our wireframes). This led to useful information on how we should refine the product and position ourselves against perceived competition.

Refine the product - Going through the wireframing and talking to people helped us come up with several new ideas for how to display the data back to users. Some of these seemed great at the time, but after showing paper sketches to other people we realised customers didn’t care about certain things, so we discarded them, even though Jeremy had already done the work of putting together the UI for them (sorry, Jeremy!).

Examine the competition - After we described what we were working on, a few people mentioned potential competitors and asked how we were different. Initially, we didn’t know much about these companies but it was something we could explore while development was underway, to consider our positioning.

Remind people to regroup - Every few hours, we would make sure everyone caught up with each other. We would check that things were going well, share what we’d learned from talking to people and discuss any technical problems and possible workarounds – including changing the scope of the product. The discussions we’d had on Friday meant that we spent less time debating when these questions came up during the weekend.

Work on the pitch - River and I began working on the pitch from just after Saturday lunchtime and kept building on it until Sunday afternoon. This, combined with showing our sketches to people, made it much easier to think about the story we wanted to tell the audience. In turn, that made it easier to think about the product development that had to be completed by Sunday. Especially in terms of a kick-ass demo.

Popcorn and candyfloss

Bright sunshine and achieving Clarity on Sunday afternoon

Development carried on through the night and we took a break to watch some sports via River’s laptop – at this point Clarity was actually logging this and other events. The next morning, we reiterated what we needed to get done for the demo and I was pretty ruthless about practising the pitch. River and I practised endlessly while everyone else made sure the technical pieces were working smoothly. We had a lot of moving parts and making sure they were glued together seamlessly was important. At this point, we knew what Clarity was and how to tell its story.

Introducing Clarity

We all have calendars and we put a lot of time and effort into managing them but get very little back. A simple glance at your calendar for the past month shows a sea of events but gives no idea where your time actually went. We believe your calendar should be working harder for you. Your calendar should give you clarity.


Over the course of the weekend, we built tools that can go through your calendar and understand the events you’re involved in and tie them back to the relevant emails, documents and people. With a suite of software that spans your GDrive, Chrome and Android, we’re able to combine your calendars with rich, contextual information so you can really understand what your time is being spent on.

Clarity summary view

We built this system and plugged it into River’s digital life. If we take a look at River’s summary for the last month, we see that he’s spent around 32 hours in meetings in London, despite living in Sheffield. We can also see that he’s spent an hour on the phone with someone called Lee, but that all of those calls were short. The next person also totalled an hour on the phone, but across only 3 calls. Already, River has learned something about the people he interacts with most and how. We can also drill down further and see all this activity presented in a calendar view, except this now represents where his time did go, rather than where he thought it went. For example, he’s most active via text message between 4pm and 5pm during the week, and we can see that he spent a few hours watching sports last night.
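At its core, the summary view boils down to grouping event records by kind and contact, then totalling durations and counts. A minimal sketch of that aggregation, using made-up event data (the record shape and names are assumptions for illustration, not Clarity’s actual schema):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical records, standing in for data pulled from a calendar,
# phone log and browser history.
events = [
    {"kind": "meeting", "who": "ACME", "start": datetime(2014, 5, 6, 10),
     "duration": timedelta(hours=2)},
    {"kind": "call", "who": "Lee", "start": datetime(2014, 5, 7, 16),
     "duration": timedelta(minutes=5)},
    {"kind": "call", "who": "Lee", "start": datetime(2014, 5, 8, 16),
     "duration": timedelta(minutes=10)},
]

def summarise(events):
    """Total time and event count per (kind, person)."""
    totals = defaultdict(lambda: [timedelta(), 0])
    for e in events:
        key = (e["kind"], e["who"])
        totals[key][0] += e["duration"]
        totals[key][1] += 1
    return {k: (t, n) for k, (t, n) in totals.items()}

total, count = summarise(events)[("call", "Lee")]
print(total, count)  # 0:15:00 2
```

The same grouping, keyed by hour of day instead of contact, would give the “most active via text message between 4pm and 5pm” style of insight.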

Clarity can do much more than provide an accurate retrospective view of your time. Since it interacts with all the important components of your life, like your phone and laptop, it can even perform helpful actions for you. For example, say I have a meeting set up with River but I want to reschedule it. I simply send a text to him as I normally would, suggesting that we move it to another day. Clarity can pick up that message and is smart enough to understand its intent, find the relevant calendar event and reschedule it automatically. River doesn’t have to lift a finger and his diary is always up to date. It’s easy to imagine a future where we might never have to add or edit events ourselves.
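The rescheduling flow amounts to three steps: detect the intent in a message, work out the target day, and update the calendar event. A toy sketch of the first two steps, assuming a simple “move it to <weekday>” phrasing (the regex parsing here is a stand-in for whatever message understanding Clarity actually used):

```python
import re
from datetime import date, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday", "friday",
            "saturday", "sunday"]

def parse_reschedule(text):
    """Detect a 'move it to <weekday>' intent in a text message."""
    m = re.search(r"move (?:it|our meeting) to (\w+)", text.lower())
    if m and m.group(1) in WEEKDAYS:
        return m.group(1)
    return None

def next_weekday(today, name):
    """Date of the next occurrence of the named weekday after today."""
    delta = (WEEKDAYS.index(name) - today.weekday()) % 7 or 7
    return today + timedelta(days=delta)

target = parse_reschedule("Hey, can we move it to Thursday instead?")
if target:
    # date(2014, 5, 5) is a Monday; the next Thursday is 8 May.
    new_date = next_weekday(date(2014, 5, 5), target)
    print(new_date)  # 2014-05-08
```

The remaining step, finding the matching event and patching its start time via the calendar API, is where the system would write the change back so the diary stays up to date without anyone lifting a finger.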

Clarity is a smart calendar assistant that understands the context around you, provides you with insight into your life and helps you seamlessly organise your future. You can find out more at

Judging and announcements

After the pitches, we met a lot of people who had the same problems as we did with calendar management. Several offered to be beta testers. Once the judges had deliberated, the prizes were announced and Clarity was declared the overall winner of Seedhack 5.0!

Things I learned

Looking back, there were a lot of things we did which I think helped us get to the winning slot, so I thought I’d summarise them here.

Think first - We spent time up front to define what we were going to work on. I thought this step was crucial as it meant we all understood the shape of the problem but also the areas that each of us was interested in. That helped later in the weekend as we could refer back to things we discussed on Friday.

Move fast - Once we’d figured out what we were doing, it was a matter of building the software to gather and crunch the data. A lot of this was done in parallel, as Jeremy worked on the front-end UI while Mani, Joao and Vlad took care of the data aggregation, analytics and platform products. Don’t be afraid to throw away ideas if you find that they don’t work for people, and remember it’s a hackathon (i.e. gruesome hacks are the norm).

Remember the demo - At some point you’re going to be forced to stand up and talk about what you’ve done. We started thinking about this from Saturday lunchtime and sketched out the slides and the elements of the app we wanted to show. This helped inform the UI and technology that we were building and the pitch never felt like it was rushed.

Practice, practice, practice - We were told we’d have 3 minutes to pitch/demo and maybe a few additional questions. I made sure River and I practised repeatedly to get our time down to 3 minutes and ensure we were getting across everything we wanted to. The important thing with such a short amount of time is that we were forced to cut things out as well as emphasise the main points. It was a ruthless exercise in saying ‘no’. During the actual pitches, we realised everyone was taking longer (without repercussions), so I added an extra 30 seconds to cover the potential market sizes and business models.

Leave artefacts - After the pitches, we knew that we had to take the site down. It was built at high speed and in a way that ended up exposing someone’s data to the internet at large (thanks, River!). It might have been somewhat easier to build something using faked data that we could happily share the URL to and leave online. On the other hand, our demo would not have been half as compelling if we weren’t running it real-time on live data.

Looking to the future

This is a product we all want to use and the team is interested in taking this forward. There are a lot of things to think about and many things we would build differently so we’re discussing the next steps. For example, there are likely ways to empower the end-users to control their data and give them more flexibility, even though the work at the hackathon was already quite impressive. Given how well Seedhack went, you might even see us at Seedcamp Week later in the year. If you think this is something you’d like to work on with us, do get in touch!

To wrap things up, here’s a victory selfie!

Victory Selfie

Share / Comment