Free Software for Task Management

I am perpetually trying out online task management tools. My never-ending quest is to tame the massive sea of things I should be doing at any given moment, both making sure that important tasks don’t get lost in the mix, and to extract a reduction more closely approximating “the most important thing to accomplish right now”. My two favorites at the moment are Thymer and Rypple, but neither is perfect.

I like Thymer’s simple task creation, twitter-like tagging of tasks, and the smooth drag-and-drop motion for prioritization. But, at the end of the day, it’s just a massive web page of “things I should be doing” and gives me no assistance in taming the beast. I have to manually prioritize each task, and if I want the priorities assigned to tasks to be at all relevant, I have to go on manually gardening them every day. And, while task creation is as easy as tweeting, task editing is a clunky collection of buttons and drop-down menus. The tags are handy in small numbers (and projects really are just tags with a slightly different display), but any more than about 10 unique tags/projects across my whole data set becomes a jumble at the top of the screen and not at all helpful in finding anything. Thymer offers some reporting features, but I never found them particularly useful.

I like Rypple’s social features: it has a good take on sharing thanks and feedback, and the 1:1 pages (a collection of tasks you share with another person) are incredibly useful for weekly meetings with co-workers. I like the organization of tasks by goal rather than by project; it encourages grouping tasks into larger sequences toward an overall purpose. But, I found that I still needed some goals that were really just projects or a collection of semi-related tasks, so the construct was a little artificial. Rypple offers a tagging feature, but tag links don’t do anything useful (like take you to a page listing tasks with the same tag), and a task can’t live in more than one goal at the same time, so there isn’t really any good way to pull up a group of cross-cutting tasks. And, Rypple also gives me little help in managing the mass, though it has drag-and-drop priority setting similar to Thymer’s.

The worst thing about both of them is that they’re neither open source nor open data. Philosophical considerations aside, this is an immediate practical problem, since my access to Rypple was only a free trial which is now ending.  I started with the best intentions of only putting in a few things to try it out, but it quickly became an integrated part of my working life, and I now have well over a hundred little individual blobs of data (tasks) that I’m tracking there. Because it’s not open source, I can’t fire up my own instance of it. And because it’s not open data, I can’t get a dump of my tasks. So, I’ll have to manually copy every bit to some other task management system. Which means I’m in the market for a new task management tool, with a very immediate enlightened self-interest in picking something that’s both open source and open data.

Yesterday, I tried out Todo.txt. The biggest appeal is the simple open data format, so simple that it would work just fine as a manually edited plain text file. But, it offers a GPL-licensed command-line client for easier task creation, searching, sorting, and grouping by project, priority, or “context” (a notion from “Getting Things Done”). It also offers a GPL-licensed Android client, which is in the process of being ported to the iPhone. On the downside, it doesn’t offer any collaborative features, so I can manage my own tasks, but can’t share tasks with others, or even provide visibility to others on a subset of my tasks or projects. And while creating tasks on the command-line is clean and simple, actually viewing/managing my 100+ tasks on the command-line (or Android client) feels a bit like viewing an elephant through a pinhole. It doesn’t have a desktop GUI client, though the wiki offers some suggestions on ways to integrate the simple plain text format into other desktop tools like Conky. The results weren’t thrilling (not really any better than the command-line), but they did give me an idea: how about a Unity Todo Lens?
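The format really is simple enough to parse in a few lines. Here’s a rough sketch in Python (my own quick regexes, not the project’s reference implementation): a line may start with “x ” for a completed task, then an optional “(A) ” priority, and projects and contexts are just +word and @word tokens in the text.

```python
import re

def parse_task(line):
    """Parse a single todo.txt line into its parts (minimal sketch)."""
    task = {"done": False, "priority": None}
    if line.startswith("x "):             # completed tasks start with "x "
        task["done"] = True
        line = line[2:]
    m = re.match(r"\(([A-Z])\) ", line)   # a priority looks like "(A) "
    if m:
        task["priority"] = m.group(1)
        line = line[m.end():]
    task["projects"] = re.findall(r"\+(\S+)", line)  # +Project tags
    task["contexts"] = re.findall(r"@(\S+)", line)   # @context tags
    task["text"] = line.strip()
    return task

task = parse_task("(A) Write blog post +website @computer")
```

That single dictionary per line is all a client (or a Lens) really needs to group by project, priority, or context.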

I spent a few hours hacking on that, parsing the Todo.txt format in Vala and displaying the results in a Unity Lens with a general search box and filters for Project, Priority, and Context. I’m pleased with the result for a short experiment, but there are some drawbacks. The Lens really wanted my filters to be statically compiled in advance, while I wanted to create the filter sets on-the-fly from the Todo.txt file (i.e. let me filter by Projects that are in my tasks, not for some list of projects determined in advance). I may be able to hack around that with more time or a Python Lens instead of Vala. Also, a Unity Lens is a great interface for searching tasks, but not great for managing tasks. There’s only one “action hook” for a task, when you click on the icon/title. You can make that one action do anything you want, but it’s still only one action. I could make that one action mark a task as done (that seems most logical), but I’d still have to go back to the command-line to add new tasks, and edit task descriptions, priorities, projects, contexts, etc… Which takes me back to the original problem that the command-line isn’t a great interface for those tasks. What I really want is a slick, simple GUI client that the Lens could launch whenever a task is clicked in the search interface. Possibly a project for another weekend.

That’s all the time I have to work on the idea right now. While I leave it sitting for a bit, any suggestions on free software+open data task management tools you love? Or hate?

Appreciation for Kees Cook

Today is Ubuntu Community Appreciation Day, a new tradition in the Ubuntu community started by Ahmed Shams El-Deen of the Ubuntu Egypt LoCo. I’d like to take this opportunity to show appreciation for Kees Cook, who many years ago took time out of a busy conference to teach me how to build my first .deb package. That welcoming spirit — that patient recognition that every green newbie has the potential to become a future valuable contributor — is a key part of community strength and growth. It’s a pattern I emulate, and a gift I repay, by welcoming and mentoring other new developers. Over the years, Kees has demonstrated sane, sensible, calm, and wise technical leadership at OSDL (now known as The Linux Foundation), on the Ubuntu security team, and more recently on the Ubuntu Technical Board. There are many reasons I have confidence in the future of Ubuntu, and he is one of them. Thanks, Kees!

I’d like to thank the entire Ubuntu community for renewing my faith in the humanity of free software. When I stumbled on Ubuntu all those years ago, I had already been working in free software for what felt like a century, and was…well, tired. Your joy and delight in bringing free software to the world inspired me, and restored my passion for contributing. The heart and soul of free software is people like you, changing the world for the better every day. Thank you all!

Mythbusters – UEFI and Linux (Part 2)

Following up on my earlier post on UEFI and Linux, I got access to an identical system to the one with the original problem (an HP S5-1110) this week to do some install testing with various scenarios:

1) When I run through the standard install process with the Kubuntu 11.10 amd64 CD, I get exactly the same problem as James: I end up with a machine that has Kubuntu installed on a partition, but will still only boot into Windows. (I also get an explicit error message during the install saying “The ‘grub-efi’ package failed to install into /target/. Without the GRUB boot loader, the installed system will not boot.”)

2) Installing from the Kubuntu CD and wiping the HD has the same problem as (1), and the same error message.

3) Installing from the Ubuntu 11.10 amd64 CD into the same dual-boot configuration as (1) also won’t boot the Ubuntu partition, but it gives no explicit error message about the grub install failure.

4) When I install from the Ubuntu 11.10 amd64 CD and completely wipe the HD and replace it with Ubuntu, the install works perfectly, and the machine boots into Ubuntu afterwards with no problems. I can also install the ‘kubuntu-desktop’ package on the working system, and get a working Kubuntu desktop. This tells me that we’re not dealing with a UEFI or hardware compatibility issue here, just an issue with partitioning and the bootloader. Which is what James and I suspected last week, but it’s nice to have explicit confirmation (without wiping his friend’s machine).

5) Back to the Windows/Ubuntu dual-boot scenario in (3). Installing EasyBCD doesn’t quite work. It does give me a prompt in the “Windows Boot Manager” to choose between Windows and Ubuntu, but when I choose Ubuntu it just takes me to the grub prompt. That’s progress, anyway. At the grub prompt, I type:

grub> root (hd0,4)
grub> kernel /boot/vmlinuz-3.0.0-12-generic root=/dev/sda5
grub> initrd /boot/initrd.img-3.0.0-12-generic
grub> boot

And, it boots fine from the Ubuntu partition.

That’s all the time I’ve had so far. A few observations about the system as it shipped from the factory: Windows boots using a custom bootloader, the Windows Boot Manager, which bypasses UEFI. In the dual-boot configuration that doesn’t work, the UEFI “BIOS” configuration and the efibootmgr command-line utility both recognize that the machine has a UEFI boot option for “ubuntu”, but choosing that option during startup still diverts straight to Windows. The machine didn’t ship with GPT partitions (one of the advantages of UEFI); instead it shipped with an old-fashioned MBR partition scheme (limited to 4 primary partitions). The working Ubuntu configuration (total machine wipe) does set up proper GPT partitions.
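For the curious, you can tell the two partition schemes apart by looking at the first couple of sectors of the disk: a valid MBR ends with the 0x55 0xAA boot signature, and a GPT disk additionally carries an “EFI PART” header in the second sector (behind a protective MBR). A toy sketch in Python, assuming 512-byte sectors:

```python
def partition_scheme(disk_bytes):
    """Guess the partition scheme from the first two 512-byte sectors."""
    if len(disk_bytes) < 1024 or disk_bytes[510:512] != b"\x55\xaa":
        return "unpartitioned"        # no valid MBR boot signature at all
    if disk_bytes[512:520] == b"EFI PART":
        return "gpt"                  # GPT header signature lives at LBA 1
    return "mbr"                      # boot signature but no GPT header
```

In real life you’d just run `parted -l` or `gdisk -l`, but the byte-level check is a nice reminder that GPT disks still carry a protective MBR for backwards compatibility.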

Quixperiment: Ubuntu and iPod

I have an old iPod that I occasionally use on car trips, but haven’t really modified in years (it mostly sits on a shelf). This morning I decided to play around a bit with hooking it up to my main Ubuntu desktop. I found a good list of options for managing an iPod in Linux on Wikipedia, and decided to try out both gtkpod and Rhythmbox. Both seemed to work pretty well for interfacing with the iPod: not a super-shiny interface, but usable. A slight advantage goes to gtkpod, because it displayed my Smart Playlists, while Rhythmbox only displayed the static ones. Between the two, I can imagine using Rhythmbox as my primary music player, but would probably only use gtkpod for directly managing the iPod.

I copied my iPod music library over to Rhythmbox’s local library, just to try it out. It copied 3,249 tracks out of the 3,359 that were on my iPod. I got a few errors about duplicate files during the copy, all with generic file names like “01 – Track 01.mp3”. There were ~4-5 CDs like this, each with ~19-25 tracks, so that seems to account for the missing 110 tracks, though I didn’t keep exact notes, or do an exact comparison to see which files were missed. I’m guessing a handful of CDs I had loaded on the iPod were ripped with generic file names rather than specific titles, and that the iPod was separating them by directory structure, while Rhythmbox was loading them all in one directory so the file names conflicted. Just a guess, I’ll look into it more later if it ends up being useful.
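My guess about the duplicates is easy to simulate: if tracks that the iPod kept apart by directory get flattened into a single folder, generic file names collide. A toy illustration in Python (the album paths are made up for the example):

```python
import os

def flatten(paths):
    """Simulate copying files into one folder keyed by basename.
    Returns (copied, skipped): later files whose basename was already
    seen are skipped as "duplicates", just like my missing tracks."""
    seen, skipped = {}, []
    for path in paths:
        name = os.path.basename(path)
        if name in seen:
            skipped.append(path)   # same generic name, different song
        else:
            seen[name] = path
    return list(seen.values()), skipped

copied, skipped = flatten([
    "Album_A/01 - Track 01.mp3",
    "Album_B/01 - Track 01.mp3",  # different song, same generic name
    "Album_A/02 - Track 02.mp3",
])
```

Five CDs of twenty-odd generically named tracks, minus one surviving copy of each name, lands right in the neighborhood of my 110 missing files.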

Things I wish for in Rhythmbox:

  • The ability to copy a playlist from the iPod to the local music library, instead of recreating it.
  • The ability to synchronize my music and playlists between different computers/devices (will look into Ubuntu One for this later, it has some relevant features, though possibly not yet the full user journey I’m looking for).
  • A way to split up my local library into Music, Audiobooks, and Language Learning. Shuffle mode is pretty useless when it brings up random chapters of “The Hitchhiker’s Guide to the Galaxy” or snippets of Afrikaans language drills. I found suggestions that it’s possible to configure multiple Libraries for Rhythmbox in gconf even though it’s not displayed in the GUI, but there was no ‘library_locations’ key in /apps/rhythmbox, so I’ll have to poke around a bit more later to see if it’s still a valid key in current versions of Rhythmbox. (Separating libraries is a problem on the iPod itself, so this is just a same-old existing irritation repeated in a new piece of software.)
  • A shinier user interface, that makes it easier to find artists, albums, or songs I want to listen to.
  • More informative error messages when failing to copy files.
  • I found one work-in-progress on integration between Rhythmbox and the Music Lens, I’d like to see that complete.

Mythbusters – UEFI and Linux

A recent blog post about a user who was having trouble installing Ubuntu on an HP machine sparked off an urban legend that UEFI secure boot is blocking installs of Linux. To calm FUD with facts: the secure boot feature hasn’t been implemented and shipped on any hardware yet. It was introduced in version 2.3.1 of the UEFI specification, which was released in April 2011. Hardware with secure boot will start shipping next year.

It’s important to distinguish between UEFI in general and the new secure boot feature. UEFI has been around for a while, starting its life as the “Intel Boot Initiative” in the late ’90s. It has a number of advantages over old BIOS, including substantially faster boot times, the ability to boot from a drive larger than 2.2 TB, and the ability to handle more than 4 partitions on a drive. The UEFI specification is developed by the members of the UEFI Forum, a non-profit trade organization with various levels of free and paid memberships. UEFI is not a problem for Linux. At the UEFI Plugfest in Taipei last week, Alex Hung (Canonical) tested Ubuntu 11.10 on a number of machines, with success even on pre-release chipsets. The few failures seemed to be related to displays, and not particularly to UEFI.

The secure boot feature of UEFI is a concern for Linux, but not because of the specification. The features outlined in the 2.3.1 specification are general enough to easily accommodate the needs of Linux. But, within the range of possible implementations from that specification, some alternatives could cause problems for Linux. For full details, I recommend reading the two whitepapers released by Canonical and Red Hat and by The Linux Foundation. The short version is that secure boot uses signed code in the boot path in much the same way you might use a GPG signed email message: to verify that it came from someone you know and trust. The beneficial ways of implementing this feature allow the individual (or administrator) who owns the machine to add new keys to their list, so they get to choose who to trust and who not to trust. The harmful ways of implementing this feature don’t allow the user to change the keys, or disable the secure boot feature, which means they can’t boot anything that isn’t explicitly approved by the hardware manufacturer (or OS vendor). This would mean users couldn’t just download and install any old image of Debian, Fedora, Red Hat, SuSE, Ubuntu, etc. So, there’s real potential for a future problem here, but we’re not there yet. At this point, it’s a matter of encouraging the hardware companies to choose the beneficial path.
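To make the key-list idea concrete, here’s a toy model in Python. It’s pure illustration, nothing like real UEFI code (which uses X.509 certificates, not HMAC): the firmware only boots images signed by a trusted key, and the beneficial vs. harmful implementations differ only in whether the owner may enroll new keys.

```python
import hmac
import hashlib

class Firmware:
    """Toy secure-boot model: boots only images signed by a trusted key."""
    def __init__(self, trusted_keys, owner_can_enroll):
        self.trusted = list(trusted_keys)
        self.owner_can_enroll = owner_can_enroll

    def enroll(self, key):
        """Beneficial implementations let the owner add keys; harmful
        ones lock the key list down to the vendor's choices."""
        if not self.owner_can_enroll:
            raise PermissionError("key list is locked by the vendor")
        self.trusted.append(key)

    def boot(self, image, signature):
        """Boot succeeds if any trusted key verifies the signature."""
        return any(
            hmac.compare_digest(
                hmac.new(key, image, hashlib.sha256).digest(), signature)
            for key in self.trusted
        )

def sign(key, image):
    return hmac.new(key, image, hashlib.sha256).digest()

vendor_key, my_key = b"vendor-secret", b"my-secret"
good = Firmware([vendor_key], owner_can_enroll=True)
good.enroll(my_key)                # I choose who to trust
```

With `owner_can_enroll=False`, the same `boot()` call on a self-signed Linux image fails, and there’s no way for the user to fix it. That locked-down variant is exactly the implementation the whitepapers warn about.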

I’ve been chatting with the Ubuntu user who had the install problem, to see if we can find the real bug. It’s a friend’s machine rather than his own, so he doesn’t have easy access to it. I’ve arranged to get access to a similar machine next week to play with it. I’ll post back here if I find anything useful or interesting.

UDS-P Architecture Preview

Today we kick off the week-long Ubuntu Developer Summit, focused on the upcoming 12.04 release, “Precise Pangolin”, shaping the plans for the next 6 months, and breaking the goals into a manageable series of work items. With more than 20 rooms running simultaneous sessions, it’s a challenge to decide what to participate in, whether you’re here in Orlando or following remotely. As we dive in, it’s useful to take a step back and set the sea of sessions into the overall architecture and vision for Ubuntu, to trace the structure of threads running through the pattern.

The Ubuntu project is a tightly integrated collaboration between a community and a company, both focused on making free software approachable and easily available to millions of users. From the first inception of Ubuntu, I’ve always considered this collaboration to be its greatest strength. It’s a beautiful marriage of passionate dedication to software freedom and gravitas in the industry to help build visibility, partnerships, and self-sustaining growth. But, like all marriages, keeping that relationship healthy is an ongoing process, something you do a little bit every day. A few sessions to look out for here are renewing our commitment to encourage each other by showing appreciation for the hard work of all contributors [Monday][*] (including developers [Friday]), the standard Debian health check [Monday], embracing the cultural differences between designers and developers [Tuesday] while building up community participation in user experience and design [Wednesday], a more structured approach to mentoring developers [Wednesday & Friday], and how to welcome a new generation of developers who focus on application development [Monday, Tuesday & Wednesday].

12.04 is a Long Term Support (LTS) release, which means that both the server and desktop releases will be supported with maintenance and security updates for 5 years, instead of the usual 18 months. Ubuntu anticipates that more conservative users will upgrade from one LTS to the next instead of following the “latest and greatest” every 6 months. Because of longer support and conservative upgrades, LTS releases always focus more on quality, polish, and solidifying the user experience, than on introducing new features. A significant set of sessions build on this theme, including tracking high-priority bugs so they get resolved [Wednesday], improving the ISO testing tracker [Friday], process for toolchain stabilization [Tuesday], automated testing of package dependencies [Wednesday], automated regression testing [Thursday], tools for tracking archive problems like FTBFS, NBS, etc, so they can be rapidly fixed [Tuesday], accessibility polish [Thursday], printer setup tool refinements to contribute upstream to GNOME [Thursday], plans for the Lucid->Precise LTS upgrade [Monday], ongoing maintenance in desktop boot speed [Thursday], and automated testing of complex server deployments [Friday].

The world is moving from personal computing on a single dedicated device to “multi-screen” computing across a collection of devices: not just a desktop for home or work, with a laptop or netbook for portability, but handheld devices like phones, tablets, media players, and ebook readers are part of our everyday life. Other dedicated-purpose pieces of technology, like televisions and cars, are getting smarter and smarter, growing into fully-fledged computing devices. The nirvana of this new world is an integrated computing experience where all our devices work together, share data and content where it’s relevant, share interaction models to ease the transition from one device to the next, and also keep appropriate distinctions for different form-factors and different contexts of use. The Linux kernel is an ideal core for this kind of integrated experience, supporting radical diversity in hardware architectures, and scaling smoothly all the way from resource-constrained phones to mega-muscle servers. Ubuntu has had a focus on consumer users from the very start, so it will come as no surprise that the Ubuntu project (both the Ubuntu community and Canonical as a participating company) have a strong interest in this space. Ubuntu Mobile started as early as 2007 (also “UME” or “Ubuntu MID”), and Kubuntu Mobile in 2010. Mark Shuttleworth mentioned in his opening keynote this morning that Canonical plans to invest in the multi-screen experience over the next few years. 
If you’re interested in this topic, some areas you might want to participate in are: ARM architecture support [Tuesday], ARM hardfloat [Friday], and ARM cross-compilation [Friday] (many small form-factor devices these days are ARM-based), application sandboxing [Wednesday], what’s ahead for the Software Center [Friday], an interactive session on design and user experience in free software applications [Monday], power consumption (relevant for battery life) [Wednesday], printing from the personal cloud [Thursday], the Qt embedded showcase [Tuesday], and potential for a Wayland tech preview [Tuesday]. Also keep an eye out for touch support, virtual keyboards, suspend/resume, and web apps, which don’t have dedicated sessions (yet), but will certainly be weaving through conversations this week.

On the server side, general trends are moving from a traditional view of system administration as “systems integration” to a DevOps view of “service orchestration”. This may sound like a game of buzz-word bingo, but it’s far, far more. What we’re looking at is a fundamental shift from managing increasingly complex deployments by throwing in more humans as slogging foot soldiers, to letting machines do the slogging so humans can focus on the parts of administration that require intelligence, deep understanding, and creative thinking. This industrial revolution is still at an infant stage, converting individual manually operated looms (servers) over to aggregated sets of looms all doing the same thing (configuration management), and automated operation and oversight of whole factories of diverse interacting pieces such as spinners, looms, cutting, and sewing (service orchestration). If this is your area of focus, it’s worth following the entire Server and Cloud track, but make sure not to miss sessions on Juju [multiple sessions Tuesday, Wednesday & Thursday], Orchestra [Thursday], OpenStack [Monday & Friday], LXC [Thursday], libvirt [Wednesday], cloud power management [Wednesday], and power-consumption testing for ARM [Thursday].

We’ve got an exciting week ahead, enjoy!

[*] UDS is a fast-paced and dynamic “unconference”, so the days, times, and rooms are subject to change. I’ve provided links to the blueprints for details and links to the day where the session is currently scheduled to help find each session in the schedule.

Ubuntu Brainstorm – Contacts Lens

It’s time for another round on the Ubuntu Technical Board’s review of the top ranked items on Ubuntu Brainstorm. This time I’m reviewing a brainstorm about a Unity Lens for contacts, together with Neil Patel from Canonical’s DX team. I volunteered to participate in this reply because I’d already been thinking about how to do it before I saw the brainstorm. I mainly keep my contacts in Gmail these days, for sync to my Android phone and tablet. But, with around 700 contacts, I find the Gmail interface pretty clunky.

The first key to a Contacts Lens is a standard format for contacts, and a path for contact synchronization. For the Oneiric release, coming up in a couple of weeks, Thunderbird is the new default email client, and as part of that, the Thunderbird developers (especially Mike Conley) added support to Thunderbird for the existing standard for contacts in GNOME, which is EDS (Evolution Data Server). Supporting EDS not only provides access to Evolution contacts from Thunderbird, which is important for users migrating from Evolution to Thunderbird, but also provides access to Gmail contacts and UbuntuOne contacts.

The second key is integrating EDS with a Unity Lens. The DX team isn’t working on a Contacts Lens for this in Oneiric or 12.04, but writing a lens is an accessible task for anyone with a little skill in Vala or Python, and is a great way to learn more about how Unity works. I’ll outline how to get started here; for more details, see the wiki documentation on lenses. The architecture of a Unity Lens is pretty simple, I’d even say elegant. Writing a Lens doesn’t involve any GUI code at all, you only write a small backend that supplies the data to be displayed. This means that all lenses work for both Unity and Unity 2D, without any changes.

A Lens is a daemon that talks over D-Bus. To build one, you start with 3 files. (Throughout this illustration, I’ll pretend we’re working on a Launchpad project called ‘unity-lens-contacts’.)  The first file is the Lens itself, and the core of that file is a few lines that create a Lens object from libunity, and then set some default properties for it. In Vala, that would be:

lens = new Unity.Lens("/net/launchpad/lens/contacts", "contacts");

To go along with the Lens, you need a ‘contacts.lens’ file to tell Unity where to find your daemon, and a ‘contacts.service’ file to register the D-Bus service that your Lens provides. The ‘contacts.lens’ file is installed in ‘/usr/share/unity/lenses/contacts/’, and looks like:

[Lens]
DBusName=net.launchpad.lens.contacts
DBusPath=/net/launchpad/lens/contacts
Name=Contacts
Description=A Lens to search contacts
SearchHint=Search Contacts

The ‘contacts.service’ file is installed in ‘/usr/share/dbus-1/services/’, and looks like:

[D-BUS Service]
Name=net.launchpad.lens.contacts
Exec=/usr/lib/unity-lens-contacts/unity-lens-contacts-daemon

A Lens daemon handles requests for data, but it doesn’t actually do the searching. For that, you need to define a Scope (if it helps, think about searching the ocean through a periscope). A Lens can have more than one Scope, and when it does, each Scope collects results from a different source, so the Lens can combine them into one full set of results. Start with one Scope for one datasource: EDS contacts. A Scope is just another libunity object, and creating one in Vala looks like:

scope = new Unity.Scope ("/net/launchpad/scope/edscontacts");

The search functionality goes in the ‘perform_search’ method. For EDS contacts, you could use the EDS APIs directly, but Neil recommends libfolks.
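The search logic itself is the easy part, whichever backend you use. Stripped of the libunity and libfolks plumbing, a Scope’s search handler boils down to a filter like this (a Python sketch, with a stubbed contact list standing in for what EDS/libfolks would hand back):

```python
def search_contacts(contacts, query):
    """Return contacts matching the query string, case-insensitively,
    against any field -- the kind of filter a Scope's search runs."""
    q = query.lower()
    return [c for c in contacts
            if any(q in str(value).lower() for value in c.values())]

# Stub data standing in for the contact records from EDS/libfolks.
contacts = [
    {"name": "Ada Lovelace", "email": "ada@example.org"},
    {"name": "Grace Hopper", "email": "grace@example.org"},
]
hits = search_contacts(contacts, "ada")
```

In the real Scope you’d push each hit into the results model with a URI, icon, and title, and Unity takes care of rendering them.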

A Scope can run locally inside the Lens daemon, in which case you add it directly to the Lens object:

lens.add_local_scope (scope);
Or, a Scope can run in a separate daemon, in which case you’ll also need an ‘edscontacts.scope’ file, so the Lens knows where to find the Scope. This file is installed in the same folder as ‘contacts.lens’ (‘/usr/share/unity/lenses/contacts/’), and looks like:

[Scope]
DBusName=net.launchpad.scope.edscontacts
DBusPath=/net/launchpad/scope/edscontacts
That’s the basic anatomy of a Lens, and enough to get a new project started. To see how it all fits together, there are several good examples of other lenses. The Unity Music Lens is the most relevant example for the Contacts Lens, and a fairly straightforward one to start looking at. For more complex examples, see the Applications or Files lenses. There’s also a Sample Lens, which is a working tutorial. And, once you get the core of the Contacts Lens working, and are looking for what to add next, read up more on Categories and Filters in the wiki.

If this sounds like an interesting project to you, drop us a line. You’ll find a lot of enthusiasm, and willingness to help out where you need it.

Harmony 1.0 Reflections

The month before the Harmony 1.0 release was quiet, and I was starting to wonder if anyone other than the drafting group was even paying attention any more. So, I was pleasantly surprised to see the posts start to appear last week after the Monday release. Some more positive, some more negative, but the most important thing right now is that people are engaging with Harmony, thinking through what the agreement templates mean, and how they fit in the general FLOSS ecosystem. So far I’ve read posts by: Bradley Kuhn, Dave Neary, Jon Corbet, Mark Webbink, Richard Fontana (part 1 & part 2), Simon Phipps, Stephen Walli and a Slashdot mention (glad for links in the comments if you come across others). I’ve observed a few common themes, so I thought it might be useful to take a step back and ponder through them.

Who’s the leader?

Various posts talk about Canonical leading the project, or Amanda Brock, or Mark Radcliffe, or me. For the first few months I was involved in Harmony, I thought it was led by the SFLC. The surface question seems to generally be whether the “leader” influenced Harmony in a direction against the poster’s philosophy. But there’s a deeper question here: How is it possible to have so many different ideas of who is leading Harmony? Shouldn’t it be immediately obvious who the leader is? The answer is that Harmony has no leader, and since there is no leader, any member of the group looks as much like a leader as any other member of the group. Simon put it best, calling Harmony “the work of a loose grouping of people”. If that seems strange, think about this: In a group made up of so many radically different philosophies, including several who publicly state that they only got involved to keep the process from going off the rails, who would we choose as leader and how would we choose them? Is there anyone in the group who could adequately represent all the diverse perspectives? Anyone who could stand as neutral arbiter? I deeply respect the group of FLOSS lawyers and advocates who have participated in Harmony, and have only grown to respect them more over the past year, but there’s not one person I’d put in that position. Honestly, it’d be some form of cruel and unusual punishment to give anyone the responsibility of herding the rest of us cats. A second thing to think about is whether a group like Harmony really needs a leader. Of the FLOSS lawyers and advocates you know, are any of them shy about expressing their opinions? Do any of them seem likely to hesitate to negotiate for fair representation of their philosophy? The Harmony process works because it’s a table of equals speaking their minds. It could not have worked any other way.

I don’t have confidence in the Harmony documents because of some blessing from some authoritative leader. There is no blessing, no authoritative leader, and if you get right down to it, no one in the world I would want to be that leader or give that blessing. I have confidence in the Harmony documents precisely and only because they were born out of the chaos of collaboration.

What’s the agenda?

As some posters/commenters have pointed out, the Harmony agenda is stated plainly on the project’s website: “We hope that our work will enable more people to contribute code, by reducing the cognitive cost and legal time of reviewing contribution agreements.” It talks about how contributor agreements are “one available tool out of many” and not “a necessary part of all FOSS legal strategies”. So, the public agenda is clear and obvious. But, there’s been speculation about a hidden agenda, or an agenda behind the agenda. Why is that? Why is the simple and obvious answer not enough? I would guess it’s because the posters are students of human nature and know how complex and multi-layered human motivations usually are. The public statement is so straight-forward that they figure there must be more to it, and so speculate on the possibilities of what “more” might be there. I can pretty much guarantee that there are other agendas floating among the Harmony participants. Probably at least 200+ completely different agendas for the 100+ participants, many of them entirely incompatible. I don’t know all the deep and varied motivations of everyone involved (though I know enough to be confident that none of the speculations I’ve read so far are accurate).

The statements on the public website are the set of things that the Harmony drafting group agrees on. We spent a great deal of (probably too much) time hashing those out early in the Harmony process. The reason the public statement is so straight-forward is not because it’s hiding some big secret agenda, it’s because Harmony is so diverse that the simple stated goals are all that we could 100% agree on. For students of human nature, this won’t be too surprising either. Collaboration is rarely about gathering people who agree on absolutely everything, it’s mostly about gathering people who agree on a small set of things, to make progress on that small set.

Do the Harmony agreement templates meet the stated goals for Harmony? I believe they do, but I hope you’ll read them and decide for yourself.

What do they mean?

Some of the discussion has ranged around what various parts of the agreement templates mean, or what it means that various things aren’t in the templates. I’ll touch on a few key points.

  • inbound=outbound: I won’t argue against the “inbound=outbound” strategy, because I think it’s a good one, and works well for many projects. But, I will argue that it’s not the only valid legal strategy within the realm of software freedom. There are reasons to choose inbound=outbound, and reasons not to choose it. If an inbound=outbound strategy is right for your project, you should use it. And if you do, I encourage you to take a look at the Linux Kernel Developer Certificate of Origin (DCO) as an example. In the short-term, part of my 1.0 todo list is to create a page about inbound=outbound for the Harmony website, and I’d welcome contributions to it. In the slightly-longer-term, even though drafting a DCO-style template was more than we could manage for 1.0, there was definite interest in the Harmony drafting group. I’d personally be interested in collaborating on something like this. If you would find a general DCO-style template useful, or would like to work on drafting one, please say so.
  • Copyright accumulation: There are various ways of talking about it, but one of the fundamental facts of FLOSS contribution is the process of assembling a collection of small bits of code from various sources into one big body of code. Some pieces are relatively independent of the whole, and could usefully stand alone. But for the most part, especially with maintenance work and refactoring over time, the contributions of any one person aren’t particularly useful separated from the whole. This is a beautiful thing about FLOSS, the whole is greater than a simple sum of parts. By copyright law, the default rule in place for all copyrightable work is “only the author has the right to distribute”, which means the project only has permission to distribute the contributors’ work because the contributors have granted that permission. Also by copyright law, the whole collection is itself a copyrightable work, owned by whoever assembled it (a “compilation”). These basic facts apply no matter what contribution policy a project chooses, no matter whether they’ve defined any policy at all. The contribution policy layers on top to more clearly define what kind of permission the contributors are granting the project and what they get in return. When you work on a project with no defined contribution policy, don’t make any assumptions about what the “implied” policy really is. Sometimes it’s inbound=outbound, but sometimes it’s a much older common FLOSS strategy of an original founder or foundation owning the compilation copyright. Fontana talks about how common inbound=outbound is, but in actual fact what’s common is not providing any clarity at all about the contribution policy, not even a simple web page stating “we assume you’ve given us permission to use your contributions”. That complete lack of information is a great disservice to contributors. 
The Linux Kernel DCO or the Mozilla Committer’s Agreement are good examples of how to be clear with contributors about an inbound=outbound contribution policy. Beyond inbound=outbound is every other possible way of granting permission for the project to distribute the code. Simon speaks against copyright assignment, and there are good reasons not to choose it. In Harmony, the only form of copyright assignment offered is one where the contributor gets back such a broad license to their contribution that there’s only a tiny sliver of a difference between that and ownership. If your project is going to adopt copyright assignment, and there are good reasons to choose it too, this is the best way to do it. Fontana talks instead about “maximalist” contributor agreements, where a project collects inbound permissions from contributors that are broader than the outbound permissions granted to users. There are good reasons for this too, in addition to the unhealthy reasons he mentions. Both talk about the inequality of rights held by the project versus the rights held by the contributors. But, that inequality lies in the fact that the project holds a collection of code, while the contributor holds a few pieces of that collection. All forms of collaborative development are copyright accumulation, there’s no way around it. The act of entering a group project is one of establishing a set of relationships with all the other contributors, which may include some form of legal entity. It’s all about building bonds, and trust, and confidence in this group you’ve adopted, and there’s no way around that either. I’m skeptical that the ultimately very thin differences between DCO/CLA/CAA contribution policies actually have much impact on the overall hurdles for joining a collaborative group or on the overall equality of the situation. 
Those fine differences will be important to some people, and I encourage people to adopt policies they’re comfortable with, and contribute to projects they’re comfortable with. But, I also ask you all to be charitable to others who have made different choices.
  • Outbound copyleft license: Harmony offers 5 options for outbound licensing. Fontana points out that only two of these options are a restriction to “only copyleft”. True enough, but this doesn’t mean copyleft licenses are underrepresented. One of those two options (a specific list of licenses) is flexible enough to handle any possible combination of copyleft licenses that a particular project considers compatible with their philosophy. A few months ago, I was hoping for a rigorous legal definition of “copyleft”, one that would capture exactly the set of “true copyleft” licenses, and exclude all others. I even tried drafting some language, but any way of wording it failed to accommodate the fact that different projects have different definitions of what they consider “true copyleft”. Some are only GPL (or even only a specific version of GPL), some include AGPL, some LGPL, etc. The option for a specific list of licenses is a simple (I’d even say elegant) way to capture this diversity, giving each project a fine-grained knob to tweak on their “level” of copyleft. The other option (“FSF recommended copyleft licenses”) provides an updating list of copyleft licenses, which some projects may prefer for “future proofing”. Note from Mark Webbink’s post that a Harmony CAA with a strong copyleft outbound license option is actually a stronger guarantee to contributors than the FSF’s CAA, which only promises some copyleft or permissive license.
  • Outbound permissive license: Of the 5 options, two are permissive. One is an updating set of OSI approved licenses (again, for “future proofing”) and the other is more generally permissive. Fontana talks about “the appearance of constraining outbound licensing”, as if the permissive options will sneak by unwary developers. This is something the drafting group took great pains to avoid, being very explicit about the types of licenses each option allows (e.g. “copyleft and permissive” for the OSI option). It’s also worth noting that the generally permissive option is also a bit copyleft, obligating the project to release a contribution under “the current license”. In the calls where we discussed it, this was considered by many drafters to be an important expression of developers’ rights, and a step ahead of existing permissive strategies. (I’m glad it made it in.)
  • Outbound current license: The fifth option for outbound license is “the current license or licenses”. I suspect this may be a less popular option, because one of the big advantages of a CLA/CAA is flexibility for future changes. If your project plans to release always and only under one license, or is willing and able to invest the effort to get a separate signoff from contributors for a license change, you may find that inbound=outbound is actually closer to your needs.
  • Incorporated feedback: Brad commented that the “1.0 documents differ little from the drafts that were released in April 2011” in support of the idea that they’re “primarily documents drafted in secrecy”. I asked about this and it turns out he ran a diff from Beta to 1.0, and so missed all the changes from Alpha to Beta, which is when we got the most feedback. Here’s a diff from Alpha to 1.0, showing that the changes in response to the public review periods were quite extensive, with only 21 lines of text (less than 10% of the total) remaining unchanged from Alpha to 1.0.

What happens now?

I expect that over the next year a handful of projects will adopt Harmony agreements. That may not sound like much, but I consider my time on Harmony well-spent when I count the collective human-years that will be saved from drafting and redrafting contributor agreements. That time can be much better spent on community building, documentation, coding and everything else that makes FLOSS projects great. Some posters expressed concern that the mere existence of Harmony might divert some projects from one philosophy or legal strategy to another. I just don’t see that happening. FLOSS developers are some of the most legally aware and opinionated non-lawyers on the planet. Harmony will be useful to those projects that would have adopted a CLA/CAA anyway, or to projects that already have one and are looking for an update.

Dave Neary commented in closing “the goal should not be to write a better CLA, it should be to figure out whether we can avoid one altogether, and figure out how to create and thrive in a vibrant developer community.” I completely agree. I’m pleased to see the various efforts for copyright reform, as I think one of the fundamental problems within FLOSS is that we’re trying to build a system of open distribution on top of copyright law, a system that was inherently designed to be closed. I’m in favor of copyright reform, but realize that the law changes slowly and only with enormous effort. On the whole, that slow pace is probably a good thing for the stability of society, but it means there are a few areas where society has changed quickly and radically, leaving the law trundling along to catch up. I also recognize that there’s economic/political pressure to make copyright more closed rather than more open, and it’s not clear who will win.

There’s another idea floating around that I know has been an inspiration for some people involved in Harmony. Right now, FLOSS contribution is all about taking your piece of code and granting permission to one project to distribute it. What if, instead of making individual agreements with individual projects, developers could contribute their code into a kind of “FLOSS commons”, together with annotations on how the code may be distributed? Instead of figuring out each new project’s contribution policy, a developer could just check whether the tags requested by the project are compatible with the tags they’ve chosen. And instead of endless discussions on whether license A is compatible with license B, so library X or algorithm Y can be included in code Z, projects could just check if the code was tagged for their intended use. I don’t know if the idea will ever fly, but if it does, something like Harmony (or future versions of Harmony) could be one of the ways that code flows into a system like that, where Harmony’s “options” turn into tags a developer puts on their own code to identify the forms of distribution they want to allow. Interesting potential there.
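The compatibility check in that idea could be almost trivially simple, nothing more than a set comparison. Here’s a minimal sketch of the concept; the function, tag names, and data are entirely hypothetical, since no such commons exists:

```python
# Hypothetical sketch of a "FLOSS commons" tag-compatibility check.
# A developer tags their code with the forms of distribution they allow;
# a project requests the permissions it needs. The code may flow into the
# project only if every requested tag is among the tags the developer granted.

def compatible(developer_tags, project_tags):
    """True if the project's requested permissions are a subset of
    the permissions the developer granted on their code."""
    return set(project_tags) <= set(developer_tags)

# Example, with tag names invented for illustration:
alice_code = {"copyleft", "patent-grant", "relicense-fsf-list"}

print(compatible(alice_code, {"copyleft", "patent-grant"}))  # True: tags line up
print(compatible(alice_code, {"permissive"}))                # False: not granted
```

The same subset test answers both questions in the paragraph above: whether a developer’s code can flow into a project, and whether already-tagged code can be reused for a given purpose.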

A Brief History of Harmony

[This post represents my own memories of my own experiences. All stories have multiple sides, multiple perspectives, multiple angles, so if you’re interested in the story you should talk to other people involved in this first year of Harmony.]

I first heard about Project Harmony around the end of May last year when Amanda Brock sent a message to the FLOSS Foundations mailing list, inviting people to join some face-to-face and phone meetings around FOSS contributor agreements. Not a whole lot of detail, but the general gist was “Wouldn’t it be nice if we could do something a little more like Creative Commons, so projects don’t have to spend so much time creating contributor agreements, developers don’t have to spend so much time to understand them, and lawyers don’t have to spend so much time figuring out if they can approve employees’ requests to get them signed so they can contribute?” This got my attention. I’ve been a long-standing advocate of easy-to-understand legal language, a position I’ve held firmly through the drafting of the Artistic 2.0 license and the Perl contributor agreement, and through my participation on one of the GPLv3 review committees.

At the beginning of Harmony, I wasn’t involved with Canonical, though I’ve been an Ubuntu user and packager almost from the beginning of the Ubuntu project, and a number of my friends work at Canonical. I didn’t know Amanda, didn’t even remember hearing her name before. But, Harmony looked like it had interesting potential, and that was enough for me.

The first few meetings struck me as a little odd. They felt like political grandstanding, a lot of flash with little substance. It reminded me of the GPLv3 review committee phone calls. I kicked in ideas (I can debate like a lawyer even though I’m not one), but I didn’t have the sense that it was going anywhere. Looking back, that was an important first stage. It was how we all got to know each other, where we all stood, what was important to each of us, what we were there to support and to protect. It was a process of embracing chaos.

The group decided to adopt Chatham House Rule for our discussions. I’d heard of it before, but never actually used it. At first glance it seems quite sensible: encourage open participation by being careful about what you share publicly. But, after almost a year of working under it, I have to say I’m not a big fan. It’s really quite awkward sometimes figuring out what you can and can’t say publicly. I’m trying to follow it in this post, but I’ve probably slipped in spots. The simple rule is tricky to apply.

At the second face-to-face meeting in early July, the massive block of email CCs that we’d been using to communicate was replaced by a mailing list. Someone brought a discussion piece to the meeting (no name, because of Chatham, but it’s already public knowledge that they were affiliated with SFLC, to my knowledge participating in Harmony as an interested volunteer). It was explicitly not intended as a first draft, but it did read very much like a contributor agreement, though rather too dense. The idea of drafting in options started to take hold. The sense was that we, the Harmony community, were not any kind of authority to say what was “right” or “wrong” in FOSS legal strategies. We were, instead, a rich collection of experiences in FOSS law and could help by explaining what choices FOSS projects could make, and the reasons they might pick one alternative or another. The agreements would then reflect the best practices around the various alternatives, basically “if you make this choice, here’s what experience has shown is the best way to do it”.

Through the rest of July, I was completely absorbed by OSCON planning. This blackout-month before the conference has been pretty common for me every year since I took on OSCON, and talking to other conference organizers I hear it’s common for all of us. In August I took a new job at Canonical and dove straight into “drinking from the firehose” to increase the breadth and depth of my knowledge of the distro architecture. I also moved from the UK back to the US, suddenly and unexpectedly, as a visa requirement for taking the job. All that is to say, I was quite busy for a few months, and as a side-effect I didn’t really track what was happening in Harmony. I made it to a phone call or two, a face-to-face meeting in Boston, and half-way followed the mailing list, but I was mostly out of it. And, if I’m being completely honest with myself, I have to say I pretty much lost interest in Harmony during those months. I haven’t made any secret of the fact that I had a bad experience in the GPLv3 process. As a committee, we worked hard thinking through the issues and making suggestions, and as far as I can tell none of those suggestions for clarification, simplification, corrections, or perspectives on the needs of the free software community made it into the final draft. The final text released as the GPLv3 is, to me anyway, entirely disappointing. I saw Harmony going the same way. Those first drafts, intended as a seed for discussion, became the real drafts. They were bloated and dense legalese, and even though we had healthy discussions in meetings or on the mailing lists, somehow they didn’t seem to have any visible impact on the documents. In my mind, and it was probably entirely unfair, I figured that since I had a bad experience with the GPLv3 process, and was having a similar bad experience in Harmony, the similarity probably wasn’t a complete coincidence, and might just be rooted in the drafter.
<shrug> I can’t really say for sure, but I do know that my suspicion killed my motivation to participate. I’ll call this the start of the “Dark Ages”.

At Linux Plumbers Conference in November, Michael Meeks gave an excellent keynote about LibreOffice, which I attended with interest because we’d already started talking about shipping LibreOffice instead of in the 11.04 release of Ubuntu. In the talk, he mentioned Project Harmony in an unfavorable light. I don’t remember the details, but it was something about companies pushing an agenda of copyright assignment. The comments really baffled me, they didn’t seem to fit at all with my experience of the process or the goals of any of the participants. Even after chatting with him over a group dinner (a fun evening, I found out his wife and I used to work for the same non-software non-profit organization), I still couldn’t reconcile the gap between what I saw and what he saw in Harmony. I was going to set it aside as “somebody else’s problem” when a friend who I deeply respect and admire took me aside at Plumbers, and expressed concern about Harmony. I promised that I’d look into it.

So, I started asking tough questions, and what I found was both better and worse than I expected. I found that no one at Canonical had a bizarre agenda to force copyright assignment on the world. I also found that Canonical had an interest in replacing their current contributor agreement with a Harmony one, and that “success” for them was measured in community-driven, community-approved, and community-adopted agreements. All good. I also found that Harmony was pretty much stalled, all meetings on hold, waiting on a draft with some changes requested by the Harmony group (substantial changes, but shouldn’t have taken terribly long). Not good.

In a message to the Harmony list just before the mid-December face-to-face meeting, someone (again, no name for Chatham) said that SFLC would no longer be drafting the Harmony agreements. The general gist was that someone (again no name) had a bigger idea for how to improve the current system of FOSS contributions and copyrights, and that Harmony wasn’t enough of a solution. No animosity, wishing us the best of luck, with some possibility that Harmony versions 2.0 or 3.0 might play a part in a longer-term vision. I genuinely believe there was no ill-will in their parting as drafters. I still don’t know the reason for the drafting delay. Maybe they just got swamped with other projects; they do offer pro-bono legal services to the free software community at large, and tend to be quite busy. It is what it is.

I have the feeling that Harmony could have fallen apart at that point. Stalled for something like two months, the drafters bailed, rumbles of public criticism, and nothing to show for 6 months of work. You might think the tone of that face-to-face meeting would be a little dark. Instead, the universal reaction of the group was more like a head-scratching “Huh. That was odd. What next?” We had the seeds of something really good in Harmony, I had a deep sense this was true, and saw it reflected in the faces around me. We decided to reboot, set a schedule for release, go public, and pick a new drafter. A few participants said they’d find out if their companies could help fund the drafting work so we could actually hit the planned April 7th Alpha release date. I’ll call this the “Renaissance”.

In January, we launched a public website, the first incarnation of the Harmony project site. It was a single page containing the text from the current “What Is?” page, ugly as mud, and hosted on my own web servers.

Also in January, we started up weekly drafting calls. We took the earlier drafts as an inspiration, but started with a rewrite that I handed our drafter in January with the note that it might not be entirely legally accurate, but it was at least comprehensible to ordinary humans. And then we proceeded to pick that rewrite apart, line by line, word by word. The tone of these meetings was entirely different than the previous year. Partly, we all knew each other pretty well by then, knew each other’s values and priorities and could just settle down to work. Partly, the driving beat of the upcoming Alpha release helped us lock our attention on what to ship “now”. I worked hard to attend every meeting, even from exotic time-zones, on weirdly laggy voice connections. (I’m amazed at the patience other participants showed for including me in a conversation hampered by 2 second delays.) The discussions ranged over whether the drafts accurately reflected our intentions, had the intended legal effect, were understandable without a legal education, and would fit into the customs and cultures of existing FOSS projects. Every week our drafter brought a new version of the document (we worked on it as one big file with variations embedded), with substantial changes based on the group discussion. And every week we combed through those changes to make sure they were what we, as a group, wanted. People ask who drafted the agreements, and the answer is that we all did, together.

As we approached the April release date, we had a professional designer, Ben Pollock, prepare a proper design for the website (I highly recommend him, and hope he gets lots of business out of it, because he charged us beans). Oregon State University’s Open Source Lab agreed to host the site and our mailing list (they charged us nothing, please donate). I had a frantic last few days getting everything set up for the launch. To give geek credit where it’s due, the site is statically generated HTML (anything else seemed like overkill for the handful of pages, we’ll expand later) with templates processed using Template::Toolkit (the libtemplate-perl package in Ubuntu), and both templates and generated pages stored in Git, so my “site launch” is a simple ‘git pull’. The comment processor for the mailing list that generates the comment review page is a Python script, basically a small finite-state machine using line-by-line pattern matching. (I’ll have to see if I can use Ruby or PHP for the agreement picker forms on our 1.0 website, for even greater compulinguistic diversity.)
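For the curious, a line-by-line finite-state machine of that kind might look something like this minimal sketch. The message format, the signature marker, and the function name are all invented for illustration; the real script surely differs:

```python
import re

# Illustrative sketch only: a tiny two-state machine (seek a header,
# then collect a body) driven by line-by-line pattern matching, pulling
# comment bodies out of a simplified mail-archive dump.

def extract_comments(lines):
    """Yield (author, body) pairs from a simplified mail-archive dump."""
    state = "seek"            # "seek" a From: header, or collect "body" lines
    author, body = None, []
    for line in lines:
        if state == "seek":
            m = re.match(r"From:\s*(.+)", line)
            if m:
                author, state = m.group(1), "body"
        elif state == "body":
            if line.strip() == "--":      # signature marker ends the message
                yield author, "\n".join(body)
                author, body, state = None, [], "seek"
            else:
                body.append(line.rstrip("\n"))
    if author is not None:                # flush a trailing message
        yield author, "\n".join(body)

sample = [
    "From: Alice\n",
    "Looks good, but section 2 is unclear.\n",
    "-- \n",
    "From: Bob\n",
    "+1 on the copyleft option.\n",
]
for who, text in extract_comments(sample):
    print(who, "->", text)
```

The appeal of this shape is that each line only ever needs the current state and one regex match, so the script stays small no matter how long the archive gets.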

On launch day, I woke up early to set the new website live just before Amanda’s panel at the European Legal Network Conference in Amsterdam. I don’t see the names of everyone who spoke at that panel posted publicly, so I won’t list them here. But, as far as I know the ELN conference is completely open to the public, so someone will eventually post it. After resolving a last-minute technical crisis with the review mailing list, and answering some press questions on the launch, I headed over to the Linux Collaboration Summit for the day. Bradley Kuhn slipped me in as a surprise special guest at the end of his Legal track, where I walked through some very last-minute slides and took a round of questions. The questions were good and thoughtful, captured well by Jon Corbet in his post on the session. My immediate action items from the feedback in that session are around transparency: to replace the old “temporary” mailing list for drafting work with a second publicly archived list hosted by OSU OSL, and to put up a Sponsors page on the Harmony website as soon as we get approval from the various donors (names again withheld for Chatham). To anyone who is concerned about unfair influence by our drafter (even after I described how our drafting process actually works), let me say that sponsorship for the drafting work was only promised to the Alpha release, and is unlikely to be continued, since the workload of integrating revisions from the public review process is light enough to be handled by the volunteer participants.

I’m proud of the community group we’ve built around Harmony. I don’t think there has ever been such large-scale harmonious collaboration around FOSS legal strategies before. I take it as a positive sign for the future, that the FOSS legal community can embrace diversity, work together, and change the world for the better. I’m pleased with the template documents we’ve produced. They aren’t perfect, certainly, but they are a good reflection of current best practices, and I hope that through the public review process they’ll be even better by the time we make the 1.0 release.

With A Little Help From My Friends

I’ll take Dave Neary’s comments about private conversations to heart. To provide a little historical illumination, five years ago Dave and I founded a group, FLOSS Foundations, as a collaborative mailing list and face-to-face meetings for leaders in free/libre/open source software projects, focused on the community governance and legal structures of the projects rather than code. At the time, there was some ill will between various foundations, but as we talked we found that most of the roots of that ill will were in a lack of understanding. As a world-wide multi-project FLOSS community, we are a diverse collection of cultures, customs, and governance structures, and some things that seem strange to an outsider or a member of a different subgroup make perfect sense when you look more deeply. Dave and I have watched the Foundations group grow from a handful of people to a couple hundred, covering 50+ different projects. The most impressive thing about it has been watching the community grow to the point that we can discuss controversial issues with sharply divided perspectives among the group, but always in an attitude of open discussion and mutual respect. It’s such a healthy, mature forum for discussion that when an occasional newcomer launches into the list in flames, the universal attitude is “let’s help them understand the group”. I haven’t seen anyone flame out twice.

With that half-decade of experience in how multi-project collaboration can work, I agree with Dave that we can do so much better, not just with GNOME, KDE, and Ubuntu, but with the whole sky full of constellations that make up the Linux desktop and beyond. To go public, I’ve been talking with various people for a month or so now (in email/IRC/etc) about starting up a series of face-to-face meetings and mailing list(s), very similar to what we did with FLOSS Foundations, but focused on code, and specifically around the ecosystem of the Linux desktop. So far the response has been positive. It’s just the barest seed of an idea, and needs a great deal more conversation on what form the collaboration might take, what the goals might be, who might participate and how. Dave’s post has some good ideas on structuring conversations around focused collaboration tasks. I’d love to talk about any thoughts you have, in any channel of communication you feel comfortable with.

Along the way I found that there’s already a mailing list created for the express purpose of collaborating across desktop toolkits. It hasn’t seen much traffic lately, but I talked to the list admin and he’s totally happy to have it revived. I’m not particularly tied to that list, or to any one method or path of collaboration. What’s important to me is that we take the ongoing conversation, and the increased level of openness in that conversation that we’ve seen over the past week, and carry it on into a longer-term effort of healing and strengthening. Dave mentioned the upcoming Desktop Summit, and I hope we can plan some face-to-face meetings around this conversation there. And like Dave, I hope we can continue the conversation in other forums between now and then.

I know it’s tense right now, but I’m absolutely certain we can work through this, and collaborate more effectively in the future. This collaboration is not only good for the projects directly involved, it’s also good for the ongoing progress of Linux and software freedom, and absolutely essential for the future of technology as a whole.