UDS-R Architecture Preview

The 13.04 “Raring Ringtail” release of Ubuntu falls at the mid-point between the 12.04 and 14.04 LTS (long-term support) releases. This is the time in a development cycle when the balance starts to tip from innovation toward consolidation, when conversations form around what pieces need to be in place today to ensure a solid “checkmate” two releases down the road.

With that context in mind, it’s no surprise that Ubuntu Foundations, the central core behind the many faces of Ubuntu, plays a starring role in this release, both in sessions here at the Ubuntu Developer Summit in Copenhagen, and in the upcoming 6 months of development work. Look for sessions on release roles and responsibilities, release planning including Edubuntu, Lubuntu, Xubuntu, and Kubuntu, archive maintenance and improvements to archive admin tools, reverting package regressions and immutable archive snapshots, cross-compilation, user access to UEFI setup and plans for secure boot, xz compression for packages, image creation tools for Flavors, auto-generated AppArmor profiles, PowerPC bootloaders, OAuth for Python 3, “prototype” archives for new hardware, Android ROMs, user experience in distro upgrades, build daemon resources, boot time on ARM, and installation tools on ARM. Look also for training sessions on the error (crash) tracker, Python 3 porting, and how to contribute to upstart.

On the Cloud front, the big topics continue to center around OpenStack (integrating Grizzly, QA, packaging improvements), Juju (the Charm Store, Charm developer tools, contributor onramps, application servers like Ruby on Rails/Django, development process), and Ubuntu Cloud images (testing and roundtable). Meanwhile, the broader Ubuntu Server discussions range over Xen, LXC, libvirt, QEMU, Ceph, MySQL, Nginx, Node.js, MongoDB, Query2, big-data filesystem support, and Power architecture virtualization.

The Client side is a harmonic chorus, with sessions on Ubuntu TV, mobile devices and installing Ubuntu on a Nexus 7, plus multiple sessions on Ubuntu as a gaming platform. Also look for the usual sorts of nuts and bolts that go into building a beautiful client experience, like accessibility, battery life, connectivity, config sync, choice of file managers, and consistent typography.

Don’t miss the Design Theatre on Wednesday, where all are welcome to participate and learn about design thinking, solving real-world design problems for apps submitted by the audience.

I can’t wait for tomorrow!

UDS-Q Architecture Preview

This week in Oakland is the Ubuntu Developer Summit, a time for Ubuntu Developers from around the world to gather and plan the next release, version 12.10 codenamed “Quantal Quetzal”.

I’ve shuffled and reshuffled the sessions several times, looking for the “governing dynamic”, the thematic structure that holds the Quetzal together. I’ve settled, appropriately, on “quantization”. In general terms, quantization is a process of separating a continuous stream into significant values or “quanta”, such as image pixels from the continuous colors of real life, or discrete atomic energy levels. The theme applies on multiple levels. First, there’s the process attendees are going through right now (in person or remote), surfing the sea of sessions, determining how to divide their time for maximum value.

From a historical perspective, there was another UDS here in California not too long ago, where I recall the schedule was dominated by the desktop. We’re in a different world today, and what struck me reading through blueprints for Quantal is the segmentation of topics. Ubuntu has grown up, and while shipping a gorgeous desktop will always be important, other forms of hardware, both smaller and larger, have an equal (and sometimes greater) influence on Ubuntu’s direction into the future. How do you choose between cloud, metal, TV, and phones, when they’re all so interesting, and have so much potential as game-changers for Ubuntu (and Linux in general)? These different domains of use also lead to differentiation in design, development, and integration. Some significant quanta to watch are:

And like an atom that retains its fundamental structure at multiple energy levels, Ubuntu is still Ubuntu, unified at the core as a distribution and as a community, even across multiple “product” targets. Since this is the first release after an LTS, there’s more room than usual to re-examine the core at a fundamental level, with an eye to where we want to be by the next LTS.

And those are only the highlights. 🙂 It’s going to be a great week, and a great cycle!

Tody Task Manager

Failing to find any free software task manager I could live with, I created my own over the December holidays. I called it “Tody”. It’s a simple GUI app, focused on quick searching, editing, and tagging for tasklists. The file format it uses is identical to the plain text format used by Gina Trapani’s Todo.txt command-line tool and Android app; it even loads preferences from the Todo.txt config file. Since the file format is plain text, tasklists can be shared between machines (or users) over Ubuntu One or Dropbox.

I created it using Rick Spencer’s Quickly templates (GTK, Glade, and Python). I went for a streamlined workflow based on the way I use tasklists, so I’m curious whether it will map well for others. The window shows the tasklist as simple text, with a search box at the top. Clicking on a tag performs a search for that tag (tags are similar to Twitter hashtags: any word that starts with “@” or “+”). The list sorts tasks by priority (marked with “A”, “B”, “C”, etc.) and then alphabetically. When the list is limited to search results, the search terms are highlighted in the tasks.
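The parsing and sorting just described are simple enough to sketch in Python. This assumes the Todo.txt conventions the file format shares (a leading “(A)”-style priority marker, and “@”/“+” tag words); the function names are my own illustration, not Tody’s actual code:

```python
import re

def parse_task(line):
    """Split a Todo.txt-style line into priority, tags, and text.

    Priority is a leading "(A)"-style marker; tags are any words
    beginning with "@" (contexts) or "+" (projects).
    """
    m = re.match(r'^\(([A-Z])\)\s+(.*)$', line)
    priority = m.group(1) if m else None
    text = m.group(2) if m else line
    tags = [w for w in text.split() if w.startswith(('@', '+'))]
    return priority, tags, text

def sort_tasks(lines):
    """Sort by priority first (unprioritized tasks last), then alphabetically."""
    def key(line):
        priority, _, text = parse_task(line)
        return (priority or '~', text.lower())  # '~' sorts after 'Z'
    return sorted(lines, key=key)
```

Because the whole data model is just lines of text, the GUI only has to re-sort and re-filter this list on every change.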

Clicking on the text of a task brings up an editor window, with a checkbox for “Done” tasks, a field to edit the task, and clickable palettes for task priorities and all the tags you’ve used previously in your tasklist. It’s streamlined with shortcuts, so typing Space, Enter marks a task as done, saves it, and closes the editor window.

I’ve started using Tody as my primary task manager, after dumping all my old tasks from other task managers into one text file. I’d like to tweak the search feature: right now it does a completely literal string search, but I’ll change it to split up search terms (so it’s not sensitive to the order of terms). Then the next step is to link it up with my Todo Lens, so the edit window for Tody pops up as the action for clicking on a task in the Lens.
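The planned search tweak (splitting the query into terms so their order doesn’t matter) amounts to a match-all-terms filter. A minimal sketch, with illustrative function names:

```python
def matches(task, query):
    """True if every whitespace-separated query term appears somewhere
    in the task, regardless of order (case-insensitive substring match)."""
    haystack = task.lower()
    return all(term in haystack for term in query.lower().split())

def search(tasks, query):
    """Return the tasks matching all terms; an empty query matches everything."""
    return [t for t in tasks if matches(t, query)]
```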

The Tody app is up on my PPA; let me know if you try it out and have any requests for features that fit your workflow:


Free Software for Task Management

I am perpetually trying out online task management tools. My never-ending quest is to tame the massive sea of things I should be doing at any given moment, both making sure that important tasks don’t get lost in the mix, and to extract a reduction more closely approximating “the most important thing to accomplish right now”. My two favorites at the moment are Thymer and Rypple, but neither is perfect.

I like Thymer’s simple task creation, Twitter-like tagging of tasks, and the smooth drag-and-drop motion for prioritization. But, at the end of the day, it’s just a massive web page of “things I should be doing” that gives me no assistance in taming the beast. I have to manually prioritize each task, and if I want those priorities to stay at all relevant, I have to keep manually gardening them every day. And, while task creation is as easy as tweeting, task editing is a clunky collection of buttons and drop-down menus. The tags are handy in small numbers (and projects really are just tags with a slightly different display), but with more than about 10 unique tags/projects across my whole data set, they become a jumble at the top of the screen and are not at all helpful in finding anything. Thymer offers some reporting features, but I never found them particularly useful.

I like Rypple’s social features; it’s got a good take on sharing thanks and feedback, and the 1:1 pages (a collection of tasks you share with another person) are incredibly useful for weekly meetings with co-workers. I like the organization of tasks by goal rather than by project; it encourages grouping tasks into larger sequences toward an overall purpose. But I found that I still needed some goals that were really just projects, or collections of semi-related tasks, so the construct was a little artificial. Rypple offers a tagging feature, but tag links don’t do anything useful (like take you to a page listing tasks with the same tag), and a task can’t live in more than one goal at the same time, so there isn’t really any good way to pull up a group of cross-cutting tasks. And Rypple also gives me little help in managing the mass, though it has drag-and-drop priority setting similar to Thymer’s.

The worst thing about both of them is that they’re neither open source nor open data. Philosophical considerations aside, this is an immediate practical problem, since my access to Rypple was only a free trial which is now ending.  I started with the best intentions of only putting in a few things to try it out, but it quickly became an integrated part of my working life, and I now have well over a hundred little individual blobs of data (tasks) that I’m tracking there. Because it’s not open source, I can’t fire up my own instance of it. And because it’s not open data, I can’t get a dump of my tasks. So, I’ll have to manually copy every bit to some other task management system. Which means I’m in the market for a new task management tool, with a very immediate enlightened self-interest in picking something that’s both open source and open data.

Yesterday, I tried out Todo.txt. The biggest appeal is the simple open data format, so simple that it would work just fine as a manually edited plain text file. But it also offers a GPL-licensed command-line client for easier task creation, searching, sorting, and grouping by project, priority, or “context” (a notion from “Getting Things Done”). It also offers a GPL-licensed Android client, which is in the process of being ported to the iPhone. On the downside, it doesn’t offer any collaborative features, so I can manage my own tasks, but can’t share tasks with others, or even give others visibility into a subset of my tasks or projects. And while creating tasks on the command line is clean and simple, actually viewing/managing my 100+ tasks on the command line (or in the Android client) feels a bit like viewing an elephant through a pinhole. It doesn’t have a desktop GUI client, though the wiki offers some suggestions on ways to integrate the simple plain text format into other desktop tools like Conky. The results weren’t thrilling (not really any better than the command line), but they did give me an idea: how about a Unity Todo Lens?
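The grouping the client offers falls straight out of the format: projects are “+” words and contexts are “@” words, so grouping is just bucketing tasks by token prefix. A rough Python sketch of that logic (my own illustration, not the Todo.txt client’s code):

```python
from collections import defaultdict

def group_by(tasks, prefix):
    """Bucket tasks by every token starting with the given prefix:
    "+" groups by project, "@" by context. Tasks with no matching
    token land in a "(none)" bucket."""
    groups = defaultdict(list)
    for task in tasks:
        keys = [w for w in task.split() if w.startswith(prefix)] or ["(none)"]
        for key in keys:
            groups[key].append(task)
    return dict(groups)
```

A task carrying two “+project” tokens simply appears in both buckets, which is exactly the cross-cutting behavior Rypple’s one-goal-per-task model couldn’t give me.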

I spent a few hours hacking on that, parsing the Todo.txt format in Vala and displaying the results in a Unity Lens with a general search box and filters for Project, Priority, and Context. I’m pleased with the result for a short experiment, but there are some drawbacks. The Lens really wanted my filters to be statically compiled in advance, while I wanted to create the filter sets on the fly from the Todo.txt file (i.e. let me filter by the Projects that actually appear in my tasks, not by some list of projects determined in advance). I may be able to hack around that with more time, or with a Python Lens instead of Vala. Also, a Unity Lens is a great interface for searching tasks, but not great for managing them. There’s only one “action hook” for a task: when you click on the icon/title. You can make that one action do anything you want, but it’s still only one action. I could make it mark a task as done (that seems most logical), but I’d still have to go back to the command line to add new tasks and to edit task descriptions, priorities, projects, contexts, and so on, which takes me back to the original problem that the command line isn’t a great interface for those tasks. What I really want is a slick, simple GUI client that the Lens could launch whenever a task is clicked in the search interface. Possibly a project for another weekend.
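Deriving the filter sets on the fly only takes a single scan of the file. A sketch in Python (illustrative; the Lens plumbing itself is omitted):

```python
import re

def filter_options(lines):
    """Derive Project/Priority/Context filter choices from what a
    Todo.txt file actually contains, rather than a list compiled
    in advance."""
    projects, contexts, priorities = set(), set(), set()
    for line in lines:
        m = re.match(r'^\(([A-Z])\)', line)
        if m:
            priorities.add(m.group(1))
        for word in line.split():
            if word.startswith('+'):
                projects.add(word)
            elif word.startswith('@'):
                contexts.add(word)
    return sorted(projects), sorted(priorities), sorted(contexts)
```

The catch described above isn’t computing these sets, it’s persuading the Lens API to accept filter values that weren’t known at compile time.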

That’s all the time I have to work on the idea right now. While I leave it sitting for a bit, any suggestions on free software+open data task management tools you love? Or hate?

Appreciation for Kees Cook

Today is Ubuntu Community Appreciation Day, a new tradition in the Ubuntu community started by Ahmed Shams El-Deen of the Ubuntu Egypt LoCo. I’d like to take this opportunity to show appreciation for Kees Cook, who many years ago took time out of a busy conference to teach me how to build my first .deb package. That welcoming spirit — that patient recognition that every green newbie has the potential to become a future valuable contributor — is a key part of community strength and growth. It’s a pattern I emulate, and a gift I repay, by welcoming and mentoring other new developers. Over the years, Kees has demonstrated sane, sensible, calm, and wise technical leadership at OSDL (now known as The Linux Foundation), on the Ubuntu security team, and more recently on the Ubuntu Technical Board. There are many reasons I have confidence in the future of Ubuntu, and he is one of them. Thanks, Kees!

I’d like to thank the entire Ubuntu community for renewing my faith in the humanity of free software. When I stumbled on Ubuntu all those years ago, I had already been working in free software for what felt like a century, and was…well, tired. Your joy and delight in bringing free software to the world inspired me, and restored my passion for contributing. The heart and soul of free software is people like you, changing the world for the better every day. Thank you all!

Mythbusters – UEFI and Linux (Part 2)

Following up on my earlier post on UEFI and Linux, I got access to an identical system to the one with the original problem (an HP S5-1110) this week to do some install testing with various scenarios:

1) When I run through the standard install process with the Kubuntu 11.10 amd64 CD, I get exactly the same problem as James: I end up with a machine that has Kubuntu installed on a partition, but will still only boot into Windows. (I also get an explicit error message during the install saying “The ‘grub-efi’ package failed to install into /target/. Without the GRUB boot loader, the installed system will not boot.”)

2) Installing from the Kubuntu CD and wiping the HD has the same problem as (1), and the same error message.

3) Installing from the Ubuntu 11.10 amd64 CD into the same dual-boot configuration as (1) also won’t boot the Ubuntu partition, but it gives no explicit error message about the grub install failure.

4) When I install from the Ubuntu 11.10 amd64 CD and completely wipe the HD and replace it with Ubuntu, the install works perfectly, and the machine boots into Ubuntu afterwards with no problems. I can also install the ‘kubuntu-desktop’ package on the working system, and get a working Kubuntu desktop. This tells me that we’re not dealing with a UEFI or hardware compatibility issue here, just an issue with partitioning and the bootloader. Which is what James and I suspected last week, but it’s nice to have explicit confirmation (without wiping his friend’s machine).

5) Back to the Windows/Ubuntu dual-boot scenario in (3). Installing EasyBCD doesn’t quite work. It does give me a prompt in the “Windows Boot Manager” to choose between Windows and Ubuntu, but when I choose Ubuntu it just takes me to the grub prompt. That’s progress, anyway. At the grub prompt, I type:

grub> root (hd0,4)
grub> kernel /boot/vmlinuz-3.0.0-12-generic root=/dev/sda5
grub> initrd /boot/initrd.img-3.0.0-12-generic
grub> boot

And, it boots fine from the Ubuntu partition.

That’s all the time I’ve had so far. A few observations about the system as it shipped from the factory: Windows boots using a custom bootloader, the Windows Boot Manager, which bypasses UEFI. In the dual-boot configuration that doesn’t work, the UEFI “BIOS” configuration and the efibootmgr command-line utility both recognize that the machine has a UEFI boot option for “ubuntu”, but choosing that option during startup still diverts straight to Windows. The machine didn’t ship with GPT partitions (which are one of the advantages of UEFI); instead it shipped with an old-fashioned MBR partition scheme (limited to 4 primary partitions). The working Ubuntu configuration (total machine wipe) does set up proper GPT partitions.

Quixperiment: Ubuntu and iPod

I have an old iPod that I occasionally use on car trips, but haven’t really modified in years (it mostly sits on a shelf). This morning I decided to play around a bit with hooking it up to my main Ubuntu desktop. I found a good list of options for managing an iPod in Linux on Wikipedia, and decided to try out both gtkpod and Rhythmbox. Both seemed to work pretty well for interfacing with the iPod: not a super-shiny interface, but usable. gtkpod had a slight advantage, because it displayed my Smart Playlists, while Rhythmbox only displayed the static ones. Between the two, I can imagine using Rhythmbox as my primary music player, but would probably only use gtkpod for directly managing the iPod.

I copied my iPod music library over to Rhythmbox’s local library, just to try it out. It copied 3,249 tracks out of the 3,359 that were on my iPod. I got a few errors about duplicate files during the copy, all with generic file names like “01 – Track 01.mp3”. There were ~4-5 CDs like this, each with ~19-25 tracks, so that seems to account for the missing 110 tracks, though I didn’t keep exact notes or do an exact comparison to see which files were missed. I’m guessing a handful of CDs I had loaded on the iPod were ripped with generic file names rather than specific titles, and that the iPod was separating them by directory structure, while Rhythmbox was loading them all into one directory, so the file names conflicted. Just a guess; I’ll look into it more later if it ends up being useful.
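That guess is easy to check mechanically: if distinct files in different directories share a generic basename, they would collide when flattened into one folder. A small sketch (the paths are invented for illustration; this isn’t Rhythmbox’s code):

```python
import os
from collections import defaultdict

def basename_collisions(paths):
    """Group file paths by basename and return only the names that
    occur more than once, i.e. the files that would collide if
    everything were copied into a single flat folder."""
    by_name = defaultdict(list)
    for path in paths:
        by_name[os.path.basename(path)].append(path)
    return {name: hits for name, hits in by_name.items() if len(hits) > 1}
```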

Things I wish for in Rhythmbox:

  • The ability to copy a playlist from the iPod to the local music library, instead of recreating it.
  • The ability to synchronize my music and playlists between different computers/devices (will look into Ubuntu One for this later, it has some relevant features, though possibly not yet the full user journey I’m looking for).
  • A way to split up my local library into Music, Audiobooks, and Language Learning. Shuffle mode is pretty useless when it brings up random chapters of “The Hitchhiker’s Guide to the Galaxy” or snippets of Afrikaans language drills. I found suggestions that it’s possible to configure multiple Libraries for Rhythmbox in gconf even though it’s not displayed in the GUI, but there was no ‘library_locations’ key in /apps/rhythmbox, so I’ll have to poke around a bit more later to see if it’s still a valid key in current versions of Rhythmbox. (Separating libraries is a problem on the iPod itself, so this is just a same-old existing irritation repeated in a new piece of software.)
  • A shinier user interface, that makes it easier to find artists, albums, or songs I want to listen to.
  • More informative error messages when failing to copy files.
  • I found one work-in-progress on integration between Rhythmbox and the Music Lens, I’d like to see that complete.

Mythbusters – UEFI and Linux

A recent blog post about a user who was having trouble installing Ubuntu on an HP machine sparked off an urban legend that UEFI secure boot is blocking installs of Linux. To calm FUD with facts: the secure boot feature hasn’t yet been implemented and shipped on any hardware. It was introduced in version 2.3.1 of the UEFI specification, which was released in April 2011. Hardware with secure boot will start shipping next year.

It’s important to distinguish between UEFI in general and the new secure boot feature. UEFI has been around for a while, starting its life as the “Intel Boot Initiative” in the late ’90s. It has a number of advantages over the legacy BIOS, including substantially faster boot times, the ability to boot from a drive larger than 2.2 TB, and the ability to handle more than 4 partitions on a drive. The UEFI specification is developed by the members of the UEFI Forum, a non-profit trade organization with various levels of free and paid membership. UEFI is not a problem for Linux. At the UEFI Plugfest in Taipei last week, Alex Hung (Canonical) tested Ubuntu 11.10 on a number of machines, with success even on pre-release chipsets. The few failures seemed to be related to displays, not particularly to UEFI.

The secure boot feature of UEFI is a concern for Linux, but not because of the specification. The features outlined in the 2.3.1 specification are general enough to easily accommodate the needs of Linux. But, within the range of possible implementations from that specification, some alternatives could cause problems for Linux. For full details, I recommend reading the two whitepapers released by Canonical and Red Hat and by The Linux Foundation. The short version is that secure boot uses signed code in the boot path in much the same way you might use a GPG signed email message: to verify that it came from someone you know and trust. The beneficial ways of implementing this feature allow the individual (or administrator) who owns the machine to add new keys to their list, so they get to choose who to trust and who not to trust. The harmful ways of implementing this feature don’t allow the user to change the keys, or disable the secure boot feature, which means they can’t boot anything that isn’t explicitly approved by the hardware manufacturer (or OS vendor). This would mean users couldn’t just download and install any old image of Debian, Fedora, Red Hat, SuSE, Ubuntu, etc. So, there’s real potential for a future problem here, but we’re not there yet. At this point, it’s a matter of encouraging the hardware companies to choose the beneficial path.
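The key-database logic in that paragraph can be modeled in a few lines. This is a toy sketch, not the UEFI mechanism: real secure boot uses X.509 certificates and asymmetric signatures, while here an HMAC stands in for a signature, and all names are invented. The point is the single design question: can the machine’s owner enroll new keys?

```python
import hashlib
import hmac

class ToyFirmware:
    """Toy model of the secure-boot trust decision. An HMAC stands in
    for a real signature purely to illustrate the key-database logic."""

    def __init__(self, trusted_keys, user_can_enroll):
        self.trusted_keys = list(trusted_keys)
        self.user_can_enroll = user_can_enroll  # beneficial vs harmful implementation

    def enroll_key(self, key):
        """The beneficial path: the owner may extend the trust database."""
        if not self.user_can_enroll:
            raise PermissionError("firmware does not let the owner add keys")
        self.trusted_keys.append(key)

    def verify_boot(self, image, signature):
        """Boot only images signed by some key in the database."""
        return any(
            hmac.compare_digest(hmac.new(k, image, hashlib.sha256).digest(), signature)
            for k in self.trusted_keys
        )
```

In the harmful configuration (`user_can_enroll=False`), any image not signed with a factory-installed key simply never boots, which is exactly the scenario that would block installing an arbitrary Linux distribution.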

I’ve been chatting with the Ubuntu user who had the install problem, to see if we can find the real bug. It’s a friend’s machine rather than his own, so he doesn’t have easy access to it. I’ve arranged to get access to a similar machine next week to play with it. I’ll post back here if I find anything useful or interesting.

UDS-P Architecture Preview

Today we kick off the week-long Ubuntu Developer Summit, focused on the upcoming 12.04 release, “Precise Pangolin”, shaping the plans for the next 6 months, and breaking the goals into a manageable series of work items. With more than 20 rooms running simultaneous sessions, it’s a challenge to decide what to participate in, whether you’re here in Orlando or following remotely. As we dive in, it’s useful to take a step back and set the sea of sessions into the overall architecture and vision for Ubuntu, to trace the structure of threads running through the pattern.

The Ubuntu project is a tightly integrated collaboration between a community and a company, both focused on making free software approachable and easily available to millions of users. From Ubuntu’s inception, I’ve always considered this collaboration to be its greatest strength. It’s a beautiful marriage of passionate dedication to software freedom and gravitas in the industry to help build visibility, partnerships, and self-sustaining growth. But, like all marriages, keeping that relationship healthy is an ongoing process, something you do a little bit every day. A few sessions to look out for here are: renewing our commitment to encourage each other by showing appreciation for the hard work of all contributors [Monday][*] (including developers [Friday]), the standard Debian health check [Monday], embracing the cultural differences between designers and developers [Tuesday] while building up community participation in user experience and design [Wednesday], a more structured approach to mentoring developers [Wednesday & Friday], and how to welcome a new generation of developers who focus on application development [Monday, Tuesday & Wednesday].

12.04 is a Long Term Support (LTS) release, which means that both the server and desktop releases will be supported with maintenance and security updates for 5 years, instead of the usual 18 months. Ubuntu anticipates that more conservative users will upgrade from one LTS to the next instead of following the “latest and greatest” every 6 months. Because of longer support and conservative upgrades, LTS releases always focus more on quality, polish, and solidifying the user experience than on introducing new features. A significant set of sessions builds on this theme, including tracking high-priority bugs so they get resolved [Wednesday], improving the ISO testing tracker [Friday], process for toolchain stabilization [Tuesday], automated testing of package dependencies [Wednesday], automated regression testing [Thursday], tools for tracking archive problems like FTBFS, NBS, etc., so they can be rapidly fixed [Tuesday], accessibility polish [Thursday], printer setup tool refinements to contribute upstream to GNOME [Thursday], plans for the Lucid->Precise LTS upgrade [Monday], ongoing maintenance in desktop boot speed [Thursday], and automated testing of complex server deployments [Friday].

The world is moving from personal computing on a single dedicated device to “multi-screen” computing across a collection of devices: not just a desktop for home or work, with a laptop or netbook for portability, but handheld devices like phones, tablets, media players, and ebook readers are part of our everyday life. Other dedicated-purpose pieces of technology, like televisions and cars, are getting smarter and smarter, growing into fully-fledged computing devices. The nirvana of this new world is an integrated computing experience where all our devices work together, share data and content where it’s relevant, share interaction models to ease the transition from one device to the next, and also keep appropriate distinctions for different form-factors and different contexts of use. The Linux kernel is an ideal core for this kind of integrated experience, supporting radical diversity in hardware architectures, and scaling smoothly all the way from resource-constrained phones to mega-muscle servers. Ubuntu has had a focus on consumer users from the very start, so it will come as no surprise that the Ubuntu project (both the Ubuntu community and Canonical as a participating company) has a strong interest in this space. Ubuntu Mobile started as early as 2007 (also “UME” or “Ubuntu MID”), and Kubuntu Mobile in 2010. Mark Shuttleworth mentioned in his opening keynote this morning that Canonical plans to invest in the multi-screen experience over the next few years.
If you’re interested in this topic, some areas you might want to participate in are: ARM architecture support [Tuesday], ARM hardfloat [Friday], and ARM cross-compilation [Friday] (many small form-factor devices these days are ARM-based), application sandboxing [Wednesday], what’s ahead for the Software Center [Friday], an interactive session on design and user experience in free software applications [Monday], power consumption (relevant for battery life) [Wednesday], printing from the personal cloud [Thursday], the Qt embedded showcase [Tuesday], and potential for a Wayland tech preview [Tuesday]. Also keep an eye out for touch support, virtual keyboards, suspend/resume, and web apps, which don’t have dedicated sessions (yet), but will certainly be weaving through conversations this week.

On the server side, general trends are moving from a traditional view of system administration as “systems integration” to a DevOps view of “service orchestration”. This may sound like a game of buzz-word bingo, but it’s far, far more. What we’re looking at is a fundamental shift from managing increasingly complex deployments by throwing in more humans as slogging foot soldiers, to letting machines do the slogging so humans can focus on the parts of administration that require intelligence, deep understanding, and creative thinking. This industrial revolution is still at an infant stage: converting individual manually operated looms (servers) over to aggregated sets of looms all doing the same thing (configuration management), and then to automated operation and oversight of whole factories of diverse interacting pieces such as spinners, looms, cutting, and sewing (service orchestration). If this is your area of focus, it’s worth following the entire Server and Cloud track, but make sure not to miss sessions on Juju [multiple: Tuesday, Wednesday & Thursday], Orchestra [Thursday], OpenStack [Monday & Friday], LXC [Thursday], libvirt [Wednesday], cloud power management [Wednesday], and power-consumption testing for ARM [Thursday].

We’ve got an exciting week ahead, enjoy!

[*] UDS is a fast-paced and dynamic “unconference”, so the days, times, and rooms are subject to change. I’ve provided links to the blueprints for details and links to the day where the session is currently scheduled to help find each session in the schedule.

Ubuntu Brainstorm – Contacts Lens

It’s time for another round on the Ubuntu Technical Board’s review of the top ranked items on Ubuntu Brainstorm. This time I’m reviewing a brainstorm about a Unity Lens for contacts, together with Neil Patel from Canonical’s DX team. I volunteered to participate in this reply because I’d already been thinking about how to do it before I saw the brainstorm. I mainly keep my contacts in Gmail these days, for sync to my Android phone and tablet. But, with around 700 contacts, I find the Gmail interface pretty clunky.

The first key to a Contacts Lens is a standard format for contacts, and a path for contact synchronization. For the Oneiric release, coming up in a couple of weeks, Thunderbird is the new default email client, and as part of that, the Thunderbird developers (especially Mike Conley) added support to Thunderbird for the existing standard for contacts in GNOME, which is EDS (Evolution Data Server). Supporting EDS not only provides access to Evolution contacts from Thunderbird, which is important for users migrating from Evolution to Thunderbird, but also provides access to Gmail contacts and UbuntuOne contacts.

The second key is integrating EDS with a Unity Lens. The DX team isn’t working on a Contacts Lens for Oneiric or 12.04, but writing a lens is an accessible task for anyone with a little skill in Vala or Python, and is a great way to learn more about how Unity works. I’ll outline how to get started here; for more details, see the wiki documentation on lenses. The architecture of a Unity Lens is pretty simple; I’d even say elegant. Writing a Lens doesn’t involve any GUI code at all: you only write a small backend that supplies the data to be displayed. This means that all lenses work for both Unity and Unity 2D, without any changes.

A Lens is a daemon that talks over D-Bus. To build one, you start with 3 files. (Throughout this illustration, I’ll pretend we’re working on a Launchpad project called ‘unity-lens-contacts’.)  The first file is the Lens itself, and the core of that file is a few lines that create a Lens object from libunity, and then set some default properties for it. In Vala, that would be:

lens = new Unity.Lens("/net/launchpad/lens/contacts", "contacts");

To go along with the Lens, you need a ‘contacts.lens’ file to tell Unity where to find your daemon, and a ‘contacts.service’ file to register the D-Bus service that your Lens provides. The ‘contacts.lens’ file is installed in ‘/usr/share/unity/lenses/contacts/’, and looks like:

[Desktop Entry]
Description=A Lens to search contacts
SearchHint=Search Contacts

The ‘contacts.service’ file is installed in ‘/usr/share/dbus-1/services/’. It maps the D-Bus name to the daemon’s executable; assuming the names from this example (the service name and install path here are illustrative), it looks like:

[D-BUS Service]
# Name and Exec are illustrative, derived from the example project name
Name=net.launchpad.Lens.Contacts
Exec=/usr/lib/unity-lens-contacts/unity-lens-contacts

A Lens daemon handles requests for data, but it doesn’t actually do the searching. For that, you need to define a Scope (if it helps, think about searching the ocean through a periscope). A Lens can have more than one Scope, and when it does, each Scope collects results from a different source, so the Lens can combine them into one full set of results. Start with one Scope for one datasource: EDS contacts. A Scope is just another libunity object, and creating one in Vala looks like:

scope = new Unity.Scope ("/net/launchpad/scope/edscontacts");

The search functionality goes in the ‘perform_search’ method. For EDS contacts, you could use the EDS APIs directly, but Neil recommends libfolks.

A Scope can run locally inside the Lens daemon, in which case you add it directly to the Lens object; in Vala (assuming libunity’s API of the time), that’s a single call:

lens.add_local_scope (scope);
Or, a Scope can run in a separate daemon, in which case you’ll also need an ‘edscontacts.scope’ file, so the Lens knows where to find the Scope. This file is installed in the same folder as ‘contacts.lens’ (‘/usr/share/unity/lenses/contacts/’), and looks something like this (it follows the keyfile convention of the other files; treat the exact group and key names as illustrative):

[Scope]
DBusName=net.launchpad.Scope.EdsContacts
DBusPath=/net/launchpad/scope/edscontacts
That’s the basic anatomy of a Lens, and enough to get a new project started. To see how it all fits together, there are several good examples of other lenses. The Unity Music Lens is the most relevant example for the Contacts Lens, and a fairly straightforward one to start looking at. For more complex examples, see the Applications or Files lenses. There’s also a Sample Lens, which is a working tutorial. And, once you get the core of the Contacts Lens working, and are looking for what to add next, read up more on Categories and Filters in the wiki.

If this sounds like an interesting project to you, drop us a line. You’ll find a lot of enthusiasm, and willingness to help out where you need it.