Mythbusters – UEFI and Linux

A recent blog post about a user who was having trouble installing Ubuntu on an HP machine sparked off an urban legend that UEFI secure boot is blocking installs of Linux. To calm FUD with facts: the secure boot feature hasn’t yet been implemented or shipped on any hardware. It was introduced in version 2.3.1 of the UEFI specification, which was released in April 2011. Hardware with secure boot will start shipping next year.

It’s important to distinguish between UEFI in general and the new secure boot feature. UEFI has been around for a while, starting its life as the “Intel Boot Initiative” in the late ’90s. It has a number of advantages over the old BIOS, including substantially faster boot times, the ability to boot from a drive larger than 2.2 TB, and the ability to handle more than 4 primary partitions on a drive. The UEFI specification is developed by the members of the UEFI Forum, a non-profit trade organization with various levels of free and paid memberships. UEFI is not a problem for Linux. At the UEFI Plugfest in Taipei last week, Alex Hung (Canonical) tested Ubuntu 11.10 on a number of machines, with success even on pre-release chipsets. The few failures seemed to be related to displays, not specifically to UEFI.

The secure boot feature of UEFI is a concern for Linux, but not because of the specification. The features outlined in the 2.3.1 specification are general enough to easily accommodate the needs of Linux. But, within the range of possible implementations of that specification, some alternatives could cause problems for Linux. For full details, I recommend reading the two whitepapers on the subject: one released jointly by Canonical and Red Hat, and one by The Linux Foundation. The short version is that secure boot uses signed code in the boot path in much the same way you might use a GPG-signed email message: to verify that it came from someone you know and trust. The beneficial ways of implementing this feature allow the individual (or administrator) who owns the machine to add new keys to their list, so they get to choose who to trust and who not to trust. The harmful ways of implementing this feature don’t allow the user to change the keys or to disable the secure boot feature, which means they can’t boot anything that isn’t explicitly approved by the hardware manufacturer (or OS vendor). This would mean users couldn’t just download and install any old image of Debian, Fedora, Red Hat, SuSE, Ubuntu, etc. So, there’s real potential for a future problem here, but we’re not there yet. At this point, it’s a matter of encouraging the hardware companies to choose the beneficial path.

I’ve been chatting with the Ubuntu user who had the install problem, to see if we can find the real bug. It’s a friend’s machine rather than his own, so he doesn’t have easy access to it. I’ve arranged to get access to a similar machine next week to play with it. I’ll post back here if I find anything useful or interesting.

UDS-P Architecture Preview

Today we kick off the week-long Ubuntu Developer Summit, focused on the upcoming 12.04 release, “Precise Pangolin”, shaping the plans for the next 6 months, and breaking the goals into a manageable series of work items. With more than 20 rooms running simultaneous sessions, it’s a challenge to decide what to participate in, whether you’re here in Orlando or following remotely. As we dive in, it’s useful to take a step back and set the sea of sessions into the overall architecture and vision for Ubuntu, to trace the structure of threads running through the pattern.

The Ubuntu project is a tightly integrated collaboration between a community and a company, both focused on making free software approachable and easily available to millions of users. From the first inception of Ubuntu, I’ve always considered this collaboration to be its greatest strength. It’s a beautiful marriage of passionate dedication to software freedom and gravitas in the industry to help build visibility, partnerships, and self-sustaining growth. But, like all marriages, keeping that relationship healthy is an ongoing process, something you do a little bit every day. A few sessions to look out for here are renewing our commitment to encourage each other by showing appreciation for the hard work of all contributors [Monday][*] (including developers [Friday]), the standard Debian health check [Monday], embracing the cultural differences between designers and developers [Tuesday] while building up community participation in user experience and design [Wednesday], a more structured approach to mentoring developers [Wednesday & Friday], and how to welcome a new generation of developers who focus on application development [Monday, Tuesday & Wednesday].

12.04 is a Long Term Support (LTS) release, which means that both the server and desktop releases will be supported with maintenance and security updates for 5 years, instead of the usual 18 months. Ubuntu anticipates that more conservative users will upgrade from one LTS to the next instead of following the “latest and greatest” every 6 months. Because of the longer support and conservative upgrades, LTS releases always focus more on quality, polish, and solidifying the user experience than on introducing new features. A significant set of sessions builds on this theme, including tracking high-priority bugs so they get resolved [Wednesday], improving the ISO testing tracker [Friday], a process for toolchain stabilization [Tuesday], automated testing of package dependencies [Wednesday], automated regression testing [Thursday], tools for tracking archive problems like FTBFS, NBS, etc., so they can be rapidly fixed [Tuesday], accessibility polish [Thursday], printer setup tool refinements to contribute upstream to GNOME [Thursday], plans for the Lucid->Precise LTS upgrade [Monday], ongoing maintenance of desktop boot speed [Thursday], and automated testing of complex server deployments [Friday].

The world is moving from personal computing on a single dedicated device to “multi-screen” computing across a collection of devices: not just a desktop at home or work and a laptop or netbook for portability, but also handheld devices like phones, tablets, media players, and ebook readers, all part of our everyday life. Other dedicated-purpose pieces of technology, like televisions and cars, are getting smarter and smarter, growing into fully-fledged computing devices. The nirvana of this new world is an integrated computing experience where all our devices work together, share data and content where it’s relevant, share interaction models to ease the transition from one device to the next, and also keep appropriate distinctions for different form factors and different contexts of use. The Linux kernel is an ideal core for this kind of integrated experience, supporting radical diversity in hardware architectures, and scaling smoothly all the way from resource-constrained phones to mega-muscle servers. Ubuntu has had a focus on consumer users from the very start, so it will come as no surprise that the Ubuntu project (both the Ubuntu community and Canonical as a participating company) has a strong interest in this space. Ubuntu Mobile started as early as 2007 (also “UME” or “Ubuntu MID”), and Kubuntu Mobile in 2010. Mark Shuttleworth mentioned in his opening keynote this morning that Canonical plans to invest in the multi-screen experience over the next few years. If you’re interested in this topic, some areas you might want to participate in are: ARM architecture support [Tuesday], ARM hard-float [Friday], and ARM cross-compilation [Friday] (many small form-factor devices these days are ARM-based), application sandboxing [Wednesday], what’s ahead for the Software Center [Friday], an interactive session on design and user experience in free software applications [Monday], power consumption (relevant for battery life) [Wednesday], printing from the personal cloud [Thursday], the Qt embedded showcase [Tuesday], and the potential for a Wayland tech preview [Tuesday]. Also keep an eye out for touch support, virtual keyboards, suspend/resume, and web apps, which don’t have dedicated sessions (yet), but will certainly be weaving through conversations this week.

On the server side, general trends are moving from a traditional view of system administration as “systems integration” to a DevOps view of “service orchestration”. This may sound like a game of buzzword bingo, but it’s far, far more. What we’re looking at is a fundamental shift from managing increasingly complex deployments by throwing in more humans as slogging foot soldiers, to letting machines do the slogging so humans can focus on the parts of administration that require intelligence, deep understanding, and creative thinking. This industrial revolution is still at an infant stage, converting individual manually operated looms (servers) over to aggregated sets of looms all doing the same thing (configuration management) and automated operation and oversight of whole factories of diverse interacting pieces such as spinners, looms, cutting, and sewing (service orchestration). If this is your area of focus, it’s worth following the entire Server and Cloud track, but make sure not to miss sessions on Juju [Tuesday, Wednesday & Thursday], Orchestra [Thursday], OpenStack [Monday & Friday], LXC [Thursday], libvirt [Wednesday], cloud power management [Wednesday], and power-consumption testing for ARM [Thursday].

We’ve got an exciting week ahead, enjoy!

[*] UDS is a fast-paced and dynamic “unconference”, so the days, times, and rooms are subject to change. I’ve provided links to the blueprints for details, and to the day where each session is currently scheduled, to help you find it in the schedule.

Ubuntu Brainstorm – Contacts Lens

It’s time for another round of the Ubuntu Technical Board’s review of the top-ranked items on Ubuntu Brainstorm. This time I’m reviewing a brainstorm idea about a Unity Lens for contacts, together with Neil Patel from Canonical’s DX team. I volunteered to participate in this reply because I’d already been thinking about how to do it before I saw the brainstorm. I mainly keep my contacts in Gmail these days, for sync to my Android phone and tablet. But, with around 700 contacts, I find the Gmail interface pretty clunky.

The first key to a Contacts Lens is a standard format for contacts, and a path for contact synchronization. For the Oneiric release, coming up in a couple of weeks, Thunderbird is the new default email client, and as part of that work, the Thunderbird developers (especially Mike Conley) added support for the existing GNOME standard for contacts, EDS (Evolution Data Server). Supporting EDS not only provides access to Evolution contacts from Thunderbird, which is important for users migrating from Evolution to Thunderbird, but also provides access to Gmail contacts and Ubuntu One contacts.

The second key is integrating EDS with a Unity Lens. The DX team isn’t working on a Contacts Lens for Oneiric or 12.04, but writing a lens is an accessible task for anyone with a little skill in Vala or Python, and a great way to learn more about how Unity works. I’ll outline how to get started here; for more details, see the wiki documentation on lenses. The architecture of a Unity Lens is pretty simple, I’d even say elegant. Writing a Lens doesn’t involve any GUI code at all; you only write a small backend that supplies the data to be displayed. This means that all lenses work for both Unity and Unity 2D, without any changes.

A Lens is a daemon that talks over D-Bus. To build one, you start with 3 files. (Throughout this illustration, I’ll pretend we’re working on a Launchpad project called ‘unity-lens-contacts’.)  The first file is the Lens itself, and the core of that file is a few lines that create a Lens object from libunity, and then set some default properties for it. In Vala, that would be:

lens = new Unity.Lens("/net/launchpad/lens/contacts", "contacts");
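
The “default properties” part is worth sketching too. Here’s a minimal illustration, following the pattern of the Sample Lens (the property names are from the libunity API of this era; check the wiki documentation if they’ve changed):

// Continuing from the line above: set some sensible defaults,
// then export the Lens onto D-Bus so Unity can connect to it.
lens.search_hint = "Search Contacts";
lens.visible = true;
lens.search_in_global = false;

try {
    lens.export ();
} catch (GLib.Error e) {
    warning ("Failed to export lens: %s", e.message);
}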

To go along with the Lens, you need a ‘contacts.lens’ file to tell Unity where to find your daemon, and a ‘contacts.service’ file to register the D-Bus service that your Lens provides. The ‘contacts.lens’ file is installed in ‘/usr/share/unity/lenses/contacts/’, and looks like:

[Lens]
DBusName=net.launchpad.Lens.Contacts
DBusPath=/net/launchpad/lens/contacts
Name=Contacts
Icon=/usr/share/unity-lens-contacts/data/lens-nav-contacts.svg
Description=A Lens to search contacts
SearchHint=Search Contacts
Shortcut=c

[Desktop Entry]
X-Ubuntu-Gettext-Domain=unity-lens-contacts

The ‘contacts.service’ file is installed in ‘/usr/share/dbus-1/services/’, and looks like:

[D-BUS Service]
Name=net.launchpad.Lens.Contacts
Exec=/usr/lib/unity-lens-contacts/unity-lens-contacts

A Lens daemon handles requests for data, but it doesn’t actually do the searching. For that, you need to define a Scope (if it helps, think about searching the ocean through a periscope). A Lens can have more than one Scope, and when it does, each Scope collects results from a different source, so the Lens can combine them into one full set of results. Start with one Scope for one data source: EDS contacts. A Scope is just another libunity object, and creating one in Vala looks like:

scope = new Unity.Scope ("/net/launchpad/scope/edscontacts");

The search functionality goes in the ‘perform_search’ method. For EDS contacts, you could use the EDS APIs directly, but Neil recommends libfolks.
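
As a rough sketch of the shape that method might take (illustrative only: ‘find_matching_contacts’ and the contact fields are hypothetical stand-ins for real EDS/libfolks calls, and the row layout follows the standard lens result schema of uri, icon, category, mime-type, title, comment, and drag-and-drop uri):

private void perform_search (string search_string)
{
    // Each Scope owns a Dee model that holds its current results;
    // clear out the previous search before appending new rows.
    var model = scope.results_model;
    model.clear ();

    // find_matching_contacts is a hypothetical helper that would
    // query EDS (directly, or through libfolks) for contacts whose
    // name or email matches the search string.
    foreach (var contact in find_matching_contacts (search_string))
    {
        uint category = 0;
        model.append ("contact://" + contact.id,   // uri
                      "avatar-default",            // icon hint
                      category,                    // category index
                      "x-unity/contact",           // mime-type (invented)
                      contact.full_name,           // title
                      contact.email,               // comment
                      "contact://" + contact.id);  // drag-and-drop uri
    }
}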

A Scope can run locally inside the Lens daemon, in which case you add it directly to the Lens object:

lens.add_local_scope(scope);

Or, a Scope can run in a separate daemon, in which case you’ll also need an ‘edscontacts.scope’ file, so the Lens knows where to find the Scope. This file is installed in the same folder as ‘contacts.lens’ (‘/usr/share/unity/lenses/contacts/’), and looks like:

[Scope]
DBusName=net.launchpad.Scope.edscontacts
DBusPath=/net/launchpad/scope/edscontacts

That’s the basic anatomy of a Lens, and enough to get a new project started. To see how it all fits together, there are several good examples of other lenses. The Unity Music Lens is the most relevant example for the Contacts Lens, and a fairly straightforward one to start looking at. For more complex examples, see the Applications or Files lenses. There’s also a Sample Lens, which is a working tutorial. And, once you get the core of the Contacts Lens working, and are looking for what to add next, read up more on Categories and Filters in the wiki.

If this sounds like an interesting project to you, drop us a line. You’ll find a lot of enthusiasm, and willingness to help out where you need it.

Ubuntu Brainstorm – Multimedia Performance

Ubuntu Brainstorm is a website dedicated to collaborative greenlighting: anyone can post an idea for change in Ubuntu, and developers and community members vote and comment on the idea. As part of an initiative to increase the visibility of the highest-rated ideas and speed their progress into the distribution, I’ve been asked to post my thoughts on an idea about multimedia performance.

The fundamental concern is a classic one for large systems: changes in one part of the system affect the performance of another part of the system. It’s modestly difficult to measure the performance effects of local changes, but exponentially more difficult to measure the “network effects” of changes across the system. Without good performance metrics, one set of developers working on one component may have no idea that their enhancements and fixes are actually degrading performance of another component. And once you have good metrics, you have to step back and take the larger view, because a 10% performance loss in a rarely used component may be a perfectly acceptable trade-off for a 20% performance improvement in a commonly used component.
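
To make that trade-off concrete with invented numbers: suppose the commonly used component accounts for 90% of a typical multimedia workload and the rarely used one for the remaining 10%. Then the overall effect of a 20% speedup in the first and a 10% slowdown in the second is:

overall time = 0.90 × 0.80 + 0.10 × 1.10 = 0.83

that is, roughly 17% faster across the whole system, despite the local regression. Without system-wide metrics, nobody can even do this simple back-of-the-envelope calculation.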

Three of the six proposed solutions (#1, #2, and #5) have a common problem: they try to address performance across the system by examining one tiny piece of the whole. There’s no harm in optimizing the performance of sound components like PulseAudio, and it might be worth exploring enabling kernel preemption, but these are micro-optimizations. Without good metrics there’s no way of knowing whether one small optimization had any real impact on the whole, or of verifying that it didn’t actually have a negative effect on overall performance. Solution #5 has an added problem (mentioned in the comments): there are good legal and philosophical reasons to require manual user action for codec installation, which is almost certainly why that solution was voted down to -56.

The other three proposed solutions (#3, #4, and #6) are variations on a theme, really just different angles on the true solution: testing, testing, testing. The good news is, there’s real work going on here right now. At the recent Ubuntu Developer Summit, automated testing for performance and functionality came out as one of the Top Five focus areas for this development cycle. As one sign of how high testing is on the priority list, a third of the jobs currently posted on the Canonical site emphasize testing: 5 have QA in the title, another is entirely focused on test automation, and 6 more include testing and test automation as key components of the job description. (If you have a strong background in automated testing, please apply.)

There are a number of ways to be part of the solution. You might be interested in helping with building multimedia performance benchmarks using existing tools like cairo-perf-trace and Spandex, or in generating performance tests from real applications. You might be interested in working on user-friendly tools to measure graphics and multimedia performance, and display the change in performance over time.  You might work on automated performance benchmarks for your own project, keeping in mind ways the benchmarks can be integrated into a larger testing framework. And if you aren’t solely interested in performance, but all this talk of testing piques your interest, you might join in to help with the overall test plan for the Natty release, working to improve automated testing, test frameworks, code coverage, or hardware coverage.

New Job, New Blog

I started a new job last week, as Technical Architect of Ubuntu. I’m thrilled to be here, I couldn’t have crafted a more perfect job if I’d written the job description myself. I’ve been reflecting this week on how I got here. Many people know me for my involvement in Parrot, Perl, or Python, but my first love in free software was Linux, specifically Debian. I was already actively involved in my local Linux user group over a decade ago, hacking and speaking, even teaching a summer class at the LUG. I became a public figure in the Perl community almost by accident (it’s a funny story involving a cruise ship, I’ll tell you sometime over beer). But, I took to the Parrot codebase so quickly precisely because it was so similar to the Linux kernel: a large, complex body of C code with subsystems for memory management, concurrency scheduling, I/O, and networking, plus a host of add-on modules. I mentored under Parrot’s founding Architect for the first few years of the project, then took on his role after he retired.

I’ve been involved in Ubuntu almost from the beginning, though not in a public way. In the training material for Canonical new hires there’s a photo of the original founding team; it was fun to see that I know most of them personally. My first UDS was in 2006, and even though I’d already been an Ubuntu user for more than a year, it was a life-changing experience. It wasn’t just about the code, though it certainly was exciting to see such enthusiastic action toward making Linux a compelling replacement for the “Big 2”. It was even more about the people, this amazing team of brilliant, wonderful humans joining together to change the world. It was something like being adopted into a new family. I’ve made it to about one UDS a year since then (I missed two in a row once, and was terribly disappointed). I’ve considered applying to Canonical multiple times over the years, but the specific roles I looked at were never quite right for me. And I was too busy getting ready to ship Parrot 1.0 and working to support my free software development habit (gotta pay the bills somehow) to do much more than packaging work and kicking in ideas at UDS and to the people I knew and talked to regularly. I’ve enjoyed watching some of those ideas come to life, even if I couldn’t be involved in the actual construction.

UDS-M this year was another life-changing experience for me, but in a different way. I was already kind of keeping an eye out for a “next thing” to do, not with any urgency, just an open mind. The Parrot project shipped 2.0 in January this year, and has moved into a stage of refinements rather than “pushing to release the first production version”. And since 1.0 last year, we’ve been moving the project more and more to a distributed design involving a team of highly experienced contributors, rather than a single driving head, so it no longer consumes my every waking moment.

At the beginning of the week at UDS-M, I commented casually in an email to Mark, “These are my people and it makes me happy to be here.” And then it struck me how very true that statement was. As the week went on, I started looking for ways to get more actively involved. I talked with Kees Cook about helping on one of the security blueprints. I talked with Robert Collins and Martin Pool about helping out with Python 3.x migration for bzr. Towards the end of the week, I bumped into Robbie Williamson and Rick Spencer talking about the new Technical Architect role they were planning on posting. Rick suggested I should apply for it. It sounded interesting; I mean, I’m a Software Architect, that’s what I do, I even contributed to a book on the subject. But, it wasn’t until I talked to Colin Watson (who is also my Debian mentor) about the job over dinner on Friday at Ubuntu AllStars that I really decided to apply. Add on time for the job to be publicly posted, time for the interview process (where I understand many qualified candidates were discussed and considered), and time for me to wrap up my final year of chairing OSCON, and here I am.

Right at the start, I should make it clear that I am not the SABDFL. I’m here to help turn his vision into reality. That’s what architects do: translate between the potential for a building and carefully measured graphite on paper, then act as a resource for the whole crew as they work together to translate an abstract plan into hard steel, warm brick, and shining glass. I’m here to champion the community’s vision for Ubuntu, to facilitate conversations as we integrate multiple perspectives and balance multiple needs, to ask good questions that help us find better solutions. I’m here to help smooth some of the bumps in the road, because no road worth traveling is ever completely easy. I’m here to sing harmony to the SABDFL’s melody, with the whole choir, to soften the lows and brighten the highs. If you want a name (or just have a fondness for recursive acronyms like I do), you might call me the SABTAFL, that is, “SABDFL Appointed Benevolent Technical Architect For a-Long-time-but-not-necessarily-Life”. 😉 But, “allison” is a perfectly good name, I don’t need any other.

To give credit where credit is due, there have been 4 great influences on my career over the years, mentors, friends, people who believed in me, encouraged me to dream big dreams and try big things, who taught me that I’m better, smarter, wiser, more dynamic, and resilient than I ever imagined. In alphabetical order: Damian Conway, Greg Kroah-Hartman, Mark Shuttleworth, and Nathan Torkington. Thanks guys, I wouldn’t be here without you!

(Appropriately, this post was written on the machine that was my very first Ubuntu desktop. Well, I’ve upgraded pretty much the whole guts of the thing a few pieces at a time over the years, and it currently runs Lucid. But at some deep level, if computers have something like a soul, it’s the same machine.)